[ "RECURSION FORMULAS FOR INTEGRATED PRODUCTS OF JACOBI POLYNOMIALS", "RECURSION FORMULAS FOR INTEGRATED PRODUCTS OF JACOBI POLYNOMIALS" ]
[ "Sven Beuchler ", "Tim Haubold ", "ANDVeronika Pillwein " ]
[]
[]
Abstract. It is known from the literature that orthogonal polynomials such as the Jacobi polynomials can be expressed by hypergeometric series. In this paper, the authors derive several contiguous relations for terminating multivariate hypergeometric series. With these contiguous relations one can prove several recursion formulas for those series. This theoretical result allows one to compute integrals over products of Jacobi polynomials in a very efficient recursive way. Moreover, the authors present an application to numerical analysis: the recursions can be used in algorithms that compute the approximate solution of boundary value problems for partial differential equations by means of the finite element method (FEM). With the aid of the contiguous relations, the approximate solution can be computed much faster than with numerical integration. A numerical example illustrates this effect.

Key words. Hypergeometric function, orthogonal polynomials, high order finite element methods, recurrence equations

AMS subject classifications. 33C45, 33C70, 65N30

1. Introduction. Finite element methods (FEM) are popular and versatile methods to compute approximate solutions of partial differential equations (PDE) on complicated domains for which no exact solution is known. The solution is expanded in a basis that is constructed on a mesh of basic geometric objects, in our case simplices. The coefficients of these basis functions are computed as the solution of a linear system $Ku = f$ whose entries are integrals of products of (derivatives of) these basis functions, see [15, 47, 42, 11]. If the solution of the underlying PDE is smooth, the local polynomial degree $p$ is increased in order to improve the accuracy of the computed FEM solution in comparison to the exact one, [45, 16]. However, the computation of the FEM solution requires a suitable basis in order to keep the computational cost as low as possible. In [10], a set of basis functions for the Poisson problem in 2D was presented that yields a sparse linear system. These basis functions are based on classical orthogonal polynomials on a triangle, see e.g. [41], [34] and also [18]. The sparsity is obtained from the orthogonality relations. This work was later generalized to 3D as well as to other partial differential equations; see [9] for an overview of these results.

In all these cases, the sparsity for the respective basis functions was proven by explicitly evaluating the integrals using difference-differential relations satisfied by Jacobi polynomials. In the case of tetrahedral elements, these computations became so involved that they could not be carried out by hand, but were proven using symbolic computation. The basis for the application of computer algebra is that Jacobi polynomials $P_n^{(\alpha,\beta)}(x)$ are holonomic [51, 35], i.e., they satisfy certain types of difference-differential equations in all parameters. For various representations, it is possible to derive new identities involving Jacobi polynomials automatically [30, 8]. These procedures are rigorous and many come with easy-to-verify certificates. The output of the algorithm is the coefficients $K_{ik}$ of the linear system (matrix) $K$, computed explicitly as rational functions in the parameters. For a rational function, it is fairly easy to derive recurrence relations. In order to find a minimal order recurrence, we have used a Mathematica package by Kauers [29] for guessing recurrences. These are easily proven by plugging in the actual entries.
This guess-and-prove approach is also very common for more complicated input than rational functions. There exist implementations of algorithms for holonomic functions in several computer algebra systems, e.g., mgfun in Maple [14] or ore_algebra in Sage [31]. The results in this article show how recurrences for the integrals of two multivariate orthogonal polynomials and $H^1$ basis functions on a triangle can be found directly from a multivariate hypergeometric series representation ${}_pF_q$ of the integrals, using contiguous relations and terminating ${}_pF_q$ identities.
This kind of technique has been well known since the works of Gauß and Euler, see e.g. [4] or [43], but it is seldom applied to multivariate series. Usually one would refer to the works of Wilson [49] or would use symbolic computation to derive such recursion formulas, but a direct derivation turned out to be more insightful, not only for the proof but also for the general structure of such series. Multivariate hypergeometric series are more difficult to handle than general ${}_pF_q$ series. In some cases one can reduce a multivariate series to a general one by using well-known summation theorems like the Pfaff-Saalschütz theorem, Dougall's summation or Whipple's transformation, see e.g. [7] or [4], but usually there is no transformation that holds for all parameter configurations we are interested in. A broad overview of the convergence theory for multivariate series can be found e.g. in [21]; it is not needed here, since our series are based on Jacobi polynomials and hence terminate. The notation which will be used goes back to Burchnall and Chaundy [12].

For the first time, this sheds light on the underlying structure of the integral values; furthermore, the sparsity pattern of finite element matrices can be read off from the coefficients of the recursion formulas. To our knowledge, these identities are new and interesting in their own right. For the community in numerical analysis, theorem 4.1 is the most important result, stating how the nonzero entries of $K$ can be computed recursively in optimal arithmetical complexity. It is a consequence of the main result of this article, theorem 4.12. The results can be extended to an arbitrary simplex and to basis functions in the function spaces H(curl) and H(div). Optimal complexity for assembling finite element matrices was first achieved not long ago in [1] for basis functions based on Bernstein polynomials, but the resulting element matrices were dense. One can transform the basis functions based on Jacobi polynomials to a basis based on Bernstein polynomials, see e.g. [3], [2], but this transformation loses optimal arithmetical complexity for the assembly, though it has other useful properties which will not be discussed here. For element mass matrices based on Bernstein polynomials, one can use a block recursive structure as shown in [33], which results in an efficient inversion technique. Furthermore, recursion formulas for the orthogonal polynomials on a triangle have already been computed, see e.g. [50] and [38]. But to the knowledge of the authors, there are only a few publications in which the matrix entries $K_{ik}$ of a FEM matrix are computed completely recursively; see [40] for a special case.
Part of the recursion formulas shown here were first published in [25]. This paper generalizes the recursion formulas and presents a proof for those relations.

This paper is organized as follows: in section 2, the authors introduce hypergeometric series and Jacobi polynomials and give an overview of several known identities and notations that are used throughout the paper. A short motivation of the background from FEM is given in section 3. The main theoretical results are formulated in section 4. Finally, some algorithmic aspects and numerical experiments are presented in section 5. Throughout this paper, the indices $n, m, i, j, k, l$ denote natural numbers. $P_n^{(\alpha,\beta)}$ is the $n$-th Jacobi polynomial with indices $\alpha, \beta$; the pair $\rho, \delta$ usually denotes another pair of Jacobi polynomial indices. $F$ stands for some kind of (generalized or multivariate) hypergeometric series and $I$ for the exact value of an integral.

2. Introduction to special functions.

Hypergeometric series. For $a \in \mathbb{R}$ let
$$(a)_n = a(a+1)\cdots(a+n-1) = \frac{\Gamma(a+n)}{\Gamma(a)}$$
be the Pochhammer symbol or rising factorial, where $\Gamma(\cdot)$ denotes the Gamma function [17], which satisfies $\Gamma(n) = (n-1)!$ for $n \in \mathbb{N}$.

DEFINITION 2.1 (Gaussian hypergeometric series). For arbitrary $a, b, c$ the series
$$(2.1)\qquad {}_2F_1\left(\begin{matrix} a,\ b \\ c \end{matrix};\, x\right) = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n}\, \frac{x^n}{n!}$$
is called the (Gaussian) hypergeometric series.

At $x = 1$, the Gaussian hypergeometric series converges absolutely for $\operatorname{Re}(c-a-b) > 0$, see [4] or [43]. Classical orthogonal polynomials can be expressed as hypergeometric series, see e.g. [43]. For Jacobi polynomials in particular, we have
$$(2.2)\qquad P_n^{(\alpha,\beta)}(x) = \frac{(\alpha+1)_n}{n!}\, {}_2F_1\left(\begin{matrix} -n,\ n+\alpha+\beta+1 \\ \alpha+1 \end{matrix};\, \frac{1-x}{2}\right).$$
For $x = 1$ the series (2.1) can be summed and written in closed form.

THEOREM 2.2 (Gauß [1812]). For $\operatorname{Re}(c-a-b) > 0$,
$$(2.3)\qquad {}_2F_1\left(\begin{matrix} a,\ b \\ c \end{matrix};\, 1\right) = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n\, n!} = \frac{\Gamma(c)\,\Gamma(c-a-b)}{\Gamma(c-a)\,\Gamma(c-b)}.$$
This can be proven by using an integral representation due to Euler, see [4]. If $a$ (or $b$) is a negative integer, the identity simplifies even more:

COROLLARY 2.3 (Chu-Vandermonde). Let $a = -m$ with $m \in \mathbb{N}$. Then
$$\sum_{n=0}^{\infty} \frac{(-m)_n (b)_n}{(c)_n\, n!} = \frac{(c-b)_m}{(c)_m}.$$

A generalized version of the ${}_2F_1$ is called a generalized hypergeometric series.

DEFINITION 2.4. Let $(a_i)$, $(b_j)$, $i = 1, \dots, p$, $j = 1, \dots, q$, be given. Then the series
$$(2.4)\qquad {}_pF_q\left(\begin{matrix} a_1, a_2, \dots, a_p \\ b_1, b_2, \dots, b_q \end{matrix};\, x\right) = \sum_{n=0}^{\infty} \frac{(a_1)_n (a_2)_n \cdots (a_p)_n}{(b_1)_n (b_2)_n \cdots (b_q)_n}\, \frac{x^n}{n!}$$
is called a generalized hypergeometric series.

The series ${}_pF_q$ converges absolutely for all $x$ if $p \le q$, and for $|x| < 1$ if $p = q+1$, see e.g. [4]. Higher classes of polynomials, e.g. polynomials of the Hahn class (see [4]), can be described by those series. For the special case of a ${}_3F_2$, the following summation theorem holds.

THEOREM 2.5 (Pfaff-Saalschütz). For $m \in \mathbb{N}$,
$$(2.5)\qquad {}_3F_2\left(\begin{matrix} -m,\ a,\ b \\ c,\ 1+a+b-c-m \end{matrix};\, 1\right) = \sum_{n=0}^{\infty} \frac{(-m)_n (a)_n (b)_n}{(c)_n (1+a+b-c-m)_n\, n!} = \frac{(c-a)_m (c-b)_m}{(c)_m (c-a-b)_m}.$$
This can be proven by equating the coefficients of a transformation for the ${}_2F_1$ which again is due to Euler, see [4] or [43]. Important for summability is the fact that the series is balanced or Saalschützian.

DEFINITION 2.6. A generalized hypergeometric series (2.4) is called $s$-balanced if $x = 1$ and $b_1 + b_2 + \dots + b_q = a_1 + a_2 + \dots + a_p + s$. The case $s = 1$ is also called Saalschützian.

There are summation theorems like Chu-Vandermonde or the Pfaff-Saalschütz theorem for $p, q$ greater than $3, 2$, but they are usually more restrictive. For example, for a balanced ${}_7F_6$ series Dougall's summation holds, but the series needs to fulfil additional properties; see e.g. [4], [7] for more information.
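Since all the series appearing later terminate, these classical summation theorems are easy to validate numerically. The following minimal sketch (our own, not part of the paper; it assumes numpy and scipy are available, and the helper names rf and pFq are ours) checks the Chu-Vandermonde identity, the Pfaff-Saalschütz theorem (2.5) and the Jacobi representation (2.2):

```python
import numpy as np
from scipy.special import gamma, eval_jacobi

def rf(a, k):
    """Rising factorial (Pochhammer symbol) (a)_k, robust for any real a."""
    return float(np.prod([a + i for i in range(k)]))

def pFq(num, den, x, terms):
    """Partial sum of a hypergeometric series; pass terms=m+1 for a series
    terminating through an upper parameter -m."""
    return sum(np.prod([rf(a, n) for a in num]) * x**n
               / (np.prod([rf(b, n) for b in den]) * gamma(n + 1))
               for n in range(terms))

m, a, b, c = 5, 0.4, 1.3, 2.7
# Chu-Vandermonde (corollary 2.3)
assert np.isclose(pFq([-m, b], [c], 1.0, m + 1), rf(c - b, m) / rf(c, m))
# Pfaff-Saalschütz (2.5): balanced terminating 3F2 at x = 1
lhs = pFq([-m, a, b], [c, 1 + a + b - c - m], 1.0, m + 1)
assert np.isclose(lhs, rf(c - a, m) * rf(c - b, m) / (rf(c, m) * rf(c - a - b, m)))
# Jacobi polynomial via the hypergeometric representation (2.2)
al, be, n, x = 0.5, -0.25, 4, 0.3
val = rf(al + 1, n) / gamma(n + 1) * pFq([-n, n + al + be + 1], [al + 1], (1 - x) / 2, n + 1)
assert np.isclose(val, eval_jacobi(n, al, be, x))
```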
Contiguous relations. Recurrence relations can be proven by using the contiguous relations of the hypergeometric series. We briefly summarize the basic ideas as they can be found in the book of Rainville [43]. Alternative and/or equivalent methods, also for more general hypergeometric series, can be found for example in the book of Andrews et al. [4] or in the work of Bailey [6] and Wilson [49]. Let
$$F := {}_2F_1\left(\begin{matrix} a,\ b \\ c \end{matrix};\, x\right)$$
be a Gaussian hypergeometric series. Taking the derivative yields
$$(2.6)\qquad \frac{\partial}{\partial x} F = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n}\, \frac{n x^{n-1}}{n!} = \sum_{n=1}^{\infty} \frac{(a)_n (b)_n}{(c)_n}\, \frac{x^{n-1}}{(n-1)!} = \sum_{n=0}^{\infty} \frac{(a)_{n+1} (b)_{n+1}}{(c)_{n+1}}\, \frac{x^n}{n!} = \frac{ab}{c}\, {}_2F_1\left(\begin{matrix} a+1,\ b+1 \\ c+1 \end{matrix};\, x\right).$$
For ease of notation when stating the contiguous relations, we follow common practice [4] and write
$$F(a+) := {}_2F_1\left(\begin{matrix} a+1,\ b \\ c \end{matrix};\, x\right), \qquad F(a-) := {}_2F_1\left(\begin{matrix} a-1,\ b \\ c \end{matrix};\, x\right),$$
and analogously $F(b+)$, $F(b-)$, $F(c+)$, $F(c-)$ for shifts in $b$ and $c$. Therefore (2.6) can be written as $\frac{\partial}{\partial x}F = \frac{ab}{c}\, F(a+, b+, c+)$. Now define the differential operator $\theta_x = x\,\frac{\partial}{\partial x}$, also known as the Euler operator. It is applied as follows:
$$(2.7)\qquad (\theta_x + a)F = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n}\, (a+n)\, \frac{x^n}{n!} = \sum_{n=0}^{\infty} \frac{(a)_{n+1} (b)_n}{(c)_n}\, \frac{x^n}{n!} = a \sum_{n=0}^{\infty} \frac{(a+1)_n (b)_n}{(c)_n}\, \frac{x^n}{n!} = a\, F(a+).$$
Analogously, the formulas
$$(2.8)\qquad (\theta_x + b)F = b\, F(b+) \qquad\text{and}\qquad (\theta_x + c - 1)F = (c-1)\, F(c-)$$
can be proven. From (2.7) and (2.8) follows
$$(2.9)\qquad (a-b)F = a\, F(a+) - b\, F(b+),$$
which is one of the 15 contiguous relations¹ for the ${}_2F_1$ and its six contiguous functions. The others can be obtained by the same kind of straightforward computations; for a complete list, see e.g. [4], [7], [43] or [46]. Many of the important recurrence relations between Jacobi polynomials can be proven by using the contiguous relations of the ${}_2F_1$.

¹ This reduces to 9 relations if one takes symmetry into account, see [4].

Jacobi polynomials. The Jacobi polynomials (2.2) are the polynomials which are orthogonal on $[-1,1]$ with respect to the weight function $w(x) := (1-x)^{\alpha}(1+x)^{\beta}$, $\alpha, \beta > -1$. They can either be given by the Rodrigues formula
$$P_n^{(\alpha,\beta)}(x) = \frac{1}{(x-1)^{\alpha}(x+1)^{\beta}\, 2^n\, n!}\, \left(\frac{\partial}{\partial x}\right)^{\!n} \left[(x-1)^{n+\alpha}(x+1)^{n+\beta}\right]$$
or by the hypergeometric representation (2.2). Furthermore, the property
$$(2.10)\qquad P_n^{(\alpha,\beta)}(-x) = (-1)^n P_n^{(\beta,\alpha)}(x)$$
yields the representation
$$(2.11)\qquad P_n^{(\alpha,\beta)}(x) = (-1)^n\, \frac{(1+\beta)_n}{n!}\, {}_2F_1\left(\begin{matrix} -n,\ n+\alpha+\beta+1 \\ \beta+1 \end{matrix};\, \frac{1+x}{2}\right).$$
We refer to the standard literature for more information on these properties, e.g. [4], [43], [48]. Since the Jacobi polynomials are orthogonal polynomials, they satisfy a three-term recurrence relation. It is given by
$$(2.12)\qquad 2n(\alpha+\beta+n)(\alpha+\beta+2n-2)\, P_n^{(\alpha,\beta)}(x) = \bigl[(\alpha+\beta+2n-1)(\alpha^2-\beta^2) + x\,(\alpha+\beta+2n-2)_3\bigr]\, P_{n-1}^{(\alpha,\beta)}(x) - 2(\alpha+n-1)(\beta+n-1)(\alpha+\beta+2n)\, P_{n-2}^{(\alpha,\beta)}(x).$$
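As a quick sanity check of the reconstructed recurrence (2.12), the following sketch (our own, assuming scipy; the function name is ours) evaluates $P_0, \dots, P_8$ by the three-term recurrence and compares against scipy's eval_jacobi:

```python
import numpy as np
from scipy.special import eval_jacobi

def jacobi_via_recurrence(nmax, al, be, x):
    """Evaluate P_0,...,P_nmax at x with the three-term recurrence (2.12)."""
    P = [1.0, 0.5 * ((al + be + 2) * x + (al - be))]
    for n in range(2, nmax + 1):
        s = al + be + 2 * n
        c1 = (s - 1) * (al**2 - be**2) + x * (s - 2) * (s - 1) * s  # (s-2)_3
        c2 = 2 * (al + n - 1) * (be + n - 1) * s
        P.append((c1 * P[-1] - c2 * P[-2]) / (2 * n * (n + al + be) * (s - 2)))
    return P[: nmax + 1]

al, be, x = 2.0, 0.5, -0.7
for n, p in enumerate(jacobi_via_recurrence(8, al, be, x)):
    assert np.isclose(p, eval_jacobi(n, al, be, x))
```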
Next, let us recall several difference equations satisfied by Jacobi polynomials (see e.g. [43, Ch. 16]), summarized in the following lemma.

LEMMA 2.7. The Jacobi polynomials (2.2) satisfy
$$(2.13)\qquad (\alpha+\beta+n)\, P_n^{(\alpha,\beta)}(x) = (\beta+n)\, P_n^{(\alpha,\beta-1)}(x) + (\alpha+n)\, P_n^{(\alpha-1,\beta)}(x),$$
$$(2.14)\qquad \tfrac{1}{2}(2+\alpha+\beta+2n)(x-1)\, P_n^{(\alpha+1,\beta)}(x) = (n+1)\, P_{n+1}^{(\alpha,\beta)}(x) - (1+\alpha+n)\, P_n^{(\alpha,\beta)}(x),$$
$$(2.15)\qquad \tfrac{1}{2}(2+\alpha+\beta+2n)(x+1)\, P_n^{(\alpha,\beta+1)}(x) = (n+1)\, P_{n+1}^{(\alpha,\beta)}(x) + (1+\beta+n)\, P_n^{(\alpha,\beta)}(x),$$
$$(2.16)\qquad (\alpha+\beta+2n)\, P_n^{(\alpha,\beta-1)}(x) = (\alpha+\beta+n)\, P_n^{(\alpha,\beta)}(x) + (\alpha+n)\, P_{n-1}^{(\alpha,\beta)}(x),$$
$$(2.17)\qquad (\alpha+\beta+2n)\, P_n^{(\alpha-1,\beta)}(x) = (\alpha+\beta+n)\, P_n^{(\alpha,\beta)}(x) - (\beta+n)\, P_{n-1}^{(\alpha,\beta)}(x),$$
$$(2.18)\qquad 2\, P_n^{(\alpha,\beta)}(x) = (1+x)\, P_n^{(\alpha,\beta+1)}(x) + (1-x)\, P_n^{(\alpha+1,\beta)}(x),$$
$$(2.19)\qquad P_{n-1}^{(\alpha,\beta)}(x) = P_n^{(\alpha,\beta-1)}(x) - P_n^{(\alpha-1,\beta)}(x).$$
Further relations between different Jacobi polynomials that can be found in [43] will be introduced where required.

In a high order finite element context it is often useful to use integrated Jacobi polynomials $\hat p_n^{(\alpha,0)}(x)$, which can be written as Jacobi polynomials:
$$(2.20)\qquad \int_{-1}^{x} P_{n-1}^{(\alpha,0)}(s)\, ds =: \hat p_n^{(\alpha,0)}(x) = \frac{2}{n+\alpha-1}\, P_n^{(\alpha-1,-1)}(x),$$
where the Jacobi polynomials with index $\beta = -1$ and $\alpha > -1$ are properly defined, see e.g. [48], as
$$(2.21)\qquad P_n^{(\alpha,-1)}(x) = \frac{1+x}{2}\, \frac{n+\alpha}{n}\, P_{n-1}^{(\alpha,1)}(x).$$
Integrated Legendre polynomials can be defined similarly:
$$(2.22)\qquad \int_{-1}^{x} P_{n-1}^{(0,0)}(s)\, ds = \hat p_n^{(0,0)}(x) = \frac{2}{n-1}\, P_n^{(-1,-1)}(x) = \frac{x^2-1}{2(n-1)}\, P_{n-2}^{(1,1)}(x).$$
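The identities (2.20)-(2.22) can be checked numerically as well. This sketch (our own; the helper name p_hat is ours, and it uses the equivalent closed form $\hat p_n^{(a,0)}(x) = \frac{1+x}{n} P_{n-1}^{(a-1,1)}(x)$ obtained by combining (2.20) and (2.21)) compares the closed forms with direct numerical integration:

```python
import numpy as np
from scipy.special import eval_jacobi
from scipy.integrate import fixed_quad

def p_hat(n, a, x):
    """Integrated Jacobi polynomial \\hat p_n^{(a,0)} from (2.20),
    rewritten via (2.21) as ((1+x)/n) * P_{n-1}^{(a-1,1)}(x)."""
    return (1.0 + x) / n * eval_jacobi(n - 1, a - 1, 1, x)

n, a, x = 5, 3.0, 0.42
# (2.20): the antiderivative of P_{n-1}^{(a,0)} vanishing at -1
lhs = fixed_quad(lambda s: eval_jacobi(n - 1, a, 0, s), -1.0, x, n=8)[0]
assert np.isclose(lhs, p_hat(n, a, x))

# (2.22): integrated Legendre polynomial in terms of P_{n-2}^{(1,1)}
lhs = fixed_quad(lambda s: eval_jacobi(n - 1, 0, 0, s), -1.0, x, n=8)[0]
assert np.isclose(lhs, (x**2 - 1) / (2 * (n - 1)) * eval_jacobi(n - 2, 1, 1, x))
```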
3. Background from the finite element method (FEM).

Variational formulation and the function space H¹. In this paper, we investigate the following problem in variational formulation: for a given bounded domain $\Omega \subset \mathbb{R}^d$, find $u$ in the Sobolev space $H^1$ such that
$$(3.1)\qquad a(u,v) := \int_{\Omega} \mu\, \nabla u \cdot \nabla v + \int_{\Omega} \kappa\, u\, v = \int_{\Omega} f\, v =: F(v) \qquad \forall v \in H^1$$
holds. For ease of notation, we assume Neumann boundary conditions. The bilinear form $a(\cdot,\cdot)$ and the linear form $F(\cdot)$ are well-defined and bounded for $f \in L^2(\Omega)$ and $\mu, \kappa \in L^{\infty}(\Omega)$ with $\mu > 0$; $\kappa$ and $\mu$ are assumed to be piecewise constant. The variational formulation is well-defined for square-integrable functions $u : \Omega \to \mathbb{R}$ with square-integrable gradient. We denote the corresponding function space by
$$(3.2)\qquad H^1 := \{u \in L^2(\Omega) : \nabla u \in [L^2(\Omega)]^d\}, \qquad d = 2, 3.$$
The variational formulation (3.1) is obtained from the reaction-diffusion equation $-\nabla \cdot (\mu \nabla u) + \kappa u = f$ by multiplication with a test function $v$, integration over the domain $\Omega$ and application of Green's formula to the second order part. We refer the interested reader to [20, 36, 22] for more information on this topic. For complicated geometries $\Omega$ and real-life data it is not possible to solve equation (3.1) analytically. The finite element method (FEM) provides a general method for the numerical solution of partial differential equations. It is based on the variational formulation of the underlying PDE and comes with a profound analysis.

Finite element discretization of H¹. Galerkin methods such as, e.g., FEMs are among the most powerful methods for the solution of boundary value problems of the form (3.1). The Galerkin approximation relies on the orthogonal projection, with respect to the bilinear form $a(\cdot,\cdot)$, of the implicitly given solution $u$ of (3.1) onto an $N$-dimensional subspace $V_N \subset H^1$. Therefore, we construct a sequence of finite dimensional spaces $V_N \subset H^1$ and consider the solution of (3.1) on $V_N$ (see e.g. [15, 47] or the textbooks [42, 11]), namely:
$$(3.3)\qquad \text{find } u_N \in V_N \text{ such that } a(u_N, v_N) = F(v_N) \quad \forall v_N \in V_N.$$
The finite element method provides a special construction of these discrete spaces $V_N$ by piecewise polynomial functions on an admissible subdivision (see [15]) $T_h$ of $\Omega$ into simplices $\tau_s$, $s = 1, \dots, nel$, i.e.
$$(3.4)\qquad V_N := \{u \in H^1 : u|_{\tau_s} \in \mathbb{P}^{p}(\tau_s)\ \forall \tau_s \in T_h\},$$
where $\mathbb{P}^p$ is the space of all polynomials on $\tau_s$ of maximal total degree $p$. The elements $\tau_s$ are chosen such that $\kappa$ and $\mu$ are constant on each element. In hp finite element methods, the polynomial degree $p$ can vary on each element $\tau_s$, which provides extraordinarily fast convergence of the finite element method with respect to the number of degrees of freedom $N = \dim(V_N)$, see e.g. [45]. This is crucial for the solution of real world problems of the form (3.1). Since the space $V_N$ is finite dimensional, it is equipped with a row vector of basis functions $[\Psi] := [\psi_1, \dots, \psi_N]$. The basis functions $\psi_j$ are chosen such that they have local support (see e.g. [15]). Using the ansatz $u_N(x) = \sum_{i=1}^{N} u_i \psi_i(x)$ and setting $v = \psi_j$ for $j = 1, \dots, N$ in (3.3), the problem becomes equivalent to the following system of $N$ linear algebraic equations:
$$(3.5)\qquad \text{find a coefficient vector } u := [u_i]_{i=1}^{N} \in \mathbb{R}^N \text{ such that } Ku = f,$$
with the system matrix $K = \bigl(a(\psi_j, \psi_i)\bigr)_{i,j=1}^{N} \in \mathbb{R}^{N \times N}$ and the right-hand side vector $f = \bigl(F(\psi_j)\bigr)_{j=1}^{N} \in \mathbb{R}^N$. Note that the matrix $K$ depends on the choice of the basis functions.

Efficient solution of the algebraic system. In practical problems, the dimension $N$ usually becomes very large ($> 10^6$). Iterative methods such as the preconditioned GMRES method or, for positive definite systems, the preconditioned conjugate gradient method (pcg) are preferred for the solution of (3.3). The two most important issues for the fast solution of the system $Ku = f$ are
• the fast multiplication $Ku$,
• the choice of a good preconditioner $C^{-1}$ for $K$ such that the condition number $\kappa(C^{-1}K)$ becomes small,
in order to obtain fast convergence of the iterative solver applied to (3.5). If $K$ is a dense matrix, the operation $Ku$ requires $N^2$ flops. If $K$ is a sparse matrix with a bounded number $c$ of nonzero entries per row, the computational complexity of the matrix-vector multiplication is bounded by $cN$. Since $K$ in (3.5) depends on the choice of the basis $[\Psi]$, it is essential to choose a basis with as many orthogonality relations as possible with respect to the bilinear form $a(\cdot,\cdot)$. The choice of the basis heavily influences the properties of the matrix $K$:
• the local support of finite element basis functions yields sparse system matrices $K$ and hence a cheap matrix-vector multiplication $Ku$;
• the condition numbers of $K$ and $C^{-1}K$, respectively, determine stability and the number of iterations in iterative solution methods.
In the lower order version of the FEM, i.e., the h-version, multigrid solvers are the most powerful methods for discretizations of boundary value problems of partial differential equations, see [23] and the references therein. For hp-FEM this strategy is combined with appropriate local smoothers and static condensation. A small 1D illustration of the influence of the basis choice follows below.
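The following sketch (our own, not from the paper; it assumes scipy) uses integrated Legendre polynomials (2.22) as a 1D interior basis on $(-1,1)$: their derivatives are plain Legendre polynomials, so the stiffness matrix is diagonal, and the mass matrix has at most three nonzero entries per row.

```python
import numpy as np
from scipy.special import eval_jacobi
from scipy.integrate import fixed_quad

def p_hat(n, x):
    """Integrated Legendre polynomial via the closed form (2.22), n >= 2."""
    return (x**2 - 1) / (2 * (n - 1)) * eval_jacobi(n - 2, 1, 1, x)

p = 12
idx = range(2, p + 1)
# stiffness: (p_hat_i)' = P_{i-1}, so entries are Legendre orthogonality integrals
A = np.array([[fixed_quad(lambda x: eval_jacobi(i - 1, 0, 0, x)
                          * eval_jacobi(j - 1, 0, 0, x), -1, 1, n=p + 2)[0]
               for j in idx] for i in idx])
M = np.array([[fixed_quad(lambda x: p_hat(i, x) * p_hat(j, x), -1, 1, n=p + 2)[0]
               for j in idx] for i in idx])
print("nonzeros per row, stiffness:", (np.abs(A) > 1e-12).sum(axis=1))  # all ones
print("nonzeros per row, mass:     ", (np.abs(M) > 1e-12).sum(axis=1))  # at most 3
```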
hp-FEM and choice of basis functions. In hp-FEM, the local polynomial degree $p_s$ on the elements may be large. Despite the local support of the basis functions $[\Psi]$, the local dimension $n_s$ grows as $O(p_s^d)$, where $d = 2, 3$ is the spatial dimension. Hence we are interested in a bounded number of nonzero entries in the system matrix, independent of the polynomial degrees. Let $[\Phi_s] = [\varphi_{i,s}]_{i=1}^{n_s}$ be the set of all basis functions $\psi_j$ with $\operatorname{supp} \psi_j \cap \tau_s \neq \emptyset$, i.e. $[\Phi_s] = [\Psi]L_s$ with the (boolean) finite element connectivity matrices $L_s$. In finite element methods, the global system matrix $K$ is the result of assembling local matrices, see [15]. In our case, one obtains
$$K = \int_{\Omega} \mu\, \nabla[\Psi]^{\top} \cdot \nabla[\Psi] + \kappa\, [\Psi]^{\top}[\Psi] = \sum_{s=1}^{nel} \int_{\tau_s} \mu\, \nabla[\Psi]^{\top} \cdot \nabla[\Psi] + \kappa\, [\Psi]^{\top}[\Psi].$$
Together with $[\Phi_s] = [\Psi]L_s$ on $\tau_s$, this implies
$$(3.6)\qquad K = \sum_{s=1}^{nel} L_s K_s L_s^{\top}$$
with the local stiffness and mass matrices
$$(3.7)\qquad K_s = \int_{\tau_s} \mu\, \nabla[\Phi_s]^{\top} \cdot \nabla[\Phi_s] + \kappa\, [\Phi_s]^{\top}[\Phi_s] = \mu_s \int_{\tau_s} \nabla[\Phi_s]^{\top} \cdot \nabla[\Phi_s] + \kappa_s \int_{\tau_s} [\Phi_s]^{\top}[\Phi_s] =: \mu_s A_s + \kappa_s M_s$$
on the elements $\tau_s$, where $\mu_s = \mu|_{\tau_s}$ and $\kappa_s = \kappa|_{\tau_s}$ are constants. Therefore, sparsity of the matrices $K_s$ in (3.7) implies sparsity of the matrix $K$, cf. (3.6). Our aim is to develop a local polynomial basis $[\Phi_s]$ such that the matrices $A_s$ and $M_s$ in (3.7) have a bounded number of nonzero entries per row. The global basis is then obtained in the usual way, see e.g. [16].

Model problem. For ease of presentation, we focus on the following model problem: let $\triangle \subset \mathbb{R}^2$ denote an arbitrary non-degenerate simplex. Find a polynomial basis $[\Phi] = [\varphi_i]_{i=1}^{n(p)}$ of degree $p$ with $\varphi_i : \triangle \to \mathbb{R}$ such that the matrices
$$(3.8)\qquad M := \Bigl(\int_{\triangle} \varphi_j\, \varphi_i\Bigr)_{i,j=1}^{n} = \int_{\triangle} [\Phi]^{\top}[\Phi], \qquad A := \Bigl(\int_{\triangle} \nabla\varphi_j \cdot \nabla\varphi_i\Bigr)_{i,j=1}^{n} = \int_{\triangle} \nabla[\Phi]^{\top}\nabla[\Phi]$$
have $O(n)$ nonzero entries. This basis should be suited for $H^1$ conformity.

Definition of the basis functions. Let $\triangle \subset \mathbb{R}^2$ denote an arbitrary non-degenerate simplex, $V = \{V_1, V_2, V_3\}$, $V_i \in \mathbb{R}^2$, its set of three vertices, and $\lambda_1, \lambda_2, \lambda_3 \in \mathbb{P}^1(\triangle)$ its barycentric coordinates, uniquely defined by $\lambda_i(V_j) = \delta_{ij}$. Using the integrated Jacobi polynomials (2.20), we define the shape functions on the affine triangle with barycentric coordinates $\lambda_m(x,y)$, $m = 1, 2, 3$, as follows.
• The vertex functions are chosen as the usual linear hat functions, $\psi_{V,m}(x,y) := \lambda_m(x,y)$, $m = 1, 2, 3$. Let $\Psi_V^2 := [\psi_{V,1}, \psi_{V,2}, \psi_{V,3}]$ be the basis of the vertex functions.
• For each edge $E = [e_1, e_2]$, running from vertex $V_{e_1}$ to $V_{e_2}$, we define
$$\psi_{[e_1,e_2],i}(x,y) = \hat p_i^{(0,0)}\!\left(\frac{\lambda_{e_2}-\lambda_{e_1}}{\lambda_{e_1}+\lambda_{e_2}}\right)(\lambda_{e_1}+\lambda_{e_2})^{i}.$$
By $\Psi_{[e_1,e_2]} = \bigl[\psi_{[e_1,e_2],i}\bigr]_{i=2}^{p}$ we denote the basis of the edge bubble functions on the edge $[e_1, e_2]$, and $\Psi_E^2 = \bigl[\Psi_{[1,2]}, \Psi_{[2,3]}, \Psi_{[3,1]}\bigr]$ is the basis of all edge bubble functions.
• The interior bubbles are defined as
$$(3.9)\qquad \psi_{ij}(x,y) := g_i(x,y)\, h_{ij}(x,y), \qquad i+j \le p,\ i \ge 2,\ j \ge 1,$$
where the auxiliary bubble functions $g_i$ and $h_{ij}$ are given by
$$g_i(x,y) := \hat p_i^{(0,0)}\!\left(\frac{\lambda_2-\lambda_1}{\lambda_1+\lambda_2}\right)(\lambda_1+\lambda_2)^{i} \qquad\text{and}\qquad h_{ij}(x,y) := \hat p_j^{(2i-1,0)}(2\lambda_3-1).$$
Moreover, $\Psi_I^2 = \bigl[\psi_{ij}\bigr]_{i+j\le p,\ i\ge 2,\ j\ge 1}$ denotes the basis of all interior bubbles. Finally, let $\Psi^{\nabla,2} = \bigl[\Psi_V^2, \Psi_E^2, \Psi_I^2\bigr]$ be the set of all shape functions on $\tau_s$.

Sparsity results. It can be proved, [10], that the matrices $M$ and $A$ in (3.8) have a bounded number of nonzero entries. Usually the stiffness matrix is the more important part, but for ease of presentation we will focus on the mass matrix. In particular, one obtains for the mass matrix
$$(3.10)\qquad m^{(\triangle)}_{ij,kl} = \int_{\triangle} \psi_{ij}(x,y)\, \psi_{kl}(x,y) = 0 \qquad\text{if } |i+j-k-l| > 4 \text{ or } |i-k| \notin \{0,2\}.$$
Similar results (with a similar sparsity pattern) can be proved for the stiffness matrix as well as for the 3D case and other applications, see [9] for an overview. Nevertheless, it remains to compute the nonzero entries in optimal arithmetical complexity. The type of integrals is similar in all of the above mentioned applications; a brute-force check of (3.10) is sketched below.
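The sparsity claim (3.10) can be verified by brute force. The following sketch (our own, not the authors' code; it assumes scipy and uses the reference triangle with vertices $(-1,-1)$, $(1,-1)$, $(0,1)$ introduced in the next section) evaluates the interior bubbles (3.9) and tests every pair that is predicted to vanish:

```python
import itertools
import numpy as np
from scipy.special import eval_jacobi
from scipy.integrate import dblquad

def p_hat0(n, x):
    """Integrated Legendre polynomial via the closed form (2.22), n >= 2."""
    return (x**2 - 1) / (2 * (n - 1)) * eval_jacobi(n - 2, 1, 1, x)

def p_hat(n, a, x):
    """Integrated Jacobi polynomial \\hat p_n^{(a,0)} via (2.20)/(2.21), a > 1."""
    return (1.0 + x) / n * eval_jacobi(n - 1, a - 1, 1, x)

def psi(i, j, x, y):
    """Interior bubble (3.9) on the triangle (-1,-1), (1,-1), (0,1)."""
    return p_hat0(i, 2 * x / (1 - y)) * ((1 - y) / 2)**i * p_hat(j, 2 * i - 1, y)

def mass_entry(i, j, k, l):
    f = lambda x, y: psi(i, j, x, y) * psi(k, l, x, y)  # f(inner, outer)
    return dblquad(f, -1, 1, lambda y: (y - 1) / 2, lambda y: (1 - y) / 2)[0]

pairs = [(i, j) for i in (2, 3, 4) for j in range(1, 7)]
for (i, j), (k, l) in itertools.combinations(pairs, 2):
    if abs(i + j - k - l) > 4 or abs(i - k) not in (0, 2):
        assert abs(mass_entry(i, j, k, l)) < 1e-8, (i, j, k, l)
```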
For simplicity, it is explained for $m_{ij,kl}$ in (3.10) on the reference element $\hat\triangle$ with the vertices $V_1 = (-1,-1)$, $V_2 = (1,-1)$ and $V_3 = (0,1)$. Then one has to compute
$$\hat m_{ij,kl} = \int_{-1}^{1} \int_{\frac{y-1}{2}}^{\frac{1-y}{2}} \hat p_i^{(0,0)}\!\left(\tfrac{2x}{1-y}\right) \hat p_k^{(0,0)}\!\left(\tfrac{2x}{1-y}\right) \left(\tfrac{1-y}{2}\right)^{i+k} \hat p_j^{(2i-1,0)}(y)\, \hat p_l^{(2k-1,0)}(y)\, dx\, dy$$
$$(3.11)\qquad \overset{z=\frac{2x}{1-y}}{=}\ \underbrace{\int_{-1}^{1} \hat p_i^{(0,0)}(z)\, \hat p_k^{(0,0)}(z)\, dz}_{=:\ I^{(0,0,0,0)}_{i,k}}\ \cdot\ \int_{-1}^{1} \left(\tfrac{1-y}{2}\right)^{i+k+1} \hat p_j^{(2i-1,0)}(y)\, \hat p_l^{(2k-1,0)}(y)\, dy.$$
The integral $I^{(0,0,0,0)}_{i,k}$ is only nonzero if $|i-k| \in \{0,2\}$. Inserting this into the second integral, one has to compute
$$(3.12)\qquad I^{(i+k+1,\,0,\,2i-1,\,2k-1)}_{j,l} = \int_{-1}^{1} \left(\tfrac{1-y}{2}\right)^{i+k+1} \hat p_j^{(2i-1,0)}(y)\, \hat p_l^{(2k-1,0)}(y)\, dy$$
with $j, l \in \mathbb{N}$ and $|i-k| \le 2$. For the other applications, the weights in (3.12) differ slightly. Our aim is to develop recursion formulas for (3.12) in a more general setting.

4. Recursion identities.

4.1. Application: building a bridge between special functions and FEM. The FEM basis functions are defined on a triangle. By using the so-called Duffy transformation one can transform them onto the square $(-1,1) \times (-1,1)$, see e.g. [18], [28] for details. Thus the problem in the FEM application boils down to one-dimensional integrals over integrated Jacobi polynomials. A generalized version of one of these integrals is, by (2.21) and (2.22), equivalent to
$$I_{n,m} := \int_{-1}^{1} (1-x)^{\mu}(1+x)^{\nu}\, P_n^{(\alpha,\beta)}(x)\, P_m^{(\rho,\delta)}(x)\, dx,$$
with $n+\alpha+\beta \ge 0$ and $m+\rho+\delta \ge 0$. We are interested in finding recursion formulas for $I_{n,m}$ with respect to $n$ and $m$. Our main result for the numerical application of this paper is the following theorem.

THEOREM 4.1. The exact value $I_{n,m} := \int_{-1}^{1} \left(\frac{1-x}{2}\right)^{\mu} P_n^{(\alpha,0)}(x)\, P_m^{(\rho,0)}(x)\, dx$ can be calculated recursively by
$$(4.1)\qquad (m+n+\mu+1)\, I_{n,m} = (m+n+\alpha+\rho-\mu-1)\, I_{n-1,m-1} + (n-m+\alpha-\mu-1)\, I_{n-1,m} + (m-n+\rho-\mu-1)\, I_{n,m-1},$$
where $\alpha, \rho > -1$ and $\mu \ge 0$. Moreover, the exact value $I^{(2)}_{n,m} := \int_{-1}^{1} \left(\frac{1-x}{2}\right)^{\mu} \hat p_n^{(\alpha,0)}(x)\, \hat p_m^{(\rho,0)}(x)\, dx$ can be calculated recursively by
$$(4.2)\qquad (m+n+\mu+1)\, I^{(2)}_{n,m} = (m+n+\alpha+\rho-\mu-5)\, I^{(2)}_{n-1,m-1} + (n-m+\alpha-\mu-3)\, I^{(2)}_{n-1,m} + (m-n+\rho-\mu-3)\, I^{(2)}_{n,m-1}.$$
Proof. The results are an immediate consequence of theorem 4.12 and corollary 4.13.

Both recursion formulas in theorem 4.1 are special cases of the more general theorem 4.12, which will be derived and proven analytically in section 4.4. The right picture in figure 4.1 shows a schematic representation of the recursion formulas, whereas the left picture displays the typical nonzero pattern of the element matrix.

[Fig. 4.1: Nonzero pattern of the two-dimensional mass matrix on a triangle (left); schematic representation of the recursions (4.2), (4.1) for the (j, l) entry (right).]

We now give a short example of how these recursion formulas can be applied in the finite element context.

EXAMPLE 1. The recursion formula for the entry $I_{n,m}$ can be applied directly to the mass matrix for the $H^1$ basis functions. On the triangle $\hat T$ with vertices $(-1,-1)$, $(1,-1)$ and $(0,1)$, the interior basis functions (3.9) take the form
$$\Psi_{i,j} = \hat p_i^{(0,0)}\!\left(\frac{2x}{1-y}\right) \left(\frac{1-y}{2}\right)^{i} \hat p_j^{(2i-1,0)}(y).$$
This results in the two one-dimensional integrals seen above in (3.11), and hence in the sparsity pattern. Since each recursion formula needs starting values, we compute the lowest order integrals in each block of the mass matrix (compare the left picture in figure 4.1) by a low order quadrature. An extension to the edge functions is straightforward, since those are just special cases of the interior functions.
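A direct implementation of this procedure might look as follows (our own sketch, assuming scipy; the names I_quad and I_table are ours). The starting values are computed by quadrature exactly as described above, and every further entry costs $O(1)$ operations by (4.1):

```python
import numpy as np
from scipy.special import eval_jacobi
from scipy.integrate import fixed_quad

def I_quad(n, m, al, ro, mu):
    """Reference value of I_{n,m} by exact Gauss-Legendre quadrature."""
    f = lambda x: ((1 - x) / 2)**mu * eval_jacobi(n, al, 0, x) * eval_jacobi(m, ro, 0, x)
    return fixed_quad(f, -1.0, 1.0, n=n + m + mu + 2)[0]

def I_table(N, al, ro, mu):
    """Fill the whole (N+1)x(N+1) table of I_{n,m} with the recursion (4.1)."""
    I = np.empty((N + 1, N + 1))
    I[0, :] = [I_quad(0, m, al, ro, mu) for m in range(N + 1)]  # starting values
    I[:, 0] = [I_quad(n, 0, al, ro, mu) for n in range(N + 1)]  # by quadrature
    for n in range(1, N + 1):
        for m in range(1, N + 1):  # three-point recursion (4.1), O(1) per entry
            I[n, m] = ((m + n + al + ro - mu - 1) * I[n - 1, m - 1]
                       + (n - m + al - mu - 1) * I[n - 1, m]
                       + (m - n + ro - mu - 1) * I[n, m - 1]) / (m + n + mu + 1)
    return I

al, ro, mu = 3.0, 3.0, 2
I = I_table(6, al, ro, mu)
for n in range(7):
    for m in range(7):
        assert np.isclose(I[n, m], I_quad(n, m, al, ro, mu))
```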
The remainder of this paper is dedicated to the proof of the generalized version of theorem 4.1. We start by rewriting the exact integral value. Using the series representation (2.2) of the Jacobi polynomials, one can write the exact value of the integral as
$$(4.3)\qquad I_{n,m} = \frac{(\alpha+1)_n (\rho+1)_m}{n!\, m!} \sum_{l=0}^{n} \sum_{r=0}^{m} \frac{(-n)_l (n+\alpha+\beta+1)_l}{(\alpha+1)_l\, l!\, 2^l}\, \frac{(-m)_r (m+\rho+\delta+1)_r}{(\rho+1)_r\, r!\, 2^r} \int_{-1}^{1} (1-x)^{\mu+r+l}(1+x)^{\nu}\, dx.$$
The integral in the double sum is exactly Euler's Beta integral, i.e.
$$(4.4)\qquad \int_{-1}^{1} (1-x)^{\mu+r+l}(1+x)^{\nu}\, dx = 2^{\mu+\nu+r+l+1}\, B(\mu+r+l+1,\, \nu+1),$$
where $B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$. Thus $I_{n,m}$ can be written as
$$(4.5)\qquad I_{n,m} = 2^{\mu+\nu+1}\, \frac{(\alpha+1)_n (\rho+1)_m}{n!\, m!} \sum_{l=0}^{n} \sum_{r=0}^{m} Z_{r,l} \quad\text{with}\quad Z_{r,l} := \frac{(-n)_l (n+\alpha+\beta+1)_l}{(\alpha+1)_l\, l!}\, \frac{(-m)_r (m+\rho+\delta+1)_r}{(\rho+1)_r\, r!}\, B(\mu+r+l+1,\, \nu+1).$$
The Beta function can be expressed using Pochhammer symbols as well, i.e.
$$B(\mu+r+l+1,\, \nu+1) = \frac{\Gamma(\mu+r+l+1)\Gamma(\nu+1)}{\Gamma(\mu+\nu+r+l+2)} = \frac{\Gamma(\nu+1)\Gamma(\mu+1)}{\Gamma(\mu+\nu+2)}\, \frac{(\mu+1)_{r+l}}{(\mu+\nu+2)_{r+l}} = B(\mu+1,\, \nu+1)\, \frac{(\mu+1)_{r+l}}{(\mu+\nu+2)_{r+l}}.$$

4.2. Recurrence relation for the special case ν = β = δ = 0. If we set $\nu = \beta = \delta = 0$, one can rewrite (4.5) as a generalized hypergeometric series. First, rewrite the Beta function using Pochhammer symbols, then split $r+l$ using combinatorial arguments and use the Pfaff-Saalschütz theorem (2.5) to reduce the double sum in (4.5). After some more combinatorial arguments,
$$(4.6)\qquad I_{n,m} = \int_{-1}^{1} (1-x)^{\mu}\, P_n^{(\alpha,0)}(x)\, P_m^{(\rho,0)}(x)\, dx = \mathrm{const.} \cdot {}_4F_3\left(\begin{matrix} -m,\ \rho+m+1,\ 1,\ 1+\mu-\alpha \\ \rho+1,\ n+2,\ 1+\mu-\alpha-n \end{matrix};\, 1\right)$$
with $\mu \le \alpha$ or $\alpha \in \mathbb{R} \setminus \mathbb{N}$ follows; see [27] or [19] for more details. Starting from this representation, we now prove a recursion originally obtained with the symbolic package Guess by Manuel Kauers [29].

THEOREM 4.2. Let $\alpha = \rho$ and $\mu = \beta = \delta = \nu = 0$. Then the recursion relation
$$(4.7)\qquad c_0\, I_{n+1,m+1} + c_1\, I_{n+1,m} + c_2\, I_{n,m+1} + c_3\, I_{n,m} = 0$$
with
$$c_0 = m+n+3, \qquad c_1 = -(\alpha+m-n-1), \qquad c_2 = -(\alpha-m+n-1), \qquad c_3 = -(2\alpha+m+n+1)$$
holds.

Proof. We start by deriving a contiguous relation of the hypergeometric series in (4.6). Writing
$${}_4F_3\left(\begin{matrix} -m,\ \alpha+m+1,\ 1,\ 1-\alpha \\ \alpha+1,\ n+2,\ 1-\alpha-n \end{matrix};\, 1\right) = \sum_{\kappa=0}^{\infty} \underbrace{\frac{(-m)_\kappa (m+\alpha+1)_\kappa (1)_\kappa (1-\alpha)_\kappa}{(\alpha+1)_\kappa (n+2)_\kappa (1-\alpha-n)_\kappa\, \kappa!}}_{=:\ \varphi_\kappa},$$
the contiguous functions can be written as
$${}_4F_3\left(\begin{matrix} -m-1,\ \alpha+m+2,\ 1,\ 1-\alpha \\ \alpha+1,\ n+2,\ 1-\alpha-n \end{matrix};\, 1\right) = \sum_{\kappa=0}^{\infty} \frac{(-m-1)(\alpha+m+1+\kappa)}{(\kappa-m-1)(\alpha+m+1)}\, \varphi_\kappa$$
and
$${}_4F_3\left(\begin{matrix} -m,\ \alpha+m+1,\ 1,\ 1-\alpha \\ \alpha+1,\ n+3,\ -\alpha-n \end{matrix};\, 1\right) = \sum_{\kappa=0}^{\infty} \frac{(2+n)(\kappa-\alpha-n)}{(2+n+\kappa)(-n-\alpha)}\, \varphi_\kappa.$$
Hence, in order to find a recursion of the form (4.7),
$$0 = \sum_{\kappa=0}^{\infty} \left[-c_0\, \frac{(m+\alpha+1+\kappa)(\kappa-\alpha-n)}{(2+n+\kappa)(\kappa-m-1)} - c_1\, \frac{m+\alpha+1+\kappa}{\kappa-m-1} - c_2\, \frac{\kappa-\alpha-n}{2+n+\kappa} + c_3\right] \varphi_\kappa$$
needs to hold. Putting the fractions over a common denominator reduces the condition to
$$-c_0(m+1+\alpha+\kappa)(\kappa-\alpha-n) - c_1(m+1+\alpha+\kappa)(2+n+\kappa) - c_2(\kappa-\alpha-n)(\kappa-m-1) + c_3(\kappa-m-1)(n+2+\kappa) = 0.$$
To find the coefficients $c_i$, $i = 0, \dots, 3$, we view this as a polynomial in $\kappa$ and equate its coefficients to zero, which leads to the underdetermined linear system
$$-c_0 - c_1 - c_2 + c_3 = 0,$$
$$-c_0(m+1-n) - c_1(-\alpha-n-m-1) - c_2(m+n+\alpha+3) + c_3(n+1-m) = 0,$$
$$-c_0(m+1+\alpha)(-\alpha-n) - c_1(-\alpha-n)(-m-1) - c_2(m+1+\alpha)(n+2) + c_3(-m-1)(n+2) = 0.$$
Choosing $c_0 = 1$ and solving for the remaining coefficients leads to
$$c_1 = \frac{-(\alpha+m-n-1)}{m+n+3}, \qquad c_2 = \frac{-(\alpha-m+n-1)}{m+n+3}, \qquad c_3 = \frac{-(2\alpha+m+n+1)}{m+n+3}.$$
Rescaling $c_0$ leads to the proposed recurrence relation.

Further simplification of the generalized hypergeometric series in (4.6) is not trivial. There is a summation formula for a balanced ${}_4F_3$ in [44], [32], but it cannot be applied here since the coefficients of the series do not match.
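The recursion of theorem 4.2 is easy to spot-check numerically. The following sketch (our own, assuming scipy) validates (4.7) against quadrature values for a non-integer α:

```python
import numpy as np
from scipy.special import eval_jacobi
from scipy.integrate import fixed_quad

def I(n, m, al):
    """I_{n,m} for mu = beta = delta = nu = 0 and rho = alpha, by quadrature."""
    f = lambda x: eval_jacobi(n, al, 0, x) * eval_jacobi(m, al, 0, x)
    return fixed_quad(f, -1.0, 1.0, n=n + m + 2)[0]

al = 1.7
for n in range(4):
    for m in range(4):
        res = ((m + n + 3) * I(n + 1, m + 1, al)
               - (al + m - n - 1) * I(n + 1, m, al)
               - (al - m + n - 1) * I(n, m + 1, al)
               - (2 * al + m + n + 1) * I(n, m, al))
        assert abs(res) < 1e-10  # (4.7) holds up to rounding
```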
A summation using Whipple's transformation and Dougall's summation (see e.g. [4]) works only in the case $n = m$; otherwise the resulting transformed series is not well-poised. Moreover, the more general case with $\mu, \nu, \beta$ and $\delta$ arbitrary cannot be represented by a ${}_4F_3$, since neither of the sums in (4.5) is summable by the Pfaff-Saalschütz theorem. To be precise, those general series are $(\nu+1)$-balanced. We therefore use a more general approach.

4.3. Kampé de Fériet series. The concept of hypergeometric series can be extended to the multivariate case. Examples are the Appell series [5], the Kampé de Fériet series [5] and the Lauricella series.

DEFINITION 4.3. For $p_1, p_2, p_3, q_1, q_2, q_3 \in \mathbb{N}$ and arbitrary coefficients $(k_1, \dots, k_{p_1})$, $(a_1, \dots, a_{p_2})$, $(b_1, \dots, b_{p_3})$, $(l_1, \dots, l_{q_1})$, $(c_1, \dots, c_{q_2})$, $(d_1, \dots, d_{q_3})$, the series
$$F^{p_1;\,p_2;\,p_3}_{q_1;\,q_2;\,q_3} = \sum_{n,m=0}^{\infty} \frac{\prod_{i=1}^{p_1} (k_i)_{n+m}\, \prod_{i=1}^{p_2} (a_i)_n\, \prod_{i=1}^{p_3} (b_i)_m}{\prod_{i=1}^{q_1} (l_i)_{n+m}\, \prod_{i=1}^{q_2} (c_i)_n\, \prod_{i=1}^{q_3} (d_i)_m}\, \frac{x^n y^m}{n!\, m!}$$
is called a Kampé de Fériet series.

The notation is due to Burchnall and Chaundy (see [12], [13]). More information, in particular on the convergence theory of such series, can be found e.g. in [21]. We can now write $I_{n,m}$ as a generalized Kampé de Fériet series, i.e.
$$(4.8)\qquad F := I_{n,m} = 2^{\mu+\nu+1}\, \frac{(\alpha+1)_n (\rho+1)_m\, B(\nu+1,\mu+1)}{n!\, m!}\; F^{1;2;2}_{1;1;1}\!\left(\begin{matrix} \mu+1;\ -n,\ n+\alpha+\beta+1;\ -m,\ m+\rho+\delta+1 \\ \mu+\nu+2;\ \alpha+1;\ \rho+1 \end{matrix};\ 1;\ 1\right),$$
where $B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$ is the usual Beta function as above. $F$ converges, since it terminates due to the parameters $-n$ and $-m$. Furthermore, we omit the indices of $F$ to keep the notation as simple as possible. We denote the forward shift of parameters by one analogously to before, e.g.
$$F(\alpha+) = 2^{\mu+\nu+1}\, \frac{(\alpha+2)_n (\rho+1)_m\, B(\nu+1,\mu+1)}{n!\, m!}\; F^{1;2;2}_{1;1;1}\!\left(\begin{matrix} \mu+1;\ -n,\ n+\alpha+\beta+2;\ -m,\ m+\rho+\delta+1 \\ \mu+\nu+2;\ \alpha+2;\ \rho+1 \end{matrix};\ 1;\ 1\right),$$
and analogous formulas hold for $F(n+)$, $F(\mu+)$, $F(\nu+)$ and so on, where all occurrences of the shifted parameter, also in the prefactor, are raised by one.
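Because the double series terminates, (4.8) can be evaluated directly. The following sketch (our own; rf and kdf_I are our helper names, and scipy is assumed) compares the Kampé de Fériet representation with quadrature of the defining integral:

```python
import numpy as np
from scipy.special import beta as B, eval_jacobi, gamma
from scipy.integrate import fixed_quad

def rf(a, k):
    """Rising factorial (a)_k."""
    return float(np.prod([a + i for i in range(k)]))

def kdf_I(n, m, al, be, ro, de, mu, nu):
    """I_{n,m} via (4.8); the double series terminates at k = n, l = m."""
    s = sum(rf(mu + 1, k + l) * rf(-n, k) * rf(n + al + be + 1, k)
            * rf(-m, l) * rf(m + ro + de + 1, l)
            / (rf(mu + nu + 2, k + l) * rf(al + 1, k) * rf(ro + 1, l)
               * gamma(k + 1) * gamma(l + 1))
            for k in range(n + 1) for l in range(m + 1))
    return (2**(mu + nu + 1) * rf(al + 1, n) * rf(ro + 1, m) * B(nu + 1, mu + 1)
            / (gamma(n + 1) * gamma(m + 1)) * s)

n, m, al, be, ro, de, mu, nu = 3, 4, 1.5, 0.5, 2.0, 1.0, 2, 1
f = lambda x: ((1 - x)**mu * (1 + x)**nu
               * eval_jacobi(n, al, be, x) * eval_jacobi(m, ro, de, x))
assert np.isclose(kdf_I(n, m, al, be, ro, de, mu, nu), fixed_quad(f, -1, 1, n=12)[0])
```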
Analogously to lemma 2.7, the relations between Jacobi polynomials translate into contiguous relations for $F$:
$$(4.9)\qquad (\alpha+\beta+n)F = (\beta+n)F(\beta-) + (\alpha+n)F(\alpha-),$$
$$(4.10)\qquad -\tfrac{1}{2}(2+\alpha+\beta+2n)F(\alpha+,\mu+) = (n+1)F(n+) - (1+\alpha+n)F,$$
$$\tfrac{1}{2}(2+\alpha+\beta+2n)F(\beta+,\nu+) = (n+1)F(n+) + (1+\beta+n)F,$$
$$(4.11)\qquad (\alpha+\beta+2n)F(\beta-) = (\alpha+\beta+n)F + (\alpha+n)F(n-),$$
$$(4.12)\qquad (\alpha+\beta+2n)F(\alpha-) = (\alpha+\beta+n)F - (\beta+n)F(n-),$$
$$2F = F(\nu+,\beta+) + F(\mu+,\alpha+), \qquad F(n-) = F(\beta-) - F(\alpha-),$$
and seven analogous recurrence formulas in $m$, $\rho$ and $\delta$, of which
$$(4.13)\qquad (\rho+\delta+2m)F(\delta-) = (\rho+\delta+m)F + (\rho+m)F(m-)$$
is one. Furthermore, we can derive more recurrence relations by linear combinations of the above.

COROLLARY 4.5. The following recurrence formula holds:
$$(4.14)\qquad -2(1+\alpha+n)F - (1+\beta+n)F(\alpha+) + (2+\alpha+\beta+2n)F(\alpha+,\mu+) - (\alpha+\beta+1)F(n+) + (2+\alpha+\beta+n)F(n+,\alpha+) = 0.$$
Proof. The equation is a linear combination of (4.10) and (4.12). Start by transforming (4.10) and (4.12) into
$$0 = (2+\alpha+\beta+2n)F(\alpha+,\mu+) - 2(n+1)F(n+) - 2(1+\alpha+n)F,$$
$$0 = -(\alpha+\beta+2n+3)F(n+) - (\beta+n+1)F(\alpha+) + (\alpha+\beta+n+2)F(n+,\alpha+).$$
Adding these equations leads to (4.14).

One important, but rather trivial relation is given by
$$(4.15)\qquad 2F = F(\nu+) + F(\mu+),$$
which follows from $2P_n^{(\alpha,\beta)} = (1+x)P_n^{(\alpha,\beta)} + (1-x)P_n^{(\alpha,\beta)}$. The following relations will be proven in the appendix using a generalized form of (4.8); setting $x = y = 1$ in lemmas A.2 and A.3 yields the following corollary.

COROLLARY 4.6.
$$(4.16)\qquad (n+m+\mu+\nu+4)F(n+,m+,\nu+) = (\alpha+n+1)F(m+,\beta+,\nu+) + (\rho+m+1)F(n+,\delta+,\nu+) + 2(\nu+1)F(n+,m+),$$
$$(4.17)\qquad (n+\alpha+\beta+m+\rho+\delta-\mu-\nu+1)F = (n+\alpha+\beta+1)F(\beta+) + (m+\rho+\delta+1)F(\delta+) + 2\nu F(\nu-).$$

Since the weights in the Jacobi polynomials are interchangeable, see (2.10), the following two relations can be derived as well; see lemmas A.5 and A.6 with $x = y = 1$.

COROLLARY 4.7.
$$(4.18)\qquad (n+m+\mu+\nu+4)F(n+,m+,\mu+) = -(\beta+n+1)F(m+,\alpha+,\mu+) - (\delta+m+1)F(n+,\rho+,\mu+) + 2(\mu+1)F(n+,m+),$$
$$(4.19)\qquad (n+\alpha+\beta+m+\rho+\delta-\mu-\nu+1)F = (n+\alpha+\beta+1)F(\alpha+) + (m+\rho+\delta+1)F(\rho+) + 2\mu F(\mu-).$$

5-point recurrence relations. There are some known starlike recurrence relations, see [40]. They can be derived in this context as follows.

COROLLARY 4.8. We have the two mixed recurrence relations
$$(4.20)\qquad (2m+\rho+\delta+1)\bigl((n+1)F(n+,\alpha-) - (\alpha+n)F(\alpha-)\bigr) = (2n+\alpha+\beta+1)\bigl((m+1)F(m+,\rho-) - (\rho+m)F(\rho-)\bigr),$$
$$(4.21)\qquad (2m+\rho+\delta+1)\bigl((n+1)F(n+,\beta-) - (\beta+n)F(\beta-)\bigr) = (2n+\alpha+\beta+1)\bigl((m+1)F(m+,\delta-) - (\delta+m)F(\delta-)\bigr).$$
Proof. Take (4.10) and replace $\alpha$ by $\alpha-1$ to derive
$$F(\mu+) = \frac{-2}{2n+\alpha+\beta+1}\bigl((n+1)F(n+,\alpha-) - (\alpha+n)F(\alpha-)\bigr),$$
and $\rho$ by $\rho-1$ to derive
$$F(\mu+) = \frac{-2}{2m+\rho+\delta+1}\bigl((m+1)F(m+,\rho-) - (\rho+m)F(\rho-)\bigr).$$
Setting both right-hand sides equal yields (4.20). The second mixed relation follows analogously from the companion relation for $F(\beta+,\nu+)$.

The mixed relations (4.20) and (4.21) yield 5-point recurrence relations with support $(m,n)$, $(m-1,n)$, $(m+1,n)$, $(m,n-1)$, $(m,n+1)$, see also [40].

THEOREM 4.9.
$$(4.22)\qquad (2m+\rho+\delta)_3 \bigl[(n+1)\bigl((n+\alpha+\beta+1)(2n+\alpha+\beta)F(n+) + (n+\alpha+1)F\bigr) + (\beta+n)\bigl((n+\alpha+\beta)F + (n+\alpha)F(n-)\bigr)\bigr]$$
$$= (2n+\alpha+\beta)_3 \bigl[(m+1)\bigl((m+\rho+\delta+1)(2m+\rho+\delta)F(m+) + (m+\rho+1)F\bigr) + (\delta+m)\bigl((m+\rho+\delta)F + (m+\rho)F(m-)\bigr)\bigr],$$
$$(4.23)\qquad (2m+\rho+\delta)_3 \bigl[(n+1)\bigl((n+\alpha+\beta+1)(2n+\alpha+\beta)F(n+) - (n+\beta+1)F\bigr) - (\alpha+n)\bigl((n+\alpha+\beta)F - (n+\beta)F(n-)\bigr)\bigr]$$
$$= (2n+\alpha+\beta)_3 \bigl[(m+1)\bigl((m+\rho+\delta+1)(2m+\rho+\delta)F(m+) - (m+\delta+1)F\bigr) - (\rho+m)\bigl((m+\rho+\delta)F - (m+\delta)F(m-)\bigr)\bigr],$$
and
$$(2m+\rho+\delta)_3 \bigl[(n+1)\bigl(2(n+\alpha+\beta+1)(2n+\alpha+\beta)F(n+) + (\alpha-\beta)F\bigr) + (\beta-\alpha)(n+\alpha+\beta)F + (n+\alpha)(n+\beta)F(n-)\bigr]$$
$$= (2n+\alpha+\beta)_3 \bigl[(m+1)\bigl(2(m+\rho+\delta+1)(2m+\rho+\delta)F(m+) + (\rho-\delta)F\bigr) + (\delta-\rho)(m+\rho+\delta)F + (m+\rho)(m+\delta)F(m-)\bigr].$$
Proof. Take the first mixed relation (4.20) and replace all series using (4.12); this yields the first equation. The second equation follows by using (4.21) with (4.11). Lastly, the third equation can be derived as a linear combination of (4.22) and (4.23). Alternatively, one can use the three-term recursion (2.12) to prove the same result as in [40].

4.4. Recurrence relation. Multiple recurrence relations similar to (4.7) can be proven.

LEMMA 4.10. Let $F = I_{n,m}$, where $I_{n,m}$ is as in (4.8). Then the following recurrence relation holds:
$$(4.24)\qquad (n+\alpha+\beta+1)(m+\rho+\delta+1)\bigl[(n+m+\mu+\nu+4)F(n+,m+,\nu+) - 2(\nu+1)F(n+,m+)\bigr]$$
$$= (\alpha+n+1)(m+\rho+\delta+1)\bigl[(n+\alpha+\beta-m-\mu-\nu-2)F(m+,\nu+) + 2(\nu+1)F(m+)\bigr]$$
$$+ (\rho+m+1)(n+\alpha+\beta+1)\bigl[(-n+m+\rho+\delta-\mu-\nu-2)F(n+,\nu+) + 2(\nu+1)F(n+)\bigr]$$
$$+ (\rho+m+1)(\alpha+n+1)\bigl[(n+\alpha+\beta+m+\rho+\delta-\mu-\nu)F(\nu+) + 2(\nu+1)F\bigr].$$
Proof. Start with equation (4.16), i.e.
$$(n+m+\mu+\nu+4)F(n+,m+,\nu+) - 2(\nu+1)F(n+,m+) = (\alpha+n+1)F(m+,\beta+,\nu+) + (\rho+m+1)F(n+,\delta+,\nu+).$$
Replace both terms of the right-hand side using shifted versions of equation (4.17), i.e.
$$(4.25)\qquad \frac{n+\alpha+1}{n+\alpha+\beta+1}\,(n+\alpha+\beta+1)F(m+,\beta+,\nu+) = \frac{n+\alpha+1}{n+\alpha+\beta+1}\bigl[(n+\alpha+\beta+m+\rho+\delta-\mu-\nu+1)F(m+,\nu+) - (m+\rho+\delta+2)F(m+,\delta+,\nu+) + 2(\nu+1)F(m+)\bigr]$$
and
$$(4.26)\qquad \frac{m+\rho+1}{m+\rho+\delta+1}\,(m+\rho+\delta+1)F(n+,\delta+,\nu+) = \frac{m+\rho+1}{m+\rho+\delta+1}\bigl[(n+\alpha+\beta+m+\rho+\delta-\mu-\nu+1)F(n+,\nu+) - (n+\alpha+\beta+2)F(n+,\beta+,\nu+) + 2(\nu+1)F(n+)\bigr].$$
Moreover, use the shifted relation (4.11) for the middle part of the last two equations, i.e.
$$(4.27)\qquad (m+\rho+\delta+2)F(m+,\delta+,\nu+) = (2m+\rho+\delta+3)F(m+,\nu+) - (m+\rho+1)F(\delta+,\nu+),$$
$$(4.28)\qquad (n+\alpha+\beta+2)F(n+,\beta+,\nu+) = (2n+\alpha+\beta+3)F(n+,\nu+) - (n+\alpha+1)F(\beta+,\nu+).$$
Lastly, replace the remaining terms by a shifted version of (4.17):
$$\frac{n+\alpha+1}{n+\alpha+\beta+1}\,(m+\rho+1)F(\delta+,\nu+) + \frac{m+\rho+1}{m+\rho+\delta+1}\,(n+\alpha+1)F(\beta+,\nu+)$$
$$= \frac{(n+\alpha+1)(m+\rho+1)}{(n+\alpha+\beta+1)(m+\rho+\delta+1)}\bigl((n+\alpha+\beta+1)F(\beta+,\nu+) + (m+\rho+\delta+1)F(\delta+,\nu+)\bigr)$$
$$= \frac{(n+\alpha+1)(m+\rho+1)}{(n+\alpha+\beta+1)(m+\rho+\delta+1)}\bigl((n+\alpha+\beta+m+\rho+\delta-\mu-\nu)F(\nu+) + 2(\nu+1)F\bigr).$$
The claim follows by combining the above with the remaining terms of (4.25), (4.26), (4.27) and (4.28).

Since $\alpha$ and $\beta$ or $\rho$ and $\delta$ are interchangeable, the following lemma can be proven using (4.18) and (4.19) instead of (4.16) and (4.17).

LEMMA 4.11.
$$(4.29)\qquad (n+\alpha+\beta+1)(m+\rho+\delta+1)\bigl[(n+m+\mu+\nu+4)F(n+,m+,\mu+) - 2(\mu+1)F(n+,m+)\bigr]$$
$$= -(\beta+n+1)(m+\rho+\delta+1)\bigl[(n+\alpha+\beta-m-\mu-\nu-2)F(m+,\mu+) + 2(\mu+1)F(m+)\bigr]$$
$$- (\delta+m+1)(n+\alpha+\beta+1)\bigl[(-n+m+\rho+\delta-\mu-\nu-2)F(n+,\mu+) + 2(\mu+1)F(n+)\bigr]$$
$$+ (\delta+m+1)(\beta+n+1)\bigl[(n+\alpha+\beta+m+\rho+\delta-\mu-\nu)F(\mu+) + 2(\mu+1)F\bigr].$$

Both of these recursion formulas have the drawback that the terms with $\nu+1$ or $\mu+1$ vanish only for $\nu = -1$ or $\mu = -1$, which correspond to the special cases $(1+x)^0$ or $(1-x)^0$. If the steps of the proof are slightly adjusted, a recursion formula that applies to more cases can be proven. The following theorem is the main result of this paper.

THEOREM 4.12. Let $F = I_{n,m}$, where $I_{n,m}$ is as in (4.8). Then the following recurrence relation holds:
$$(4.30)\qquad (n+1)(m+1)\bigl[(n+m+\mu+\nu+4)F(n+,m+,\nu+) - 2(\nu+1-\beta-\delta)F(n+,m+)\bigr]$$
$$= (n+\beta+1)(m+1)\bigl[(n+\alpha+\beta-m-\mu-\nu-2)F(m+,\nu+) + 2(\nu+1-\beta-\delta)F(m+)\bigr]$$
$$+ (n+1)(m+\delta+1)\bigl[(-n+m+\rho+\delta-\mu-\nu-2)F(n+,\nu+) + 2(\nu+1-\beta-\delta)F(n+)\bigr]$$
$$+ (n+\beta+1)(m+\delta+1)\bigl[(n+\alpha+\beta+m+\rho+\delta-\mu-\nu)F(\nu+) + 2(\nu+1-\beta-\delta)F\bigr].$$
Proof. Again start with recursion (4.16), i.e.
$$(n+m+\mu+\nu+4)F(n+,m+,\nu+) - 2(\nu+1)F(n+,m+) = (\alpha+n+1)F(m+,\beta+,\nu+) + (\rho+m+1)F(n+,\delta+,\nu+),$$
now add $2(\beta+\delta)F(n+,m+)$ to both sides and multiply both sides by the factor $(n+1)(m+1)$. Thus
$$(n+1)(m+1)\bigl[(n+m+\mu+\nu+4)F(n+,m+,\nu+) - 2(\nu+1-\beta-\delta)F(n+,m+)\bigr]$$
$$= (n+1)(m+1)\bigl[(\alpha+n+1)F(m+,\beta+,\nu+) + (\rho+m+1)F(n+,\delta+,\nu+) + 2(\beta+\delta)F(n+,m+)\bigr] =: \mathrm{RHS}.$$
Instead of multiplying by 1, as in the proof of (4.24), we add a zero to expand the RHS. Hence
$$\mathrm{RHS} = (n+1)(m+1)\bigl[(n+\alpha+\beta+1)F(m+,\beta+,\nu+) + (m+\rho+\delta+1)F(n+,\delta+,\nu+) - \beta F(m+,\beta+,\nu+) - \delta F(n+,\delta+,\nu+)\bigr] + 2(\beta+\delta)(n+1)(m+1)F(n+,m+)$$
$$= (n+\beta+1)(m+1)(n+\alpha+\beta+1)F(m+,\beta+,\nu+) + (n+1)(m+\delta+1)(m+\rho+\delta+1)F(n+,\delta+,\nu+)$$
$$\quad - \beta(m+1)(n+\alpha+\beta+1)F(m+,\beta+,\nu+) - \delta(n+1)(m+\rho+\delta+1)F(n+,\delta+,\nu+)$$
$$\quad - \beta(n+1)(m+1)F(m+,\beta+,\nu+) - \delta(n+1)(m+1)F(n+,\delta+,\nu+) + 2(n+1)(m+1)(\beta+\delta)F(n+,m+).$$
After adding up the additional terms, recurrence relation (4.10) can be used.
This gives
$$\mathrm{RHS} = (n+\beta+1)(m+1)(n+\alpha+\beta+1)F(m+,\beta+,\nu+) + (n+1)(m+\delta+1)(m+\rho+\delta+1)F(n+,\delta+,\nu+)$$
$$\quad - 2\beta(m+1)\bigl[(n+1)F(n+,m+) + (n+\beta+1)F(m+)\bigr] - 2\delta(n+1)\bigl[(m+1)F(n+,m+) + (m+\delta+1)F(n+)\bigr] + 2(n+1)(m+1)(\beta+\delta)F(n+,m+)$$
$$= (n+\beta+1)(m+1)(n+\alpha+\beta+1)F(m+,\beta+,\nu+) + (n+1)(m+\delta+1)(m+\rho+\delta+1)F(n+,\delta+,\nu+) - 2\beta(m+1)(n+\beta+1)F(m+) - 2\delta(n+1)(m+\delta+1)F(n+).$$
Now use the mixed relation (4.17):
$$\mathrm{RHS} = (n+\beta+1)(m+1)\bigl[(n+\alpha+\beta+m+\rho+\delta-\mu-\nu+1)F(m+,\nu+) + 2(\nu+1)F(m+) + (m+\rho+1)F(\delta+,\nu+)\bigr]$$
$$\quad + (n+1)(m+\delta+1)\bigl[(n+\alpha+\beta+m+\rho+\delta-\mu-\nu+1)F(n+,\nu+) + 2(\nu+1)F(n+) + (n+\alpha+1)F(\beta+,\nu+)\bigr]$$
$$\quad - 2\beta(m+1)(n+\beta+1)F(m+) - 2\delta(n+1)(m+\delta+1)F(n+)$$
$$= (n+\beta+1)(m+1)\bigl[(n+\alpha+\beta+m+\rho+\delta-\mu-\nu+1)F(m+,\nu+) + 2(\nu+1-\beta-\delta)F(m+) + (m+\rho+1)F(\delta+,\nu+)\bigr]$$
$$\quad + (n+1)(m+\delta+1)\bigl[(n+\alpha+\beta+m+\rho+\delta-\mu-\nu+1)F(n+,\nu+) + 2(\nu+1-\beta-\delta)F(n+) + (n+\alpha+1)F(\beta+,\nu+)\bigr]$$
$$\quad + 2\delta(n+\beta+1)(m+1)F(m+) + 2\beta(n+1)(m+\delta+1)F(n+).$$
Consider only the remaining part of the RHS to shorten the notation. Begin by transforming $F(n+)$ and $F(m+)$ back to the forms $F(\beta+,\nu+)$ and $F(\delta+,\nu+)$ by equation (4.10), i.e.
$$(n+\beta+1)(m+1)(m+\rho+1)F(\delta+,\nu+) + (n+1)(m+\delta+1)(n+\alpha+1)F(\beta+,\nu+) + 2\delta(n+\beta+1)(m+1)F(m+) + 2\beta(n+1)(m+\delta+1)F(n+)$$
$$= (n+\beta+1)(m+1)(m+\rho+1)F(\delta+,\nu+) + (n+1)(m+\delta+1)(n+\alpha+1)F(\beta+,\nu+) + \delta(n+\beta+1)(2m+\rho+\delta+2)F(\delta+,\nu+) + \beta(m+\delta+1)(2n+\alpha+\beta+2)F(\beta+,\nu+) + 2(\beta+\delta)(n+\beta+1)(m+\delta+1)F$$
$$= (m+1)(n+\beta+1)(m+\rho+\delta+1)F(\delta+,\nu+) + (n+1)(m+\delta+1)(n+\alpha+\beta+1)F(\beta+,\nu+) + \delta(n+\beta+1)(m+\rho+\delta+1)F(\delta+,\nu+) + \beta(m+\delta+1)(n+\alpha+\beta+1)F(\beta+,\nu+) + 2(\beta+\delta)(n+\beta+1)(m+\delta+1)F$$
$$= (m+\delta+1)(n+\beta+1)(m+\rho+\delta+1)F(\delta+,\nu+) + (n+\beta+1)(m+\delta+1)(n+\alpha+\beta+1)F(\beta+,\nu+) + 2(\beta+\delta)(n+\beta+1)(m+\delta+1)F.$$
In the last step use again (4.17); then the claim follows.

Setting $\nu = -1$, $\beta = \delta = 0$ and $\alpha = \rho$ in (4.30) annihilates the coefficient $(\nu+1-\beta-\delta)$ and thus reduces (4.30) to
$$(n+1)(m+1)(n+m+3)F(n+,m+) = (n+1)(m+1)\bigl[(n+\alpha-m-1)F(m+) + (-n+m+\alpha-1)F(n+) + (n+m+2\alpha+1)F\bigr],$$
which is again theorem 4.2.

COROLLARY 4.13. If $\nu+1 = \beta+\delta$, equation (4.30) reduces to
$$(4.31)\qquad (n+1)(m+1)(n+m+\mu+\nu+4)F(n+,m+,\nu+) = (n+\beta+1)(m+1)(n+\alpha+\beta-m-\mu-\nu-2)F(m+,\nu+)$$
$$+ (n+1)(m+\delta+1)(-n+m+\rho+\delta-\mu-\nu-2)F(n+,\nu+) + (n+\beta+1)(m+\delta+1)(n+\alpha+m+\rho-\mu-1)F(\nu+).$$

REMARK 1. The case $\nu+1 = \beta+\delta$ corresponds to the integrated Jacobi polynomials $\hat p_n^{(\alpha,0)}(x) = \frac{1+x}{n}\, P_{n-1}^{(\alpha-1,1)}(x)$, where $\beta = \delta = 1$, $\nu = 1$, $n \ge 2$, $\alpha > 1$.

5. Numerical aspects. Usually, an arbitrary high order finite element matrix is computed by Gaussian quadrature. An element assembly algorithm can be summarized in three steps:
1. Calculate the 1D quadrature points and weights: $O(n)$, see e.g. [24].
2. Evaluate the Jacobi polynomials or, more generally, the basis functions: $O(n)$ in 1D for $n$ quadrature points, see e.g. [26].
3. Gaussian quadrature and, in higher dimensions, sum factorization: $O(n^{d+1})$, see [39], [37].
The ansatz using the recursion formulas skips the first two steps and replaces the third by a summation which is independent of $n$. For ease of representation we provide a 1D example with the Gram matrix
$$G = \bigl(G_{i,j}\bigr)_{i,j=1}^{n_{\max}} = \bigl(\langle \varphi_i, \varphi_j \rangle\bigr)_{i,j=1}^{n_{\max}}.$$
The resulting sparsity pattern can be seen in figure 5.1(a); a small sketch of the two assembly variants follows below.
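The following Python stand-in (our own sketch; the paper's experiment below uses Matlab) mirrors this setup: quadrature data are tabulated once, then step 3 is carried out entry by entry and compared with the recursive fill from (4.1). Note that in an interpreted language the loop overhead can mask the flop-count advantage, so the timings are only indicative:

```python
import time
import numpy as np
from scipy.special import eval_jacobi

alpha = rho = 3.0
mu, N = 2, 200

# tabulate quadrature nodes/weights and polynomial values once (steps 1 and 2)
x, w = np.polynomial.legendre.leggauss(2 * N + mu)
P = np.array([eval_jacobi(n, alpha, 0, x) for n in range(N + 1)])
wgt = w * ((1 - x) / 2)**mu

t0 = time.perf_counter()
I_quad = np.einsum('k,nk,mk->nm', wgt, P, P)   # step 3: O(N) flops per entry
t1 = time.perf_counter()

I = np.empty((N + 1, N + 1))
I[0, :], I[:, 0] = I_quad[0, :], I_quad[:, 0]  # starting values
for n in range(1, N + 1):
    for m in range(1, N + 1):                  # recursion (4.1): O(1) per entry
        I[n, m] = ((m + n + alpha + rho - mu - 1) * I[n - 1, m - 1]
                   + (n - m + alpha - mu - 1) * I[n - 1, m]
                   + (m - n + rho - mu - 1) * I[n, m - 1]) / (m + n + mu + 1)
t2 = time.perf_counter()
print(f"quadrature: {t1 - t0:.3f}s, recursion: {t2 - t1:.3f}s,"
      f" max diff {np.abs(I - I_quad).max():.2e}")
```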
To compare the standard assembly routine with the recursive version, we assume that the quadrature points, the weights and the basis functions are tabulated, i.e., we only need to perform step 3 of the standard assembly routine. In figure 5.1(b), the runtimes of a Matlab implementation of both assembly routines are compared. We measured the assembly time of the Gram matrix for total polynomial orders $10 < n_{\max} < 160$, integrating each nonzero value $\langle \varphi_i, \varphi_j \rangle$. As expected, the recursive version is a lot faster than Gaussian quadrature, even in 1D. While these recurrence relations hold even for low order polynomials, their main strength is the application to high order or spectral methods, where two integrals in 2D or three in 3D are computed for each nonzero entry. The number of flops per entry remains constant in the recursive case; thus we achieve optimal arithmetical complexity for each integral dimension.

REMARK 2. Polynomial coefficients $q_n(x)$ of degree $n$ in the integrand, i.e. $\langle q_n(x)\varphi_i(x), \varphi_j(x) \rangle$, can be handled as well. Since the Legendre polynomials are a basis of $\mathbb{P}^n$, we can write $q_n(x) = \sum_{i=0}^{n} c_i L_i$. The following two corollaries show that we are able to connect integrals of three orthogonal polynomials, and hence we are able to handle polynomial coefficients in the integrand as well, though, depending on $q_n$, considerably more flops are needed.

COROLLARY 5.1. For given $a \in \mathbb{N}$, the integral
$$I_{n,m,a} = \int_{-1}^{1} \left(\frac{1-x}{2}\right)^{i+j+1} L_a(x)\, P_n^{(2i,0)}(x)\, P_m^{(2j,0)}(x)\, dx$$
satisfies the recursion formula
$$(2+a+i+j+m+n)\, I_{n,m,a} = -(2-a-i-j-m-n)\, I_{n-1,m-1,a-1} - (2+a-i-j-m-n)\, I_{n-1,m-1,a}$$
$$+ (-2+a+i-j-m+n)\, I_{n-1,m,a-1} - (2+a-i+j+m-n)\, I_{n-1,m,a}$$
$$+ (-2+a-i+j+m-n)\, I_{n,m-1,a-1} - (2+a+i-j-m+n)\, I_{n,m-1,a} + (-2+a-i-j-m-n)\, I_{n,m,a-1}.$$

A similar result holds for the integrated Jacobi polynomials.

COROLLARY 5.2. For the integral
$$I_{n,m,a} = \int_{-1}^{1} \left(\frac{1-x}{2}\right)^{i+j+1} L_a(x)\, \hat p_n^{(2i,0)}(x)\, \hat p_m^{(2j,0)}(x)\, dx$$
the following recursion formula holds:
$$(2+a+i+j+m+n)\, I_{n,m,a} = -(6-a-i-j-m-n)\, I_{n-1,m-1,a-1} - (6+a-i-j-m-n)\, I_{n-1,m-1,a}$$
$$+ (-4+a+i-j-m+n)\, I_{n-1,m,a-1} - (4+a-i+j+m-n)\, I_{n-1,m,a}$$
$$+ (-4+a-i+j+m-n)\, I_{n,m-1,a-1} - (4+a+i-j-m+n)\, I_{n,m-1,a} + (-2+a-i-j-m-n)\, I_{n,m,a-1}.$$

6. Conclusion. We were able to derive and prove generalized versions of recurrence formulas which were originally derived by symbolic computation. One of the main complications in high order finite elements is the assembly of the local mass and stiffness matrices. Using these newly computed recurrences, one is able to obtain a better asymptotic complexity than the state-of-the-art method, sum factorization. Furthermore, this approach not only skips the numerical quadrature, but the initialization step as well. From the special functions point of view, the representation as a Kampé de Fériet series is interesting in itself.

7. Acknowledgement. The first author has been supported by the Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453).

Appendix A.

A.1.1. Recurrence formula. One can extend the proofs of the contiguous relations of a hypergeometric series to the general case of the Kampé de Fériet series using the techniques in Rainville [43, Ch. 4]. Let $Z$ be a generalization of $F$ to arbitrary coefficients, i.e., a general Kampé de Fériet series (A.1). It has contiguous functions similar to the six contiguous functions of the Gaussian hypergeometric series; in the following we omit $(f)_m$, $(g)_m$ and $(h)_m$ to simplify the notation. Applying the differential operator $\theta_x = x\frac{\partial}{\partial x}$ leads to
$$(A.2)\qquad (\theta_x + a)Z = a\, Z(a+),$$
and so on. Following one of the calculations in [43], one can derive
$$0 = e\bigl(Z(a-) - Z(b-)\bigr) + c^{-1} d\,(b-a)\, x\, Z(c+, d+, e+),$$
and, setting $b$ to $b+1$,
$$0 = e\bigl(Z(a-, b+) - Z\bigr) + c^{-1} d\,(b+1-a)\, x\, Z(b+, c+, d+, e+).$$
Setting $a = -n$, $b = n+\alpha+\beta+1$, $c = \alpha+1$ and $x = 1$ (and the respective values for $f$, $g$, $h$) leads, after simplification, to recursion formula (4.9).

A.1.2. Proof of mixed relations. The recurrence relations of section 4.4 can be derived for the more general case (A.1) as well. All of the following recursions hold for $x = 1$ and $y = 1$, which follows from the recurrence formulas of the Jacobi polynomials, as we have seen. To prove the mixed relations (4.16) and (4.17), consider
$$(A.3)\qquad F := 2^{\mu+\nu+1}\, \frac{(\alpha+1)_n (\rho+1)_m\, B(\nu+1,\mu+1)}{n!\, m!}\; F^{1;2;2}_{1;1;1}\!\left(\begin{matrix} \mu+1;\ -n,\ n+\alpha+\beta+1;\ -m,\ m+\rho+\delta+1 \\ \mu+\nu+2;\ \alpha+1;\ \rho+1 \end{matrix};\ x;\ y\right).$$
Again we omit the indices of $F$ to keep the notation as simple as possible, and denote the contiguous functions as usual, e.g.
$$F(n+) = 2^{\mu+\nu+1}\, \frac{(\alpha+1)_{n+1} (\rho+1)_m\, B(\nu+1,\mu+1)}{(n+1)!\, m!}\; F^{1;2;2}_{1;1;1}\!\left(\begin{matrix} \mu+1;\ -n-1,\ n+\alpha+\beta+2;\ -m,\ m+\rho+\delta+1 \\ \mu+\nu+2;\ \alpha+1;\ \rho+1 \end{matrix};\ x;\ y\right),$$
and analogously for $F(\alpha+)$, $F(\mu+)$, $F(\nu+)$, and so on, where every occurrence of the shifted parameter, also in the prefactor, is raised by one.

LEMMA A.1. Let $\theta_x = x\frac{\partial}{\partial x}$ and $\theta_y = y\frac{\partial}{\partial y}$. Then the following differential equations hold:
$$(A.4)\qquad (\theta_x - n)F = -(n+\alpha)\, F(n-, \beta+),$$
$$(A.5)\qquad (\theta_x + n+\alpha+\beta+1)F = (n+\alpha+\beta+1)\, F(\beta+),$$
$$(A.6)\qquad (\theta_y - m)F = -(m+\rho)\, F(m-, \delta+),$$
$$(A.7)\qquad (\theta_y + m+\rho+\delta+1)F = (m+\rho+\delta+1)\, F(\delta+),$$
$$(A.8)\qquad (\theta_x + \theta_y + \mu+\nu+1)F = 2\nu\, F(\nu-).$$
Proof. For $F$ as in (A.3),
$$(\theta_x - n)F = 2^{\mu+\nu+1}\, \frac{(\alpha+1)_n (\rho+1)_m\, B(\nu+1,\mu+1)}{n!\, m!} \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} \frac{(\mu+1)_{k+l} (-n)_k (n+\alpha+\beta+1)_k (-m)_l (m+\rho+\delta+1)_l}{(\mu+\nu+2)_{k+l} (\alpha+1)_k (\rho+1)_l}\, (k-n)\, \frac{x^k y^l}{k!\, l!}.$$
As usual, replace $(k-n)(-n)_k$ by $(-n)(-n+1)_k$; the resulting double series is the one with $n$ replaced by $n-1$, provided $\beta$ is raised by one so that the parameter $n+\alpha+\beta+1$ stays fixed. Finally, the prefactor has to be adjusted to $n-1$ as well; since
$$(A.9)\qquad -n\, \frac{2^{\mu+\nu+1}(\alpha+1)_n(\rho+1)_m\, B(\nu+1,\mu+1)}{n!\, m!} = -\frac{2^{\mu+\nu+1}(\rho+1)_m\, B(\nu+1,\mu+1)\,(\alpha+n)(\alpha+1)_{n-1}}{(n-1)!\, m!},$$
equation (A.4) follows. Relation (A.6) follows analogously. Relations (A.5) and (A.7) follow directly by applying the differential operators, since the prefactor does not need to be changed. For the last relation (A.8), the operator $(\theta_x+\theta_y+\mu+\nu+1)$ applied to $x^k y^l$ yields the factor $(k+l+\mu+\nu+1)$, which, after multiplying the series by $\frac{\nu+\mu+1}{\nu+\mu+1}$, reduces $(\nu+\mu+2)_{k+l}$ in the denominator to $(\nu+\mu+1)_{k+l}$. Since this is only a change in a denominator parameter, it is a change in $\nu$ rather than in $\mu$; the rest follows using a property of the Beta function.

LEMMA A.2. For $F$ as in (A.3) the following contiguous recurrence relation holds:
$$(n+m+\mu+\nu+4)F(n+,m+,\nu+) = (n+\alpha+1)F(m+,\beta+,\nu+) + (m+\rho+1)F(n+,\delta+,\nu+) + 2(\nu+1)F(n+,m+).$$
Similarly, (4.17) can be proven: take (A.5) and (A.7) and subtract (A.8).

LEMMA A.3. For $F$ as in (A.3) the following contiguous recurrence relation holds:
$$(n+\alpha+\beta+m+\rho+\delta-\mu-\nu+1)F = (n+\alpha+\beta+1)F(\beta+) + (m+\rho+\delta+1)F(\delta+) + 2\nu F(\nu-).$$

The easiest way to derive (4.18) and (4.19) is to introduce another formulation for $F$. Recall that Jacobi polynomials can also be expressed as
$$P_n^{(\alpha,\beta)}(x) = (-1)^n\, \frac{(\beta+1)_n}{n!}\, {}_2F_1\left(\begin{matrix} -n,\ n+\alpha+\beta+1 \\ \beta+1 \end{matrix};\, \frac{1+x}{2}\right).$$
Using this expression in the derivation of the Kampé de Fériet series yields the analogous form
$$(A.10)\qquad \tilde F := (-1)^{n+m}\, 2^{\mu+\nu+1}\, \frac{(\beta+1)_n (\delta+1)_m\, B(\nu+1,\mu+1)}{m!\, n!}\; F^{1;2;2}_{1;1;1}\!\left(\begin{matrix} \nu+1;\ -n,\ n+\alpha+\beta+1;\ -m,\ m+\rho+\delta+1 \\ \mu+\nu+2;\ \beta+1;\ \delta+1 \end{matrix};\ x;\ y\right).$$
The differential equations can then be derived in the same way as before. Thus:

LEMMA A.4.
$$(A.11)\qquad (\theta_x - n)\tilde F = (n+\beta)\, \tilde F(n-, \alpha+),$$
$$(A.12)\qquad (\theta_x + n+\alpha+\beta+1)\tilde F = (n+\alpha+\beta+1)\, \tilde F(\alpha+),$$
$$(A.13)\qquad (\theta_y - m)\tilde F = (m+\delta)\, \tilde F(m-, \rho+),$$
$$(A.14)\qquad (\theta_y + m+\rho+\delta+1)\tilde F = (m+\rho+\delta+1)\, \tilde F(\rho+),$$
$$(A.15)\qquad (\theta_x + \theta_y + \mu+\nu+1)\tilde F = 2\mu\, \tilde F(\mu-).$$
Relation (4.18) follows by subtracting (A.11) and (A.13) from (A.15).

LEMMA A.5. For $F$ as in (A.3) or (A.10) the following contiguous recurrence relation holds:
$$(n+m+\mu+\nu+4)F(n+,m+,\mu+) = -(n+\beta+1)F(m+,\alpha+,\mu+) - (m+\delta+1)F(n+,\rho+,\mu+) + 2(\mu+1)F(n+,m+),$$
and analogously:

LEMMA A.6. For $F$ as in (A.3) or (A.10) the following contiguous recurrence relation holds:
$$(n+\alpha+\beta+m+\rho+\delta-\mu-\nu+1)F = (n+\alpha+\beta+1)F(\alpha+) + (m+\rho+\delta+1)F(\rho+) + 2\mu F(\mu-).$$
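The differential relations of lemma A.1 can also be validated numerically at a generic point $(x, y)$. The following sketch (our own; F and rf are our helper names) checks (A.4) by applying $\theta_x$ term by term to the truncated double series:

```python
import numpy as np
from math import factorial

def rf(a, k):
    """Rising factorial (a)_k."""
    return float(np.prod([a + i for i in range(k)]))

def F(n, m, al, be, ro, de, mu, nu, x, y, apply_theta_x=False):
    """Double series in (A.3); the Beta factor is omitted since it cancels in
    (A.4). If apply_theta_x is True, each term is multiplied by its
    x-exponent k, i.e. theta_x = x d/dx is applied term by term."""
    s = 0.0
    for k in range(n + 1):
        for l in range(m + 1):
            t = (rf(mu + 1, k + l) * rf(-n, k) * rf(n + al + be + 1, k)
                 * rf(-m, l) * rf(m + ro + de + 1, l) * x**k * y**l
                 / (rf(mu + nu + 2, k + l) * rf(al + 1, k) * rf(ro + 1, l)
                    * factorial(k) * factorial(l)))
            s += t * (k if apply_theta_x else 1)
    return 2**(mu + nu + 1) * rf(al + 1, n) * rf(ro + 1, m) * s \
        / (factorial(n) * factorial(m))

n, m, al, be, ro, de, mu, nu, x, y = 3, 2, 1.2, 0.7, 0.9, 1.1, 1.5, 0.5, 0.7, 0.4
lhs = F(n, m, al, be, ro, de, mu, nu, x, y, apply_theta_x=True) \
      - n * F(n, m, al, be, ro, de, mu, nu, x, y)
rhs = -(n + al) * F(n - 1, m, al, be + 1, ro, de, mu, nu, x, y)  # F(n-, beta+)
assert np.isclose(lhs, rhs)
```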
REMARK 3. Following Burchnall and Chaundy [12], [13], one can easily compute an expansion basis for $F$, i.e. $F = \sum_{i=0}^{\infty} A_i(x) A_i(y)$.

REFERENCES

[1] M. Ainsworth, G. Andriamaro, and O. Davydov, Bernstein-Bézier finite elements of arbitrary order and optimal assembly procedures, SIAM J. Sci. Comput., 33 (2011), pp. 3087-3109.
DAVYDOV, Bernstein-Bézier finite elements of arbitrary order and optimal assembly procedures, SIAM J. Sci. Comput., 33 (2011), pp. 3087-3109. Preconditioning the mass matrix for high order finite element approximation on triangles. M Ainsworth And S, Jiang, SIAM J. Numer. Anal. 57M. AINSWORTH AND S. JIANG, Preconditioning the mass matrix for high order finite element approximation on triangles, SIAM J. Numer. Anal., 57 (2019), pp. 355-377. An O(p 3 ) hp-version FEM in two dimensions: Preconditioning and post-processing. M Ainsworth, S Jiang, M A Sanchéz, Comput. Methods Appl. Mech. Engrg. 350M. AINSWORTH, S. JIANG, AND M. A. SANCHÉZ, An O(p 3 ) hp-version FEM in two dimensions: Preconditioning and post-processing, Comput. Methods Appl. Mech. Engrg., 350 (2019), pp. 766-802. G E Andrews, R Askey, And R Roy, of Encyclopedia of Mathematics and its Applications. CambridgeCambridge University Press71Special functionsG. E. ANDREWS, R. ASKEY, AND R. ROY, Special functions, vol. 71 of Encyclopedia of Mathematics and its Applications, Cambridge Univer- sity Press, Cambridge, 1999. P Appell, J De Fériet, Fonctions hypergéométriques et hypersphériques. Polynomes d'Hermite. VII + 434 p. Paris. Gauthier-VillarsP. APPELL AND J. KAMPÉ DE FÉRIET, Fonctions hypergéométriques et hypersphériques. Polynomes d'Hermite. VII + 434 p. Paris, Gauthier-Villars (1926)., 1926. Contiguous hypergeometric functions of the type 3 F 2 (1). W N Bailey, Proc. Glasgow Math. Assoc. 2W. N. BAILEY, Contiguous hypergeometric functions of the type 3 F 2 (1), Proc. Glasgow Math. Assoc., 2 (1954), pp. 62-65. Generalized hypergeometric series. Cambridge Tracts in Mathematics and Mathematical Physics. 32Stechert-Hafner, Inc, Generalized hypergeometric series, Cambridge Tracts in Mathematics and Mathematical Physics, No. 32, Stechert-Hafner, Inc., New York, 1964. A Becirovic, P Paule, V Pillwein, A Riese, C Schneider, And J Schoeberl, Hypergeometric Summation Algorithms for High Order Finite Elements, Computing. 78Preliminary version availableA. BECIROVIC, P. PAULE, V. PILLWEIN, A. RIESE, C. SCHNEIDER, AND J. SCHOEBERL, Hypergeometric Summation Algorithms for High Order Finite Elements, Computing, 78 (2006), pp. 235-249. Preliminary version available. Sparsity optimized high order finite element functions on simplices. S Beuchler, V Pillwein, J Schöberl, And S Zaglmayr, Numerical and symbolic scientific computing. SpringerWienNewYork, ViennaS. BEUCHLER, V. PILLWEIN, J. SCHÖBERL, AND S. ZAGLMAYR, Sparsity optimized high order finite element functions on simplices, in Numerical and symbolic scientific computing, Texts Monogr. Symbol. Comput., SpringerWienNewYork, Vienna, 2012, pp. 21-44. New shape functions for triangular p-FEM using integrated Jacobi polynomials. S And J Beuchler, Schöberl, Numer. Math. 103S. BEUCHLER AND J. SCHÖBERL, New shape functions for triangular p-FEM using integrated Jacobi polynomials, Numer. Math., 103 (2006), pp. 339-366. The convergence rate of multigrid with Gauss-Seidel relaxation for the Poisson equation. D Braess, Multigrid methods, Proceedings of the Conference held at Köln-Porz. W. Hackbusch and U. TrottenbergBerlin-Heidelberg-New YorkSpringer Verlagno. 960 in Lecture notes in mathematicsD. BRAESS, The convergence rate of multigrid with Gauss-Seidel relaxation for the Poisson equation, in Multigrid methods, Proceedings of the Conference held at Köln-Porz, November 23-27, 1981, W. Hackbusch and U. Trottenberg, eds., no. 
960 in Lecture notes in mathematics, Berlin-Heidelberg-New York, 1982, Springer Verlag, pp. 368-386. Expansions of Appell's double hypergeometric functions. J L W Burchnall And T, Chaundy, Quart. J. Math., Oxford Ser. 11J. L. BURCHNALL AND T. W. CHAUNDY, Expansions of Appell's double hypergeometric functions, Quart. J. Math., Oxford Ser., 11 (1940), pp. 249- 270. Expansions of Appell's double hypergeometric functions. Quart. J. Math., Oxford Ser. II, Expansions of Appell's double hypergeometric functions. II, Quart. J. Math., Oxford Ser., 12 (1941), pp. 112-128. Gröbner bases, symbolic summation and symbolic integration. F Chyzak, Gröbner bases and applications. Linz; CambridgeCambridge Univ. PressF. CHYZAK, Gröbner bases, symbolic summation and symbolic integration, in Gröbner bases and applications (Linz, 1998), vol. 251 of London Math. Soc. Lecture Note Ser., Cambridge Univ. Press, Cambridge, 1998, pp. 32-60. The Finite Element Method for Elliptic Problems. P Ciarlet, North-Holland, AmsterdamP. CIARLET, The Finite Element Method for Elliptic Problems, North-Holland, Amsterdam, 1978. Frontiers: three dimensional elliptic and Maxwell problems with applications. L Demkowicz, J Kurtz, D Pardo, M Paszyński, W Rachowicz, And A Zdunek, CRC Applied Mathematics and Nonlinear Science Series. 2Chapman & Hall/CRCComputing with hp-adaptive finite elementsL. DEMKOWICZ, J. KURTZ, D. PARDO, M. PASZYŃSKI, W. RACHOWICZ, AND A. ZDUNEK, Computing with hp-adaptive finite elements. Vol. 2, Chapman & Hall/CRC Applied Mathematics and Nonlinear Science Series, Chapman & Hall/CRC, Boca Raton, FL, 2008. Frontiers: three dimensional elliptic and Maxwell problems with applications. Release 1.0.16 of 2017-09-18. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller and B. V. SaundersNIST Digital Library of Mathematical FunctionsNIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/, Release 1.0.16 of 2017-09-18. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller and B. V. Saunders, eds. Spectral methods on triangles and other domains. M Dubiner, J. Sci. Comput. 6M. DUBINER, Spectral methods on triangles and other domains, J. Sci. Comput., 6 (1991), pp. 345-390. A Erdélyi, W Magnus, F Oberhettinger, F G Tricomi, Tables of integral transforms. Harry BatemanNew York-Toronto-LondonMcGraw-Hill Book Company, IncIIA. ERDÉLYI, W. MAGNUS, F. OBERHETTINGER, AND F. G. TRICOMI, Tables of integral transforms. Vol. II, McGraw-Hill Book Company, Inc., New York-Toronto-London, 1954. Based, in part, on notes left by Harry Bateman. L C Evans, Partial differential equations. Providence, RIAmerican Mathematical Society19second ed.L. C. EVANS, Partial differential equations, vol. 19 of Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, second ed., 2010. Multiple hypergeometric functions and applications. H Exton, Mathematics & its Applications. L. J. SlaterJohn Wiley & Sons, Inc.]H. EXTON, Multiple hypergeometric functions and applications, Ellis Horwood Ltd., Chichester; Halsted Press [John Wiley & Sons, Inc.], New York-London-Sydney, 1976. Foreword by L. J. Slater, Mathematics & its Applications. Elliptic partial differential equations of second order. D S Gilbarg And N, Trudinger, Classics in Mathematics. Springer-VerlagReprint of the 1998 editionD. GILBARG AND N. S. 
TRUDINGER, Elliptic partial differential equations of second order, Classics in Mathematics, Springer-Verlag, Berlin, 2001. Reprint of the 1998 edition. W Hackbusch, Multigrid Methods and Applications. Springer-Verlag. HeidelbergW. HACKBUSCH, Multigrid Methods and Applications, Springer-Verlag. Heidelberg, 1985. Fast and accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights. N Hale And A, Townsend, SIAM J. Sci. Comput. 35N. HALE AND A. TOWNSEND, Fast and accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights, SIAM J. Sci. Comput., 35 (2013), pp. A652-A674. Symbolic Evaluation of hp-FEM Element Matrices. T Haubold, V Pillwein, And S Beuchler, PAMM19201900446T. HAUBOLD, V. PILLWEIN, AND S. BEUCHLER, Symbolic Evaluation of hp-FEM Element Matrices, PAMM, 19 (2019), p. e201900446. Accuracy and stability of numerical algorithms. N J Higham, Society for Industrial and Applied Mathematics (SIAM). second ed.N. J. HIGHAM, Accuracy and stability of numerical algorithms, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second ed., 2002. Some results on Jacobi polynomials. S L Kalla, Tamkang J. Math. 15S. L. KALLA, Some results on Jacobi polynomials, Tamkang J. Math., 15 (1984), pp. 149-156. G E J Karniadakis And S, Sherwin, Spectral/hp element methods for CFD, Numerical Mathematics and Scientific Computation. OxfordOxford University Presssecond ed.G. E. KARNIADAKIS AND S. J. SHERWIN, Spectral/hp element methods for CFD, Numerical Mathematics and Scientific Computation, Oxford University Press, Oxford, second ed., 2013. M Kauers, Guessing Handbook, 09-07Research Institute for Symbolic Computation (RISC). Tech. Rep.M. KAUERS, Guessing Handbook, Tech. Rep. 09-07, Research Institute for Symbolic Computation (RISC). The Holonomic Toolkit. M Kauers, Computer Algebra in Quantum Field Theory: Integration, Summation and Special Functions, Texts and Monographs in Symbolic Computation. SpringerM. KAUERS, The Holonomic Toolkit, in Computer Algebra in Quantum Field Theory: Integration, Summation and Special Functions, Texts and Monographs in Symbolic Computation, Springer, 2013, pp. 119-144. Ore Polynomials in Sage, in Computer Algebra and Polynomials. M Kauers, M Jaroschek, And F Johansson, Lecture Notes in Computer Science. J. Gutierrez, J. Schicho, and M. WeimannM. KAUERS, M. JAROSCHEK, AND F. JOHANSSON, Ore Polynomials in Sage, in Computer Algebra and Polynomials, J. Gutierrez, J. Schicho, and M. Weimann, eds., Lecture Notes in Computer Science, 2014, pp. 105-125. A new proof of Saalschütz's theorem for the series 3 F 2 (1) and its contiguous results with applications. Y S K Kim And A, Rathie, Commun. Korean Math. Soc. 27Y. S. KIM AND A. K. RATHIE, A new proof of Saalschütz's theorem for the series 3 F 2 (1) and its contiguous results with applications, Commun. Korean Math. Soc., 27 (2012), pp. 129-135. Fast inversion of the simplicial Bernstein mass matrix. R C Kirby, Numer. Math. 135R. C. KIRBY, Fast inversion of the simplicial Bernstein mass matrix, Numer. Math., 135 (2017), pp. 73-95. Two-variable analogues of the classical orthogonal polynomials, in Theory and application of special functions. T Koornwinder, Proc. Advanced Sem., Math. Res. Center, Univ. Wisconsin. 35Math. Res. Center, Univ. Wisconsin, Publ.T. KOORNWINDER, Two-variable analogues of the classical orthogonal polynomials, in Theory and application of special functions (Proc. Advanced Sem., Math. Res. Center, Univ. Wisconsin, Madison, Wis., 1975), 1975, pp. 
435-495. Math. Res. Center, Univ. Wisconsin, Publ. No. 35. Advanced applications of the holonomic systems approach. C Koutschan, Linz, AustriaJohannes Kepler UniversityPhD thesisResearch Institute for Symbolic Computation (RISC)C. KOUTSCHAN, Advanced applications of the holonomic systems approach, PhD thesis, Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Linz, Austria, 2009. Non-homogeneous boundary value problems and applications. J.-L Lions And E, Magenes, Die Grundlehren der mathematischen Wissenschaften. P. KennethNew York-HeidelbergSpringer-VerlagI181J.-L. LIONS AND E. MAGENES, Non-homogeneous boundary value problems and applications. Vol. I, Springer-Verlag, New York-Heidelberg, 1972. Translated from the French by P. Kenneth, Die Grundlehren der mathematischen Wissenschaften, Band 181. J Melenk, K Gerdes, And C Schwab, Fully Discrete hp-Finite Elements: Fast Quadrature. 190J. MELENK, K. GERDES, AND C. SCHWAB, Fully Discrete hp-Finite Elements: Fast Quadrature, Computer Methods in Applied Mechanics and Engineering, 190 (1999), pp. 4339-4364. A sparse spectral method on triangles. S Olver, A Townsend, And G Vasil, SIAM J. Sci. Comput. 41S. OLVER, A. TOWNSEND, AND G. VASIL, A sparse spectral method on triangles, SIAM J. Sci. Comput., 41 (2019), pp. A3728-A3756. Spectral methods for problems in complex geometries. S A Orszag, J. Comput. Phys. 37S. A. ORSZAG, Spectral methods for problems in complex geometries, J. Comput. Phys., 37 (1980), pp. 70-92. Hypergeometric Summation Techniques for High Order Finite Elements. V Pillwein, P Paule, C Schneider, And J Schoeberl, PAMMV. PILLWEIN, P. PAULE, C. SCHNEIDER, AND J. SCHOEBERL, Hypergeometric Summation Techniques for High Order Finite Elements, PAMM, 6 (2006), pp. 689-690. Sur une famille de polynomesà deux variables orthogonaux dans un triangle. J Proriol, C. R. Acad. Sci. 245J. PRORIOL, Sur une famille de polynomesà deux variables orthogonaux dans un triangle, C. R. Acad. Sci. Paris, 245 (1957), pp. 2459-2461. Numerical Approximation of partial differential equations. A Quateroni And A, Valli, Springer Series in Computational Mathematics. Berlin-Heidelberg-New YorkSpringerA. QUATERONI AND A. VALLI, Numerical Approximation of partial differential equations, no. 23 in Springer Series in Computational Mathemat- ics, Springer. Berlin-Heidelberg-New York, 1997. E D Rainville, Special functions. New YorkThe Macmillan CoE. D. RAINVILLE, Special functions, The Macmillan Co., New York, 1960. Extensions of Euler type II transformation and Saalschütz's theorem. M A K Rakha And A, Rathie, Bull. Korean Math. Soc. 48M. A. RAKHA AND A. K. RATHIE, Extensions of Euler type II transformation and Saalschütz's theorem, Bull. Korean Math. Soc., 48 (2011), pp. 151-156. C Schwab, p-and hp-finite element methods, Numerical Mathematics and Scientific Computation. New YorkOxford University PressTheory and applications in solid and fluid mechanicsC. SCHWAB, p-and hp-finite element methods, Numerical Mathematics and Scientific Computation, The Clarendon Press, Oxford University Press, New York, 1998. Theory and applications in solid and fluid mechanics. Generalized hypergeometric functions. L J Slater, Cambridge University PressCambridgeL. J. SLATER, Generalized hypergeometric functions, Cambridge University Press, Cambridge, 1966. Finite element analysis. B Szabó And I, Babuška, John Wiley & Sons, IncNew YorkB. SZABÓ AND I. BABUŠKA, Finite element analysis, A Wiley-Interscience Publication, John Wiley & Sons, Inc., New York, 1991. . 
G Szegő, R I Providence, American Mathematical Society Colloquium Publications23third ed.G. SZEGŐ, Orthogonal polynomials, American Mathematical Society, Providence, R.I., third ed., 1967. American Mathematical Society Collo- quium Publications, Vol. 23. . J A Wilson, Hypergeometric Series, Relations, Some, Orthogonal Functions, Llc Proquest, Ann Arbor, MIPh.D.)-The University of Wisconsin -Madison. ThesisJ. A. WILSON, HYPERGEOMETRIC SERIES RECURRENCE RELATIONS AND SOME NEW ORTHOGONAL FUNCTIONS, ProQuest LLC, Ann Arbor, MI, 1978. Thesis (Ph.D.)-The University of Wisconsin -Madison. Approximation and orthogonality in Sobolev spaces on a triangle. Y Xu, Constr. Approx. 46Y. XU, Approximation and orthogonality in Sobolev spaces on a triangle, Constr. Approx., 46 (2017), pp. 349-434. A holonomic systems approach to special functions identities. D Zeilberger, J. Comput. Appl. Math. 32D. ZEILBERGER, A holonomic systems approach to special functions identities, J. Comput. Appl. Math., 32 (1990), pp. 321-368.
[]
[ "Improving the List Decoding Version of the Cyclically Equivariant Neural Decoder", "Improving the List Decoding Version of the Cyclically Equivariant Neural Decoder" ]
[ "Xiangyu Chen ", "Min Ye " ]
[]
[]
The cyclically equivariant neural decoder was recently proposed in [Chen-Ye, International Conference on Machine Learning, 2021] to decode cyclic codes. In the same paper, a list decoding procedure was also introduced for two widely used classes of cyclic codes-BCH codes and punctured Reed-Muller (RM) codes. While the list decoding procedure significantly improves the Frame Error Rate (FER) of the cyclically equivariant neural decoder, the Bit Error Rate (BER) of the list decoding procedure is even worse than the unique decoding algorithm when the list size is small. In this paper, we propose an improved version of the list decoding algorithm for BCH codes and punctured RM codes. Our new proposal significantly reduces the BER while maintaining the same (in some cases even smaller) FER. More specifically, our new decoder provides up to 2dB gain over the previous list decoder when measured by BER, and the running time of our new decoder is 15% smaller. Code available at github.com/improvedlistdecoder/code
10.1109/isit50566.2022.9834337
[ "https://arxiv.org/pdf/2106.07964v1.pdf" ]
235,436,011
2106.07964
946542a4fc2321896afd149d325c960043a92878
Improving the List Decoding Version of the Cyclically Equivariant Neural Decoder Xiangyu Chen Min Ye Improving the List Decoding Version of the Cyclically Equivariant Neural Decoder The cyclically equivariant neural decoder was recently proposed in [Chen-Ye, International Conference on Machine Learning, 2021] to decode cyclic codes. In the same paper, a list decoding procedure was also introduced for two widely used classes of cyclic codes-BCH codes and punctured Reed-Muller (RM) codes. While the list decoding procedure significantly improves the Frame Error Rate (FER) of the cyclically equivariant neural decoder, the Bit Error Rate (BER) of the list decoding procedure is even worse than the unique decoding algorithm when the list size is small. In this paper, we propose an improved version of the list decoding algorithm for BCH codes and punctured RM codes. Our new proposal significantly reduces the BER while maintaining the same (in some cases even smaller) FER. More specifically, our new decoder provides up to 2dB gain over the previous list decoder when measured by BER, and the running time of our new decoder is 15% smaller. Code available at github.com/improvedlistdecoder/code I. INTRODUCTION Machine learning methods have recently been applied to the area of decoding error-correcting codes [1]- [13]. These methods have demonstrated improvements over the classical decoding algorithms for codes with short to moderate block length. In particular, one line of research pioneered by [1], [4] introduced neural decoders as a generalization of the classic Belief Propagation (BP) decoding algorithm, where the Trellis graph in the BP algorithm is viewed as a fully connected neural network [1], and the weights in the Trellis graph are optimized by training the neural network. The fully connected neural networks were further replaced by recurrent neural networks (RNNs) in [4], and the tools of graph neural networks were also introduced to improve the neural decoders [10]. Very recently, the cyclically equivariant neural decoder was proposed to decode cyclic codes [14]. Inspired by the fact that the Maximum Likelihood (ML) decoder of any cyclic code is equivariant to cyclic shifts, [14] imposed a shift invariant structure on the weights of the neural decoder so that it shares the equivariant property of the ML decoder. More precisely, any cyclic shift of inputs results in the same cyclic shift of the decoding outputs for the cyclically equivariant neural decoder. Simulations with BCH codes and punctured Reed-Muller (RM) codes demonstrated that the cyclically equivariant neural decoder consistently outperforms the conventional neural decoders when decoding cyclic codes [14]. In addition to the cyclically equivariant neural decoder, [14] further proposed a list decoding procedure for BCH codes and punctured RM codes which significantly improves the Frame X. Chen For certain high-rate BCH codes and punctured RM codes, the list decoder with a large enough list size achieves almost the same FER as the ML decoder. However, the Bit Error Rate (BER) of the list decoding procedure in [14] is even worse than the unique decoding algorithm when the list size is small. This can be explained as follows: FER is the fraction of incorrectly decoded codewords, and BER is the fraction of incorrectly decoded bits (or codeword coordinates). 
We say that a codeword is incorrectly decoded whenever the decoding result is different from the true codeword, no matter it differs in all coordinates or it differs only in a single bit. Therefore, the fraction of incorrectly decoded bits in the incorrectly decoded codewords does not affect FER at all but it is an important factor of BER. In fact, BER is simply the product of FER and the average fraction of incorrectly decoded bits in the incorrectly decoded codewords. The major downside of the list decoder in [14] is that it does not optimize this fraction at all. As a consequence, whenever the list decoder makes a mistake, the decoding result differs from the true codeord in more or less half of all the codeword coordinates. On the other hand, although the unique decoding version of the cyclically equivariant neural decoder has a larger FER, it tries to minimize the Hamming distance between the decoding results and the true codewords even when it can not completely recover the true codeword. That's why it has an even better BER compared to the list decoder with a small list size. In this paper, we propose a new neural decoder for BCH codes and punctured RM codes which improves upon the list decoding version of the cyclically equivariant neural decoder. More precisely, it achieves a significantly smaller BER compared to the list decoder in [14] while maintaining the same (in some cases even smaller) FER. Moreover, the running time of our new decoder is also reduced by 15% compared to the list decoder in [14]. Both our new decoder and the previous list decoder make use of the affine invariant property of extended BCH codes and RM codes. In the previous list decoder, we associate a parity check matrix to each affine permutation on the codeword coordinates, and we use the cyclically equivariant neural decoder to perform neural Belief Propagation on the Tanner graph of each parity check matrix. After obtaining the list of decoding results from all affine permutations, the final decoding result is obtained from the ML decoding among this list. In our new decoder, we build a large parity check matrix containing all the rows of the parity check matrices used in the previous list decoder, and our new decoder performs neural BP on the Tanner graph of this large parity check matrix. Similarly to the cyclically equivariant neural decoder in [14], the weights in our new decoder also satisfy certain invariant structure, which brings much better performance than the vanilla neural BP decoders. II. BACKGROUND ON RM CODES AND BCH CODES In this section, we collect some basic properties about (punctured) RM codes and (extended) BCH codes that are needed to develop our new decoder. Readers may consult [15], [16] for more information about these two code families. In order to define BCH codes and punctured RM codes, we first introduce some notation. Let m be an integer, and let α be a primitive element of the finite field F 2 m . For 1 ≤ j ≤ 2 m − 2, let M (j) (x) be the minimal polynomial of α j over the binary field. Both BCH codes and punctured RM codes are cyclic codes, and they can be defined by generator polynomials and parity check polynomials. More specifically, for BCH code with designed distance 2δ + 1 and code length n = 2 m − 1, the generator polynomial is g(x) = lcm{M (1) (x), M (3) (x), . . . , M (2δ−1) (x)}, where lcm stands for least common multiple; see Chapter 7.6 of [15]. 
For rth order punctured RM code with code length n = 2 m − 1, the generator polynomial is g( x) = lcm{M (j) (x) : 1 ≤ j ≤ 2 m − 2, 1 ≤ w 2 (j) ≤ m − r − 1}, where w 2 (j) is the number of 1's in the binary expansion of j; see Chapter 13.5 of [15]. For a cyclic code with code length n, the generator polynomial g(x) always divides x n − 1, and the parity check polynomial is simply h(x) = (x n − 1)/g(x) . For an (n, k) cyclic code, the degree of h is k, and so h(x) can be written as h(x) = h k x k +· · ·+h 2 x 2 +h 1 x+h 0 , where the coefficients h k , . . . , h 2 , h 1 , h 0 are either 0 or 1. The following (n − k) × n matrix h k . . . h 2 h 1 h 0 0 0 . . . 0 0 h k . . . h 2 h 1 h 0 0 . . . 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 0 . . . 0 h k . . . h 2 h 1 h 0 (1) is a parity check matrix of the cyclic code, and this particular parity check matrix is used in the neural BP decoders in [1], [4] for BCH codes. RM codes and extended BCH codes are obtained from adding an overall parity bit to punctured RM codes and BCH codes, respectively. More precisely, if C is a punctured RM code, then {(C 0 , C 1 , . . . , C n ) : (C 1 , . . . , C n ) ∈ C, C 0 = C 1 + · · · + C n } is a RM code. Similarly, if C is a BCH code, then {(C 0 , C 1 , . . . , C n ) : (C 1 , . . . , C n ) ∈ C, C 0 = C 1 + · · · + C n } is an extended BCH code. It is well known that both RM codes and extended BCH codes are invariant to the affine group [17]. In order to explain the affine invariant property, we use (C 0 , C 1 , . . . , C n ) to denote a codeword from an extended BCH code or a RM code with length n + 1 = 2 m . Next we define a one-to-one mapping f between the index set {0, 1, . . . , n} and the finite field F 2 m = {0, 1, α, α 2 , . . . , α n−1 } as follows: f (0) = 0 and f (i) = α i−1 for i ∈ [n]. For a, b ∈ F 2 m , a = 0, the affine mapping X → aX +b defines a permutation on the finite field F 2 m , and through the function f it also induces a permutation on the index set {0, 1, . . . , n}. More precisely, for 1 ≤ i ≤ n and 0 ≤ j ≤ n, we use σ i,j to denote the permutation on {0, 1, . . . , n} induced by the mapping X → f (i)X + f (j): σ i,j (v) = f −1 f (i)f (v) + f (j) for v ∈ {0, 1, . . . , n}. The permutations {σ i,j : 1 ≤ i ≤ n, 0 ≤ j ≤ n} form the affine group to which the RM codes and the extended BCH codes are invariant. The special case σ i,0 is the permutation that fixes C 0 and performs (i − 1) cyclic right shifts on (C 1 , C 2 , . . . , C n ). The extended code is invariant to such a permutation because (C 1 , C 2 , . . . , C n ) belongs to a cyclic code. For both the list decoder in [14] and the new decoder in this paper, we focus on another special case i = 1, and we write σ j = σ 1,j to simplify the notation. By definition, σ j is the permutation on {0, 1, . . . , n} induced by the mapping X → X + f (j), so σ j (v) = f −1 (f (v) + f (j)) for 0 ≤ v ≤ n. Clearly, σ 0 is the identity permutation. We will use the set of permutations {σ 0 , σ 1 , . . . , σ n } in our new decoder. In Fig. 1, we give a concrete example for n = 7, where each row in the top-left matrix represents a permutation σ j . III. OUR NEW DECODER Our new decoder follows the general structure of the neural BP decoders proposed in [1], [4] with some additional structures imposed on the weights. In order to build a BP decoder, we first need to identify a parity check matrix of the code. Typically, a parity check matrix of an (n, k) code has size (n − k) × n. 
In this case, all the row vectors in the matrix are linearly independent. In our application, however, we allow the number of rows in the parity check matrix to be larger than n − k, so the parity check matrix may have some "redundant" row vectors which are linear combinations of other rows. Suppose that the parity check matrix H has m rows and n columns. The Tanner graph corresponding to H is a bipartite graph constructed as follows: It has n variable nodes labelled as v 0 , v 1 , . . . , v n−1 on the left side and m check nodes labelled as c 1 , c 2 , . . . , c m on the right side. An edge is connected between v j and c i in the Tanner graph if and only if H ij = 1. The inputs of the decoder are the log likelihood ratios (LLRs) of n codeword coordinates: L j = log P(y j |C j = 0) P(y j |C j = 1) for j ∈ {0, 1, . . . , n − 1}, where (C 0 , . . . , C n−1 ) is a randomly chosen codeword, and (y 0 , . . . , y n−1 ) is the channel output after transmitting (C 0 , . . . , C n−1 ) through n independent copies of some noisy channel. The decoder aims to recover the codeword from the channel output, or equivalently, from the LLRs. In classic BP algorithms, messages propagate back and forth through the edges of the Tanner graph for several iterations. In each iteration, the message on every edge is updated using the messages on its neighboring edges from the previous iteration together with the LLRs. More precisely, in every odd iteration, the message on an edge (c i , v j ) is updated using the messages on all the other edges that are connected to v j together with the LLR L j . In every even iteration, the message on an edge (c i , v j ) is updated using the messages on all the other edges that are connected to c i . The final decoding result of the jth coordinate is obtained by summing up L j and the messages In [14] we use the parity check matrix in the bottom-left corner. By applying column permutation σ j to this matrix, we obtain other matrices whose rows are also parity checks of the code, e.g., the two matrices in the bottom-middle and bottom-right corner. In this paper, we build a large parity check matrix H which contains all the row vectors of such matrices, and the weights of our neural decoder are invariant to both the cyclic shifts and the permutations σ 0 , . . . , σ 7 . on all the edges connected to v j . In [1], [4], a set of learnable weights are added into the calculations of odd iterations and the final outputs. In [14], a cyclically invariant structure is imposed on the learnable weights to obtain better performance when decoding cyclic codes. We will only describe our decoder for RM codes and extended BCH codes because the decoder for punctured RM codes and BCH codes only requires a trivial modification: If we want to decode the punctured codes, we only need to append a zero entry to the LLR vector. This zero entry means that we know nothing about the overall parity bit. After that the decoder for extended codes can be directly applied to obtain the decoding results. Let C be a RM code or an extended BCH code with code length n = 2 m , and let (C 0 , . . . , C n−1 ) be a codeword of C. Without loss of generality, we assume that the last (n − 1) coordinates form a cyclic code C, which is either a punctured RM code or a BCH code. As mentioned in Section II, a parity check matrix of C has the form (1), where every row of this matrix is a cyclic shift of its first row. 
Due to the cyclically invariant property of C, one can show that every cyclic shift of the first row is a parity check of C. In total, there are (n − 1) cyclic shifts, and we build an (n − 1) × (n − 1) parity check matrix of C consisting of these (n−1) row vectors. Finally, by appending an all-zero column vector in front of this (n − 1) × (n−1) matrix, we obtain an (n−1)×n parity check matrix of the original code C. As a concrete example, the parity check matrix of the form (1) for (7, 4) Hamming code is Fig. 1 is a parity check matrix of the (8, 4) extended Hamming code. Given the code C, we denote such an (n − 1) × n parity check matrix as H 0 . Since C is invariant under the permutations σ 0 , σ 1 , . . . , σ n−1 defined in Section II, the matrix obtained by performing column permutation σ j on the matrix H 0 is also a parity check matrix of C for all 0 ≤ j ≤ n − 1. For example, the matrix in the bottom-middle of Fig. 1 is obtained by column permutation σ 2 , and the matrix in the bottom-right corner is obtained by column permutation σ 6 . We use H j to denote the matrix obtained by column permutation σ j . Our decoding algorithm has a parameter P ∈ [n], which is the number of permutations we use in our algorithm. By increasing the value of P , the algorithm achieves smaller decoding error probability at the cost of higher time complexity. Given the value P ∈ [n], we pick P permutations from the set {σ 0 , σ 1 , . . . , σ n−1 }. Simulation results indicate that the performance of our decoder does not depend on which P permutations we choose, so we can simply pick the permutations σ 0 , σ 1 , . . . , σ P −1 . Then we build a large parity check matrix H of size P (n − 1) × n, which consists of all the row vectors of H 0 , H 1 , . . . , H P −1 . Our neural decoder performs Belief Propagation on the Tanner graph of H. Let us now take a closer look at the matrix in the bottomleft corner of Fig. 1. Since the last 7 columns of this matrix are cyclic shifts of each other, it is natural to impose a shift invariant structure on weights of the neural decoder associated with the Tanner graph of this matrix. In general, given a RM code or an extended BCH code C, the last (n − 1) columns of H 0 are also cyclic shifts of each other, and we also impose the shift-invariant structure on the weights of the neural decoder. Note that such a structure was already adopted in [14]. The major innovation of our new decoder is that we further extend this invariant structure to the columns obtained by the permutations σ 0 , σ 1 , . . . , σ n−1 . We are now ready to formally define our decoder. We index the columns of H 0 from 0 to n−1. Suppose that the number of 1's in the first column of H 0 is u, and let {i 1 , i 2 , . . . , i u } ⊆ [n] be the set satisfying that the (i b , 1)th entry of H 0 is 1 for all b ∈ [u]. Let π j be the permutation obtained by j−1 right cyclic shifts on the set {1, 2, . . . , n − 1}. We use a triple (z, i, j) to denote an edge in the Tanner graph of H. Recall that H contains all the row vectors of H 0 , H 1 , . . . , H P −1 . The edge (z, i, j) corresponds to the (i, j)th entry of the matrix H z . Then (0, π j (i 1 ), j), (0, π j (i 2 ), j), . . . , (0, π j (i u ), j) are the u edges that contain v j as an endpoint in the jth column of H 0 ; see Fig. 1. For an edge e in the Tanner graph, we use x [s] (e) to denote the message on e in the s-th iteration. In the calculations of each odd iteration, we use the following u 2 weights: {w [s] b,b : b, b ∈ [u], b = b } and {w [s] b : b ∈ [u]}. 
For odd s and an edge e = (z, π σz(j) (i b ), j) with b ∈ [u], the message x [s] (e) is given by x [s] (e) = x [s] ((z, π σz(j) (i b ), j)) = tanh 1 2 w [s] b L j (2) + b ∈[u]\{b} w [s] b ,b x [s−1] ((z, π σz(j) (i b ), j)) . The calculations of even iterations are the same as those in the vanilla BP algorithm: For even s and an edge e = (z, i, j), x [s] (e) = 2 tanh −1 e ∈Nz(ci)\{e} x [s−1] (e ) , where N z (c i ) is the set of all the edges containing c i as an endpoint in H z . In the calculations of the output layer, we use the following u weights: {w out b : b ∈ [u]}. The jth output is given by o j = L j + P −1 z=0 u b=1 w out b x [2t] ((z, π σz(j) (i b ), j))(3) for j ∈ [n], where 2t is the total number of iterations. IV. SIMULATION RESULTS We present the simulation results of our new decoder in this section, and we compare its performance with the decoders in [4], [14]. In particular, we refer to the neural decoder in [4] as N 18. We refer to the cyclically equivariant neural decoder in [14] as Cyc, and the list decoding version of Cyc is referred to as Cyc list with a parameter specifying the list size. Our decoder also has a parameter P which is the number of permutations we use in the decoder. Note that when P = 1, our new decoder reduces to the Cyc decoder in [14]. When we set P = in our decoder and Cyc list, the BER of our decoder demonstrates 1 to 2 dB improvements over Cyc list (see Fig. 3), and the FER of our decoder remains the same or even smaller (see Fig. 2). Moreover, our decoder also reduces the running time by at least 15%; see Table II. An important advantage of our decoder is that when we increase the value of P , our decoder gracefully reduces the BER. In contrast, when takes a small value, e.g., = 4, the BER of Cyc list is even worse than Cyc while the running time is 4 times larger; see Fig. 3. As explained in the Introduction, this is because the Cyc list decoder does not optimize the fraction of incorrect bits in the incorrectly decoded codewords. Note that this fraction is precisely the ratio between BER and FER. Whenever Cyc list outputs an incorrect codeword, it simply picks a random one, so with high probability the fraction of incorrect bits in this randomly chosen codeword is very close to 1/2, as indicated by Table I. In contrast, our decoder always tries to minimize the fraction of incorrect bits even when it outputs the wrong decoding result, and the fraction of incorrect bits in the wrong decoding result is smaller than 0.1 for our decoder; see Table I. [4]. Cyc and Cyc list refer to the decoders in [14], where the parameter is the list size. For our new decoder, the parameter P is the number of permutations used in the decoder. When P = 1, our new decoder reduces to the Cyc decoder in [14]. As we increase the value of P , our decoder gracefully reduces the BER. When we take = P = 4, our decoder has 1 to 2dB gain over the Cyc list decoder in terms of BER. Fig. 1 : 1σ 0 , . . . , σ 7 are the permutations under which the extended (8, 4) Hamming code is invariant. the matrix in the bottom-left corner of Fig. 2 : 2Comparison of FER between our new decoder and Cyc list. When P = , our decoder always has the same or even smaller FER than Cyc list. Fig. 3 : 3N 18 refers to the neural decoder in is with Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China. M. Ye is with Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China. 
Email: [email protected] Error Rate (FER) of the cyclically equivariant neural decoder. TABLE I : IThe BER/FER for three SNR values and different decoders. This ratio is equal to the fraction of incorrect bits in the incorrectly decoded codewords. The list size in Cyc list[14] is = 4, and the number of permutations used in our decoder is P = 4.Code BCH(63,36) BCH(63,45) Decoder/SNR 4 5 6 4 5 6 Cyc list 0.500 0.499 0.503 0.499 0.499 0.496 Ours 0.072 0.060 0.058 0.056 0.047 0.033 TABLE II : IIComparison between the decoding time of Cyc list in[14] and our new decoder. When P = , our new decoder reduces the running time by at least 15%.Code Cyc Cyc list = 4 Ours P = 4 Cyc list = 16 Ours P = 16 Cyc list = 64 Ours P = 64 BCH(63,36) 3.59ms 17.4ms 15.2ms 68.6ms 56.7ms 268ms 213ms BCH(63,45) 4.60ms 22.4ms 18.9ms 85.2ms 72.5ms 343ms 289ms Learning to decode linear codes using deep learning. E Nachmani, Y Be&apos;ery, D Burshtein, 2016 54th Annual Allerton Conference on Communication, Control, and Computing. AllertonE. Nachmani, Y. Be'ery, and D. Burshtein, "Learning to decode linear codes using deep learning," in 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2016, pp. 341-346. On deep learning-based channel decoding. T Gruber, S Cammerer, J Hoydis, S Brink, 2017 51st Annual Conference on Information Sciences and Systems (CISS). IEEET. Gruber, S. Cammerer, J. Hoydis, and S. ten Brink, "On deep learning-based channel decoding," in 2017 51st Annual Conference on Information Sciences and Systems (CISS). IEEE, 2017, pp. 1-6. Scaling deep learning-based decoding of polar codes via partitioning. S Cammerer, T Gruber, J Hoydis, S. Ten Brink, GLOBE-COM 2017-2017 IEEE Global Communications Conference. IEEES. Cammerer, T. Gruber, J. Hoydis, and S. Ten Brink, "Scaling deep learning-based decoding of polar codes via partitioning," in GLOBE- COM 2017-2017 IEEE Global Communications Conference. IEEE, 2017, pp. 1-6. Deep learning methods for improved decoding of linear codes. E Nachmani, E Marciano, L Lugosch, W J Gross, D Burshtein, Y Be&apos;ery, IEEE Journal of Selected Topics in Signal Processing. 121E. Nachmani, E. Marciano, L. Lugosch, W. J. Gross, D. Burshtein, and Y. Be'ery, "Deep learning methods for improved decoding of linear codes," IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 119-131, 2018. Deepcode: Feedback codes via deep learning. H Kim, Y Jiang, S Kannan, S Oh, P Viswanath, Advances in Neural Information Processing Systems. H. Kim, Y. Jiang, S. Kannan, S. Oh, and P. Viswanath, "Deepcode: Feedback codes via deep learning," in Advances in Neural Information Processing Systems, 2018, pp. 9436-9446. Communication algorithms via deep learning. H Kim, Y Jiang, R Rana, S Kannan, S Oh, P Viswanath, 6th International Conference on Learning Representations. H. Kim, Y. Jiang, R. Rana, S. Kannan, S. Oh, and P. Viswanath, "Communication algorithms via deep learning," in 6th International Conference on Learning Representations, ICLR 2018, 2018. Learning to decode LDPC codes with finite-alphabet message passing. B Vasić, X Xiao, S Lin, 2018 Information Theory and Applications Workshop (ITA). IEEEB. Vasić, X. Xiao, and S. Lin, "Learning to decode LDPC codes with finite-alphabet message passing," in 2018 Information Theory and Applications Workshop (ITA). IEEE, 2018, pp. 1-9. Lowcomplexity recurrent neural network-based polar decoder with weight quantization mechanism. 
C.-F Teng, C.-H D Wu, A K S Ho, A.-Y A Wu, ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing. ICASSPC.-F. Teng, C.-H. D. Wu, A. K.-S. Ho, and A.-Y. A. Wu, "Low- complexity recurrent neural network-based polar decoder with weight quantization mechanism," in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). . IEEE. IEEE, 2019, pp. 1413-1417. Turbo autoencoder: Deep learning based channel codes for pointto-point communication channels. Y Jiang, H Kim, H Asnani, S Kannan, S Oh, P Viswanath, Advances in neural information processing systems. Y. Jiang, H. Kim, H. Asnani, S. Kannan, S. Oh, and P. Viswanath, "Turbo autoencoder: Deep learning based channel codes for point- to-point communication channels," in Advances in neural information processing systems, 2019, pp. 2758-2768. Hyper-graph-network decoders for block codes. E Nachmani, L Wolf, Advances in Neural Information Processing Systems. E. Nachmani and L. Wolf, "Hyper-graph-network decoders for block codes," in Advances in Neural Information Processing Systems, 2019, pp. 2329-2339. Reinforcement learning for channel coding: Learned bit-flipping decoding. F Carpi, C Häger, M Martalò, R Raheli, H D Pfister, 2019 57th Annual Allerton Conference on Communication, Control, and Computing. AllertonIEEEF. Carpi, C. Häger, M. Martalò, R. Raheli, and H. D. Pfister, "Rein- forcement learning for channel coding: Learned bit-flipping decoding," in 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2019, pp. 922-929. Learning to decode: Reinforcement learning for decoding of sparse graph-based channel codes. S Habib, A Beemer, J Kliewer, Advances in Neural Information Processing Systems. S. Habib, A. Beemer, and J. Kliewer, "Learning to decode: Reinforce- ment learning for decoding of sparse graph-based channel codes," in Advances in Neural Information Processing Systems, 2020. Pruning neural belief propagation decoders. A Buchberger, C Häger, H D Pfister, L Schmalen, A G Amat, 2020 IEEE International Symposium on Information Theory (ISIT). IEEEA. Buchberger, C. Häger, H. D. Pfister, L. Schmalen, and A. G. i Amat, "Pruning neural belief propagation decoders," in 2020 IEEE International Symposium on Information Theory (ISIT). IEEE, 2020, pp. 338-342. Cyclically equivariant neural decoders for cyclic codes. X Chen, M Ye, arXiv:2105.05540to appear in ICML 2021X. Chen and M. Ye, "Cyclically equivariant neural decoders for cyclic codes," 2021, to appear in ICML 2021, arXiv:2105.05540. The theory of error correcting codes. F J Macwilliams, N J A Sloane, Elsevier16F. J. MacWilliams and N. J. A. Sloane, The theory of error correcting codes. Elsevier, 1977, vol. 16. Reed-Muller codes: Theory and algorithms. E Abbe, A Shpilka, M Ye, IEEE Transactions on Information Theory. 676E. Abbe, A. Shpilka, and M. Ye, "Reed-Muller codes: Theory and algorithms," IEEE Transactions on Information Theory, vol. 67, no. 6, pp. 3251-3277, 2021. Some results on cyclic codes which are invariant under the affine group and their applications. T Kasami, S Lin, W W Peterson, Information and Control. 115-6T. Kasami, S. Lin, and W. W. Peterson, "Some results on cyclic codes which are invariant under the affine group and their applications," Information and Control, vol. 11, no. 5-6, pp. 475-496, 1967.
[]
[ "Temporal semi-discretizations of a backward semilinear stochastic evolution equation *", "Temporal semi-discretizations of a backward semilinear stochastic evolution equation *" ]
[ "Binjie Li \nSchool of Mathematics\nSichuan University\n610064ChengduChina\n", "† \nSchool of Mathematics\nSichuan University\n610064ChengduChina\n", "Xiaoping Xie \nSchool of Mathematics\nSichuan University\n610064ChengduChina\n" ]
[ "School of Mathematics\nSichuan University\n610064ChengduChina", "School of Mathematics\nSichuan University\n610064ChengduChina", "School of Mathematics\nSichuan University\n610064ChengduChina" ]
[]
This paper studies the convergence of three temporal semi-discretizations for a backward semilinear stochastic evolution equation. For general terminal value and general coefficient with Lipschitz continuity, the convergence of the first two temporal semi-discretizations is established, and an explicit convergence rate is derived for the third temporal semi-discretization. The third temporal semi-discretization is applied to a general stochastic linear quadratic control problem, and the convergence of a temporally semi-discrete approximation to the optimal control is established.
10.1007/s00245-023-10014-4
[ "https://export.arxiv.org/pdf/2106.13428v3.pdf" ]
235,652,184
2106.13428
c59ada4413fd5bbca09b05e5e1461394ea2cce99
Temporal semi-discretizations of a backward semilinear stochastic evolution equation * Aug 2022 Binjie Li School of Mathematics Sichuan University 610064ChengduChina † School of Mathematics Sichuan University 610064ChengduChina Xiaoping Xie School of Mathematics Sichuan University 610064ChengduChina Temporal semi-discretizations of a backward semilinear stochastic evolution equation * Aug 2022backward semilinear stochastic evolution equationBrownian motiondiscretizationstochastic linear quadratic control AMS subject classifications 49M2565C3060H3565K10 This paper studies the convergence of three temporal semi-discretizations for a backward semilinear stochastic evolution equation. For general terminal value and general coefficient with Lipschitz continuity, the convergence of the first two temporal semi-discretizations is established, and an explicit convergence rate is derived for the third temporal semi-discretization. The third temporal semi-discretization is applied to a general stochastic linear quadratic control problem, and the convergence of a temporally semi-discrete approximation to the optimal control is established. Introduction In the literature, Bismut [3] first introduced the finite dimensional linear backward stochastic differential equations (BSDEs, for short) to study the stochastic optimal control problems. Later, Pardoux and Peng [33] studied the general finite dimensional BSDEs with Lipschitz nonlinearity, and Hu and Peng [21] established the well-posedness for the backward semilinear stochastic evolution equations with Lipschitz nonlinearity. Since then a considerable number of papers have been published for the applications of the BSDEs to stochastic optimal control, partial differential equations and mathematical finance; see [26,32,34,36,42] and the references cited therein. We particularly refer the reader to [10,11,12,13,14,15,16,17,18] and the references therein for the applications of the backward stochastic partial differential equations to the stochastic optimal control problems. By now, the numerical solutions of the finite-dimensional BSDEs have been extensively studied. We particularly introduce several works as follows. For backward-forward SDEs, Ma et al. [31] proposed a four-step scheme, Zhang [43] and Bouchard and Touzi [4] analyzed two Euler type schemes, and Chassagneux [6] studied a class of linear multistep methods. The above four works all require that the coefficients are deterministic. For a class of nonlinear BSDEs with particular terminal value and sufficiently smooth deterministic coefficients, Zhao et al. [44] proposed a stable multistep scheme. For the nonlinear BSDEs with general terminal value and general coefficients, Hu et al. [20] analyzed three schemes with some restrictions on the regularity of the underlying solution, and these restrictions might be difficult to verify. We also refer the reader to the references cited in the above papers for more related works. Additionally, because of the close connections between the stochastic evolution equations and the backward stochastic evolution equations, we refer the reader to [1,5,7,8,22,23,24,25,27,41] and the references therein, for the numerical analysis of the stochastic evolution equations. Compared with the numerical analysis of the finite-dimensional BSDEs, the numerical analysis of the backward stochastic semilinear evolution equations is very limited. 
Wang [40] analyzed a discretization for a backward semilinear stochastic parabolic equation; since this discretization uses the eigenvectors of the Laplace operator, its application appears to be limited. Recently, Li and Xie [28] analyzed a spatial semi-discretization for a backward semilinear stochastic parabolic equation with general filtration, using the standard piecewise linear finite element method. To our best knowledge, no numerical analysis of temporal semi-discretizations is available for a backward semilinear stochastic evolution equation in an infinite-dimensional Hilbert space. The immaturity of the numerical analysis of the backward semilinear stochastic evolution equations motivates us to study the temporal semi-discretizations for the equation dp(t) = −(Ap(t) + f (t, p(t), z(t))) dt + z(t) dW (t), 0 t T, p(T ) = p T ,(1) where 0 < T < ∞, W (·) is a one-dimensional real Brownian motion, and p T and f are given. One key difficulty in the numerical analysis of the backward semilinear stochastic evolution equation (1) is that the process z is generally of low temporal regularity. In this paper, we analyze three Euler type temporal semi-discretizations for equation (1). For the first two semi-discretizations, the process z is discretized by the piecewise constant processes, and we prove that the two semi-discretizations are convergent. More precisely, we obtain the error bound c(τ 1/2 + |||z − P τ z||| L 2 (0,T ;H) ), where P τ z is the optimal approximation of z in the space of piecewise constant processes. Hence, if the process z indeed possesses higher temporal regularity, then an explicit convergence rate will readily be derived. For the third semidiscretization, the process z is not discretized, and an explicit convergence rate is derived. Finally, we apply the third semi-discretization to a general stochastic linear quadratic control problem, and establish the convergence of a temporally semi-discrete approximation, with reasonable regularity assumption on the data. To sum up, our main contributions lie in the following aspects. • This work, to our best knowledge, provides the first numerical analysis of temporal semi-discretizations for an infinite-dimensional semilinear BSDE. • Our analysis, compared with most of the numerical analysis of the finitedimensional BSDEs, neither requires the terminal value to be generated by a forward stochastic evolution equation, nor requires the coefficient to be deterministic. In addition, it requires only some reasonable regularity assumptions on the data, and imposes no regularity restriction on the solution. • In the literature, the numerical analysis of the stochastic optimal control problems governed by the SPDEs is very limited; see [9,28,38,37,45]. Our analysis for the temporal semi-discretization of the general stochastic linear quadratic control problem, as far as we know, appears to be the first numerical analysis of such kinds of problems where the noise is multiplicative and the diffusion term contains the control variable. The rest of this paper is organized as follows. Section 2 introduces some preliminaries. Section 3 gives three temporal semi-discretizations and their error estimates. Section 4 applies the third temporal semi-discretization to a stochastic linear quadratic control problem. Finally, Section 5 concludes this paper. Preliminaries Let (Ω, F , P) be a given complete probability space, on which a one-dimensional Brownian motion W (·) is defined. 
Let F := {F t | t 0} be the filtration generated by W (·) and augmented by the P-null sets of F . We use E to denote the expectation and use E t to denote the conditional expectation with respect to F t for each t 0. For any separable Hilbert space X with norm · X , we write the Hilbert space L 2 (Ω, F T , P; X) as L 2 (Ω; X), and use |||·||| X to denote its norm. Moreover, define L 2 F (0, T ; X) := ϕ : [0, T ] × Ω → X | ϕ is F-progressively measurable and T 0 |||ϕ(t)||| 2 X dt < ∞ , and let L 2 F (Ω; C([0, T ]; X)) be the space of all F-progressively measurable processes ϕ with continuous paths in X such that |||ϕ||| C([0,T ];X) := E sup t∈[0,T ] ϕ(t) 2 X 1/2 < ∞. The space L 2 F (Ω; C([0, T ]; X)) is a Banach space with respect to the above norm |||·||| C([0,T ];X) . Let H be a real separable Hilbert space with inner product (·, ·) H . Assume that A : Domain(A) ⊂ H → H is a linear operator satisfying the following properties: • A is self-adjoint, i.e., (Av, w) H = (v, Aw) H for all v, w ∈ Domain(A); • A is surjective, and there exists a positive constant δ such that (−Av, v) H δ v 2 H for all v ∈ Domain(A); • Domain(A) is dense in H, and Domain(A), equipped with the norm A· H , is compactly embedded into H. It is evident that A will generate an analytic contractive semigroup {e tA | t 0} on H. For each 0 γ 1, define H γ := {(−A) −γ v | v ∈ H} and endow this space with the norm v H γ := (−A) γ v H ∀v ∈ H γ . In the sequel, we will use [·, ·] to denote the usual inner product of the Hilbert space L 2 (Ω; H). For any two Banach spaces B 1 and B 2 , L(B 1 , B 2 ) is the space of all bounded linear operators from B 1 to B 2 , and L(B 1 , B 1 ) is abbreviated to L(B 1 ). We denote by I the identity mapping. Finally, for the data f and p T in equation (1), we make the following assumptions. Hypothesis 2.1. We assume that (i)-(iv) hold: (i) The functional f : [0, T ] × Ω × H × H → H satisfies that f (·, v, w) ∈ L 2 F (0, T ; H) for all v, w ∈ H. (ii) There exists a positive constant C L such that, P almost surely for almost every t ∈ [0, T ], f (t, p 1 , z 1 ) − f (t, p 2 , z 2 ) H C L ( p 1 − p 2 H + z 1 − z 2 H )(2) for all p 1 , p 2 , z 1 , z 2 ∈ H. (iii) p T ∈ L 2 (Ω; H 1/2 ). Under the above hypothesis, equation (1) admits a unique mild solution (p, z), and (p, z) ∈ L 2 F (Ω; C([0, T ]; H 1/2 )) ∩ L 2 F (0, T ; H 1 ) × L 2 F (0, T ; H 1/2 ).(3) Three temporal semi-discretizations Let J be a positive integer and define t j := jτ for each 0 j J, where τ := T /J. Define X τ := V : [0, T ] × Ω → H | V (t j ) ∈ L 2 (Ω, F tj , P; H) and V is constant on [t j , t j+1 ) for each 0 j < J . For any V ∈ X τ , we denote V (t j ), 0 j J, by V j for convenience. For each 0 j < J, define δW j := W (t j+1 ) − W (t j ), and define I j τ : L 2 (Ω; H) → L 2 (Ω, F tj , P; H) by I j τ v := 1 τ E tj (vδW j ) ∀v ∈ L 2 (Ω; H).(4) We also let P τ be the L 2 (Ω; L 2 (0, T ; H))-orthogonal projection onto X τ ; more concretely, for any v ∈ L 2 (Ω; L 2 (0, T ; H)), (P τ v)(t) := 1 τ E tj tj+1 tj v(s) ds for all t j t < t j+1 with 0 j < J. In the rest of this paper, c denotes a generic positive constant, independent of τ , and its value may differ in different places. Now we present three temporal semi-discretizations of equation (1). 
The first semi-discretization seeks (P, Z) ∈ X τ × X τ by                P J = p T ,(5a)Z j = I j τ P j+1 + tj+1 tj f (t, P j+1 , Z j ) dt , 0 j < J,(5b)P j − E tj P j+1 = τ AP j + E tj tj+1 tj f (t, P j+1 , Z j ) dt, 0 j < J.(5c) The second semi-discretization seeks (P, Z) ∈ X τ × X τ by          P J = p T , (6a) Z j = I j τ P j+1 , 0 j < J,(6b)P j − E tj P j+1 = τ AP j + E tj tj+1 tj f (t, P j+1 , Z j ) dt, 0 j < J.(6c) The third semi-discretization seeks (P, Z) ∈ X τ × L 2 F (0, T ; H) by      P J = p T ,(7a)P j −P j+1 = τ AP j + tj+1 tj f (t, P j+1 , Z(t))dt− tj+1 tj Z(t) dW (t), 0 j < J. (7b) The main results of this section are the following three theorems. Theorem 3.1. Assume that Hypothesis 2.1 holds and τ < 1/C 2 L . Let (p, z) and (P, Z) be the solutions of (1) and (5), respectively. Then |||p(t j ) − P j ||| H + |||z − Z||| L 2 (0,T ;H) c τ 1/2 + |||(I − P τ )z||| L 2 (0,T ;H) .(8) Theorem 3.2. Assume that Hypothesis 2.1 holds. Let (p, z) and (P, Z) be the solutions of (1) and (6), respectively. Then the error estimate (8) still holds. Theorem 3.3. Let (p, z) and (P, Z) be the solutions of (1) and (7), respectively. Then, under the conditions of Theorem 3.1, we have max 0 j<J |||p(t j ) − P j ||| H + |||z − Z||| L 2 (0,T ;H) cτ 1/2 .(9) We only provide a complete proof of Theorem 3.1, since the proofs of Theorems 3.2 and 3.3 are similar (see Remark 3.4). To this end, we proceed as follows. Preliminary results We present some standard estimates as follows. For any 0 < t T and 0 β γ 1, we have (see, e.g., [35,Theorem 6 .13, Chapter 2]) e tA L(H β ,H γ ) ct β−γ ,(10)I − e tA L(H γ ,H β ) ct γ−β .(11) By [39, Theorem 7.3] we have, for any 0 β 1, e mτ A − (I − τ A) −m L(H β ,H) cτ β m β−1 ∀m > 0.(12) For any v ∈ H 1/2 and g ∈ L 2 (0, T ; H), we have the following estimates: J−1 j=0 w j − e (T −tj )A v − T tj e (s−tj )A g(t) dt 2 H cτ v 2 H 1/2 + g 2 L 2 (0,T ;H)(13) and, for any 0 j < J, max j k<J w k H 1/2 + J−1 k=j τ w k 2 H 1 1/2 c v H 1/2 + g L 2 (tj ,T ;H) ,(14) where {w j } J−1 j=0 is defined by w j := (I − τ A) −(J−j) v + J−1 k=j (I − τ A) −(k−j+1) t k+1 t k g(t) dt ∀0 j < J. In addition, for any v ∈ L 2 (Ω; H) and w ∈ L 2 (Ω, F tj , P; H) with 0 j < J, the following properties are easily verified by (4): I j τ w = 0 P-a.s.,(15)I − δW j I j τ (δW j w) = 0 P-a.s.,(16)v − δW j I j τ v, δW j w = 0,(17)v − δW j I j τ v 2 H + δW j I j τ v 2 H = |||v||| 2 H .(18) Remark 3.1. The estimates (13) and (14) Three temporal semi-discretizations of a backward linear stochastic evolution equation This subsection studies the convergence of three temporal semi-discretizations for the following backward linear stochastic evolution equation: dp(t) = −(Ap + g)(t)dt + z(t)dW (t), 0 t T, (19a) p(T ) = p T ,(19b) where g ∈ L 2 F (0, T ; H) and p T ∈ L 2 (Ω; H). The main results are the following three lemmas. Lemma 3.1. Assume that (p, z) is the solution of (19) with p T ∈ L 2 (Ω; H 1/2 ) and g ∈ L 2 F (0, T ; H). Define (P, Z) ∈ X τ × X τ by                P J = p T ,(20a)Z j = I j τ P j+1 + tj+1 tj g(t) dt , 0 j < J, (20b) P j − E tj P j+1 = τ AP j + E tj tj+1 tj g(t) dt, 0 j < J. (20c) Then the following estimates hold: for any 0 j < J, |||p(tj) − Pj||| H cτ 1/2 (J − j) −1/2 |||pT ||| H 1/2 + |||g||| L 2 (0,T ;H) ; (21) J −1 j=0 |||p − Pj+1||| 2 L 2 (t j ,t j+1 ;H) 1/2 cτ 1/2 |||pT ||| H 1/2 + |||g||| L 2 (0,T ;H)) ; (22) |||z − Z||| L 2 (0,T ;H) cτ 1/2 |||pT ||| H 1/2 + |||g||| L 2 (0,T ;H) + |||(I − Pτ )z||| L 2 (0,T ;H) .(23)Lemma 3.2. 
Define (P, Z) ∈ X τ × X τ by          P J = p T , Z j = I j τ P j+1 , 0 j < J, P j − E tj P j+1 = τ AP j + E tj tj+1 tj g(t) dt, 0 j < J. Then the three estimates in Lemma 3.1 still hold, under the conditions of Lemma 3.1. Lemma 3.3. Define (P, Z) ∈ X τ × L 2 F (0, T ; H) by    P J = p T , P j − P j+1 = τ AP j + tj+1 tj g(t) dt − tj+1 tj Z(t) dW (t), 0 j < J. Then, under the conditions of Lemma 3.1, the error estimates (21) and (22) in Lemma 3.1 still hold, and |||z − Z||| L 2 (0,T ;H) cτ 1/2 |||p T ||| H 1/2 + |||g||| L 2 (0,T ;H) .(26) Since the proofs of Lemmas 3.2 and 3.3 are similar to (and simpler than) that of Lemma 3.1, we only prove the latter. To this end, we first present some standard properties of the solution (p, z) to equation (19) as follows: • for any 0 t T , we have p(t) = E t e (T −t)A p T + T t e (r−t)A g(r) dr P-a.s.;(27) • for any 0 s t T , we have p(s) − p(t) = t s (Ap + g)(r) dr − t s z(r) dW (r) P-a.s.,(28)p(s) − e (t−s)A p(t) = t s e (r−s)A g(r) dr − t s e (r−s)A z(r) dW (r) P-a.s.; (29) • for p T ∈ L 2 (Ω; H 1/2 ) and g ∈ L 2 F (0, T ; H), we have |||p||| L 2 (0,T ;H 1 ) + |||z||| L 2 (0,T ;H 1/2 ) c |||p T ||| H 1/2 + |||g||| L 2 (0,T ;H) . (30) Remark 3.2. The above properties are standard and easily verified by the Galerkin method and the basic properties of the finite-dimensional BSDEs (see, e.g., [34,Chapter 5]). Then we present two technical lemmas, which can be proved by straightforward calculations. Lemma 3.4. For any 0 j < J, J−1 k=j t k+1 t k e (t−tj)A − (I − τ A) −(k−j+1) 2 L(H) dt cτ.(31) Proof. We have J−1 k=j t k+1 t k e (t−tj)A − (I − τ A) −(k−j+1) 2 L(H) dt = J−1 k=j t k+1 t k e (t−tj)A − e (t k+1 −tj )A + e (t k+1 −tj )A − (I − τ A) −(k−j+1) 2 L(H) dt 2 J−1 k=j t k+1 t k e (t−tj)A − e (t k+1 −tj )A 2 L(H) dt + 2 J−1 k=j t k+1 t k e (t k+1 −tj )A − (I − τ A) −(k−j+1) 2 L(H) dt =:I 1 + I 2 . For I 1 we have I 1 = 2 J−1 k=j t k+1 t k (I − e (t k+1 −t)A )e (t−tj )A 2 L(H) dt 2 tj+1 tj I − e (tj+1−t)A 2 L(H) e (t−tj)A 2 L(H) dt + 2 J−1 k=j+1 t k+1 t k I − e (t k+1 −t)A 2 L(H 1 ,H) e (t−tj)A 2 L(H,H 1 ) dt cτ, by the following two estimates: (10) and (11)) (10) and (11)) tj+1 tj I − e (tj+1−t)A 2 L(H) e (t−tj)A 2 L(H) dt c tj+1 tj dt (bycτ and J−1 k=j+1 t k+1 t k I − e (t k+1 −t)A 2 L(H 1 ,H) e (t−tj)A 2 L(H,H 1 ) dt c J−1 k=j+1 t k+1 t k (t k+1 − t) 2 (t − t j ) −2 dt (bycτ 2 J−1 k=j+1 t k+1 t k (t − t j ) −2 dt cτ. For I 2 , by (12) we obtain I 2 c J−1 k=j τ (k − j + 1) −2 cτ. Combining the above estimates of I 1 and I 2 yields (31) and thus completes the proof. Lemma 3.5. Let (p, z) be the solution to equation (19) with g ∈ L 2 F (0, T ; H) and p T ∈ L 2 (Ω; H 1/2 ). Then J−1 k=0 t k+1 t k |||p(t)||| H 1 dt 2 cτ |||p T ||| 2 H 1/2 + |||g||| 2 L 2 (0,T ;H) .(32) Proof. Let η(t) := e (T −t)A p T + T t e (s−t)A g(s) ds ∀0 t T. It is standard that |||η||| L 2 (0,T ;H 1 ) c |||p T ||| H 1/2 + |||g||| L 2 (0,T ;H) .(33) By (27) we have, for any 0 t < T , |||p(t)||| H 1 = |||E t η(t)||| H 1 |||η(t)||| H 1 , so that J−1 k=0 t k+1 t k |||p(t)||| H 1 dt 2 J−1 k=0 τ t k+1 t k |||p(t)||| 2 H 1 dt J−1 k=0 τ t k+1 t k |||η(t)||| 2 H 1 dt = τ |||η||| 2 L 2 (0,T ;H 1 ) , which, together with (33), proves the desired estimate (32). Finally, we are in a position to prove Lemma 3.1 as follows. Proof of Lemma 3.1. Firstly, let us prove (21). Let 0 j < J be arbitrary but fixed. 
From (20), it is easily verified that P j = E tj (I − τ A) −(J−j) p T + J−1 k=j t k+1 t k (I − τ A) −(k−j+1) g(t) dt P-a.s.(34) Hence, by (27) we obtain p(t j ) − P j = I 1 + I 2 P-a.s., where I 1 := E tj e (T −tj )A − (I − τ A) −(J−j) p T , I 2 := E tj J−1 k=j t k+1 t k e (t−tj)A − (I − τ A) −(k−j+1) g(t) dt . For I 1 we have (12)). |||I 1 ||| H = E tj e (T −tj )A − (I − τ A) −(J−j) p T H e (T −tj )A − (I − τ A) −(J−j) p T H e (T −tj)A − (I − τ A) −(J−j) L(H 1/2 ,H) |||p T ||| H 1/2 cτ 1/2 (J − j) −1/2 |||p T ||| H 1/2 (by For Using the above estimate and (21) yields J−1 j=0 |||p − P j+1 ||| 2 L 2 (tj ,tj+1;H) = J−1 j=0 |||p − p(t j+1 ) + p(t j+1 ) − P j+1 ||| 2 L 2 (tj ,tj+1;H) 2 J−1 j=0 |||p − p(t j+1 )||| 2 L 2 (tj ,tj+1;H) + 2 J−1 j=0 |||p(t j+1 ) − P j+1 ||| 2 L 2 (tj ,tj+1;H) cτ |||p T ||| 2 H 1/2 + |||g||| 2 L 2 (0,T ;H) , which implies the desired estimate (22). Thirdly, let us prove (23). Fix 0 j < J. By (28) we have p(t j+1 )+ tj+1 tj g(t) dt = p(t j )− tj+1 tj Ap(t) dt+ tj+1 tj z(t) dW (t) P-a.s.,(36) so that, P-a.s., (I − Et j − δWjI j τ ) p(tj+1) + t j+1 t j g(t) dt = (I − Et j − δWjI j τ ) p(tj) − t j+1 t j Ap(t) dt + t j+1 t j z(t) dW (t) = − δWjI j τ p(tj) − (I − Et j − δWjI j τ ) t j+1 t j Ap(t) dt + (I − δWjI j τ ) t j+1 t j z(t) dW (t) = − (I − Et j − δWjI j τ ) It follows that I − E tj − δW j I j τ p(t j+1 ) + tj+1 tj g(t) dt + (I − E tj ) tj+1 tj Ap(t) dt = δW j I j τ tj+1 tj Ap(t) dt + (I − δW j I j τ ) tj+1 tj z(t) dW (t) P-a.s., which further implies (I − Et j ) p(tj+1) + t j+1 t j (Ap + g)(t) dt = δWjI j τ p(tj+1) + t j+1 t j (Ap + g)(t) dt + (I − δWjI j τ ) t j+1 t j z(t) dW (t) P-a.s.(37) By (36) we also have tj+1 tj z(t) dW (t) = (I − E tj ) p(t j+1 ) − p(t j ) + tj+1 tj (Ap + g)(t) dt = (I − E tj ) p(t j+1 ) + tj+1 tj (Ap + g)(t) dt P-a.s., which, together with (20b) and (16), implies P-a.s. tj+1 tj (z − Z)(t) dW (t) = (I − E tj ) p(t j+1 ) + tj+1 tj (Ap + g)(t) dt − δW j I j τ P j+1 + tj+1 tj g(t) dt = δW j I j τ p(t j+1 )−P j+1 + tj+1 tj Ap(t)dt + (I −δW j I j τ ) tj+1 tj z(t)dW (t) (by (37)) = δW j I j τ p(t j+1 )−P j+1 + tj+1 tj Ap(t)dt +(I −δW j I j τ ) tj+1 tj (z −P τ z)(t)dW (t). Hence, t j+1 t j (z − Z)(t) dW (t) 2 H = δWjI j τ p(tj+1)−Pj+1 + t j+1 t j Ap(t)dt 2 H + (I −δWjI j τ ) t j+1 t j (z−Pτ z)(t)dW (t) 2 H p(tj+1) − Pj+1 + t j+1 t j Ap(t) dt 2 H + t j+1 t j (z − Pτ z)(t) dW (t) 2 H (by (18)) = p(tj+1) − Pj+1 + t j+1 t j Ap(t) dt 2 H + |||(I − Pτ )z||| 2 L 2 (t j ,t j+1 ;H) 2 |||p(tj+1) − Pj+1||| 2 H + 2 t j+1 t j Ap(t) dt 2 H + |||(I − Pτ )z||| 2 L 2 (t j ,t j+1 ;H) , where we have used the property (17) in the first equality. Since 0 j < J is arbitrary, summing over j from 0 to J − 1 leads to J −1 j=0 t j+1 t j (z − Z)(t) dW (t) 2 H 2 J −1 j=0 |||p(tj+1) − Pj+1||| 2 H + 2 J −1 j=0 t j+1 t j Ap(t) dt 2 H + |||(I − Pτ )z||| 2 L 2 (0,T ;H) 2 J −1 j=0 |||p(tj+1) − Pj+1||| 2 H + 2 J −1 j=0 t j+1 t j |||Ap(t)||| H dt 2 + |||(I − Pτ )z||| 2 L 2 (0,T ;H) = 2 J −1 j=0 |||p(tj+1) − Pj+1||| 2 H + 2 J −1 j=0 t j+1 t j |||p(t)||| H 1 dt 2 + |||(I − Pτ )z||| 2 L 2 (0,T ;H) . which, together with the equality J−1 j=0 tj+1 tj (z − Z)(t) dW (t) 2 H = J−1 j=0 |||z − Z||| 2 L 2 (tj ,tj+1;H) = |||z − Z||| 2 L 2 (0,T ;H) , implies |||z − Z||| 2 L 2 (0,T ;H) 2 J−1 j=0 |||p(t j+1 ) − P j+1 ||| 2 H + 2 j−1 Finally, combining (38), (39), and (32) proves (23) and thus concludes the proof of Lemma 3.1. Proof of Theorem 3.1 For any 0 j < J, since (18) implies τ 1/2 I j τ v H = δW j I j τ v H |||v||| H ∀v ∈ L 2 (Ω; H), we obtain I j τ L(L 2 (Ω;H)) τ −1/2 ∀0 j < J. 
By the above estimate, (2) and the condition τ < 1/C 2 L , a straightforward contraction argument proves that the temporal semi-discretization (5) admits a unique solution (P, Z). In the sequel, we will assume that τ is sufficiently small; otherwise, the error estimate (8) is evident. We split the rest of the proof into the following four steps. Step 1. We present some preliminary notations and estimates. Let M := τ + |||(I − P τ )z||| 2 L 2 (0,T ;H) .(40)Define ( P , Z) ∈ X τ × X τ by                P J = p T ,(41a)Z j = I j τ P j+1 + tj+1 tj f (t, p(t), z(t)) dt , 0 j < J,(41b)P j − E tj P j+1 = τ A P j + E tj tj+1 tj f (t, p(t), z(t)) dt, 0 j < J. (41c) In view of p T ∈ H 1/2 and the fact f (·, p(·), z(·)) ∈ L 2 F (0, T ; H), by Lemma 3.1 we obtain max 0 k<J p(t k ) − P k H + J−1 k=0 p − P k+1 2 L 2 (t k ,t k+1 ;H) 1/2 + z − Z L 2 (0,T ;H) cM 1/2 .(42) Letting E P := P − P and E Z := Z − Z, from (5) and (41) we conclude that                E P J = 0, (43a) E Z j = I j τ E P j+1 + t j+1 t j G(t, E P j+1 , E Z j ) dt , 0 j < J,(43b)E P j − Et j E P j+1 = τ AE P j + Et j t j+1 t j G(t, E P j+1 , E Z j ) dt, 0 j < J,(43c) where G(t, E P j+1 , E Z j ) := f (t, E P j+1 + P j+1 , E Z j + Z j ) − f (t, p(t), z(t))(44) for all t j t < t j+1 with 0 j < J. We have, for any 0 k < J, t k+1 t k G(t, E P k+1 , E Z k ) 2 H dt = t k+1 t k f (t, E P k+1 + P k+1 , E Z k + Z k ) − f (t, p(t), z(t)) 2 H dt (by (44)) c t k+1 t k E P k+1 + P k+1 − p(t) 2 H + E Z k + Z k − z(t) 2 H dt (by (2)) c τ E P k+1 2 H + E Z 2 L 2 (t k ,t k+1 ;H) + p− P k+1 2 L 2 (t k ,t k+1 ;H) + z− Z 2 L 2 (t k ,t k+1 ;H) . Hence, for each 0 j < J, J −1 k=j t k+1 t k G(t, E P k+1 , E Z k ) 2 H dt c J −1 k=j τ E P k+1 2 H + E Z 2 L 2 (t j ,T ;H) + J −1 k=j p− P k+1 2 L 2 (t k ,t k+1 ;H) + z− Z 2 L 2 (0,T ;H) , which, together with the fact E P J = 0 and (42), leads to J −1 k=j t k+1 t k G(t, E P k+1 , E Z k ) 2 H dt c E P 2 L 2 (t j ,T ;H) + E Z 2 L 2 (t j ,T ;H) + M .(45) Step 2. Let us prove that, for any 0 j < J, E Z L 2 (tj ,T ;H) c M 1/2 + E P J H 1/2 + E P L 2 (tj ,T ;H) + T − t j E Z L 2 (tj ,T ;H) .(46) For each 0 j < J, define η j := E P j+1 − E P j + τ AE P j + tj+1 tj G(t, E P j+1 , E Z j ) dt.(47) Using (17), (43b) and the fact τ AE P j − E P j , E Z j δW j = 0, we obtain η j − E Z j δW j , E Z j δW j = 0 for all 0 j < J. For any 0 k = j < J, since (43c) implies E tj η j = 0 P-a.s., it is easily verified that η j − E Z j δW j , E Z k δW k = 0. Consequently, J−1 k=j η k − J−1 k=j E Z k δW k , J−1 k=j E Z k δW k = 0 ∀0 j < J. 
It follows that, for any 0 j < J, J−1 k=j E Z k δW k 2 H = J−1 k=j η k , J−1 k=j E Z k δW k = E P J − E P j + J−1 k=j τ AE P k + J−1 k=j t k+1 t k G(t, E P k+1 , E Z k ) dt, J−1 k=j E Z k δW k (by (47)) = E P J + J−1 k=j τ AE P k + J−1 k=j t k+1 t k G(t, E P k+1 , E Z k ) dt, J−1 k=j E Z k δW k E P J H +τ J−1 k=j E P k H 1 + J−1 k=j t k+1 t k G(t, E P k+1 , E Z k ) H dt J−1 k=j E Z k δW k H , which, together with the identity J−1 k=j E Z k δW k 2 H = J−1 k=j E Z k δW k 2 H = J−1 k=j τ E Z k 2 H = E Z 2 L 2 (tj ,T ;H) , implies E Z L 2 (t j ,T ;H) E P J H + τ J −1 k=j E P k H 1 + J −1 k=j t k+1 t k G(t, E P k+1 , E Z k ) H dt E P J H + T − tj E P L 2 (t j ,T ;H 1 ) + T − tj J −1 k=j t k+1 t k G(t, E P k+1 , E Z k ) 2 H dt 1/2 .(48) For any 0 j < J, it is easily verified by (43) that E P j = E tj (I − τ A) −(J−j) E P J + J−1 k=j t k+1 t k (I − τ A) −(k−j+1) G(t, E P k+1 , E Z k ) dt , and so using (14) gives max j k<J E P k 2 H 1/2 + E P 2 L 2 (t j ,T ;H 1 ) c E P J 2 H 1/2 + J −1 k=j t k+1 t k G(t, E P k+1 , E Z k ) 2 H dt .(49) Combining (48) and (49) yields, for any 0 j < J, E Z L 2 (t j ,T ;H) c E P J H 1/2 + c T − tj J −1 k=j t k+1 t k G(t, E P k+1 , E Z k ) 2 H dt 1/2 , so that from (45) we conclude the desired estimate (46). Step 3. Let c * be a particular constant c in the inequality (46), and set j * := min 0 j < J | c * T − t j 1/2 . From (46) it follows that E Z 2 L 2 (tj ,T ;H) c E P J 2 H 1/2 + E P 2 L 2 (tj ,T ;H) + M ∀j * j < J,(50) and so by (45) and (49) we infer that E P j 2 H 1/2 c E P J 2 H 1/2 + E P 2 L 2 (tj ,T ;H) + M ∀j * j < J. Since H 1/2 is continuously embedded into H, we then obtain E P j 2 H 1/2 c E P J 2 H 1/2 + E P 2 L 2 (tj ,T ;H 1/2 ) + M ∀j * j < J, and therefore using the discrete Gronwall's inequality yields max j * j J E P j 2 H 1/2 c E P J 2 H 1/2 + M , which, together with (50), leads to max j * j<J E P j H 1/2 + E Z L 2 (t j * ,T ;H) c E P J H 1/2 + cM 1/2 . Hence, by the estimate E P J H 1/2 cM 1/2 (in fact E P J = 0), we obtain max j * j<J E P j H 1/2 + E Z L 2 (t j * ,T ;H) cM 1/2 .(51) Step 4. Note that J/(J − j * ) is independent of τ . Repeating the argument in Steps 2 and 3 several times (not greater than J/(J − j * )) proves max 0 j<J E P j H 1/2 + E Z L 2 (0,T ;H) cM 1/2 ,(52) which, together with (42) and the fact that H 1/2 is continuously embedded into H, yields the desired estimate (8). This completes the proof of Theorem 3.1. Remark 3.3. Assume that (P, Z) is the solution to (5) and that f satisfies (i) and (ii) in Hypothesis 2.1. Using the techniques in the proof of Theorem 3.1, we can easily obtain the following stability estimate: max 0 j J |||P j ||| H 1/2 + |||Z||| L 2 (0,T ;H) c |||p T ||| H 1/2 + |||f (·, 0, 0)||| L 2 (0,T ;H) , provided that p T ∈ H 1/2 . Moreover, we can use the estimate (52) and the stability estimate of P to further derive the stability estimate of P for p T ∈ H. subject to the state equation dy(t) = (Ay + α 0 y + α 1 u)(t) dt + (α 2 y + α 3 u)(t) dW (t), 0 t T, y(0) = 0,(54) where 0 < ν < ∞, y d ∈ L 2 F (0, T ; H) and α 0 , α 1 , α 2 , α 3 ∈ L 2 F (0, T ; R) ∩ L ∞ (Ω × (0, T )). It is standard that problem (53) admits a unique solutionū. 
Letȳ be the state with respect to the controlū, and let (p,z) be the solution of the backward stochastic evolution equation dp(t) = −(Ap + α 0p +ȳ − y d + α 2z )(t) dt +z(t) dW (t), 0 t T, p(T ) = 0.(55) Applying the celebrated Itô's formula to [y(·),p(·)] yields T 0 (ȳ − y d )(t), y(t) dt = T 0 (α 1p + α 3z )(t), u(t) dt for all u ∈ L 2 F (0, T ; H), where y is the state with respect to the control u. Using the above equality, we readily conclude the first-order optimality condition of problem (53): ū = −ν −1 (α 1p + α 3z ).(56) Noting that (p,z) is the solution to (55), we have (p,z) ∈ L 2 F (Ω; C([0, T ]; H 1/2 )) ∩ L 2 F (0, T ; H 1 ) × L 2 F (0, T ; H 1/2 ),(57) and so by (56) we getū ∈ L 2 F (0, T ; H 1/2 ). Sinceȳ is the state with respect to the controlū, we then obtain y ∈ L 2 F (Ω; C([0, T ]; H 1/2 )) ∩ L 2 F (0, T ; H 1 ).(58) Remark 4.1. The first-order optimality condition (56) follows from [2,3]. For the theoretical analysis of the stochastic linear quadratic control problems in infinite dimensions, we refer the reader to [30] and the references therein. Remark 4.2. The regularity results (57) and (58) are straightforward by the Galerkin method and the standard theory of the stochastic differential equations and the backward stochastic differential equations (see [34,Chapters 3 and 5]). Temporally semi-discrete problem The temporally semi-discrete problem reads as follows: min U∈Xτ 1 2 |||Y − y d ||| 2 L 2 (0,T ;H) + ν 2 |||U ||| 2 L 2 (0,T ;H) ,(59) subject to the discrete state equation      Y j+1 − Y j = τ AY j+1 + tj+1 tj (α 0 Y + α 1 U )(t) dt + tj+1 tj (α 2 Y + α 3 U )(t) dW (t), 0 j < J, Y 0 = 0,(60) where Y ∈ X τ . The main result of this section is the following error estimate. Recently, Li and Xie [28] have analyzed a spatial semi-discretization for a stochastic linear quadratic control problem with general filtration. For a special case of problem (59), Li and Zhou [29] obtained the temporal accuracy O(τ 1/2 ) for rough data. For other related works, we refer the reader to [9,38,37,45]. The main task of the rest of this subsection is to prove the above theorem. To this end, we proceed as follows. For any v ∈ L 2 F (0, T ; H), we use S τ v to denote the solution to discretization (60) with U being replaced by v. A routine argument (see, e.g., [27,Theorem 3.14]) gives max 0 j J |||(S τ v) j ||| H c v L 2 (0,T ;H) .(62) For any P, Z ∈ X τ and g, v ∈ L 2 F (0, T ; H), define S (P, Z, g, v) := J −1 j=0 t j+1 t j (α1Pj+1 + α3Z)(t), v(t) dt − t j+1 t j (α0Pj+1 + g + α2Z)(t) dt, t j+1 t j (α2Sτ v + α3v)(t) dW (t) .(63) In the sequel we will always assume τ < 1 α 2 2 L ∞ (Ω×(0,T )) , to ensure that the later discretizations (64) and (69) each admit a unique solution (see the proof of Theorem 3.1). One form of the first-order optimality condition of problem (59) is as follows. P j −P j+1 = τ AP j + tj+1 tj α 0Pj+1 + S τŪ − y d + α 2Z (t) dt − tj+1 tjZ (t) dW (t), 0 j < J.(64b) Then ν T 0 [Ū (t), U (t)] dt + S (P ,Z, S τŪ − y d , U ) = 0 ∀U ∈ X τ .(65) Proof. Following the proof of [29,Lemma 4.19], we can easily obtain T 0 (S τŪ − y d )(t), (S τ v)(t) dt = S (P ,Z, S τŪ − y d , v)(66) for all v ∈ L 2 F (0, T ; H). By this equality, a straightforward calculation yields (65). Remark 4.4. Note that (64) is not a natural adjoint equation of the discrete state equation (60), and hence the first-order optimality condition (65) is unusual. 
We can also use the temporal semi-discretizations (5) and (6) to form the first-order optimality condition of problem (65); however, we observe that the temporal semi-discretization (7) appears to be more suitable for the numerical analysis of problem (59). Proof. Fix 0 j < J. By definition we have dȳ(t) = (Aȳ + α 0ȳ + α 1ū )(t) dt + (α 2ȳ + α 3ū )(t) dW (t), 0 t T, so that y(t)−ȳ(t j ) = t tj (Aȳ + α 0ȳ + α 1ū )(t) dt+ t tj (α 2ȳ + α 3ū )(t) dW (t), t j t T. It follows that for any t j t t j+1 , ȳ(t) −ȳ(t j ) 2 H 2 t tj (Aȳ + α 0ȳ + α 1ū )(t) dt 2 H + 2 t tj (α 2ȳ + α 3ū )(t) dW (t) 2 H = 2 t tj (Aȳ + α 0ȳ + α 1ū )(t) dt 2 H + 2 t tj |||(α 2ȳ + α 3ū )(t)||| 2 H dt 2(t − t j ) t tj |||(Aȳ + α 0ȳ + α 1ū )(t)||| 2 H dt + 2 t tj |||(α 2ȳ + α 3ū )(t)||| 2 H dt, which implies |||ȳ −ȳ(t j )||| 2 L 2 (tj ,cτ 1/2 .(68) We divide the rest of the proof into the following four steps. Step 1. Let (P, Z) ∈ X τ × L 2 F (0, T ; H) be the solution to the discretization      P J = 0, P j − P j+1 = τ AP j + tj+1 tj α 0 P j+1 +ȳ − y d + α 2 Z (t) dt − tj+1 tj Z(t) dW (t), 0 j < J.(69) In view of (58) and the fact y d ∈ L 2 F (0, T ; H), we can use Theorem 3.3 to conclude that max 0 j J |||p(t j ) − P j ||| H + |||z − Z||| L 2 (0,T ;H) cτ 1/2 ,(70) which, together with (68), yields J −1 j=0 |||p − Pj+1||| 2 L 2 (t j ,t j+1 ;H) 1/2 J −1 j=0 |||p −p(tj+1)||| 2 L 2 (t j ,t j+1 ;H) 1/2 + J −1 j=0 |||Pj+1 −p(tj+1)||| 2 L 2 (t j ,t j+1 ;H) 1/2 cτ 1/2 .(71) In addition, from (70) and (57) The basic idea is standard (see, e.g., [19,Theore 3.4]). We first present three equalities. Inserting v := P τū −Ū into (66) gives T 0 (S τŪ − y d )(t), (S τ (P τū −Ū ))(t) dt = S (P ,Z, S τŪ − y d , P τū −Ū ), (74) and similarly we have T 0 (ȳ − y d )(t), (S τ (P τū −Ū ))(t) dt = S (P, Z,ȳ − y d , P τū −Ū ). By definition, it is easily verified that S (P, Z,ȳ − y d , P τū −Ū ) − T 0 (α 1p + α 3z )(t), (ū −Ū )(t) dt = I 1 + I 2 + I 3 + I 4 . (76) Next, by (56) (by (67) and (62)). Step 3. Let us estimate I 1 , I 2 , I 3 and I 4 . For I 1 , by (71) we have Conclusions In this paper, we have analyzed three Euler type temporal semi-discretizations for a backward semilinear stochastic evolution equation with Lipschitz nonlinearity. With reasonable regularity assumptions on the data, we have established the convergence for the first two semi-discretizations and derived an explicit convergence rate for the third semi-discretization. In the numerical analysis, no regularity restriction has been imposed on the solution, the coefficient has not been necessarily deterministic, and the terminal value has not been necessarily generated by a forward stochastic evolution equation. We have applied the third temporal semi-discretization to a general stochastic linear quadratic control problem and established the convergence for a temporally semi-discrete approximation of the optimal control. + |||(I − P τ )z||| 2 L 2 (0,T ;H) . t j+1 ) − P j+1 ||| 2 H cτ |||p T ||| 2 H 1/2 + |||g||| 2 L 2 (0,T ;H) . Remark 3. 4 . 4Following the proof of Theorem 3.1, we can easily prove Theo- Theorem 4. 1 . 1Assume that y d ∈ L 2 F (0, T ; H). Letū andŪ be the solutions to problems (53) and (59), respectively. Then ū −Ū L 2 (0,T ;H) c τ 1/2 + (I − P τ )ū L 2 (0,T ;H) . (61) Remark 4.3. Lemma 4 . 1 . 41Assume thatŪ is the solution to problem (59). Let (P ,Z) ∈ X τ × L 2 F (0, T ; H) be the solution to the discretization Lemma 4 . 2 . 42Letū be the solution to (53), and letȳ be the state with respect toū. Then |||ȳ − S τū ||| L 2 (0,T ;H) cτ 1/2 . 
+ c |||(I − P τ )ū||| 2 L 2 (0,T ;H) + I 1 + I 2 + I 3 + I 4 , + α3z)(t), (ū − Pτū)(t) dt. L 2 21p + α 3z )(t), (ū −Ū )(t) dt, and inserting U := P τū −Ū into (t), (ū −Ū )(t) dt = S (P ,Z, S τŪ − y d , P τū −Ū ). 1p + α 3z )(t), (ū −Ū )(t) dt + S (P ,Z, S τŪ − y d , P τū −Ū ) = S (P, Z,ȳ − y d , P τū −Ū ) − T 0 (α 1p + α 3z )(t), (ū −Ū )(t) dt + S (P ,Z, S τŪ − y d , P τū −Ū ) − S (P, Z,ȳ − y d , P τū −Ū )= I 1 + I 2 + I 3 + I 4 + S (P ,Z, S τŪ − y d , P τū −Ū ) − S (P, Z,ȳ − y d , P τū −Ū ) (by (76)) = I 1 + I 2 + I 3 + I 4 + T 0 (S τŪ −ȳ)(t), (S τ (P τū −Ū ))(t) dt (by(74)and(75)).Hence, the desired estimate (73) follows from T 0 (S τŪ −ȳ)(t), (S τ (P τū −Ū ))τŪ −ȳ)(t), (S τ P τū −ȳ)(t) (0,T ;H) + |||S τ (I − P τ )ū|||2 L 2 (0,T ;H) cτ + c |||(I − P τ )ū||| 2 L 2 (0,T ;H) P τū −Ū L 2 (0,T ;H) cτ 1/2 P τū −Ū L 2 (0,T ;H) . tj+1;H)τ 2 |||Aȳ + α 0ȳ + α 1ū ||| L 2 (tj ,tj+1;H) + 2τ |||α 2ȳ + α 3ū ||| |||ȳ −ȳ(t j )||| 2 L 2 (tj ,tj+1;H) τ 2 |||Aȳ + α 0ȳ + α 1ū ||| 2 L 2 (0,T ;H) + 2τ |||α 2ȳ + α 3ū ||| 2 L 2 (0,T ;H) .By (58) and the factū ∈ L 2 F (0, T ; H), we then obtain|||ȳ(t j ) − (S τū ) j ||| H cτ 1/2 (see[27, Theorem 3.14]).This completes the proof.Finally, we are in a position to prove Theorem 4.1 as follows.Proof of Theorem 4.1. Letȳ be the state with respect to the controlū, and let (p,z) be the solution to equation (55). Similar to(35), we have2 2 L 2 (tj ,tj+1;H) . Hence, J−1 j=0 J−1 j=0 |||ȳ −ȳ(t j )||| 2 L 2 (tj ,tj+1;H) cτ, so that the desired estimate (67) follows from max 0 j<J J−1 j=0 |||p −p(t j+1 )||| 2 L 2 (tj ,tj+1;H) 1/2 J −1 k=j t k+1 t k e (t−t j )A − (I − τ A) −(k−j+1) g(t) dt H J −1 k=j t k+1 t k e (t−t j )A − (I − τ A) −(k−j+1) g(t) dt H J −1 k=j t k+1 t k (e (t−t j )A − (I − τ A) −(k−j+1) )g(t) H dt J −1 k=j t k+1 t k e (t−t j )A − (I − τ A) −(k−j+1) L(H) |||g(t)||| H dt J −1 k=j t k+1 t k e (t−t j )A − (I − τ A) −(k−j+1) 2 L(H) dt 1/2 |||g||| L 2 (t j ,T ;H) cτ 1/2 |||g||| L 2 (t j ,T ;H) (by Lemma 3.4).Combining the above estimates of I 1 and I 2 then yields(21). Secondly, let us prove(22). For any 0 j < J, by (28) we havep(t) − p(t j+1 ) = tj+1 t (Ap + g)(s) ds − tj+1 t z(s) dW (s), t j t < t j+1 , t j+1 t j Ap(t) dt + (I − δWjI j τ ) t j+1t j z(t) dW (t) (by(15)). For I 2 we have I 2 c |||z − Z||| L 2 (0,T ;H) P τū −Ū L 2 (0,T ;H) cτ 1/2 P τū −Ū L 2 (0,T ;H) (by (70)).For I 3 we have(72)).For I 4 , by (56) and the definition of P τ we haveStep 4. Combining (73) and the above estimates of I 1 , I 2 , I 3 and I 4 in Step 3, we conclude that We can then apply the Young's inequality with ε to obtain ν ū −Ū 2 L 2 (0,T ;H) cτ + c |||(I − P τ )ū||| 2 L 2 (0,T ;H) , which implies the desired estimate (61). This completes the proof of Theorem 4.1. Strong and weak divergence of exponential and linear-implicit Euler approximations for stochastic partial differential equations with superlinearly growing nonlinearities. M Beccari, M Hutzenthaler, A Jentzen, R Kurniawan, F Lindner, D Salimova, arXiv:1903.06066M. Beccari, M. Hutzenthaler, A. Jentzen, R. Kurniawan, F. Lindner, and D. Salimova. Strong and weak divergence of exponential and linear-implicit Euler approximations for stochastic partial differential equations with su- perlinearly growing nonlinearities. arXiv:1903.06066, 2019. Stochastic maximum principle for distributed parameter systems. A Bensoussan, J. Franklin Institute. 315A. Bensoussan. Stochastic maximum principle for distributed parameter systems. J. Franklin Institute, 315:387-406, 1983. Conjugate convex functions in optimal stochastic control. 
J.-M Bismut, J. Math. Anal. Appl. 44J.-M. Bismut. Conjugate convex functions in optimal stochastic control. J. Math. Anal. Appl., 44:384-404, 1973. Discrete-time approximation and Monte-Carlo simulation of backward stochastic differential equations. B Bouchard, N Touzi, Stoch. Process. Appl. 111B. Bouchard and N. Touzi. Discrete-time approximation and Monte-Carlo simulation of backward stochastic differential equations. Stoch. Process. Appl., 111:175-206, 2004. Approximating stochastic evolution equations with additive white and rough noises. Y Cao, J Hong, Z Liu, SIAM J. Numer. Anal. 55Y. Cao, J. Hong, and Z. Liu. Approximating stochastic evolution equations with additive white and rough noises. SIAM J. Numer. Anal., 55:1958- 1981, 2017. Linear multistep schemes for BSDEs. J.-F Chassagneux, SIAM J. Numer. Anal. 52J.-F. Chassagneux. Linear multistep schemes for BSDEs. SIAM J. Numer. Anal., 52:2815-2836, 2014. Strong and weak convergence rates of a spatial approximation for stochastic partial differential equation with one-sided Lipschitz coefficient. J Cui, J Hong, SIAM J. Numer. Anal. 57J. Cui and J. Hong. Strong and weak convergence rates of a spatial approx- imation for stochastic partial differential equation with one-sided Lipschitz coefficient. SIAM J. Numer. Anal., 57:1815-1841, 2019. Numerical approximation of some linear stochastic partial differential equations driven by special additive noises. Q Du, T Zhang, SIAM J. Numer. Anal. 40Q. Du and T. Zhang. Numerical approximation of some linear stochastic partial differential equations driven by special additive noises. SIAM J. Numer. Anal., 40:1421-1445, 2002. The forward-backward stochastic heat equation: numerical analysis and simulation. T Dunst, A Prohl, SIAM J. Sci. Comput. 38T. Dunst and A. Prohl. The forward-backward stochastic heat equation: numerical analysis and simulation. SIAM J. Sci. Comput., 38:A2725- A2755, 2016. Stochastic maximum principle for optimal control of SPDEs. M Fuhrman, Y Hu, G Tessitore, C. R. Acad. Sci. Paris, Ser. I. 350M. Fuhrman, Y. Hu, and G. Tessitore. Stochastic maximum principle for optimal control of SPDEs. C. R. Acad. Sci. Paris, Ser. I, 350:683-688, 2012. Stochastic maximum principle for optimal control of SPDEs. M Fuhrman, Y Hu, G Tessitore, Appl. Math. Optim. 68M. Fuhrman, Y. Hu, and G. Tessitore. Stochastic maximum principle for optimal control of SPDEs. Appl. Math. Optim., 68:181-217, 2013. Stochastic maximum principle for optimal control of a class of nonlinear SPDEs with dissipative drift. M Fuhrman, C Orrieri, SIAM J. Control Optim. 54M. Fuhrman and C. Orrieri. Stochastic maximum principle for optimal con- trol of a class of nonlinear SPDEs with dissipative drift. SIAM J. Control Optim., 54:341-371, 2016. Nonlinear Kolmogorov equations in infinite dimensional spaces: the backward stochastic differential equations approach and applications to optimal control. M Fuhrman, G Tessitore, Ann. Probab. 30M. Fuhrman and G. Tessitore. Nonlinear Kolmogorov equations in in- finite dimensional spaces: the backward stochastic differential equations approach and applications to optimal control. Ann. Probab., 30:1397-1465, 2002. Infinite horizon backward stochastic differential equations and elliptic equations in Hilbert spaces. M Fuhrman, G Tessitore, Ann. Probab. 32M. Fuhrman and G. Tessitore. Infinite horizon backward stochastic dif- ferential equations and elliptic equations in Hilbert spaces. Ann. Probab., 32:607-660, 2004. 
Stochastic maximum principle for SPDEs with noise and control on the boundary. G Guatteri, Syst. Control Lett. 60G. Guatteri. Stochastic maximum principle for SPDEs with noise and control on the boundary. Syst. Control Lett., 60:198-204, 2011. On the existence of optimal controls for SPDEs with boundary noise and boundary control. G Guatteri, F Masiero, SIAM J. Control Optim. 51G. Guatteri and F. Masiero. On the existence of optimal controls for SPDEs with boundary noise and boundary control. SIAM J. Control Optim., 51:1909-1939, 2013. On the backward stochastic Riccati equation in infinite dimensions. G Guatteri, G Tessitore, SIAM J. Control Optim. 44G. Guatteri and G. Tessitore. On the backward stochastic Riccati equation in infinite dimensions. SIAM J. Control Optim., 44:159-194, 2005. Well posedness of operator valued backward stochastic Riccati equations in infinite dimensional spaces. G Guatteri, G Tessitore, SIAM J. Control Optim. 52G. Guatteri and G. Tessitore. Well posedness of operator valued backward stochastic Riccati equations in infinite dimensional spaces. SIAM J. Control Optim., 52:3776-3806, 2014. Optimization with PDE Constraints. M Hinze, R Pinnau, M Ulbrich, S Ulbrich, SpringerNetherlandsM. Hinze, R. Pinnau, M. Ulbrich, and S. Ulbrich. Optimization with PDE Constraints. Springer, Netherlands, 2009. Malliavin calculus for backward stochastic differential equations and application to numerical solutions. Y Hu, D Nualart, X Song, Ann. Appl. Probab. 21Y. Hu, D. Nualart, and X. Song. Malliavin calculus for backward stochastic differential equations and application to numerical solutions. Ann. Appl. Probab., 21:2379-2423, 2011. Adapted solution of a backward semilinear stochastic evolution equation. Y Hu, S Peng, Stoch. Anal. Appl. 9Y. Hu and S. Peng. Adapted solution of a backward semilinear stochastic evolution equation. Stoch. Anal. Appl., 9:445-459, 1991. Numerical Approximations of Stochastic Differential Equations With Non-globally Lipschitz Continuous Coefficients. M Hutzenthaler, A Jentzen, Amer Mathematical SocietyM. Hutzenthaler and A. Jentzen. Numerical Approximations of Stochas- tic Differential Equations With Non-globally Lipschitz Continuous Coeffi- cients. Amer Mathematical Society, 2015. On a perturbation theory and on strong convergence rates for stochastic ordinary and partial differential equations with non-globally monotone coefficients. M Hutzenthaler, A Jentzen, Ann. Probab. 48M. Hutzenthaler and A. Jentzen. On a perturbation theory and on strong convergence rates for stochastic ordinary and partial differential equations with non-globally monotone coefficients. Ann. Probab., 48:53-93, 2020. Pathwise numerical approximation of SPDEs with additive noise under non-global Lipschitz coefficients. A Jentzen, Potential Anal. 31A. Jentzen. Pathwise numerical approximation of SPDEs with additive noise under non-global Lipschitz coefficients. Potential Anal., 31:375-404, 2009. A Milstein scheme for SPDEs. Found. A Jentzen, M Röckner, Comput. Math. 15A. Jentzen and M. Röckner. A Milstein scheme for SPDEs. Found. Comput. Math., 15:313-362, 2015. Backward stochastic differential equations in finance. N El Karoui, S Peng, M C Quenez, Math. Financ. 7N. El Karoui, S. Peng, and M. C. Quenez. Backward stochastic differential equations in finance. Math. Financ., 7:1-71, 1997. Strong and weak approximation of semilinear stochastic evolution equations. R Kruse, SpringerCham, SwitzerlandR. Kruse. Strong and weak approximation of semilinear stochastic evolution equations. 
Springer, Cham, Switzerland, 2014. Convergence of a spatial semi-discretization for a backward semilinear stochastic parabolic equation. B Li, X Xie, arXiv:2105.10130submittedB. Li and X. Xie. Convergence of a spatial semi-discretization for a backward semilinear stochastic parabolic equation. submitted, arXiv:2105.10130,2021. Discretization of a distributed optimal control problem with a stochastic parabolic equation driven by multiplicative noise. B Li, Q Zhou, J. Sci. Comput. 872021B. Li and Q. Zhou. Discretization of a distributed optimal control problem with a stochastic parabolic equation driven by multiplicative noise. J. Sci. Comput., 87, 2021. Mathematical control theory for stochastic partial differential equations. Q Lü, X Zhang, Springer2021ChamQ. Lü and X. Zhang. Mathematical control theory for stochastic partial differential equations. Springer, Cham, 2021. Solving forward-backward stochastic differential equations explicitly-a four step scheme. J Ma, P Protter, J M Yong, Probab. Theory Related Fields. 98J. Ma, P. Protter, and J.M. Yong. Solving forward-backward stochastic dif- ferential equations explicitly-a four step scheme. Probab. Theory Related Fields, 98:339-359, 1994. Forward-Backward Stochastic Differential Equations and Their Applications. J Ma, J Yong, SpringerBerlinJ. Ma and J. Yong. Forward-Backward Stochastic Differential Equations and Their Applications. Springer, Berlin, 1999. Adapted solution of a backward stochastic differential equation. E Pardoux, S Peng, Syst. Control Lett. 14E. Pardoux and S. Peng. Adapted solution of a backward stochastic differ- ential equation. Syst. Control Lett., 14:55-61, 1990. Stochastic differential equations, backward SDEs, partial differential equations. E Pardoux, A Rȃşcanu, SpringerChamE. Pardoux and A. Rȃşcanu. Stochastic differential equations, backward SDEs, partial differential equations. Springer, Cham, 2014. Semigroups of linear operators and applications to partial differential equations. A Pazy, SpringerNew YorkA. Pazy. Semigroups of linear operators and applications to partial differ- ential equations. Springer, New York, 1983. Backward stochastic differential equations and applications to optimal control. S Peng, Appl. Math. Optim. 27S. Peng. Backward stochastic differential equations and applications to optimal control. Appl. Math. Optim., 27:125-144, 1993. Strong error estimates for a space-time discretization of the linear quadratic control problem with the stochastic heat equation with linear noise. A Prohl, Y Wang, arXiv:2012.04418v1A. Prohl and Y. Wang. Strong error estimates for a space-time discretiza- tion of the linear quadratic control problem with the stochastic heat equa- tion with linear noise. arXiv:2012.04418v1, 2020. Strong rates of convergence for spacetime discretization of the backward stochastic heat equation, and of a linear-quadratic control problem for the stochastic heat equation. A Prohl, Y Wang, arXiv:2012.10117v1A. Prohl and Y. Wang. Strong rates of convergence for space- time discretization of the backward stochastic heat equation, and of a linear-quadratic control problem for the stochastic heat equation. arXiv:2012.10117v1, 2020. Galerkin Finite Element Methods for Parabolic Problems. V Thomée, SpringerBerlinV. Thomée. Galerkin Finite Element Methods for Parabolic Problems. Springer, Berlin, 2006. A semidiscrete Galerkin scheme for backward stochastic parabolic differential equations. Y Wang, Math. Control Relat. Fields. 6Y. Wang. 
A semidiscrete Galerkin scheme for backward stochastic parabolic differential equations. Math. Control Relat. Fields, 6:489-515, 2016. Galerkin finite element methods for stochastic parabolic partial differential equations. Y Yan, SIAM J. Numer. Anal. 43Y. Yan. Galerkin finite element methods for stochastic parabolic partial differential equations. SIAM J. Numer. Anal., 43:1363-1384, 2005. J Yong, X Y Zhou, Stochastic Controls, Hamiltonian Systems and HJB Equations. New YorkSpringerJ. Yong and X. Y. Zhou. Stochastic Controls, Hamiltonian Systems and HJB Equations, Applications of Mathematics. Springer, New York, 1999. A numerical scheme for BSDEs. J Zhang, Ann. Appl. Probab. 14J. Zhang. A numerical scheme for BSDEs. Ann. Appl. Probab., 14:459-488, 2004. A stable multistep scheme for solving backward stochastic differential equations. W Zhao, G Zhang, L Ju, SIAM J. Numer. Anal. 481W. Zhao, G. Zhang, and L. Ju. A stable multistep scheme for solving back- ward stochastic differential equations. SIAM J. Numer. Anal., 48(1):1369- 1394, 2010. Numerical analysis of a Neumann boundary control problem with a stochastic parabolic equation. Q Zhou, B Li, arXiv:2104.09443submittedQ. Zhou and B. Li. Numerical analysis of a Neumann bound- ary control problem with a stochastic parabolic equation. submitted, arXiv:2104.09443,2021.
[]
[ "New insights into the Be/X-ray binary system MXB 0656-072", "New insights into the Be/X-ray binary system MXB 0656-072" ]
[ "E Nespoli [email protected] \nObservatorio Astronómico de la Universidad de Valencia\nCalle Catedrático Jose Beltran\n\n\n46980Paterna, ValenciaSpain\n\nScience Operations Department\nEuropean Space Astronomy Centre (ESA/ESAC)\nVillanueva de la Cañada (Madrid)Spain\n", "P Reig \nFoundation for Research and Technology -Hellas, IESL, Voutes\n71110Heraklion, CreteGreece\n\nPhysics Department\nUniversity of Crete\n710 03Heraklion, CreteGreece\n", "A Zezas \nPhysics Department\nUniversity of Crete\n710 03Heraklion, CreteGreece\n" ]
[ "Observatorio Astronómico de la Universidad de Valencia\nCalle Catedrático Jose Beltran\n", "46980Paterna, ValenciaSpain", "Science Operations Department\nEuropean Space Astronomy Centre (ESA/ESAC)\nVillanueva de la Cañada (Madrid)Spain", "Foundation for Research and Technology -Hellas, IESL, Voutes\n71110Heraklion, CreteGreece", "Physics Department\nUniversity of Crete\n710 03Heraklion, CreteGreece", "Physics Department\nUniversity of Crete\n710 03Heraklion, CreteGreece" ]
[]
Context. The X-ray transient MXB 0656-072 is a poorly studied member of high-mass X-ray binaries. Based on the transient nature of the X-ray emission, the detection of pulsations, and the early-type companion, it has been classified as a Be X-ray binary (Be/XRB). However, the flaring activity covering a large fraction of a giant outburst is somehow peculiar. Aims. Our goal is to investigate the multiwavelength variability of the high-mass X-ray binary MXB 0656-072. Methods. We carried out optical spectroscopy and analysed all RXTE archive data, performing a detailed X-ray-colour, spectral, and timing analysis of both normal (type-I) and giant (type-II) outbursts from MXB 0656-072.Results. This is the first detailed analysis of the optical counterpart in the classification region (4000-5000 A). From the strength and ratio of the elements and ions, we derive an O9.5Ve spectral type, in agreement with previous classification. This confirms its Be nature. The characterisation of the Be/XRB system relies on Balmer lines in emission in the optical spectra, long-term X-ray variability, and the orbital period vs. spin period and EW(Hα) relation. The peculiar feature that distinguishes the type-II outburst is flaring activity, which occurs during the whole outburst peak, before a smoother decay. We interpret it in terms of magnetohydrodynamic instability. Colour and spectral analysis reveal a hardening of the spectrum as the flux increases. We explored the aperiodic X-ray variability of the system for the first time, finding a correlation of the central frequency and rms of the main timing component with luminosity, which extends up to a "saturation" flux of 1×10 −8 erg cm −2 s −1 . A correlation between timing and spectral parameters was also found, pointing to an interconnection between the two physical regions responsible for both phenomenologies.
10.1051/0004-6361/201219586
[ "https://arxiv.org/pdf/1205.2845v2.pdf" ]
55,001,163
1205.2845
7f6717d1b0b5a5cb89362cd5fc17650024f22107
New insights into the Be/X-ray binary system MXB 0656-072 8 Oct 2012 May 1, 2014 May 1, 2014 E Nespoli [email protected] Observatorio Astronómico de la Universidad de Valencia Calle Catedrático Jose Beltran 46980Paterna, ValenciaSpain Science Operations Department European Space Astronomy Centre (ESA/ESAC) Villanueva de la Cañada (Madrid)Spain P Reig Foundation for Research and Technology -Hellas, IESL, Voutes 71110Heraklion, CreteGreece Physics Department University of Crete 710 03Heraklion, CreteGreece A Zezas Physics Department University of Crete 710 03Heraklion, CreteGreece New insights into the Be/X-ray binary system MXB 0656-072 8 Oct 2012 May 1, 2014 May 1, 2014Astronomy & Astrophysics manuscript no. mxb0656˙v5 Preprint online version:X-rays: binaries -pulsars: individual: MXB 0656-072 Context. The X-ray transient MXB 0656-072 is a poorly studied member of high-mass X-ray binaries. Based on the transient nature of the X-ray emission, the detection of pulsations, and the early-type companion, it has been classified as a Be X-ray binary (Be/XRB). However, the flaring activity covering a large fraction of a giant outburst is somehow peculiar. Aims. Our goal is to investigate the multiwavelength variability of the high-mass X-ray binary MXB 0656-072. Methods. We carried out optical spectroscopy and analysed all RXTE archive data, performing a detailed X-ray-colour, spectral, and timing analysis of both normal (type-I) and giant (type-II) outbursts from MXB 0656-072.Results. This is the first detailed analysis of the optical counterpart in the classification region (4000-5000 A). From the strength and ratio of the elements and ions, we derive an O9.5Ve spectral type, in agreement with previous classification. This confirms its Be nature. The characterisation of the Be/XRB system relies on Balmer lines in emission in the optical spectra, long-term X-ray variability, and the orbital period vs. spin period and EW(Hα) relation. The peculiar feature that distinguishes the type-II outburst is flaring activity, which occurs during the whole outburst peak, before a smoother decay. We interpret it in terms of magnetohydrodynamic instability. Colour and spectral analysis reveal a hardening of the spectrum as the flux increases. We explored the aperiodic X-ray variability of the system for the first time, finding a correlation of the central frequency and rms of the main timing component with luminosity, which extends up to a "saturation" flux of 1×10 −8 erg cm −2 s −1 . A correlation between timing and spectral parameters was also found, pointing to an interconnection between the two physical regions responsible for both phenomenologies. Introduction Be/X-ray binaries (Be/XRBs) constitute a sub-class of highmass X-ray binaries (HMXBs) in which the companion is a Be star, i.e. a non-supergiant fast-rotating OB-star that during its life has shown at some point spectral lines in emission (see Reig 2011, for a recent review). They are also characterised by infrared excess, which means that they are brighter in the IR than their non-emitting counterparts of the same spectral type. Both phenomena, emission lines and IR excess, are thought to arise from a common cause, namely the presence of an extended circumstellar envelope around the stellar equator, made up of ionised gas that is expelled from the star in a way that is not yet completely understood. This complex scenario is referred to as the Be phenomenon (Porter & Rivinius 2003;Ekström et al. 2008). 
When a Be star is part of an X-ray binary, the system is usually transient, and the compact object is virtually always a pulsar, with typical spin periods ranging between 1-10 3 s. Be/XRBs are characterised by high variability on a wide range of both time scales (from seconds to years) and wavelengths, although the fastest variability is observed in the X-ray band. For longer periods, the variability is apparent in both high-energy and lowenergy wavelengths, and is attributed to major changes in the circumstellar disc structure. The complexity of the dynamics of Send offprint requests to: Elisa Nespoli the Be phenomenon and its relation with the accretion onto the compact object clearly require a multiwavelength approach in the study of these systems. Even if its phenomenology is entangled and multi-faceted, the long-term X-ray variability in Be/XRBs is traditionally described by a classification into two types of outbursts. Type-I (or normal) outbursts are periodic or quasi-periodic events, occurring in correspondence (or close) to the periastron passage of the neutron star. They are generally short, with a typical duration of 0.2-0.3 P orb , and show luminosities L X = 10 36 -10 37 erg s −1 . Type-II (or giant) outbursts are unpredictable, long (one or more orbital periods), and bright events, with typical X-ray luminosities of L X = 10 37 -10 38 erg s −1 , corresponding to up to the Eddington luminosity for a neutron star. The presence of quasiperiodic oscillations (QPOs) in some systems would support the suggestion of the formation of an accretion disc around the neutron star during type-II outbursts (see for instance Motch et al. 1991;Hayasaki & Okazaki 2004). The transient X-ray binary MXB 0656-072 was discovered by SAS-3 in September 1975, when a flux density of 80 mCrab was reported (Clark et al. 1975), and subsequently observed twice in 1976 by Ariel V at 50 (March 19) and 70 (March 27) mCrab, respectively (Kaluzienski 1976). These intensities would convert into an X-ray luminosity of ∼2-3 × 10 36 erg s −1 , assuming a distance of 3.9 kpc (McBride et al. 2006). Therefore, they would correspond to the typical X-ray luminosity range for type-I outbursts. Although the discovery of the source dates back to more than 35 years ago, very little is known about the system. MXB 0656-072 was only catalogued as an HMXB in 2003 after extended re-brightening, when its optical counterpart was identified and classified as an O9.7Ve star (Pakull et al. 2003), and a pulsed period of 160.7s detected (Morgan et al. 2003). The energy spectrum of the source showed a cyclotron resonant energy feature (CRSF) at a central energy of ∼33 keV (Heindl et al. 2003). This event was classified as a type-II outburst. McBride et al. (2006) monitored the source over the 2003 major outburst and studied the average X-ray spectrum during the peak of the outburst and the change in the spin period. They did a pulse phase-resolved analysis and found that the width of the CRSF varied with pulse phase, being wider during the pulse decline. However, the energy of the CRSF did not change with pulse phase. A cyclotron line at ∼ 33 keV implies a magnetic field strength of 3.7 × 10 12 G. They also present a timing analysis of the pulsations, including pulse profile dependence on energy and the changes in the spin period throughout the outburst. Recently, Yan et al. (2012) have presented a correlated optical/X-ray analysis of the system during type-I outbursts, after discovering a 101.2 days orbital period. 
In this paper we analyse all RXTE data available for MXB 0656-072, which include the major 2003 outburst and a series of type-I outbursts observed in 2007-2008. We performed X-ray colour, spectral, and timing analysis of the giant outburst, and colour and spectral analysis of the normal outbursts. We focus on the aperiodic variability and the study of the broad-band noise. Moreover, we present here additional optical spectroscopy that allows us to refine and robustly justify the spectral classification on one side, besides providing a long-term correlated Hα/X-ray follow up of the system. RXTE observations and data reduction RXTE followed the source during the first giant outburst observed since its discovery, starting in October 2003 for approximately three and a half months. (MJD 52931-53033). Renewed activity of the source was detected by the RXTE in November 2007, lasting for one year (MJD 54419-54776), with luminosities lower than in 2003. The 2007 X-ray variability consisted of a series of four (quasi-)periodic flares, which are reminiscent of type-I outbursts. Table 1 shows the observation log. The total net exposure amounted to 129.7 ks for the first event analysed here, and to 368.7 ks for the second one. We employed data from all the three instruments onboard RXTE (Bradt et al. 1993), the All-Sky Monitor (ASM), the Proportional Counter Array (PCA), and the High Energy Xray Timing Experiment (HEXTE). The ASM incorporates three wide-angle shadow cameras equipped with proportional counters with a total collecting area of 90 cm 2 . It works in the 2-10 keV energy range, mapping the 80% of the sky every 90 min-utes. The PCA consists of five proportional counter units (PCUs) with a total collecting area of ∼6250 cm 2 and operates in the 2-60 keV range, with a nominal energy resolution of 18% at 6 keV. The HEXTE comprises two clusters of four NaI/CsI scintillation counters, with a total collecting area of 2 × 800 cm 2 , sensitive in the 15-250 keV band with a nominal energy resolution of 15% at 60 keV. Both the PCA and the HEXTE have a maximum time resolution of ∼1µs Data reduction was performed using HEASOFT version 6.9. An energy spectrum was obtained for each pointing, after filtering out unsuitable data according to the recommended criteria 1 , employing Standard 2 mode data from the PCA (PCU2 only) and Standard (archive) mode from the HEXTE Cluster A, with a time resolution of 16s. The PCA and HEXTE spectra were extracted, background-subtracted, and dead-time corrected. For the PCA, the 3-30 keV energy range was retained, while the HEXTE provided a partially overlapping extension from 25 to 100 keV. In the case of the 2003 outburst, for each observation, the two resulting spectra were simultaneously fitted with XSPEC v. 12.6.0 (Arnaud 1996). For the 2007-2008 event, three average PCA spectra were extracted in three luminosity ranges (< 1 × 10 −9 , 1 − 2.5 × 10 −9 , and > 2.5 × 10 −9 erg cm 2 s −1 respectively) and fitted between 3-60 keV in order to constrain the CRSF. Then a spectrum for each pointing was fitted between 3-30 keV, fixing the CRSF parameters to the corresponding average ones. HEXTE 2007-2008 spectra were too faint to be employed. During the fitting, a systematic error of 0.6% was added to the PCA spectra. Power spectral density (PSD) was computed using PCA Event or Single Bit data. We first extracted, for each observation, a light curve in the energy range ∼3.5-17 keV (channels 8-39) with a time resolution of 2 −6 s. 
The light curve was then divided into 128-s segments, and a fast Fourier transform was computed for each segment. The final PSD was computed as the average of all the power spectra obtained for each segment. These averaged power spectra were logarithmically rebinned in frequency and corrected for dead time effects according to the prescriptions given in Nowak et al. (1999). Power spectra were normalised such that the integral over the PSD is equal to the squared fractional rms amplitude, according to the so-called rms-normalisation (Belloni & Hasinger 1990;Miyamoto et al. 1991). Optical observations Optical spectroscopic observations of the companion star to MXB 0656-072 were performed using the Fred Lawrence Whipple Observatory at Mt. Hopkins (Arizona, USA) and the 1.3-m telescope from the Skinakas observatory (Crete, Greece). Table 2 gives the log of the observations. The 1.3m telescope of the Skinakas Observatory (SKO) was equipped with a 2000 × 800 ISA SITe CCD and a 1302 l mm −1 grating (on 30 September 2010) and 2400 l mm −1 (on 8 November 2011), giving a nominal dispersion of ∼ 1 Å/pixel and ∼ 0.5 Å/pixel, respectively. We also observed MXB 0656-072 in queue mode with the 1.5-m telescope (FLWO) at Mt. Hopkins (Arizona) and the FAST-II spectrograph plus FAST3 CCD, a back-side illuminated 2688 × 512 UA STA520A chip with 15 µm pixels. Spectra of comparison lamps were taken before each exposure to account for small variations in the wavelength calibration during the night. To ensure a homogeneous processing of the spectra, all of them were normalised with respect to the local continuum, which was rectified to unity by employing a spline fit. We measured calibrated photometry of the optical counterpart with dedicated observations for the first time, performed from the 1.3-m telescope of the Skinakas Observatory on 2 November 2010 (JD 2,455,503.6). MXB 0656-072 was observed through the Johnson B, V, R, and I filters. The telescope was equipped with a 2048 × 2048 ANDOR CCD with a 13.5 µm pixel size. Standard stars from the Landolt list (Landolt 2009) were used for the transformation equations. Reduction of the data was carried out in the standard way using the IRAF tools for aperture photometry. The resulting magnitudes are: B = 13.25 ± 0.02 mag, V = 12.25 ± 0.02 mag, R = 11.63 ± 0.02 mag, and I = 10.97 ± 0.02 mag. Spectral class The only report of the spectral type of the massive companion in MXB 0656-072 is given by Pakull et al. (2003), who suggest an O9.7V spectral class. Blue-and red-end optical spectra covering the period 2005-2009 are also presented in Yan et al. (2012). Figure 1 shows the optical spectrum of MXB 0656-072 in the region 4000-4800 Å from the Skinakas observatory. This spectrum is the average of three spectra obtained with a total exposure time of 7200s each. The distinct presence of He II lines indicates an O-type star, while the presence of He I lines implies that the spectral type must be later than O8. Some He I lines, such as λ4713 and λ4921, appear (partially) in emission and cannot be separated out from the continuum. The ratio C III λ 4650 to He II λ4686 is close to 1, which agrees with an O9-O9.5 type (Walborn & Fitzpatrick 1990). The ratio He II λ4200/He I λ4144 allows us to distinguish between these two close sub-classes (Walborn 1971). This ratio is approximately 1 in O9 stars and lower than 1 in O9.5 stars. In MXB 0656-072 this ratio appears to be slightly lower than 1, favouring the later type classification. 
Also, since the strength of the He I λ4144 might be diminished by the emission from the circumstellar disc, the O9.5 class appears to be more likely. On the other hand, given the relatively low S/N in this part of the spectrum and the uncertainty introduced in the definition of the continuum during the normalisation, an O9 spectral type cannot be completely ruled out. As for the luminosity class, the strong He II λ4686 absorption accompanied by weak N III λ4634-4640-4642 clearly indicates a main-sequence star, and so does the strength of Si IV λ4089 in comparison with that of He I λ4144. We conclude that the optical counterpart to MXB 0656-072 is an O9.5V star 2 . In addition to He I lines, the Balmer series of hydrogen lines are strongly affected by emission. Even Hǫ appears to be filledin with emission. Columns 6 and 7 in Table 2 give the equivalent width of the Hα and Hβ lines. A long-term decrease in the strength of these two lines is observed. Results We analysed all the observations of MXB 0656-072 in the RXTE archive. We separated the observations into two intervals corresponding to two significant events. The first interval started on 20 October 2003 (MJD 52932) and covered a total of 101.5 days. This interval includes a giant (type II) outburst. Assuming a distance of 3.9 kpc (McBride et al. 2006), the maximum 3-30 keV luminosity during the outburst was L X = 3.7 × 10 37 erg s −1 , registered at MJD∼52966.9. The second interval covered the period 27 November 2007 to 11 November 2008 (MJD 54419.5-54776.6) and includes a series of minor (type I) outbursts. The peak X-ray luminosity of these outbursts was L X = 1.37 × 10 37 erg s −1 . Type II outburst Colour analysis The PCA light curve and colour behaviour during the 2003 outburst is presented in Fig. 2. The outburst showed strong flaring behaviour during the peak phase, followed by smoother decay. Each point corresponds to an RXTE pointing and is directly obtained from PCU2 count rate. Different symbols mark the differ-4000 4100 4200 4300 4400 4500 4600 4700 Wavelength (A) ent outburst phases, the flare-like phase (open squares), and the smoother decay (filled circles). Error bars are the same size as the points whenever they do not appear in the plots. The X-ray colours were defined as follows, soft colour (SC): 7-10 keV / 4-7 keV; hard colour (HC): 15-30 keV / 10-15 keV. The two colours follow identical patterns during the outburst, both correlating with flux. In the inset, a zoomed-in view is shown, presenting the light curve for one PCA pointing, with a 2s time resolution. From the inset it is clear that the flare-like activity displayed during the outburst is large-scale behaviour that in fact corresponds to variability on various time scales, if investigated at higher time resolution: besides the ∼160s pulse period, slower and faster changes in intensity are clearly detected in the 2s resolution light curve. The amplitude of change in count rate in both colours is more than twice in the decay (∼0.15) compared to the flare phase (∼0.05, see Fig. 2). Also, the colour values are higher larger during the flares, indicating a harder spectrum. Spectral analysis We fitted energy spectra with a continuum constituted by a photo-absorbed power law with a high-energy exponential cutoff, modified by a Gaussian line at ∼6.5 keV with a fixed 0.5 keV width to account for Fe Kα fluorescence. A CRSF at an average central energy of 37 keV was detected in high-flux observations only, above L X = 1.6 × 10 37 erg s −1 , i.e. above 0.4 × L Xmax . 
In addition, MXB 0656-072 shows significant residuals at ∼11 keV. They were fitted out by means of a Gaussian absorption-like profile, which allowed acceptable fits, passing from χ 2 ∼107 for 70 DOF to χ 2 ∼85 for 67 DOF, for a typical high-flux observation. This component is found at almost constant energy across the spectra, with a weighted mean value of 11.68±0.05 keV, and was consistently reported also by McBride et al. (2006) for the giant outburst and by Yan et al. (2012) for the normal ones. We found that this feature is only necessary in spectra where the CRSF is present as well. Its origin is uncertain. Figure 3 shows a typical spectrum for a high-flux observation, with the corresponding residuals in the case of the best fit (a), the fit excluding the absorption at ∼11 keV (b), and the one excluding the CRSF (c) from the model. The best-fit main spectral parameters are shown in Fig. 4 as a function of the calculated 3-30 keV flux. Different marks and colours identify the two different phases of the outburst, the flaring and the smooth one. Points separated in time and corresponding to different phases behave in a similar way, which only depends on flux. The power-law photon index decreases with Xray flux, confirming the result from the colour analysis that as the X-ray flux increases the spectrum becomes harder (see also Fig. 2). The strength of the iron fluorescence line nicely correlates with flux, revealing an increase in the reprocessed material as the flux increases, as expected. The central energy of the Fe line remained fairly constant, with a mean value of 6.49±0.06 keV, and a mean equivalent width (EW) of 0.34±0.07 keV. The CRSF was almost constant during the outburst, with the following weighted mean values: E c = 36.8 ± 0.4 keV, σ = 9.1 ± 0.4. These values are not compatible with the bestfit values by McBride et al. (2006), who found the CRSF at a central energy of 32.8 +0.5 −0.4 keV. The discrepancy may be because McBride et al. (2006) obtained one spectral fit from a spectrum retrieved by summing up all the spectra from the flare phase of the outburst (from MJD 52932 to MJD 52964). In fact, when choosing four individual observations of that interval, the cyclotron line in the spectra of those four observations showed line energies above 33 keV (see Fig. 3 in McBride et al. 2006). Our best-fit value for the central energy agrees with the one firstly reported by Heindl et al. (2003), of E c = 36 ± 1 keV. All the observed correlations, except for the iron line strength, are significant up to a "saturation" flux of ∼10 −8 erg cm −2 s −1 , which corresponds to L X = 1.8 × 10 37 erg s −1 . This Because of the flaring activity, it is difficult to identify the peak of the outburst. The average 3-30 keV X-ray luminosity in the time interval MJD 52940-52970 is L X = 2.5 × 10 37 erg s −1 , although a maximum of L X = 3.7 × 10 37 erg s −1 was obtained on MJD 52966.9. The lowest luminosity corresponds to the last point of the decay with L X = 1 × 10 36 erg s −1 . Timing analysis In this work, we focus on the aperiodic variability of the system. For a study of the X-ray pulsations, see McBride et al. (2006). The neutron star spin frequency has a fundamental peak at ∼6 mHz, below our PSD frequency range (0.008-32 Hz). Thus, peaks derived from the neutron star's pulsations do not appreciably distort the continuum in the power spectra. 
We fitted each PSD with the sum of Lorentzian functions with the objective of providing a unified phenomenological description of the timing behaviour of the system during the outburst. We denote each component as L i , and its characteristic frequency ν max as ν i . According to the definition in Belloni et al. (2002), this is the frequency where the component contributes most of its variance per logarithmic frequency interval, ν max = ν 2 0 + (FWHM/2) 2 , where ν 0 is the centroid frequency and FWHM is the full width at half maximum of the Lorentzian function. In this work, we always refer to characteristic frequencies ν max . The low and middle-frequency noise (L 0 and L 1 ) is accounted for by zero-centred Lorentzians with a characteristic frequency that is generally lower for low-flux pointings, and higher for high-flux observations. The corresponding fractional rms Fig. 7. Relation between the photon index and the characteristic frequency of the L 1 broad-noise component. varies during the outburst, in anti-correlation with flux. These components are the only ones required during all the smooth decay phase (after MJD 52975) and part of the flaring phase of the outburst, up to f x ≈ 1 × 10 −8 erg cm −2 s −1 . Beyond that luminosity, an additional component is necessary, L 2 , whose characteristic frequency varies in the range ∼1-5 Hz, whereas the fractional amplitude of variability varies between 10% and 20%, without a clear dependence on flux. Unlike L 0 and L 1 , this component has, in general, a non-zero centroid frequency, and its average value for the Q-factor, defined as Q = ν 0 /FWHM, is ∼0.6, denoting a narrower feature compared to L 0 and L 1 . Figure 5 presents the evolution of the characteristic frequency and rms over the outburst for the L 1 component, the best constrained one. Up to f x ≈ 1 × 10 −8 erg cm −2 s −1 , as the flux increases the characteristic frequency increases, whilst after that luminosity it remains approximately constant. The rms variability shows an opposite trend, decreasing as the flux increases, and then saturating. Figure 6 shows the relation between the L 1 and L 2 characteristic frequencies; although within some scattering, the two frequencies follow a correlated trend in all their range of variation. We studied the correlated spectral/timing behaviour, and found that during the very last part of the flaring phase, and the whole decay, the photon index Γ and the central frequency of the main timing component, ν 1 , vary in an anti-correlated way (Fig. 7). During the flares the L 1 frequency shows only slight variation due to the appearance of L 2 , so that no spectral/timing relation could be expected. Type I outbursts In addition to the major (type II) outburst reported in previous sections, MXB 0656-072 underwent a series of four fainter outbursts between November 2007 and November 2008. The PCA began to monitor these outbursts at the end of the first one. All outbursts exhibited similar peak luminosities (L peak = 1.2 × 10 37 erg s −1 ). This luminosity is about three times lower than that of the November 2003 type-II outburst. We show in Fig. 8 the simultaneous Hα measurements and ASM light curve. The outbursts are separated by ∼100 days and in between the major peaks, other minor peaks are observed. The onset of these outbursts coincided with an optical maximum brightness of the donor: at around MJD 54500 (February 2008) the equivalent width of the Hα line seems to have reached a maximum value of ∼25 Å (Yan et al. 2012). 
In our X-ray spectral analysis, in order to best constrain the CRSF, hardly detectable at very low flux, we extracted three average spectra in three ranges of luminosities and fitted them between 3-60 keV. The model employed was the same as for Fig. 8. ASM light curve and Hα equivalent width evolution. A 5day running average was applied to the 1-day ASM light curve to reduce the noise. Open circles correspond to observations by Yan et al. (2012) and filled circles to our observations (see Table 2). type-II outburst, and all the components were needed to obtain acceptable fits at in all the three spectra. Spectral parameters are shown in Table 3. Similarly to our analysis of the giant outburst, we also generated one energy spectrum for each observing interval and studied the evolution of the spectral parameters. We fixed the CRSF parameters in each single spectrum to the ones obtained from the corresponding flux average spectrum, and fitted each one between 3 keV and 30 keV. We found that the spectral parameters follow similar trends to those seen during the type II outburst (Fig. 4). The photon index decreases as the flux increases, as in the type II outburst, although with a steeper dependence with flux. The iron line energy does not significantly vary, while its intensity, expressed as normalisation, tightly correlates with luminosity. The iron line width was fixed at 0.5 keV. The hydrogen column density generally anti-correlates with flux, while the cutoff energy does not show any smooth relation with luminosity, although it displays lower values at higher flux and vice versa. 13.0 +7.8 −3.8 a: < 10 −9 erg cm 2 s −1 b: (1-2.5) ×10 −9 erg cm 2 s −1 c: > 2.5 × 10 −9 erg cm 2 s −1 Discussion We have performed a detailed X-ray and optical analysis of the poorly studied hard X-ray transient MXB 0656-072. All the available observational data indicate that MXB 0656-072 is a member of the class of massive X-ray binaries known as Be/Xray binaries. X-rays are produced in the vicinity of the compact object, while the optical variability comes from the young and massive companion. The detection of X-ray pulsations, the transient nature of the X-ray emission, and the characteristics of the X-ray spectrum (power-law continuum modified by an exponential cutoff and the presence of fluorescence iron line and cyclotron feature) are typical of neutron star binaries. The observation of Balmer lines in emission favours the Be/XRB classification. Furthermore, the long-term X-ray variability, consisting of giant (type II) and minor recurrent outbursts (type I) are typical of Be/XRB. Optical observations The optical counterpart to MXB 0656-072 was classified as a O9.5Ve star, refining previous classification by Pakull et al. (2003). Its spectrum is strongly affected by emission, with the first three lines of the Balmer series (Hα, Hβ, and Hγ) showing an emission profile, while the next two (Hδ and Hǫ) are partially filled in with emission. This extra emission is thought to arise from the equatorial disc around the Be star. The picture described by optical spectroscopy is fully consistent with the so-called "Be-phenomenon" and confirms that the system is a Be/XRB. From the analysis of type-I outbursts, Yan et al. (2012) find an orbital period of 101.2d. Once the period is known, we can use two important relationships involving the orbital period of the system, the P spin − P orb (Corbet 1986) and P orb − EW(Hα) (Reig et al. 1997;Reig 2011) diagrams, to support the orbital period found. 
In the first diagram (see Fig. 6 in Yan et al. 2012), the source is clearly located in the region occupied by Be/XRBs; in the second one (Fig. 9), a ∼100d orbital period fits nicely in the expected EW(Hα) vs. orbital period relation. The P orb −EW(Hα) correlation is a consequence of tidal truncation of the Be star's circumstellar disc by the neutron star. Assuming that the equivalent width of the Hα line, EW(Hα), provides a good measure of the size of the circumstellar disc (Quirrenbach et al. 1997;Tycner et al. 2005), the P orb − EW(Hα) correlation indicates that systems with long orbital periods have larger discs, while narrow orbit systems contain smaller discs. In short orbital period systems, the neutron star prevents the formation of extended discs. As can be seen in Fig. 9, an orbital period of ∼ 100 days agrees very well with the maximum EW(Hα) of ∼ −25 Å. Type-II outburst The profile of the type-II outburst in MXB 0656-072 is unusual among Be/XRB. Although flaring behaviour has been seen in other systems (e.g., EXO 2030+375, A0535+26, Klochkov et al. 2011;Caballero et al. 2008), the profile of the outbursts tends to be more symmetric and to have a smoother peak. In MXB 0656-072, the flaring activity covers a substantial part of the outburst (Fig. 2). It was found that magneto-hydrodynamic instabilities at the inner edge of the accretion disc may produce oscillations in the accreting flow, possibly leading to the observed behaviour (Apparao 1991;Postnov et al. 2008;D'Angelo & Spruit 2010). Especially, D' Angelo & Spruit (2010) show that, if the accretion disc is truncated by the neutron star's magnetic field outside but close to the corotation radius (at which the Keplerian frequency in the disc equals the star's rotational frequency), then the accretion becomes time-dependent and takes the form of repeated bursts. We found correlated behaviour of both the SC and the HC with flux, which corresponds to a general hardening of the spectra as the flux increases. The dependence of the SC on luminosity does, in appearance, contrast with the observed negative correlation in other sources. Reig (2008) analysed the spectral and timing properties of four Be/XRBs during giant outburst, identifying two source states, a low-luminosity horizontal branch (HB) and a high-luminosity diagonal branch (DB). During the outburst, the sources spend most of the time in the DB, during which the SC anti-correlates with flux. The transition to the HB inverts this trend, and the SC starts correlates with it. The correlation between the SC and luminosity observed in MXB 0656-072 thus reveals that the source never undergoes the transition to the DB, but remains in the HB during all the duration of the outburst. The hardening of the spectrum at high flux is confirmed by spectral analysis: from simultaneous spectral fitting of PCA and HEXTE spectra, we found a clear anti-correlated behaviour of the spectral index with flux ( Fig. 4) up to flux ∼ 1.2 × 10 −8 erg cm 2 s −1 , or L X = 2.3 × 10 37 erg s −1 , when the trend flattens. The spectral hardening with increasing flux can be interpreted with simple Comptonisation models: the X-ray emission from HMXBs is thought to arise in an accretion column, close to the neutron star's surface, as a result of Comptonisation of soft photons injected in the accretion flow from the NS thermal mound, by high-energy electrons of the accreting matter (Becker & Wolff 2007). 
Increasing X-ray luminosity would mainly correspond to increasing mass accretion rate, which can be translated into averagely increasing number of up-scattering collisions of photons with electrons, resulting in a harder spectrum. After 1A 1118-615 (Nespoli & Reig 2011), MXB 0656-072 is the second X-ray pulsar to show correlated spectral/aperiodic behaviour (Fig.7). This translates into a necessary connection between the two physical regions responsible for producing the two phenomenologies, the accretion column on one side, where energy spectra arise, and the accretion disc on the other side where, according to the "perturbation propagation" model, the aperiodic variability is located (Revnivtsev et al. 2009, and references therein). In this model, in fact, the X-ray variability is caused by perturbations in the inner disc flow, at different radii. The two regions are physically separated, being the first close to the NS polar cap regions, and the second confined outside the magnetosphere, but our results on two systems show that they are somehow coupled. In general, the X-ray variability of MXB 0656-072 during the type-II outburst resembles that of 1A 1118-615 at many levels. The colours behave exactly the same in the two sources (both correlate with flux). The spectral parameters have the same trend in the two sources, and in both cases the relation with flux ceases at some saturation luminosity. In MXB 0656-072 this saturation is reached at 0.5×L Xmax , while in 1A 1118-615 at 0.7×L Xmax . Finally, the correlations between the two broadband timing components and between spectral and timing parameters are also observed in both sources, making them very similar during a type-II outburst, although no flaring activity is observed in 1A 1118-615. A peculiar feature in the X-ray energy spectra of MXB 0656-072 is an absorption-line-like profile at an average constant energy between 11-12 keV. This component was investigated by Coburn (2001), who detected it in the range 8-12 keV in the spectra of many X-ray pulsars. The feature was consistently observed at the same energy, irrespective the CRSF energy, and moreover, it was evident in some systems that do not display a cyclotron line. This made Coburn (2001) conclude that the component should not be a magnetic effect. The feature seems to be intrinsic to X-ray pulsars spectra, since it was observed with different instruments (besides RXTE, Ginga and BeppoSAX, Mihara 1995;Santangelo et al. 1998). In the case of MXB 0656-072, this component is only found in spectra where the CRSF is present as well, although no other relation could be established between the two features. Conclusions We presented a detailed X-ray and optical study of MXB 0656-072 covering both types of X-ray variability observed in a Be/XRB, namely type-I and type-II outbursts. The major outburst is characterised by flare-like behaviour during its peak, followed by smoother decay. We interpreted the flaring activity as possibly due to magneto-hydrodynamic instabilities at the inner edge of the accretion disc. The colour and spectral analyses reveal a hardening of the spectra as the luminosity increases, which can be understood in the framework of the models for spectra production in X-ray pulsars. 
The analysis of aperiodic variability shows correlated behaviour of the timing parameters with flux, which translates into a correlation between spectral/timing features and can be interpreted as an interconnection between the two physical regions responsible for the two phenomenologies. All the X-ray behaviour during the type-II outburst resembles that of 1A 1118-615, although no such flaring activity was observed in that system. The spin period vs. EW(Hα) relation confirmed the orbital period proposed for the source. The full multiwavelength analysis corroborates the Be/XRB nature of the system. Further observations during major outbursts are needed in order to explore the nature of the flaring activity and the timing/spectral correlation detected in this work. Deeper comprehension of the interaction between the magnetosphere and the accretion disc is also necessary to explain the correlated behaviour of the spectral and aperiodic features. Fig. 2 . 2PCA light curve and colour behaviour during the 2003 outburst. In the inset, 2s resolution light curve for one pointing during the flaring phase. Fig. 3 . 3Typical spectrum for an observation during the flaring phase of the giant outburst (obsid: 80067-11-03-05, L x =2.8×10 37 erg s −1 ): in the upper panel, PCA and HEXTE data points and corresponding best fit are reported; below: residuals for the best fit (a), the fit excluding the absorption feature at ∼11 keV (b), and the one excluding the CRSF (c), respectively. luminosity roughly coincides with the large-amplitude flaring phase. Fig. 4 .Fig. 5 . 45Evolution of the main spectral parameters during the 2003 outburst. From the upper panel: photon index, cutoff energy, hydrogen column density, and Fe line intensity. Evolution of the timing parameters, characteristic frequency and fractional rms, of the best constrained noise component, L 1 , during the 2003 outburst. Fig. 6 . 6Relation between the maximum frequencies of the L 1 and L 2 components. Fig. 9 . 9P orb − EW(Hα) diagram. The star symbol marks the position of MXB 0656-072. Table 1 . 1Journal of RXTE observations.N. of Proposal MJD On-source pointings ID range time (ks) 28 80067 52931.8-52975.4 91.1 33 80430 52966.9-53033.3 38.6 44 93032 54419.5-54748.6 179.6 123 93423 54449.1-54776.6 189.1 Table 2 . 2Log of the optical observations.Date JD Telescope Grating Wavelength EW(Hα) EW(Hβ) (2,400,000+) (l/mm) range (Å) (Å) (Å) 14-11-2009 * 55150.41 FLWO 600 4760-6760 −18.4 ± 1.0 −2.98 ± 0.11 12-01-2010 55209.37 FLWO 600 4730-6730 −20.9 ± 1.5 −3.43 ± 0.08 16-01-2010 55213.21 FLWO 600 4740-6740 −21.2 ± 0.8 −3.60 ± 0.09 30-09-2010 55470.60 SKO 1301 5300-7300 −10.8 ± 0.4 - 30-10-2010 55501.01 FLWO 1200 6200-7200 −11.9 ± 0.6 - 29-11-2010 55530.83 FLWO 1200 6200-7200 −11.6 ± 0.6 - 03-10-2011 55838.98 FLWO 1200 6200-7200 −12.2 ± 0.7 - 01-11-2011 55867.88 FLWO 1200 6200-7200 −13.0 ± 0.7 - 08-11-2011 55873.60 SKO 2400 3940-5040 - −2.23 ± 0.07 23-11-2011 55889.93 FLWO 1200 6200-7200 −14.1 ± 0.8 - 31-11-2011 * 55927.71 FLWO 1200 6200-7200 −15.4 ± 0.8 - 19-01-2012 55946.79 FLWO 1200 6200-7200 −15.3 ± 0.8 - 22-01-2012 * 55927.71 FLWO 1200 6200-7200 −15.2 ± 0.8 - * : Average of two measurements. Fig. 1. The optical spectrum of the optical counterpart to MXB 0656-072 in the 4000-4800 Å region. The identified lines correspond to He II λ4200, λ4541, λ4686, He I λ4026, λ4144, λ4387, λ4471, λ4713, C III λ4070, λ4650, N III λ4640, Si IV λ4089, λ4116, and the hydrogen lines of the Balmer series between H β and H ǫ . 
A Gaussian smoothing filter (σ = 1) was applied to reduce the noise.0.8 1 1.2 Normalised intensity o HeI H γ H δ H ε NIII HeII CIII HeII HeI HeI HeI CIII HeI SiIV HeII Table 3 . 3Type-I outburst spectral analysis for average spectra at different flux ranges. Spectral parameter low flux a med. flux b high flux c Γ 1.08±0.08 0.83±0.04 0.49±0.03 cutoff en. (keV) 16.9 +2.5 −2.3 14.4 +0.9 −0.5 10.5±0.02 pow. norm. (ph/keV/cm 2 /s) 0.11 +0.04 −0.02 0.12±0.01 0.115 +0.004 −0.007 nH (10 22 cm −2 ) 4.1±0.5 2.8±0.3 1.9±0.3 E Fe (keV) 6.5±0.1 6.45 +0.05 −0.04 6.5±0.4 EW Fe 0.06±0.02 0.14 +0.2 −0.8 0.17 +0.22 −0.11 E 10keV−gaus 10.3 +0.4 −0.5 10.6 +0.3 −0.1 10.6 +0.2 −0.1 σ 10keV−gaus 6.5 +0.9 −0.6 4.2 +0.3 −0.5 2.9 +0.24 −0.23 τ 10keV−gaus 15.8 +6.2 −3.9 2.3 +0.4 −0.5 1.05 +0.16 −0.12 E cyc 31.9 +1.3 −2.7 35.6 +2.1 −1.3 35.0 +1.7 −1.0 σ cyc 11.2 +3.2 −0.9 8.5 +1.4 −1.2 8.8 +2.2 −1.3 τ cyc 58.0 +13.2 −8.9 14.0 +8.1 −3.7 Among which, elevation from the Earth greater than 10 • and pointing offset lower than 0.02 • ; see PCA digest at http://heasarc.gsfc.nasa.gov/docs/xte/pca news.html The interpolated class O9.7 suggested byPakull et al. (2003) is mainly used for supergiants. The primary defining criterion for the O9.7 type is He II λ4541≈Si III λ4552(Walborn & Fitzpatrick 1990). According toWalborn (1971), it does not appear that this subdivision is useful at the lower luminosities (class II and below) because the Si III lines are too weak. Acknowledgements. EN acknowledges a "VALi+d" postdoctoral grant from the "Generalitat Valenciana" and was supported by the Spanish Ministry of Economy and Competitiveness under contract AYA 2010-18352. PR acknowledges support by the Programa Nacional de Movilidad de Recursos Humanos de Investigación 2011 del Plan Nacional de I-D+i 2008-2011 of the Spanish Ministry of Education, Culture and Sport. PR also acknowledges partial support by the COST Action ECOST-STSM-MP0905-020112-013371. ASM quick-look results provided by the ASM/RXTE team. . K M V Apparao, ApJ. 375701Apparao, K. M. V. 1991, ApJ, 375, 701 K A Arnaud, Astronomical Society of the Pacific Conference Series. G. H. Jacoby & J. Barnes10117Astronomical Data Analysis Software and Systems VArnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17-+ . P A Becker, M T Wolff, ApJ. 654435Becker, P. A. & Wolff, M. T. 2007, ApJ, 654, 435 . T Belloni, G Hasinger, A&A. 230103Belloni, T. & Hasinger, G. 1990, A&A, 230, 103 . T Belloni, D Psaltis, M Van Der Klis, ApJ. 572392Belloni, T., Psaltis, D., & van der Klis, M. 2002, ApJ, 572, 392 . H V Bradt, R E Rothschild, J H Swank, A&AS. 97355Bradt, H. V., Rothschild, R. E., & Swank, J. H. 1993, A&AS, 97, 355 . I Caballero, A Santangelo, P Kretschmar, A&A. 48017Caballero, I., Santangelo, A., Kretschmar, P., et al. 2008, A&A, 480, L17 . G W Clark, G D Schmidt, J R P Angel, IAU Circ. 28431Clark, G. W., Schmidt, G. D., & Angel, J. R. P. 1975, IAU Circ., 2843, 1 . W Coburn, University, San California, R H Diego Corbet, MNRAS. 2201047PhD thesisCoburn, W. 2001, PhD thesis, UNIVERSITY OF CALIFORNIA, SAN DIEGO Corbet, R. H. D. 1986, MNRAS, 220, 1047 . C R D&apos;angelo, H C Spruit, MNRAS. 4061208D'Angelo, C. R. & Spruit, H. C. 2010, MNRAS, 406, 1208 . S Ekström, G Meynet, A Maeder, F Barblan, A&A. 478467Ekström, S., Meynet, G., Maeder, A., & Barblan, F. 2008, A&A, 478, 467 . K Hayasaki, A T Okazaki, MNRAS. 350971Hayasaki, K. & Okazaki, A. T. 
2004, MNRAS, 350, 971 W Heindl, W Coburn, I Kreykenbohm, J Wilms, The Astronomer's Telegram. 2001Heindl, W., Coburn, W., Kreykenbohm, I., & Wilms, J. 2003, The Astronomer's Telegram, 200, 1 . L J Kaluzienski, IAU Circ. 29355Kaluzienski, L. J. 1976, IAU Circ., 2935, 5 . D Klochkov, C Ferrigno, A Santangelo, A&A. 5368Klochkov, D., Ferrigno, C., Santangelo, A., et al. 2011, A&A, 536, L8 . A U Landolt, AJ. 1374186Landolt, A. U. 2009, AJ, 137, 4186 . V A Mcbride, J Wilms, M J Coe, A&A. 451267McBride, V. A., Wilms, J., Coe, M. J., et al. 2006, A&A, 451, 267 T Mihara, Dept. of Physics, Univ. of Tokyo (M95). PhD thesisMihara, T. 1995, PhD thesis, , Dept. of Physics, Univ. of Tokyo (M95), (1995) . S Miyamoto, K Kimura, S Kitamoto, T Dotani, K Ebisawa, ApJ. 383784Miyamoto, S., Kimura, K., Kitamoto, S., Dotani, T., & Ebisawa, K. 1991, ApJ, 383, 784 E Morgan, R Remillard, J Swank, The Astronomer's Telegram. 1991Morgan, E., Remillard, R., & Swank, J. 2003, The Astronomer's Telegram, 199, 1 . C Motch, L Stella, E Janot-Pacheco, M Mouchet, ApJ. 369490Motch, C., Stella, L., Janot-Pacheco, E., & Mouchet, M. 1991, ApJ, 369, 490 . E Nespoli, P Reig, A&A. 5267Nespoli, E. & Reig, P. 2011, A&A, 526, A7+ . M A Nowak, B A Vaughan, J Wilms, J B Dove, M C Begelman, ApJ. 510874Nowak, M. A., Vaughan, B. A., Wilms, J., Dove, J. B., & Begelman, M. C. 1999, ApJ, 510, 874 M W Pakull, C Motch, I Negueruela, The Astronomer's Telegram. 2021Pakull, M. W., Motch, C., & Negueruela, I. 2003, The Astronomer's Telegram, 202, 1 . J M Porter, T Rivinius, PASP. 1151153Porter, J. M. & Rivinius, T. 2003, PASP, 115, 1153 . K Postnov, R Staubert, A Santangelo, A&A. 480477ApJPostnov, K., Staubert, R., Santangelo, A., et al. 2008, A&A, 480, L21 Quirrenbach, A., Bjorkman, K. S., Bjorkman, J. E., et al. 1997, ApJ, 479, 477 . P Reig, A&A. 489725Reig, P. 2008, A&A, 489, 725 . P Reig, Ap&SS. 3321Reig, P. 2011, Ap&SS, 332, 1 . P Reig, J Fabregat, M J Coe, A&A. 322193Reig, P., Fabregat, J., & Coe, M. J. 1997, A&A, 322, 193 . M Revnivtsev, E Churazov, K Postnov, S Tsygankov, A&A. 5071211Revnivtsev, M., Churazov, E., Postnov, K., & Tsygankov, S. 2009, A&A, 507, 1211 . A Santangelo, S Del Sordo, A Segreto, A&A. 34055Santangelo, A., del Sordo, S., Segreto, A., et al. 1998, A&A, 340, L55 . C Tycner, J B Lester, A R Hajian, ApJ. 624359Tycner, C., Lester, J. B., Hajian, A. R., et al. 2005, ApJ, 624, 359 . N R Walborn, ApJS. 23257Walborn, N. R. 1971, ApJS, 23, 257 . N R Walborn, E L Fitzpatrick, PASP. 102379Walborn, N. R. & Fitzpatrick, E. L. 1990, PASP, 102, 379 . J Yan, J A Zurita Heras, S Chaty, H Li, Q Liu, ApJ. 75373Yan, J., Zurita Heras, J. A., Chaty, S., Li, H., & Liu, Q. 2012, ApJ, 753, 73
[]
[ "Symmetry dependence of phonon lineshapes in superconductors with anisotropic gaps", "Symmetry dependence of phonon lineshapes in superconductors with anisotropic gaps" ]
[ "T P Devereaux \nDepartment of Physics\nUniversity of California Davis\n95616CA\n" ]
[ "Department of Physics\nUniversity of California Davis\n95616CA" ]
[]
The temperature dependence below T c of the lineshape of optical phonons of different symmetry as seen in Raman scattering is investigated for superconductors with anisotropic energy gaps. It is shown that the symmetry of the electron-phonon vertex produces non-trivial couplings to an anisotropic energy gap which leads to unique changes in the phonon lineshape for phonons of different symmetry. The phonon lineshape is calculated in detail for B 1g and A 1g phonons in a superconductor with d x 2 −y 2 pairing symmetry. The role of satellites peaks generated by the electronphonon coupling are also addressed. The theory accounts for the substantial phonon narrowing of the B 1g phonon, while narrowing of the A 1g phonon which is indistinguishable from the normal state is shown, in agreement with recent measurements on Bi 2 Sr 2 CaCu 2 O 8 and YBa 2 Cu 3 O 7 .
10.1103/physrevb.50.10287
[ "https://arxiv.org/pdf/cond-mat/9406061v1.pdf" ]
17,833,924
cond-mat/9406061
9abe0d172c8d368ef788a87d4cc32256e1328a07
Symmetry dependence of phonon lineshapes in superconductors with anisotropic gaps 15 Jun 1994 T P Devereaux Department of Physics University of California Davis 95616CA Symmetry dependence of phonon lineshapes in superconductors with anisotropic gaps 15 Jun 1994numbers: 7420Fg7430Gn7460-w7465+n Typeset Using REVTEX 1 The temperature dependence below T c of the lineshape of optical phonons of different symmetry as seen in Raman scattering is investigated for superconductors with anisotropic energy gaps. It is shown that the symmetry of the electron-phonon vertex produces non-trivial couplings to an anisotropic energy gap which leads to unique changes in the phonon lineshape for phonons of different symmetry. The phonon lineshape is calculated in detail for B 1g and A 1g phonons in a superconductor with d x 2 −y 2 pairing symmetry. The role of satellites peaks generated by the electronphonon coupling are also addressed. The theory accounts for the substantial phonon narrowing of the B 1g phonon, while narrowing of the A 1g phonon which is indistinguishable from the normal state is shown, in agreement with recent measurements on Bi 2 Sr 2 CaCu 2 O 8 and YBa 2 Cu 3 O 7 . I. INTRODUCTION Optical phonons observed via Raman scattering have provided a large amount of information concerning the energy gap in high-T c superconductors [1], and there have been attempts to describe the changes in the phonon lineshapes below T c in s−wave [2] and d−wave superconductors [3]. It is believed that the changes in the phonon lineshape below T c are due in part to changes in the phonon self-energy resulting from coupling between phonons and quasiparticles. It has been argued that if the optical phonon has a frequency below the pair threshold energy 2∆, then the phonon's linewidth decreases (narrows) and its frequency renormalizes to lower frequencies (softens) as the quasiparticles become frozen out. However, for a phonon near 2∆, the linewidth is predicted to grow due to the enhancement of the density of states at the gap edge and there can be either pronounced phonon softening or hardening depending on which side of the threshold the phonon is located. This simple picture has been employed to determine the position of 2∆ in the cuprate superconductors. However, this simple analysis applied to the cuprate systems has revealed that the above picture is a bit misleading. The above scenario has yielded a value for the energy gap that is different for different types of optical phonons and is thus symmetry-dependent. For the case of the Bi 2:2:1:2 system, where very clean surfaces can be obtained, a low frequency phonon which transforms according to A 1g symmetry (located at 464 cm −1 , connected with the bridging Oxygen vibrations) shows a downward frequency shift (softening) while no substantial linewidth change from the normal state can be resolved from the data [4]. However, the B 1g phonon (285 cm −1 , connected with the antisymmetric out of plane O(2) and O (3) vibrations in the Cu-O plane) on the contrary shows a small frequency softening but a substantial linewidth narrowing below T c [5]. Similar behavior is seen in the YBCO systems [6], where such a large difference in behavior between the A 1g and B 1g phonons in part led the authors of Ref. [6] to suggest that these two phonons interact with different electronic systems. There has been no satisfactory theoretical explanation for the behavior of the different phonons. 
The main problem in addressing these experiments with the existing theories concerns the lack of attention paid to the symmetry dependence of the optical phonons. However, this symmetry dependence can be an important tool to uniquely determine thê k−dependence of the energy gap around the Fermi surface. It has been shown that the electronic contribution to Raman scattering can provide a large amount of polarization (symmetry) dependent information that allows for a stringent test to made to determine the actual symmetry of the energy gap in superconductors [7]. It was shown that the coupling between the Raman vertex and an anisotropic gap leads to symmetry dependent spectra, with peak positions and low frequency and temperature behavior dependent on polarization orientations. These changes in the spectra allow for a direct determination of | ∆(k) |. Good agreement with the electronic Raman spectra taken on very clean BSCCO surfaces was obtained using a gap which was predominantly or entirely of d x 2 −y 2 symmetry, where the peak position and the low frequency behavior of the spectra could be straightforwardly accounted for. The symmetry dependence of the data led to the conclusion that the gap must be predominantly of B 1g character. Since the phonon self-energy is very similar to the electronic Raman density response, the same type of analysis for the electronic contribution to Raman scattering can be made to the phonons as well, leading to a further check on the predictions recently made concerning the energy gap in the cuprate materials. We propose an alternative explanation for the symmetry dependence of the Raman shifts based upon nontrivial couplings of phonons of different symmetry with an anisotropic energy gap. Close attention will be paid to the role of the electron-phonon vertex, and consequences of its k-dependence will be addressed. Most importantly, it is shown that the lineshape is polarization dependent for anisotropic superconductors and different dependences on temperature can be used to determine not only the magnitude but the symmetry dependence of the energy gap. Moreover, it is shown that the peak of the self energy can be located at frequencies below 2∆ max for certain polarizations which have an symmetry orthogonal to that of the energy gap. Thus if the symmetry of the phonon is neglected, values of the energy gap inferred from changes in the phonon lineshape using an isotropic s-wave theory will be underestimated. In particular, the phonon spectral function for a superconductor with d x 2 −y 2 symmetry is examined and a comparison is made with experimental data on both the B 1g and the A 1g phonons in BSCCO and YBCO. It is shown that satisfactory agreement can be obtained which reconciles the differences between the A 1g and B 1g phonon lineshapes. II. PHONON SPECTRAL FUNCTION The phonon spectral function is given by ImD(ω) = 4ω 2 0 Σ ′′ (ω) [ω 2 − ω 2 0 − 2ω 0 Σ ′ (ω)] 2 + 4ω 2 0 Σ ′′2 ,(1) where ω 0 is the optical phonon frequency and Σ ′ , Σ ′′ are the real and imaginary parts of the phonon self-energy, respectively. The real part of the self energy renormalizes the position of the phonon, while the imaginary part governs the linewidth. The interaction of optical phonons and electrons can be simply written as H e−ph = k,q,γ,σ g γ k (q)c † k−q,σ c k,σ (b † q,γ + b q,γ ),(2) where g γ k (q) is the matrix element for scattering an electron from k → k − q, and b q,γ , b † q,γ are the field operators for phonons of branch γ. 
The details of the scattering matrix elements depend on the nature of the mechanism of the electron-phonon coupling and the symmetry of the lattice vibration. In this paper we only consider the symmetry of the matrix element and leave a treament of the mechanism and magnitude of the coupling for future consideration [8]. We take the k dependence along the Fermi surface of the vertex into account by expanding in terms of Fermi surface harmonics Φ for small q, g γ k = L g γ L Φ γ L (k),(3) where the index L indicates the order of polynomial that transforms according to the γ − th A 1ĝ k = g A 1g L=0 + g A 1g L=4 √ 2 cos(4φ) + . . . g B 1ĝ k = g B 1g L=2 √ 2 cos(2φ) + . . .(4) where we have dropped higher order terms, arguing that they are more anisotropic than the terms considering here and will hence be of minor importance. The L = 2 term for the A 1g channel which is present for z dispersion is absent here and the L = 4 term is the first anisotropic term in the series in this case [7]. Also, since there is no dispersion in the z direction in this case, there are no contributions to the E g channels. Consequences of the Fermi surface and the resulting response functions are considered in a forthcoming publication [9], and thus for our purposes we will confine our attention to only cylindrical Fermi surfaces. Σ(q, ω) = Σ(q = 0, ω) + δΣ(q, ω),(5) Delaying a discussion of δΣ until Section III, we can write down the spectrum of the self energy at q = 0 in the pair approximation, e.g., neglecting collective modes as Σ ′′ g,g (q = 0, ω) = − 4N F ω | g γ k | 2 | ∆(k) | 2 Θ(ω 2 − 4 | ∆(k) | 2 ) ω 2 − 4 | ∆(k) | 2 tanh(ω/4T ).(6) The subscript g, g denotes the pair susceptibility calculated with vertices g. The real part can be obtained via a Kramers-Kronig transformation. Here . . . denotes an average over the Fermi surface, N F is the density of states per spin at the Fermi level, Θ is a Theta function and ∆(k) is the generalized k-dependent energy gap. We see that if the gap is isotropic, (∆(k) = ∆), the average around the Fermi surface is frequency independent and thus the symmetry of the vertex only determines an overall prefactor of the self energy. Also, since the imaginary part of the self-energy has a divergence at the pair threshold energy 2∆ a phonon with a frequency below the threshold should be infinitely sharp (neglecting strong-coupling effects). However, if the gap is anisotropic, the vertex and gap couple when averaging over the Fermi surface to produce non-trivial changes in the self-energy of phonons of different symmetries. Further, if the gap vanishes on the Fermi surface, the presence of the nodes can provide decay channels for the phonon leading to a finite linewidth for all non-zero frequencies [3]. The isotropic (L = 0) density-like terms will be coupled to the long range Coulomb forces and thus we must take screening of the vertex into account. Summing R.P.A. diagrams we recover the known result at q = 0, Σ sc. = Σ g,g − Σ 2 g,1 /Σ 1,1 ,(7) where 1 denotes the L = 0 contribution of the vertex g [10,11]. Therefore we see that the L = 0 terms are completely screened for q = 0 as a consequence of the long range Coulomb interactions and do not contribute to the Raman response. Carrying out the integrations in Eq. (6) using a d x 2 −y 2 gap, ∆(k, T ) = ∆ 0 (T ) cos(2φ) for a cylindrical Fermi surface, the spectrum of the phonon self energies can be written down in terms of complete elliptical integrals K and E of the first and second kinds, respectively. 
Taking screening into account and defining x = ω/2∆ 0 , we obtain for T = 0, Σ ′′ sc. B 1g = Σ ′′ B 1g (q = 0, ω) = −4N F g 2 B 1g 3πx [(2 + x 2 )K(x) − 2(1 + x 2 )E(x)], x ≤ 1 , x[(1 + 2x 2 )K(1/x) − 2(1 + x 2 )E(1/x)], x > 1,(8) i.e., the B 1g channel is not affected by Coulomb screening, while Σ sc. A 1g = Σ A 1g ,A 1g − Σ 2 A 1g ,1 /Σ 1,1 ,(9) with the spectral functions Σ ′′ A 1g ,A 1g (q = 0, ω) = −4N F g 2 A 1g 15πx × [(7 − 8x 2 + 16x 4 )K(x) − (7 − 12x 2 + 32x 4 )E(x)], x ≤ 1, x 4 [(32 − 28/x 2 + 11/x 4 )K(1/x) − (32 − 12/x 2 + 7/x 4 )E(1/x)], x > 1,(10)Σ ′′ A 1g ,1 (q = 0, ω) = −2 √ 2N F g A 1g 3πx [(1 + 2x 2 )K(x) − (1 + 4x 2 )E(x)], x ≤ 1, (1/x)[(4 − 1/x 2 )K(1/x) − (4 + 1/x 2 )E(1/x)], x > 1,(11) and Σ ′′ 1,1 (q = 0, ω) = −2N F πx [K(x) − E(x)], x ≤ 1 , x[K(1/x) − E(1/x)], x > 1.(12) The response functions for finite T are obtained simply by multiplying Eqs. (8) and (10-12) by the factor tanh(ω/4T ). The partial screening of the A 1g channel by long-range Coulomb forces comes from the observation that the square of the energy gap enters into the response function in Eq. (6). For the case of d x 2 −y 2 pairing symmetry, the energy gap squared contains a term which transforms according to A 1g symmetry which leads to a mixing of the L = 0 and L = 4 A 1g basis functions. This corresponds to partial "transverse screening" of the A 1g vertex [7]. The corresponding real parts were obtained via Kramers-Kronig analysis and are plotted together with the imaginary parts in Figure 1 for the B 1g and screened A 1g channels. We see that the peak in the imaginary part of the self-energy (which determines the linewidth of the phonon) lies at different frequencies ω peak ∼ 2∆ 0 (T ) and 1.2∆ 0 (T ) for the B 1g and A 1g channels, respectively. This is a consequence of the angular averaging which couples the gap and e − ph vertex, and leads to constructive (destructive) interference under averaging if the vertex and the gap have the same (different) symmetry. Similar behavior for the electronic contribution to Raman scattering led to the reasoning that the symmetry which shows the highest peak position gives an unique indication of the predominant symmetry of the gap [7]. The symmetry dependence is also manifest in the low frequency behavior, which can be written as Σ ′′ B 1g (ω → 0) = 3N F g 2 B 1g x 3 /4 + O(x 5 ), Σ ′′ A 1g (ω → 0) = N F g 2 A 1g x + O(x 3 ),(13) i.e., the spectrum of the self energy rises slower in the B 1g channel than the A 1g channel While the B 1g channel shows a mild frequency dependence away from the peak maximum and then a rapid change of sign at the peak, the A 1g channel shows a smooth crossover from negative to positive values, with a change of sign that occurs at a frequency which is slightly greater than the peak maximum in the imaginary part. Thus a phonon of A 1g symmetry which lies at energies below 2∆ 0 (T ) can become hardened as opposed to softened. We immediately can draw the conclusion that phonons of the same frequency will show qualitatively different behavior in different channels as a consequence of their symmetry. Therefore, careful attention must be paid to symmetry before an analysis of the gap can be made by locating the point where phonon softening or hardening occurs. III. TEMPERATURE DEPENDENCE We now investigate the temperature dependence of the phonon lineshape. The q = 0 spectral function, Eq. (6), vanishes at T c due to the lack of particle-hole continuum for pair creation. 
This term thus always predicts phonon broadening compared to the normal state below T c for a gap with nodes. However, the term responsible for the normal metal self energy (due to, eg., finite momentum transfer or anharmonic decay) will be affected by superconductivity due to the reorganization of the density of states as the gap opens up. In order to recover the normal metal lineshape at T c , one must use finite q (or impurity scattering [10,13]) to generate the additional term δΣ which does not vanish at T c . We now generalize the result to finite q for anisotropic gaps. For finite q, the spectrum of δΣ at finite temperatures is given by δΣ ′′ (q, T, ω) = Θ(v F q − ω) | g γ k | 2 F (k, ω) with F (k, ω) = N F π 2 2v F q ∞ |∆(k)| dE [f (E) − f (E + ω)] E(E + ω)− | ∆(k) | 2 E 2 − | ∆(k) | 2 (E + ω) 2 − | ∆(k) | 2 +Θ(E− | ∆(k) | −ω)[f (E − ω) − f (E)] E(E − ω)− | ∆(k) | 2 E 2 − | ∆(k) | 2 (E − ω) 2 − | ∆(k) | 2 ,(14) where f is a Fermi function. The Theta function Θ(v F q −ω) restricts the frequency shift due to phase space consideration, reflecting that the region of the particle-hole continuum vanishes for small wavenumbers as a consequence of momentum conservation. Since v F q ≪ ∆ in the cuprate materials and also in A-15 materials, this term will only contribute to the self energy for phonons of very small energy. However, it has been shown for s−wave superconductors [10] that the incorporation of impurity scattering removes the phase-space restriction due to the lifting of momentum conservation and δΣ contributes for all frequencies. While incorporating impurity scattering remains beyond the scope of the present treatment, we remark that it is expected that a similar consideration for the case of d−wave superconductors would also lead to the contribution of δΣ for all frequencies. This remains to be explored [14]. In the limit of small frequencies (ω << T ), we obtain the simple result δΣ ′′ (q, ω << T ) = ω 2N F π 2 v F q | gk | 2 e |∆(k)|/T + 1 Θ(v F q − ω).(15) Similarly, at T c , Eq. (14) recovers δΣ ′′ (q, ω << T c ) = Θ(v F q − ω)ω N F π 2 v F q | g γ k | 2 and thus the ratio of the low frequency response in the superconducting state to that of a normal metal at T c is given by δΣ ′′ (q, ω << T ) δΣ ′′ (q, ω << T c ) = 2 | g γ k | 2 f (| ∆(k) |) | g γ k | 2 .(16) This shows how the redistribution of the density of states below T c to higher energies as the gap opens up leads to a reduction of the decay channels available to particle-hole creation and a net decrease in the phonon linewidth. In isotropic superconductors, the Fermi function can be pulled out of the average and the resulting expression is independent of phonon symmetry. However, once the gap is anisotropic, there exists coupling between the vertex and the the gap which leads to a symmetry dependent result. Using a weak coupling expression for the temperature dependence of the energy gap (2∆ 0 /k B T c = 5.1252), we numerically evaluate Eq. (16) while taking screening into account. The results are plotted in Fig. 2 For higher frequencies ω > T , we have evaluated Eq. 14 directly. The results are quite similar to those of Fig. 2 for all frequencies ω up to roughly 4∆, but then at higher frequencies all channels eventually display a linear T dependence (i.e., the behavior of the normal state) for energy scales much greater than the gap energy. IV. ENTIRE SPECTRAL FUNCTION AND ROLE OF SATELLITES In this section we consider the entire phonon spectral function Eq. 
(1), paying particular attention to the role of satellites which arise due to e − ph coupling. The role of satellites have not been explored in anisotropic superconductors. As is well known for BCS superconductors, satellites appear in the phonon spectral function for all frequencies of the optical phonon, but have the greatest residue for phonons near twice the gap edge. In the BCS case, impurities wipe out the satellite peak [10], explaining why they have yet to be definitively observed in conventional A-15 superconductors. In the absence of impurities however vastly different lineshapes can be obtained due to the interference of the satellites. Using a gap of d x 2 −y 2 symmetry and working specifically at q = 0, Eq. (6), we find that the satellites are present in anisotropic superconductors as well due to the fact that the real part of the denominator of Eq. (1) has two zeroes for any value of ω 0 -one at the renormalized phonon frequency and the other at the satellite position. The satellite becomes more pronounced the closer the optical phonon is to the peak position of the self energy spectrum as in the BCS case and interferes with the phonon. Therefore, for the case of a phonon located below the spectral maximum, where the satellite peak is observable only at T=0 for large e − ph coupling, as the gap decreases on approaching T c the satellite will be made to pass through the phonon position and will be subsequently distorted. This is shown in Fig. 3 for a B 1g phonon (ω 0 /∆ 0 (T = 0) = 1.0, g 2 B 1g N F /∆ 0 (T = 0) = 0.1) for the temperatures indicated. The phonon lineshape is drastically affected by the satellite which takes spectral weight away from the phonon when the peak of the spectral function is close to the phonon position. The phonon linewidth grows as the peak of the spectrum moves up in energy with decreasing T and is hardened. The linewidth and frequency shift reaches a maximum when the peak and phonon position coincide and then the linewidth decreases and the phonon softens as T → T c . V. CONCLUSIONS AND COMPARISON WITH EXPERIMENT In this Section we combine the previous results and examine the phonon linewidth as a function of temperature for the case of two phonons which lie at approximately 285, 340 cm −1 for the B 1g channel and 464, 500 cm −1 for the A 1g channel in BSCCO and YBCO, respectively. Using our previous fits to the electronic Raman scattering in BSCCO, we obtained a value of the energy gap at T = 0 to be ∆ 0 (T = 0) = 287 cm −1 [7]. Therefore the normalized optical phonon frequency is given by ω 0 /∆ 0 (T = 0) ∼ 0.99, 1.62 for the B 1g , A 1g phonon, respectively in BSCCO, while for YBCO the ratio is expected to be slightly higher. We can immediately make the following statement. Since the interference effects of the phonon with the satellite peak can only occur for a phonon which is located at T = 0 below the peak in the imaginary part of the self energy, there should be no interference effects on the A 1g phonon since it lies above the peak in the spectrum at T = 0. Thus its renormalization should be a monotonic function of temperature. However that is not the case for the B 1g phonon. Anomalous behavior of the B 1g renormalization as seen in Fig. 3 arises due to the interference between the phonon and the rapid rise of the self energy near 2∆ 0 (T ), which passes through the phonon frequency at T/T c ∼ 0.9. Another remark is in order. In order to make an accurate fit to the data, the magnitude of the coupling constant needs to be addressed. 
As we have seen in Section IV, it controls the strength of the satellite and its subsequent effect on the phonon lineshape. Little is known about the coupling constant [8] and thus we can only make general statements on the behavior of the phonons. The magnitude of the effect cannot be predicted. Inspecting Fig. 1, the q = 0 part of the self energy, at each phonon frequency, we see that the B 1g phonon is broadened and softened at T = 0 compared to T c while the A 1g phonon is broadened but lies right at the point where the real part is changing sign. This term most accurately describes what is seen in the phonons in YBCO. A rapid rise of the B 1g phonon linewidth below T c has been seen [6,15], reflecting the interference of the peak in the self energy and the phonon, (see Eq. (6) and Fig. 3). The B 1g phonon additionally is mostly softened to lower frequencies. For the A 1g phonon, the frequency shift is seen only to slightly higher frequencies. This is due to the real part of the self energy crossing zero at the phonon position. However, the A 1g phonon narrows just below T c , reaches a minimum width and then broadens at lower temperatures. This could be a result of a competition between the effects of Σ and δΣ, but without information concerning the magnitude of the coupling constant, finite momentum transfer or impurity scattering this remains an open question. In addition, we have the contribution arising from non-zero q, Eq. (14), which indicates that this contribution to the self energy is reduced compared to its value at T c and decreases as T 3 and T for low temperatures in the B 1g and A 1g channels, respectively (see Fig. 2). This term most accurately describes the experiments in BSCCO. The linewidth of both the B 1g and A 1g phonons decrease monotonically with temperature, which points to the lack of contribution coming from satellite effects. The linewidth decrease in the B 1g channel can be fit with a T 3 dependence while a term linear in T can be fit to the A 1g phonon which is the same dependence as in the normal state. Both behaviors are consistent with Eq. (16). This is to be compared with the predictions of an isotropic s−wave theory lines, which are identical for each channel (∼ e −∆/T ). The theory for a gap of B 1g character shows a marked symmetry dependence resulting from the interplay of gap and vertex symmetry. While it is arguable whether an exponential temperature dependence can also fit the B 1g data [5], the lack of change of the exponent of the A 1g phonon's linewidth is a direct consequence of the energy gap anisotropy. Therefore it will appear that the A 1g phonon will be unaffected by superconductivity. Since the B 1g phonon has the same symmetry of a d x 2 −y 2 gap, it's linewidth will show the greatest change due to the onset of superconductivity. In conclusion, within the accuracy of the experiments, we have seen that the changes We note that the logarithmic singularity in the B 1g channel is due to the two dimensionality of the Fermi surface. Adding z dispersion removes the singularity and replaces it with a cusp, as seen in Ref. [7]. The peak position is unchanged in the B 1g channel but the low frequency behavior is modified by a log(| 2∆ 0 /ω |) prefactor to the ω 3 behavior. For the A 1g channel, the presence of z dispersion allows the L = 2 channel to contribute. 
The L = 2 A 1g channel has a peak at lower frequencies than the L = 4 channel, (ω peak ∼ 0.6∆ 0 ), and also has the same log prefactor as above to the linear ω dependence, turning the low frequency behavior into a convex shape [7]. Therefore, for a more realistic Fermi surface applicable to the cuprates which is predominantly 2-D representation of the point group of the crystal. For cylindrical Fermi surfaces, L can be replaced by azimuthal quantum numbers. The symmetry of the optical phonon enters into the matrix elements g γ k . The matrix elements for the phonons accessible to in-plane polarizations are given for a cylindrical (2-D) Fermi surface in terms of azimuthal angle φ as g [ 12 ] 12. The power laws are a signature of an energy gap which vanishes on lines on the Fermi surface, but the channel dependence of the exponents are unique to a d x 2 −y 2 pair state. These channel-dependent power-laws have been observed in the electronic contribution to Raman scattering in BSCCO, YBCO, and double and triple layer Thallium cuprates which constitute strong evidence for a d−wave gap of this symmetry as opposed to d xy , d xz or d yz symmetry, which also have nodes on lines on the Fermi surface [7]. The real parts of the self energies (which determines the frequency renormalization) show a change of sign near the peak in the imaginary part. as a function of T /T c for a d x 2 −y 2 energy gap compared to a BCS isotropic gap. The low temperature behavior is given by a power-law in T for all channels for the d−wave case while the ubiquitous exponential dependence in T is seen for all channels in the s−wave case. The power-law behavior for the d−wave case is channel dependent, with exponents identical to those of Eq.(13), in the sense that ω can be replaced by T . What is remarkable is that the fall off of the Fermi function at low temperatures is quite slow in those channels which are orthogonal to the symmetry of the gap, with the notable example of the A 1g channel, which shows a residual broadening at T /T c = 0.3 of roughly 20 percent of that of the normal state. This was argued in the case of electronic Raman scattering to be further evidence for an energy gap in the cuprate materials which has predominantly B 1g character, due to the observation that a gap opens up quickly in the B 1g channel compared to A 1g and others which have been probed via Raman scattering[7]. in the phonon lineshape as a function of symmetry can be explained with a choice of the gap which has (at least predominantly) B 1g character, supporting recent comparisons made on electronic Raman scattering on BSCCO. However, without knowledge of the magnitude of the coupling constant, the importance of finite q and satellite effects remains an open question. Of course other choices of gaps which have a small but finite minimum value, eg. anisotropic s-wave or s + id would give similar results to the d x 2 −y 2 choice for the gap. Both the electronic and phonon contributions to Raman scattering below T c can be explained by simply invoking the symmetry of the vertex which couples to the symmetry of the gap. More detailed experiments would be extremely useful to pin down the magnitude of the e − ph vertex and subsequently the role of the satellites, and the role of impurities and the mechanism and magnitude of the coupling remains to be explored [8,14]. ACKNOWLEDGMENTS I am indebted to Drs. D. Einzel, A. Virosztek and A. Zawadowski for many valuable discussions and a critical reading of the manuscript. 
I would also like to thank D. Leach for discussions and for providing me with the data discussed in the text prior to publication. Similarly, I would also like to acknowledge helpful suggestions and discussions from Drs. R. Hackl, E. Nicol, B. Stadlober, and G. Zimanyi. I would like to acknowledge the hospitality of A. Virosztek and A. Zawadowski and their colleagues at the Research Institute for Solid State Physics of the Hungarian Academy of Science and the Institute of Physics of the Technical University of Budapest, where part of this work was completed. This work was supported by N.S.F. grant 92-06023 and the American Hungarian Joint Grant number NSF 265/92b. 7 T. P. Devereaux, D. Einzel, B. Stadlober, R. Hackl, D. H. Leach and J. J. Neumeier, Phys. Rev. Lett. 72, 396 (1994); T. P. Devereaux, D. Einzel, B. Stadlober and R. Hackl, ibid. 72, 3290 (1994). 8 T 8. P. Devereaux, A. Virosztek and A. Zawadowski, unpublished.9 T. P. Devereaux and D. Einzel, preprint. 10 T. P. Devereaux, Phys. Rev. B 45, 12965 (1992); ibid. 47, 5230 (1993). 11 M.V. Klein and S. B. Dierker, Phys. Rev. B 29, 4976 (1984); H. Monein and A. Zawadowski, Phys. Rev. B 41, 8798 (1990). FIGURESFIG. 1 . 1Real and imaginary parts of the phonon self energy for the B 1g and A 1g channels for a cylindrical Fermi surface. Magnitude of the vertices are set equal to one. FIG. 2. Temperature dependence of the ω → 0 imaginary part of the self energy in a d x 2 −y 2 and isotropic BCS superconductor compared to the normal state. FIG. 3. Phonon spectral function for a B 1g , (ω 0 = ∆ 0 (T = 0), g 2 B 1g N F /∆ 0 (T = 0) = 0.1), phonon in a superconductor with d x 2 −y 2 pairing symmetry for various values of T /T c as indicated in upper part of Figure. The form of the e − ph interaction Eq. (2) is similar to the electronic contribution to Raman scattering in the case of non-resonant scattering with the replacement of the effective Raman vertex by the e − ph coupling vertex. Thus we can proceed along the lines recently taken for the case of the electronic Raman scattering [7], where it was shown that the Raman response is extremely polarization dependent for superconductors with an anisotropic energy gap. Moreover, it was shown that the collective modes which appear in the case of d−wavesuperconductors are of little importance to the Raman response [7,9]. We can then separate the self energy in two parts C Thomsen, M Cardona, Physical Properties of High T c Superconductors. D. M. GinsbergSingaporeWorld ScientificIC. Thomsen and M. Cardona, in Physical Properties of High T c Superconductors, Vol. I, edited by D. M. Ginsberg (World Scientific, Singapore, 1989). . R Zeyher, G Zwicknagl, Z. Phys. B. 78175R. Zeyher and G. Zwicknagl, Z. Phys. B 78, 175 (1990). . E J Nicol, C Jiang, J P Carbotte, Phys. Rev. B. 478131E. J. Nicol, C. Jiang, and J. P. Carbotte, Phys. Rev. B 47, 8131 (1993). . G Burns, G V Chandrashekhar, F H Dacol, P Stroebel, Phys. Rev. B. 39775G. Burns, G. V. Chandrashekhar, F. H. Dacol, and P. Stroebel, Phys. Rev. B 39, 775 (1989); M Boekholt, A Erle, P C Splittgerber-Hünnekes, G Güntherodt, Solid State Comm. 741107M. Boekholt, A. Erle, P. C. Splittgerber-Hünnekes, and G. Güntherodt, Solid State Comm. 74, 1107 (1990). D H Leach, C Thomsen, M Cardona, L Mihaly, C Kendziora, Solid State Comm. 88457D. H. Leach, C. Thomsen, M. Cardona, L. Mihaly, and C. Kendziora, Solid State Comm. 88, 457 (1993). and references therein. 
in nature, the logarithmic singularity in the B 1g channel can be cut off while the A 1g channel can become more convex in shape and have a peak resulting from the superposition of the L = 2 and L = 4 contributions. E Altendorf, X K Chen, J C Irwin, R Liang, W N Hardy, Phys. Rev. B. 478140which is slightly lower than that of a pure 2-D Fermi surfaceE. Altendorf, X. K. Chen, J. C. Irwin, R. Liang and W. N. Hardy, Phys. Rev. B 47, 8140 (1993), and references therein. in nature, the logarithmic singularity in the B 1g channel can be cut off while the A 1g channel can become more convex in shape and have a peak resulting from the superpo- sition of the L = 2 and L = 4 contributions which is slightly lower than that of a pure 2-D Fermi surface. . A Zawadowski, M Cardona, Phys. Rev. B. 4210732A. Zawadowski and M. Cardona, Phys. Rev. B 42, 10732 (1990). . T P Devereaux, unpublishedT. P. Devereaux, unpublished. . S L Cooper, F Slakey, M V Klein, J P Rice, E D Bukowski, D M Ginsberg, Phys. Rev. B. 3811934S. L. Cooper, F. Slakey, M. V. Klein, J. P. Rice, E. D. Bukowski, and D. M. Ginsberg, Phys. Rev. B 38, 11934 (1988). Solid State Comm. E Altendorf, J Chrzanowski, J C Irwin, J Franck, 76391E. Altendorf, J. Chrzanowski, J. C. Irwin and J. Franck, Solid State Comm. 76, 391 (1990).
[]
[ "Path Planning and Energy Management of Hybrid Air Vehicles for Urban Air Mobility", "Path Planning and Energy Management of Hybrid Air Vehicles for Urban Air Mobility" ]
[ "Satyanarayana G Manyam ", "David W Casbeer ", "Swaroop Darbha ", "Isaac E Weintraub ", "Krishna Kalyanam " ]
[]
[]
A novel coupled path planning and energy management problem for a hybrid unmanned air vehicle is considered, where the hybrid vehicle is powered by a dual gas/electric system. Such an aerial robot is envisioned for use in an urban setting where noise restrictions are in place in certain zones, necessitating battery-only operation. We consider the discrete version of this problem, where a graph is constructed by sampling the boundaries of the restricted zones, and develop a path planning algorithm. The planner simultaneously solves the path planning along with the energy mode switching control, under battery constraints and noise restrictions. This is a coupled problem involving discrete decision making to find the path to travel, and determining the state of charge of the battery along the path, which is a continuous variable. A sampling based algorithm to find a near optimal solution to this problem is presented. To quantify the efficacy of the solution, an algorithm that computes tight lower bounds is also presented. The algorithms presented are verified using numerical simulations, and the average gap between the feasible solutions (upper bounds) and the lower bounds is, empirically, shown to be within 15%.
10.1109/lra.2022.3191810
[ "https://arxiv.org/pdf/2205.14464v1.pdf" ]
249,191,315
2205.14464
c315fba23357e8093dad796d2f75638401688f6c
Path Planning and Energy Management of Hybrid Air Vehicles for Urban Air Mobility
Satyanarayana G Manyam, David W Casbeer, Swaroop Darbha, Isaac E Weintraub, Krishna Kalyanam

I. INTRODUCTION
In the area of urban air mobility (UAM) and drone delivery, many commercial ventures are considering electric propulsion aircraft [1], [2]. Given the deficiencies in state-of-the-art lithium-ion battery energy density and fuel cell technology [3], [4], it is prudent to consider alternative technologies that can help reduce our carbon footprint in the near term. Furthermore, as the world looks for faster modes of transportation and quicker delivery of goods, our skies will become saturated with the noise from these drones [5]. In the most prevalent use cases, these vehicles will operate in locations where such droning background noise is unacceptable. A gasoline-electric hybrid aerial robotic vehicle is well suited for UAM or drone delivery applications, where the gasoline engine provides long endurance and the electric motor facilitates a low noise mode [6]. The perceived noise level when the aerial vehicle is powered by the electric motor is considerably lower than with a gasoline engine [7], [8]. In this letter, we consider a noise constrained path planning problem for a hybrid gasoline-electric unmanned air vehicle. We assume the robotic vehicle is equipped with a series hybrid architecture [9], where the propellers are powered by an electric motor that draws power from either the gasoline engine-generator or from the battery. The gasoline engine, when run at full capacity (maximum flow rate of fuel), can run the motor and also charge the battery; or the engine can be run at a limited capacity to produce only sufficient power to run the motor. However, due to frictional losses, it is more efficient to run at full capacity and charge the battery to full whenever possible, and then power the motor with the battery. Therefore, we assume that the fuel rate is at maximum whenever the gasoline mode is chosen. We further assume that the architecture facilitates instant, effortless switching between these two modes.
The robotic vehicles considered can make sharp turns similar to quadrotors, and therefore we do not consider kinematic constraints. The path planning problem involves finding a path between pre-specified start and goal locations in the presence of quiet zones. The aerial robot is allowed to pass through quiet zones; however, it must be powered by the electric mode (gasoline engine turned off) while flying above such zones. A candidate path can be divided into several segments, where each segment could be in either gasoline or electric mode. The cost we have considered in this letter is the fuel cost, and therefore the objective is to minimize the length of the segments that are traveled in gasoline mode. This cost is appropriate for commercial applications where the objective is to minimize fuel consumption. However, the framework presented here can easily be modified for any other application that aims to minimize a different objective, for example, travel time. The decision making involves finding the path while simultaneously determining the switching points from gas to electric and vice-versa. To do so, the planner must determine the segments along the path where the power source is the gasoline engine or the electric motor, such that the state of charge remains within the capacity limits. To find the optimal paths, one needs to model the battery characteristics: the rate of charge of the battery while the robot travels in gasoline mode and the rate of discharge in electric mode. For simplicity, within the operational limits of the battery charge level, we assume that the rates of discharge and recharge with respect to distance traveled are constants. Let q(s) be the variable representing battery charge along a path, where the parameter s represents the length of the path:

q'(s) = −α if electric mode, q'(s) = β if gasoline mode,   (1)
q_min ≤ q(s) ≤ q_max,   (2)

where α and β are the rates of discharge and recharge per unit distance traveled. This model allows for quick evaluation of feasibility for a given state of charge at the initial and final position of a segment of the path. The path planning algorithm we present in this paper can accommodate higher fidelity battery models. Toy Example: Suppose the initial charge is 80%, and the final charge level is required to be at least 50%. A feasible path and the corresponding charge profile along the path are shown in Fig. 1, where there exists one quiet zone, shown in grey. The path is demarcated and shown in magenta and green for the gasoline and electric modes, respectively. Note that the charge profiles are linear due to the model assumed in (1). In this example, the robot travels using the gasoline mode, switches to electric mode before it enters the quiet zone, continues in the electric mode throughout the quiet zone, and switches back to the gasoline mode after exiting the quiet zone. It is clear that the mode of the segment in the quiet zone is electric, and the other segments could be in either gasoline or electric mode. The planning algorithm must choose the modes and switching points along a path that minimize the total fuel cost. Let us represent a path as a series of segments {(v_s, v_1), (v_1, v_2), . . . , (v_n, v_t)}. For a given path, one solution approach would be to determine the charge levels {q_1, . . . , q_n} at the vertices {v_1, . . . , v_n} that are feasible with respect to the battery model, such that the state of charge satisfies (2) everywhere along the path.
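To make the model in (1)-(2) concrete, the following Python sketch propagates the charge over a single segment in either mode. The function name `propagate_charge` and the numeric values are our illustrative assumptions, not quantities fixed by the paper; the rates below happen to satisfy the ratio α/β = 2 used later in the experiments.

```python
def propagate_charge(q, dist, mode, alpha=2.0, beta=1.0,
                     q_min=0.0, q_max=100.0):
    """Charge after traveling `dist` in the given mode, per Eqs. (1)-(2).

    Returns None when an electric segment would drain the battery below
    q_min; gasoline-mode charge saturates at q_max.
    """
    if mode == "electric":
        q_new = q - alpha * dist            # q'(s) = -alpha
        return q_new if q_new >= q_min else None
    return min(q_max, q + beta * dist)      # q'(s) = +beta, clipped at q_max

# Toy usage in the spirit of Fig. 1: recharge, then cross a quiet zone.
q = propagate_charge(80.0, 5.0, "gasoline")   # 85.0
q = propagate_charge(q, 10.0, "electric")     # 65.0, still above q_min
print(q)
```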
However, for the problem considered in this paper, the path itself is a decision variable; this coupling between the path planning and charge profile planning poses a challenge. On its own, path planning in the presence of quiet zones resides in a continuous space, resulting in an infinite dimensional problem. Sampling the continuous space is a popular technique used to translate the problem to a discrete planning problem. We discretize the problem by sampling the boundaries of the quiet zones, and generate a graph similar to a visibility graph [10], shown in Fig. 2. We address the discrete version of the problem by formulating the path planning and the energy management problem on the discrete graph. This allows us to reduce the infinite dimensional path planning problem to a finite dimensional discrete problem. There is a loss of optimality using this approach; however, it yields a simplified and potentially tractable approach to solve the coupled infinite dimensional problem. Note that the loss of optimality depends on the sampling intervals along the boundaries, and therefore the loss can be made sufficiently small with a sufficiently large sampling rate. We also develop an algorithm that produces tight lower bounds to this problem, and thereby corroborates the quality of the feasible solutions. The key idea to obtain this lower bound is to partition the domain of a set of continuous variables, which determine the state of charge at each vertex, into a set of sub-intervals. The state of charge at these vertices is allowed to be discontinuous, i.e., the vehicle can arrive at and exit a vertex with different charge levels, but the two charge levels are forced to lie in one of the sub-intervals. This problem is relaxed compared to the original because we are allowing it to violate the continuity of the state of charge at the vertices.

Related literature: Path planning problems are solved using sampling based algorithms such as RRT [11], RRT* [12], and BIT* [13], road-map based search techniques such as PRM methods [14], and incremental graph search methods such as D* Lite [15]. Other path planning techniques include visibility graph, Voronoi diagram based, or potential field methods [16]. There exist few results in the area of energy aware path planning. In [17], a planning problem is considered where a robot needs to accomplish a set of goals while maintaining a minimum energy threshold. In [18], an energy aware coverage planning problem is addressed, where a robot needs to cover an area while minimizing the total energy consumption. A planning problem to minimize energy consumption in the presence of disturbances is addressed in [19]; that paper uses a model predictive approach to estimate safety critical states, and presents a self-triggering schedule to re-plan, which is compared to periodic re-planning. Zhang et al. addressed route planning for a plug-in hybrid vehicle in [20] that minimizes energy consumption; the authors addressed the coupled routing problem that aims to simultaneously optimize the decision making for the path and the power management. The problem considered in this paper addresses the path planning problem for a hybrid aerial robot in the presence of (noise-)restricted zones. Here, we aim to optimize the travel cost for a robotic vehicle that can switch between two different modes.
To the best of our knowledge, path planning that allows a robot to switch between travel modes, with constraints on the modes in certain regions, has not been addressed before in the literature. The application of this novel problem to urban air mobility and drone delivery is of particular relevance and deserves attention. In sampling based and incremental search techniques, a tree is iteratively constructed by following a sampling procedure. Determining the switch points along the path while simultaneously growing the tree is not possible without decoupling the path planning and energy management problems. The novelty of the proposed algorithm lies in addressing the coupled problem that involves path planning and energy management. Moreover, the presented technique facilitates computation of a lower bound to the optimal solution, which ratifies the quality of the feasible solutions produced. The main contributions of this work are: (i) we present a novel path planning and energy management problem for a hybrid robot suited for an urban air mobility application, (ii) we formulate the coupled problem on the discrete graph, which involves discrete decision making for the path and continuous variables for the state of charge along the path, (iii) we present a sampling based approach to find near optimal solutions, (iv) we develop a partitioning algorithm to find tight lower bounds to the optimal solution, and (v) we validate the presented algorithms using numerical experiments on some benchmark areas of operation. The rest of the paper is organized as follows. In Section II, a graph is constructed by sampling the boundaries of the quiet zones, and the hybrid path planning problem is defined on this graph. The algorithms to compute near optimal solutions and tight lower bounds, using the sampling and partitioning approaches, respectively, are presented in Section III. The algorithms presented were tested using computational experiments, the results of which are presented in Section IV, and some concluding remarks are provided in Section V.

II. PROBLEM FORMULATION
Before formalizing the problem, we define the graph G_s using Algorithm 1, which takes as inputs the start and goal locations, v_s and v_t, a set of quiet zones, O, and a sampling interval, δL. The boundaries of each quiet zone are sampled uniformly by choosing a point at every δL units of distance. The vertex set, V_s, is built by creating a vertex corresponding to these samples and the start and goal positions (steps 4-6 of Algorithm 1). An edge is added if the two vertices are visible (two vertices are considered visible if the straight line connecting them does not intersect with any other quiet zones); note that the edges between the vertices that belong to the same quiet zone are added too (step 8). The edge set consists of edges both inside and outside the quiet zones. The sampled graph of the example problem is shown in Fig. 2. The dotted lines inside the quiet zone represent the feasible segments of the path, where the mode of travel is restricted to electric. Though we consider only polygonal quiet zones in the examples, this method directly applies to non-polygonal quiet zones too. For non-convex shapes, one may consider the convex hull as done in [21], and generate the graph. If one needs to consider obstacles along with quiet zones, it is sufficient to add the vertices of the obstacles to V_s, and the corresponding edges that do not intersect with the obstacles. We formulate and solve the path planning and energy management problem using the sampled graph, G_s(V_s, E_s).
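The construction just described (DISCRETESAMPLING of zone boundaries and the visibility-based edge set of Algorithm 1, given below) can be sketched in Python as follows. The polygon representation as a list of (x, y) vertices and the caller-supplied `visible` predicate are our assumptions for illustration.

```python
import math
from itertools import combinations

def discrete_sampling(polygon, dL):
    """Sample a closed polygonal boundary every dL units of arc length."""
    samples, carry = [], 0.0
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        seg = math.hypot(x1 - x0, y1 - y0)
        d = dL - carry               # distance to the next sample on this edge
        while d <= seg:
            t = d / seg
            samples.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += dL
        carry = seg - (d - dL)       # arc length left over past the last sample
    return samples

def sampling_graph(v_s, v_t, zones, dL, visible):
    """Sketch of Algorithm 1: build G_s = (V_s, E_s).

    `zones` is a list of polygons; `visible(p, q)` must return True iff the
    straight segment p-q does not intersect any other quiet zone.
    """
    V_s = [v_s, v_t]                                  # steps 2-4
    for zone in zones:                                # steps 5-6
        V_s.extend(discrete_sampling(zone, dL))
    E_s = [(p, q) for p, q in combinations(V_s, 2)    # steps 7-9
           if visible(p, q)]
    return V_s, E_s
```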
With this approximation, the path planning reduces to finding an ordered sequence of nodes S_p := {v_s, v_s1, . . . , v_sk, v_t} on graph G_s, along with the state of charge of the battery at each of these nodes, {q_s, q_s1, . . . , q_sk, q_t}. Let v_l indicate the l-th node in S_p. The following must hold for S_p and the corresponding battery charge at each node v_l ∈ S_p: (i) the states of charge q_sl, q_sl+1 on every edge (v_l, v_l+1) ∈ S_p satisfy the battery dynamics (1) and (2), (ii) the segments of the path in the quiet zones are in electric mode, (iii) the state of charge of the battery at the terminal node is greater than a specified value, q_goal, and (iv) the total cost of travel is minimized. The cost we aim to minimize is the total fuel consumed while traveling in gasoline mode. Therefore, an edge has zero cost if it is traveled completely in electric mode.

Algorithm 1 Construction of the sampling graph
1: function SAMPLINGGRAPH(v_s, v_t, O, δL)
2:   V_s ← INITIATENODESET()
3:   E_s ← INITIATEEDGESET()
4:   V_s ← V_s ∪ {v_s, v_t}
5:   for O_i ∈ O do
6:     V_s ← V_s ∪ DISCRETESAMPLING(O_i, δL)
7:   for v_i, v_j ∈ V_s do
8:     if CHECKEDGEFEASIBILITY(v_i, v_j) then
9:       E_s ← E_s ∪ (v_i, v_j)
10:  G_s ← CREATEGRAPH(V_s, E_s)
11:  return G_s

Let I := {1, . . . , |V_s|} be the set of indices of the vertices in V_s. For any i, j ∈ I, let x_ij denote the binary variable such that x_ij = 1 if the edge (v_i, v_j) is chosen to be on the path, and x_ij = 0 otherwise. Given the states of charge of the battery q_i and q_j at the vertices v_i and v_j, respectively, let c_ij(q_i, q_j) be the cost of travel from v_i to v_j. Let x represent the matrix of all binary variables, and let X be the set of all feasible paths from v_s to v_t in G_s. The optimization problem, P_1, is stated as the following:

P_1 :  min_{x ∈ X, q_k ∈ [q_min, q_max] ∀k ∈ I}  Σ_{i,j ∈ I} x_ij c_ij(q_i, q_j)   (3)

In the following sections, we present the algorithms that simultaneously optimize the two sets of variables, x and {q_i, i ∈ V_s}. The approach involves sampling of the state of charge at every vertex in V_s, constructing a graph G_u(V_u, E_u), and solving a shortest path problem on this graph; the solution to the shortest path problem on G_u produces a feasible solution (upper bound to the optimal solution) to P_1. For a given map with quiet zones, the construction of the base graph without the start and goal can be done offline. We only need to add the start and goal vertices and the corresponding edges to compute the shortest path, which significantly reduces the online computation time. We also present a partitioning approach, similar to the upper bounding algorithm, that produces tight lower bounds. In this way, we provide both upper and lower bounds to the optimal solution, and hence the gap between the two is the maximum gap between the optimal and feasible (upper bound) solutions.

III. TECHNICAL APPROACH
The algorithms to compute the upper bounds and lower bounds follow a method similar to Algorithm 1 of creating nodes and edges. These algorithms sample and partition the state of charge at each vertex in V_s. In the upper bounding algorithm, the state of charge at each vertex is uniformly sampled, creating a new node for each sample of the state of charge at each vertex.
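The uniform charge sampling just described fits in a few lines of Python; the function names and the (vertex, charge) tuple encoding of the upper-bound nodes are our own conventions.

```python
def charge_samples(q_min, q_max, dq):
    """Uniform charge grid Q_u with spacing dq (endpoints included)."""
    n = int(round((q_max - q_min) / dq))
    return [q_min + k * dq for k in range(n + 1)]

def clone_vertices(V_s, Q_u):
    """Upper-bound node set: one node per (vertex, charge sample) pair."""
    return [(v, q) for v in V_s for q in Q_u]
```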
In contrast, the lower bounding algorithm partitions the feasible interval of state of charge into small sub-intervals and builds a graph where each node represents a sub-interval of the state of charge at each vertex in V_s. This partitioning approach is similar to that found in [22]-[25]. For coupled optimization problems involving both discrete and continuous variables, this approach is found to produce tight lower and upper bounds, and therefore guarantees the quality of the upper bounds with respect to the optimal solution.

A. Feasible Solution (Upper bounds)
In problem P_1, at any vertex v_k ∈ V_s \ {v_s, v_t}, the charge q_k is a continuous variable and q_k ∈ [q_min, q_max]. To compute near optimal feasible solutions, for every v_k ∈ V_s \ {v_s, v_t}, we choose a discrete set of values, Q_u^k, sampled uniformly in the interval [q_min, q_max]. Such a sampling procedure transforms P_1 into the discretized problem P_2 below:

P_2 :  min_{x ∈ X, q_k ∈ Q_u^k ∀k ∈ I}  Σ_{i,j ∈ I} x_ij c_ij(q_i, q_j).   (4)

By construction, any solution to P_2 is a feasible solution to P_1. To solve P_2, we construct a graph G_u(V_u, E_u), where the set of nodes, V_u, consists of all nodes corresponding to every combination of v_k ∈ V_s \ {v_s, v_t} and q_k ∈ Q_u^k. One can choose the discrete set Q_u^k to be the same for every node v_k; let it be Q_u. For the start node, there is only one value, (v_s, q_init), and the feasible charges for the goal node are sampled from the interval [q_goal, q_max]; denote this set of charges as Q_goal. Then V_u contains the Cartesian product of the sets V_s \ {v_s, v_t} and Q_u, and the nodes corresponding to the start and goal nodes, that is, V_u := {V_s \ {v_s, v_t} × Q_u} ∪ {(v_s, q_init)} ∪ {{v_t} × Q_goal}. An illustration of the graphs G_s and G_u is shown in Figs. 3(a) and 3(b). For any node v_l ∈ V_u, let the position corresponding to the node v_l be p_l and the state of charge of the battery be q_l. We check if a feasible edge exists between a pair of nodes that complies with the battery dynamics (1), and compute the cost of such an edge if it exists. For a given pair of nodes v_ui, v_uj ∈ V_u, where v_ui = (v_i, q_i), v_uj = (v_j, q_j), Algorithm 2 checks if the edge (v_ui, v_uj) is feasible with respect to the battery dynamics. The algorithm returns the cost of the corresponding edge as zero if the travel mode is completely electric. Notice that the gasoline engine runs at full capacity whenever it is turned on, and therefore it is sufficient to find the length of the segments that are traveled in gasoline mode to evaluate the cost. If an edge is traveled in both gasoline and electric modes, the algorithm computes the distance traveled in gasoline mode, λd_ij, and returns the fuel cost of travel, c_f λd_ij, where c_f is the cost of fuel per unit distance traveled.

Algorithm 2 Evaluation of an edge
1: function SOCFEASIBILITY(v_ui, v_uj)
2:   d_ij = |position(v_ui) − position(v_uj)|
3:   if q_j > min(q_max, q_i + βd_ij) then
4:     FeasCheck ← false
5:   else if q_i − αd_ij > q_min and q_j ≤ q_i − αd_ij then   ▷ electric mode
6:     FeasCheck ← true
7:     λ ← 0
8:   else   ▷ partly gasoline/electric
9:     λ_1 ← (q_max − q_i)/(β · d_ij)
10:    λ_2 ← (q_max − q_j)/(α · d_ij)
11:    if λ_1, λ_2 ≥ 0 and λ_1 + λ_2 ≤ 1 then
12:      λ ← λ_1 + (α/(α + β)) · (1 − λ_1 − λ_2)
13:    else   ▷ charge does not saturate at q_max
14:      λ ← (q_j − q_i + αd_ij)/((α + β) · d_ij)
15:    if 0 ≤ λ ≤ 1 then
16:      FeasCheck ← true
17:  if FeasCheck then
18:    cost ← c_f λd_ij   ▷ fuel cost
19:  return FeasCheck, cost

Algorithm 2 checks if the "all electric" mode is feasible in step 5. The variable λ indicates how much of the edge is traveled in gasoline mode.
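A Python transcription of Algorithm 2 follows, under the gasoline-first convention the text adopts below. The unsaturated branch (steps 13-14 above) is our reconstruction from the linear dynamics (1): if the charge never hits q_max, continuity gives q_j = q_i + λβd − (1 − λ)αd, so it should be read as an interpolation rather than the authors' exact code.

```python
import math

def soc_feasibility(p_i, p_j, q_i, q_j, alpha, beta, q_min, q_max, c_f):
    """Feasibility and fuel cost of edge (v_ui, v_uj), mirroring Algorithm 2."""
    d = math.dist(p_i, p_j)
    if q_j > min(q_max, q_i + beta * d):
        return False, None                      # target charge unreachable
    if q_i - alpha * d > q_min and q_j <= q_i - alpha * d:
        lam = 0.0                               # all-electric suffices
    else:
        lam1 = (q_max - q_i) / (beta * d)       # gasoline until q_max
        lam2 = (q_max - q_j) / (alpha * d)      # electric from q_max down to q_j
        if lam1 >= 0 and lam2 >= 0 and lam1 + lam2 <= 1:
            lam = lam1 + (alpha / (alpha + beta)) * (1 - lam1 - lam2)
        else:                                   # charge never saturates at q_max
            lam = (q_j - q_i + alpha * d) / ((alpha + beta) * d)
        if not 0.0 <= lam <= 1.0:
            return False, None
    return True, c_f * lam * d                  # zero cost when lam == 0
```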
When an edge is traveled in partly gasoline and partly electric mode, without loss of generality, we assume that the robot travels in gasoline mode first and then switches to electric. Further, we allow a maximum of three switch points on an edge. When an edge is sufficiently long, more than three switch points might be necessary; however, such cases can be accommodated by breaking the long edge into smaller edges through the introduction of artificial vertices. If an edge is traveled using both modes, the value of λ is computed in steps 9-14, and the corresponding cost is computed in step 18.

To solve the problem P_2, we construct the graph G_u(V_u, E_u) such that a shortest path on this graph produces a solution to the problem P_1. The pseudocode of the algorithm that solves P_2 is presented in Algorithm 3. The problem prescribes a state of charge of the robot at the start, and therefore we can create a node, v_us, correspondingly, and add it to V_u, as shown in step 6. In step 7, the discrete set, Q_u, is obtained by sampling the interval [q_min, q_max]. In steps 9-10, for every combination of (v_i, q_j), v_i ∈ V_s \ {v_s, v_t}, q_j ∈ Q_u, we create a node and add it to V_u. A minimum state of charge, q_goal, is required at the goal position; to satisfy this, we sample uniformly in the interval [q_goal, q_max], and create the set of nodes, V_goal (shown in step 12), that correspond to the goal position and a state of charge q_j ∈ Q_goal. In steps 14-19, we construct the set of edges, E_u, by adding an edge for every pair of nodes in V_u if Algorithm 2 returns a feasible solution; the costs returned by the algorithm are set as the weights of those edges. Note that there could be multiple nodes in V_goal that correspond to the goal position and satisfy the minimum charge required at the goal. Therefore, any path from v_us to a node in V_goal is a feasible path. To find the minimum cost path, we add another node, v_ut, that corresponds to the goal, and add zero cost edges between all nodes in V_goal and v_ut, as shown in steps 20-22. Finally, we use Dijkstra's algorithm to find the optimal shortest path from v_us to v_ut in step 24. The time complexity of Dijkstra's algorithm is O(|V_u|²), and it returns the minimum cost path on the graph G_u. The trajectory for the robot is constructed using the positions of the vertices in the shortest path.

Algorithm 3 Construction of the graph G_u
1: function SAMPLINGGRAPHG(G_s, δq)
2:   V_u ← INITIATENODESET()
3:   E_u ← INITIATEEDGESET()
4:   v_us ← CREATENODE(v_s, q_init)
5:   v_ut ← CREATENODE(v_t)
6:   V_u ← V_u ∪ {v_us}
7:   Q_u ← DISCRETESAMPLING([q_min, q_max], δq)
8:   Q_goal ← DISCRETESAMPLING([q_goal, q_max], δq)
9:   for v_i ∈ V_s, q_j ∈ Q_u do
10:    V_u ← V_u ∪ CREATENODE(v_i, q_j)
11:  for q_j ∈ Q_goal do
12:    V_goal ← V_goal ∪ CREATENODE(v_t, q_j)
13:  V_u ← V_u ∪ V_goal
14:  for v_ui, v_uj ∈ V_u do
15:    if CHECKEDGEFEASIBILITY(v_ui, v_uj) then
16:      feas, cost ← SOCFEASIBILITY(v_ui, v_uj)
17:      if feas then
18:        E_u ← E_u ∪ (v_ui, v_uj)
19:        c_ij ← cost
20:  V_u ← V_u ∪ {v_ut}
21:  for v_uk ∈ V_goal do
22:    E_u ← E_u ∪ (v_uk, v_ut)
23:  G_u ← CREATEGRAPH(V_u, E_u)
24:  path_u ← SHORTESTPATH(G_u, v_us, v_ut)
25:  return path_u
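Steps 20-25 of Algorithm 3 reduce to a standard Dijkstra search to an artificial goal node; a self-contained sketch is below. The adjacency-dict representation and the tie-breaking counter (so heap entries never compare nodes directly) are our implementation choices; `edges[u]` is assumed to hold (neighbor, cost) pairs produced by SOCFEASIBILITY.

```python
import heapq
import itertools
import math

def shortest_path(edges, src, goal_nodes):
    """Dijkstra from `src` to an artificial sink joined to every goal node
    by a zero-cost edge (cf. steps 20-25 of Algorithm 3)."""
    sink = "SINK"
    adj = {u: list(vs) for u, vs in edges.items()}   # do not mutate the caller's graph
    for g in goal_nodes:
        adj.setdefault(g, []).append((sink, 0.0))    # zero-cost edges, steps 21-22
    tie = itertools.count()
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, next(tie), src)]
    while heap:
        d, _, u = heapq.heappop(heap)
        if u == sink:
            break
        if d > dist.get(u, math.inf):
            continue                                 # stale heap entry
        for v, w in adj.get(u, ()):
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, next(tie), v))
    if sink not in prev:
        return None, math.inf                        # goal unreachable
    path, node = [], prev[sink]                      # drop the artificial sink
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], dist[sink]
```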
The states of charge corresponding to each of the nodes in path u are the charges at corresponding vertices in V u . The feasibility check in step 16 of Algorithm 3 ensures the feasibility of the whole path. Since, the weight of each edge is the cost of travel in gasoline mode, the shortest path is the path of minimum fuel cost. The optimal solution of P 2 is a feasible solution to P 1 , but may not be optimal due to the discrete sampling. A tight lower bound could corroborate the quality of a feasible solution, and an algorithm to compute tight lower bounds is presented in the next section. B. Lower Bounds To compute lower bounds, it is a common practice to relax a set of constraints and the optimal solution of the resulting relaxed problem gives a lower bound to the original problem. The Held-Karp bounds for traveling salesman problem [26], is a good example of this technique. In recent work, for coupled problems involving discrete and continuous decision variables, a technique was developed where a set of constraints are relaxed, and another set of 'loose' constraints are added, such that it produces tight lower bounds. This was shown to produce very tight lower bounds for routing problem with turn radius constraints [23], [24] and for neighborhood traveling salesman problem [25]. The problem in this paper is also a coupled problem; and we develop an algorithm using a similar idea of partitioning the continuous decision variables, that produces tight lower bounds. To compute the lower bound, we pose a relaxation of the problem P 1 . Any feasible solution to P 1 would be given as a sequence of nodes {v 1 , . . . v p }, and a state of charge at each of these nodes, {q 1 , . . . q p }. The continuity of the state of charge dictates that at any intermediate node v j along the path, the state of charge at the end of prior edge is same as the state of charge at the beginning of the following edge. For example, if a feasible solution contains two successive edges, (v i , v j ) and (v j , v k ), let q e ij be the charge at the end of edge (v i , v j ), and q s jk be the charge at the start of edge v j v k . The position of the end of the edge (v i , v j ) is same as the start of the edge (v j , v k ); therefore, the continuity of the charge profile dictates that q e ij = q s jk . We relax this continuity constraint and allow q e ij and q s jk to be different but we restrict them to lie in an interval, i.e., q e ij , q s jk ∈ (q p , q p+1 ). We refer to this relaxed problem as P 3 . Since, this is a relaxation to P 1 , every feasible solution to P 1 is also feasible to the relaxed problem P 3 . Therefore, the optimal solution of P 3 is a lower bound to the optimal solution of P 1 . To this end, at every node, we partition the feasible interval of state of charge into n l sub-intervals. At the vertices v i ∈ V s \ {v s , v t }, the set of intervals would be Q i l = {[q min , q 1 ], [q 1 , q 2 ], . . . [q n l −1 , q max ]}. At the goal node, the minimum charge required is q goal , and therefore the intervals would be {[q goal , q 1 ], . . . [q ng−1 , q max ]}, for some n g . Letq i represent an interval of states of charge at a node v i . Now the relaxed problem P 3 is stated as follows: P 3 : min x∈X ,q k ∈Q k l ,∀k∈I i,j∈I x ij c ij (q i ,q j ).(5) We solve P 3 by constructing a graph, G l (V l , E l ), similar to the G u (V u , E u ). The nodes in V l are the combination of the vertices v i ∈ V s and the intervalsq j ∈ Q j l . 
For a pair of nodes v_li, v_lj ∈ V_l, let (q^i_k, q^i_{k+1}) and (q^j_m, q^j_{m+1}) be the corresponding charge intervals. The cost of the corresponding edge in E_l is the minimum cost of the edge,

min_{q_i ∈ (q^i_k, q^i_{k+1}), q_j ∈ (q^j_m, q^j_{m+1})} c_ij(q_i, q_j).   (6)

Due to the linear battery dynamics, this cost can be found by using Algorithm 2 with the upper limit of the interval at the first node and the lower limit of the interval at the second node, i.e., q^i_{k+1} and q^j_m. The construction of the graph G_l(V_l, E_l) is similar to the one presented in Algorithm 3; it differs in only two aspects: (i) in steps 7-8, the discrete sampling is replaced with a continuous partition of the intervals [q_min, q_max] and [q_goal, q_max], respectively, and (ii) the edge costs, in step 16, are assigned using the solution of (6). Similar to steps 10 and 12 of Algorithm 3, nodes are created corresponding to a combination of vertices v_i ∈ V_s and intervals q̄_j ∈ Q^i_l. The rest of the graph construction proceeds similarly, and therefore, to avoid repetition, we do not present the pseudocode for the lower bounding algorithm. An illustration of the construction of the graphs G_u and G_l is shown in Figs. 3(b)-3(c). In the following theorem, we formally prove that the optimal solution to P_3 is a lower bound to the optimal solution of P_1.

Theorem 1. The optimal solution of P_3 is a lower bound to the optimal solution of P_1.

Proof. It is sufficient to show that every feasible solution to P_1 is also a feasible solution to P_3, and that the cost of the solution to P_3 is less than or equal to the cost of the solution to P_1. Let V_feas := {v_s, v_s1, . . . , v_sp, v_t} be the sequence of vertices in a feasible solution to P_1, and let Q_feas := {q_s, q_s1, . . . , q_sp, q_t} be the corresponding states of charge at those vertices. For each vertex v_k ∈ V_feas and the corresponding state of charge q_k, there exists an interval q̄_k ∈ Q^k_l such that q_k ∈ q̄_k. Construct a path in G_l by identifying the nodes corresponding to v_k and q̄_k. This gives a feasible path from v_ls to v_lt in G_l; let V^l_feas be the sequence of nodes. The cost of each edge between successive nodes in V^l_feas is less than or equal to the cost of the corresponding edge in V_feas due to (6), and thus the cost of the path in V^l_feas is less than or equal to the one in V_feas.

IV. COMPUTATIONAL RESULTS
To evaluate their performance, we have tested the algorithms that compute the upper bounds and the lower bounds using several scenarios constructed from randomly generated maps and benchmark maps. Since the problem of path planning in the presence of obstacles is closely related to the problem we consider, we use previously established benchmark maps [27] to test our methods. We constructed the maps by randomly generating polygonal restricted zones, where the centers of the polygons are sampled from a uniform distribution. The number of sides is also randomly generated for each restricted zone. There are 10, 15, 20 and 25 restricted zones in map1, map2, map3 and map4, respectively. We have constructed two more maps using the benchmark instances 'boston2' and 'newyork0' from [27]. These are based on real world maps of regions in the cities of Boston and New York; we identified the regions directly above the buildings as restricted zones, and they are appropriate for drone delivery applications as discussed in Section I.
An interested reader can access the Julia code to extract the buildings from the OpenStreetMaps data at https://github.com/manyamgupta/HybridPathPlanning.git. We have generated 50 scenarios with each of the above maps, where the start and goal positions are chosen from a random distribution such that the straight line distance between them is greater than a specified limit. Further, the instances are run with different levels of discretization of the states of charge. The rate of discharge, α, and the rate of recharge, β, are chosen such that the ratio α/β is equal to two, i.e., the battery discharges twice as fast as it recharges per unit distance traveled. In Fig. 4(a), the path of the feasible solution is shown for a scenario generated in the 'newyork0' map, and the charge profiles of the feasible path and the lower bound are shown in Fig. 4(b). We consider no-fly zones in this scenario, shown in red, which were addressed as explained in Section II. The vertical lines in Fig. 4(b) are the positions of the vertices, and one may observe that the charge profile is not continuous for the lower bound path. This is expected due to the relaxation of the charge continuity, and the resulting solution is a lower bound rather than a feasible solution. For this scenario, the cost of the feasible path produced by Algorithm 3 is 5318 and the lower bound is 5137; therefore, the gap between the upper bound and lower bound is around 3.5%. This implies that the feasible solution is within less than 3.5% of the optimal solution. The percent gap between the lower bounds and upper bounds is a measure of the quality of the feasible solutions; it is the maximum gap between the feasible solution and the optimal solution. For each of the maps, there are 50 scenarios, and each scenario is solved with 20, 30 and 40 discretizations. A box plot of the percentage gap is shown in Fig. 5(a). Clearly, with a higher sampling rate, the algorithm produces better solutions, as evident from the reducing gap with higher discretizations. The maps Boston and New York have a few quiet zones with very small edges, and because of these, the lower bound graph consists of many zero cost edges. For example, let [q_a, q_b] be the charge interval corresponding to the vertices v_k and v_l in G_l. If the charge required to travel on the edge (v_k, v_l) is less than q_b − q_a, this edge can be on a path with zero cost and have the same charge at the start and end of the edge. This results in loose lower bounds, and hence the higher gap. However, the algorithm produces tighter lower bounds by choosing a sufficiently large number of sub-intervals n_l while constructing the graph G_l. But this comes at a higher cost in the computational time required. The computation times required to find the upper bounds and lower bounds are shown as box and whisker plots in Figs. 5(b) and 5(c). The higher computation times for the Boston and New York maps are due to the higher number of quiet zones. A significant part of the computational effort in Algorithm 3 is spent constructing the graph G_u(V_u, E_u). However, for UAM applications the restricted zones and the start and end locations are known a priori and can be computed ahead of time. For other applications, like package delivery, the restricted zones and the problem parameters are known a priori, but the positions of the start and goal may not be known.
In practice, one may construct the parts of the graph G_u(V_u, E_u) offline, without the nodes corresponding to the start and goal positions and the edges incident on them. When the robot's start and goal positions are specified, the corresponding edges of the graph can be constructed and added to the graph with Algorithm 3. Therefore, to validate the feasibility of an online implementation of this algorithm, it is sufficient to analyse the computational effort of the online part. The online computation time required by Algorithm 3 is shown in Fig. 5(b). Though the effort required increases with a higher number of restricted zones and a higher sampling rate, it is still on the order of seconds, and therefore is viable for on-board implementation. To evaluate the cost savings from the proposed framework, we solved the related but different path planning problem where the quiet zones are considered to be no-fly zones (i.e., feasible paths must completely avoid the quiet zones). This is done by removing the quiet zone edges from the graph G_u, and solving for the shortest path thereafter. Note that we still consider the hybrid mode of the robotic vehicle, and the switching between the gasoline and electric modes still exists. If we were to restrict this to gasoline mode only, the savings would be much larger. The percentage reduction in cost using Algorithm 3 is presented in Fig. 6. The cost savings are around 5 to 10% for most cases, with some cases having much higher savings. This large variance is due to the randomly generated start and goal locations; the cost reduction depends on the difference in length between the shortest path that avoids the "no-fly zones" and the path generated from our work that passes through the restricted zones.

V. CONCLUSIONS
A novel hybrid path planning problem that arises from urban air mobility is presented. In this path planning problem, the hybrid vehicle is required to run in electric mode in certain regions to comply with noise restrictions. The path planner needs to generate a path and a schedule for switching between gasoline and electric modes. Algorithms based on sampling and partitioning are presented, yielding upper and lower bounds to the coupled path planning and energy management problem. The paper assumes a linear battery model, but a higher fidelity model could be easily integrated with the algorithms presented here. The solutions produced by the presented algorithms are empirically shown to yield upper and lower bounds that are within 15% of one another, indicating that the feasible solutions are of high quality. As a future research direction, one may develop an iterative scheme to compute lower bounds that refines the partitioning in each iteration only where it is necessary, and thus overcome the cost of higher sampling. Another direction of future research includes the adaptive sampling of the boundaries of the quiet zones, which chooses a higher number of samples on the boundaries that are more likely to be on the path.

Fig. 2. A graph generated by sampling the boundaries of the quiet zones.
Fig. 3. Graph construction to compute the upper bounds and lower bounds to P_1: (a) G_s(V_s, E_s) constructed by sampling the boundaries of the quiet zones; (b) G_u(V_u, E_u) obtained by sampling the charge at every v_i ∈ V_s; (c) G_l(V_l, E_l) obtained by partitioning the charge into sub-intervals.
Fig. 4. Results of a scenario generated in the 'newyork0' map with 35 quiet zones.
Fig. 5. Computational results from benchmark and random maps.
Fig. 6. Percentage cost reduction compared to path planning that avoids quiet zones.
Fig. 1. An example of a feasible path for hybrid navigation.
Satyanarayana G. Manyam is with the Infoscitex corporation, a DCS

[1] S. Hasan, "Urban air mobility (UAM) market study," NASA, Tech. Rep., 2019.
[2] M. D. Patterson, D. R. Isaacson, N. L. Mendonca, N. A. Neogi, K. H. Goodrich, M. Metcalfe, B. Bastedo, C. Metts, B. P. Hill, D. DeCarme, et al., "An initial concept for intermediate-state, passenger-carrying urban air mobility operations," in AIAA Scitech Forum, no. 1626, 2021.
[3] K. Button, "Faith in batteries," Aerospace America, pp. 36-42, Oct. 2021.
[4] V. Smil, "Decarbonization Algebra: The COP26 Calls for Impossibly Steep Cuts in Carbon Emissions: Numbers Don't Lie," IEEE Spectrum, vol. 59, no. 2, pp. 20-21, Feb. 2022.
[5] S. A. Rizzi, D. L. Huff, D. D. Boyd, P. Bent, B. S. Henderson, K. A. Pascioni, D. C. Sargent, D. L. Josephson, M. Marsan, H. B. He, et al., "Urban air mobility noise: Current practice, gaps, and recommendations," NASA, Tech. Rep., 2020.
[6] W. J. Fredericks, M. D. Moore, and R. C. Busan, "Benefits of hybrid-electric propulsion to achieve 4x cruise efficiency for a VTOL UAV," in International Powered Lift Conference, no. 4324, 2013.
[7] R. Cabell, F. Grosveld, and R. McSwain, "Measured noise from small unmanned aerial vehicles," in Inter-Noise and Noise-Con Congress and Conference Proceedings, vol. 252, no. 2. Institute of Noise Control Engineering, 2016, pp. 345-354.
[8] H. D. Kim, A. T. Perry, and P. J. Ansell, "A review of distributed electric propulsion concepts for air vehicle technology," in AIAA/IEEE Electric Aircraft Technologies Symposium (EATS). IEEE, 2018, pp. 1-21.
[9] J. Lieh, E. Spahr, A. Behbahani, and J. Hoying, "Design of hybrid propulsion systems for unmanned aerial vehicles," in 47th AIAA/ASME/SAE/ASEE Joint Propulsion Conference & Exhibit, 2011, p. 6146.
[10] M. de Berg, M. van Kreveld, M. Overmars, and O. C. Schwarzkopf, Computational Geometry: Algorithms and Applications. Berlin, Heidelberg: Springer, 2000, ch. 15, pp. 307-317.
[11] J. Kuffner and S. LaValle, "RRT-connect: An efficient approach to single-query path planning," in Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065), vol. 2, 2000, pp. 995-1001.
[12] S. Karaman and E. Frazzoli, "Sampling-based algorithms for optimal motion planning," The International Journal of Robotics Research, vol. 30, no. 7, pp. 846-894, 2011.
[13] J. D. Gammell, T. D. Barfoot, and S. S. Srinivasa, "Batch informed trees (BIT*): Informed asymptotically optimal anytime search," The International Journal of Robotics Research, vol. 39, no. 5, pp. 543-567, 2020.
[14] L. E. Kavraki, P. Svestka, J.-C. Latombe, and M. H. Overmars, "Probabilistic roadmaps for path planning in high-dimensional configuration spaces," IEEE Transactions on Robotics and Automation, vol. 12, no. 4, pp. 566-580, 1996.
[15] S. Koenig and M. Likhachev, "Fast replanning for navigation in unknown terrain," IEEE Transactions on Robotics, vol. 21, no. 3, pp. 354-363, 2005.
[16] O. Souissi, R. Benatitallah, D. Duvivier, A. Artiba, N. Belanger, and P. Feyzeau, "Path planning: A 2013 survey," in Proceedings of 2013 International Conference on Industrial Engineering and Systems Management (IESM), 2013, pp. 1-8.
[17] T. X. Lin, E. Yel, and N. Bezzo, "Energy-aware persistent control of heterogeneous robotic systems," in American Control Conference (ACC), 2018, pp. 2782-2787.
[18] C. Di Franco and G. Buttazzo, "Energy-aware coverage path planning of UAVs," in IEEE International Conference on Autonomous Robot Systems and Competitions, 2015, pp. 111-117.
[19] N. Bezzo, K. Mohta, C. Nowzari, I. Lee, V. Kumar, and G. Pappas, "Online planning for energy-efficient and disturbance-aware UAV operations," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, pp. 5027-5033.
[20] Q. Zhang, K. Wu, and Y. Shi, "Route planning and power management for PHEVs with reinforcement learning," IEEE Transactions on Vehicular Technology, vol. 69, no. 5, pp. 4751-4762, 2020.
[21] H. Huang and A. V. Savkin, "Viable path planning for data collection robots in a sensing field with obstacles," Computer Communications, vol. 111, pp. 84-96, 2017.
[22] S. G. Manyam, S. Rathinam, D. Casbeer, and E. Garcia, "Tightly bounding the shortest Dubins paths through a sequence of points," Journal of Intelligent & Robotic Systems, vol. 88, no. 2, pp. 495-511, 2017.
[23] S. G. Manyam and S. Rathinam, "On tightly bounding the Dubins traveling salesman's optimum," Journal of Dynamic Systems, Measurement and Control, vol. 140, no. 7, p. 071013, 2018.
[24] S. Rathinam, S. G. Manyam, and Y. Zhang, "Near-optimal path planning for a car-like robot visiting a set of waypoints with field of view constraints," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 391-398, 2019.
[25] P. Váňa and J. Faigl, "On the Dubins traveling salesman problem with neighborhoods," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 2015, pp. 4029-4034.
[26] M. Held and R. M. Karp, "The traveling-salesman problem and minimum spanning trees," Operations Research, vol. 18, no. 6, pp. 1138-1162, 1970.
[27] N. Sturtevant, "Benchmarks for grid-based pathfinding," Transactions on Computational Intelligence and AI in Games, vol. 4, no. 2, pp. 144-148, 2012.
[ "https://github.com/manyamgupta/HybridPathPlanning.git." ]
[ "Federated Learning for Edge Networks: Resource Optimization and Incentive Mechanism", "Federated Learning for Edge Networks: Resource Optimization and Incentive Mechanism" ]
[ "Latif U Khan ", "Senior Member, IEEENguyen H Tran ", "Shashi Raj Pandey ", "Fel-low, IEEEWalid Saad ", "Fellow, IEEEZhu Han ", "Minh N H Nguyen ", "Choong Seon ", "Senior Member, IEEEHong " ]
[]
[]
Recent years have witnessed a rapid proliferation of smart Internet of Things (IoT) devices. IoT devices with intelligence require the use of effective machine learning paradigms. Federated learning can be a promising solution for enabling IoT-based smart applications. In this paper, we present the primary design aspects for enabling federated learning at the network edge. We model the incentive-based interaction between a global server and participating devices for federated learning via a Stackelberg game to motivate the participation of the devices in the federated learning process. We present several open research challenges with their possible solutions. Finally, we provide an outlook on future research.
10.1109/mcom.001.1900649
[ "https://arxiv.org/pdf/1911.05642v1.pdf" ]
207,930,193
1911.05642
2a3d09bbdfe21418ce75d6973f71028fa9192b89
Federated Learning for Edge Networks: Resource Optimization and Incentive Mechanism
Latif U Khan, Nguyen H Tran, Shashi Raj Pandey, Walid Saad, Zhu Han, Minh N H Nguyen, Choong Seon Hong
Index Terms: Federated learning, Internet of Things, Stackelberg game, edge networks

I. INTRODUCTION
Recently, edge computing has gained significant interest due to its ability to extend cloud computing utilities and services to the network edge with low latency. Numerous Internet of Things (IoT) applications, such as augmented reality, autonomous driving, forest fire surveillance, industry 4.0, and smart health-care, among others, require edge processing with low latency [1]. In such applications, the involved IoT end devices have stringent computational resource constraints. One way to provide those IoT edge devices with on-demand computing resources is by using a remote cloud. However, the inherent delay pertaining to end-to-end communication with a cloud server can lead to intolerable latency. Therefore, edge computing is a promising solution to enable latency-sensitive IoT applications by providing low-latency on-demand computing resources [2]. On the other hand, the data generated by end IoT devices offers an opportunity to use machine learning schemes to enable intelligent applications. Therefore, it is indispensable to make use of machine learning at the edge to enable various smart applications. Traditional machine learning uses centralized training data at a data center, which requires migrating data from a massive number of geographically distributed smart IoT devices to a centralized location for training [3]. Storing user data at a centralized location of a third party raises serious privacy concerns. To cope with the limitation of not preserving the users' privacy in centralized learning, it is important to introduce distributed, edge-deployed learning algorithms such as federated learning [4]. Federated learning allows privacy preservation by avoiding the use of centralized training [4]. An overview of how federated learning can enable IoT-based smart applications is presented in Fig. 1. Depending on how the global learning model is operated, we can distinguish two categories of federated learning: cloud-based federated learning and edge-based federated learning [5]. Edge-based federated learning involves a set of devices within close vicinity and computation of the global learning model at an edge server. On the other hand, a cloud-based federated learning model involves the computation of a global learning model at a cloud for IoT devices that are geographically distributed over a large area.
Hereinafter, we consider only edge-based federated learning because of the prime role that it will play in tomorrow's wireless and IoT networks [1]. To benefit from the deployment of federated learning, it is important to address a few technical challenges, which include the optimization of local device computational and communication resources. In addition, there is a need for effective incentive mechanisms to motivate the participation of users in the learning of a global federated learning model. Several recent works have considered machine learning in enabling IoT-based smart applications [6]-[9]. The works presented in [6], [7] mostly rely on centralized machine learning solutions, which can have limitations in terms of scalability as well as privacy preservation. In [8], the authors studied a federated learning framework to provide efficient resource management at the network edge. The work in [8] presented building blocks, different neural network schemes, and key enablers of machine learning at the network edge. However, the works in [8] and [9] do not discuss the important challenges pertaining to incentive design and network optimization under edge-based federated learning. Our key contributions include:
• We present the key design challenges and opportunities for the implementation of federated learning in edge networks.
• To the best of our knowledge, this is the first work to review resource optimization and incentive mechanisms for federated learning over edge networks.

II. FEDERATED LEARNING AT THE EDGE: KEY DESIGN ASPECTS
A. Resource Optimization
Optimization of communication and computation resources is absolutely necessary to enable the main phases of federated learning: local computation, communication, and global computation. Computation resources can be either those of a local device or of an edge server, whereas communication resources are mainly the radio resources of the access network. In the local computation phase, every selected device performs a local model update using its dataset in an iterative manner. The allocation of local device computational resources strongly depends on the device energy consumption, local learning time, and local learning accuracy. In addition, the heterogeneity of the local dataset sizes significantly affects the allocation of local computational resources. Device energy consumption and local learning time are strongly dependent on the CPU capability of the edge device. Increasing the device CPU frequency yields an increase in energy consumption and a decrease in learning time. Similarly, for a fixed frequency, the local computational latency increases with an increase in local learning accuracy. Therefore, it is evident that there is a need to study the tradeoff between computation energy consumption, computational latency, learning time, and learning accuracy. On the other hand, the access network and core network resources must be allocated optimally in the communication phase [10].

B. Learning Algorithm Design
Federated learning involves the usage of local and global computation resources in addition to communication resources. Several machine learning techniques, such as long short-term memory, convolutional neural networks, support vector machines, and Naive Bayes schemes, can be used at each local device [3]. To enable federated learning, numerous optimization schemes, such as federated averaging (FedAvg) and FedProx, can be used to train non-convex federated learning models [11].
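Before turning to FedProx and its variants, note that the FedAvg aggregation step itself amounts to a data-size-weighted average of the local models. A minimal sketch follows; the array-valued model representation and the toy numbers are our illustrative assumptions.

```python
import numpy as np

def fedavg_aggregate(local_weights, data_sizes):
    """One FedAvg global round: average local models, weighted by data size."""
    total = sum(data_sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, data_sizes))

# Toy usage: three devices with unbalanced local datasets.
w_global = fedavg_aggregate(
    [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])],
    data_sizes=[100, 300, 600],
)
print(w_global)   # [4.0, 5.0]
```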
FedProx is the modified version of FedAvg and it counts for statistical heterogeneity among users. FedAvg is based on running stochastic gradient descent (SGD) on a set of smart devices with statistical homogeneity to yield local model weights. Subsequently, an averaging of the local weights is performed at the edge computing server located at BS. FedProx has similar steps as FedAvg, but the difference lies in local device minimizing of objective function that considers the objective function of FedAvg with an additional proximal term which limits the impact of local device data non-independent and identically distributed (non-IID) on the global learning model. FedAvg does not guarantee theoretical convergence, while FedProx shows theoretical convergence. In FedAvg and FedProx, all the devices are weighted equally in global federated learning model computation without considering fairness among devices. However, there exist significant variations in different devices nature (i.e., hardware variability). To address such fairness issues, a so-called q-FedAvg algorithm has been recently proposed. The idea of q-FedAvg is to give higher weights to the devices with poor performance by modifying the objective function of the typical FedAvg algorithm. To introduce potential fairness and reduce training accuracy variance, the local devices having a high empirical loss (local loss function) are emphasized by setting large values of q in the q-FedAvg. Specifically, the value of q determines the amount of fairness, greater that value of q more will be the fairness and vice versa. On the other hand, an adaptive control scheme has been proposed regarding the adaptation of global aggregation frequency for federated learning [5]. Moreover, the adaptive control scheme offers a desirable tradeoff between global model aggregation and local model update to minimize the loss function with resource budget constraint. All of the above discussed methods are used for a single task global federated learning model. We can use a multi-task learning model for multiple tasks, whose data is distributed among multiple edge nodes in a federated learning setting. In [12], federated multi-task learning (FML) has been proposed while considering fault tolerance and joint optimization of both communication and computational resources. C. Incentive Mechanism Design In addition to resource optimization and learning algorithm design, a set of devices involved in the training of a global federated learning model must be given proper incentives to ensure the trustworthiness of their participation in federated learning. Incentives are possible in different forms, such as user-defined utility and money-based rewards. Several frameworks such as game theory, matching theory, and auction theory can be used in the design of incentive mechanisms for federated learning [13], [14]. For instance, consider an incentive mechanism based on game theory in which an edge server and and edge users act as a set of players. The edge server announces a reward as an incentive to the participating nodes while maximizing its benefits in terms of improving global federated learning model accuracy. On the other hand, the edge users try to maximize their individual utilities to improve their benefit. In this regard, utility can, for example, be defined as the improvement of local learning model accuracy within the allowed communication time during the training process. 
An improvement in the local learning model accuracy of the end-user increases its incentive from the edge server and vice versa. This process of incentive-based sharing of model parameters is continued until convergence to some global model accuracy level. III. INCENTIVE BASED FEDERATED LEARNING OVER EDGE NETWORKS A. System Model Consider a multi-user system comprised of a BS and a set of user devices with non-IID and heterogeneous data sizes. Enabling federated learning over such edge networks involves the use of local device computational resources, cloud computational resources, and communication resources that must be optimally exploited. In a typical federated learning environment, the participating user equipment (UE) have to iterate over their local data with possibly non-IID and unbalanced nature, to train a global model. However, UEs are generally reluctant to participate in federated learning due to limited computing resources and limited communication resources [10]. Thus, enabling federated learning requires some careful design considerations that include: • First of all to motivate UEs for participation, it is necessary to model the economic interaction between the BS and the UEs. Within each global iteration, the BS can offer a reward rate (e.g., $/iterations) to the UEs for their selection of the optimal local iteration strategy (i.e., CPU-frequency cycle) that can minimize the overall energy consumption of federated learning, with a minimal learning time. • The set of resource-constrained UEs involved in federated learning has numerous heterogeneous parameters: Computational capacity, training data size, and channel conditions. This heterogeneity of UEs significantly affects the local learning model computation time for a certain fixed local model accuracy level. To compute the local learning model within fixed allowed time for resource-constrained UEs with heterogeneous parameters, the local learning model accuracy will be different for different UEs. Therefore, it is necessary to tackle the challenge of heterogeneous local learning model accuracy of the participating UEs for synchronous federated learning. B. Stackelberg Game Solution The BS employs an incentive mechanism for motivating the UEs to participate in training of a global federated learning model. However, heterogeneous UEs have different computational and communication costs needed to train a global model. Therefore, they expect different reward rates to perform optimally in a federated learning setting. On the other hand, the BS seeks to minimize the learning time while maximizing the accuracy level of the learning model. Thus, this complex interaction between the BS and the UEs can be naturally cast as a Stackelberg game with one leader (BS) and multiple followers (UEs). Here, for the offered reward, the BS aims at maximizing its utility that is modeled as a function of key federated learning performance metrics such as the number of communication rounds needed to reach a desirable global federated learning model accuracy level. Correspondingly, the UEs will respond to the offered reward by the BS and choose their local strategy (i.e., the selection of CPU-frequency cycle for local computation) to maximize their own benefits. Evaluating the responses from the UEs, the BS will adjust its reward rate, and the process repeats until a desired accuracy level is obtained. To this end, the BS must carefully design an incentive mechanism to influence available UEs for training the global model. 
In the proposed framework, the sequence of interactions between the BS and the UEs to reach a Stackelberg equilibrium is as follows: • At the beginning, each rational UE in federated learning submits its best response (i.e., optimal CPU-frequency) to the BS for the offered reward rate, to maximize its local utility function. Specifically, each UE considers the viability of the offered reward rate for their incurred computational and communication costs in federated learning. • Next, the BS evaluates these responses, thereafter, updates and broadcasts its offered reward rate to the UEs, to maximize its own utility function (i.e., minimizing the overall energy consumption and the learning time) for the learning problem. • To this end, with the optimal offered reward, the UEs will correspondingly tune their strategy and update response that solves their individual utility maximization problem. Hereafter, the iterative process continues in each round of interaction between the BS and UEs. • In summary, we follow the best response dynamic algorithm to achieve the Stackelberg equilibrium. For this, with the first-order condition, we first find a unique Nash equilibrium at the lower-level problem (among UEs), and, then, use a backward induction method to solve the upperlevel problem (the BS's problem). C. Performance Evaluation In this section, we evaluate the performance of our proposed incentive-based federated learning model. We consider three participating UEs having different channel conditions explicitly, and having equal local data size. At each UE, we define the mean square error of the learning problem, i.e., the local relative accuracy metric, as θ. Further, the utility model for UEs is chosen as a concave function in terms of local relative accuracy θ and offered reward from the BS. In Fig. 3a, the impact of the offered reward rate r on the relative accuracy θ for three UEs is shown. Note that, a smaller value of θ means higher accuracy. An increase in the offered reward rate will motivate UEs to iterate more within one global iteration, resulting in a lower value of θ, which is intuitive. The heterogeneous responses of UEs is the result of individual computational limitations, local data size, and communication channel conditions. The impact of the communication channel conditions on local relative accuracy for a randomly chosen UE, with defined computational characteristics and local data size is illustrated in Fig. 3 and Fig. 4. For clarity, we use a normalized communication time to quantify the adversity of channel conditions. Here, a unit value for the normalized communication time signifies poor channel conditions. As the normalized communication time increases, we observe that the UEs prefer to iterate more locally to avoid expensive communication costs. Fig. 4 presents the relationship between the offered reward rate and local relative accuracy over the communication cost at a particular UE. The heatmap plot reveals the optimal response behavior for the UEs to maximize the utility function at the given channel conditions. To this 5HZDUGUDWHU &RPPXQLFDWLRQFRVW 5HODWLYHDFFXUDF\ Fig. 4: Impact of offered reward rate, communication cost versus local relative accuracy. end, we observe heterogeneity in responses of the participating UEs, under different wireless network conditions and due to their local strategies, for the offered incentive to perform federated learning. 
Thus, it is crucial to have an appropriate incentive design to align responses of the participating UEs for improving the performance of the federated learning model. IV. OPEN RESEARCH CHALLENGES A. Resource Optimization for Blockchain based Federated Learning An attacker might attack the centralized server involved in federated learning in order to alter global model parameters. In addition, a malicious user might alter federated learning parameters during the communication phase. To cope with such security and robustness issues, blockchain based federated learning (BFL) can be used. BFL does not require central coordination in the learning of the global model that results in enhanced robust operation. In BFL, all the users send their local model parameters to their associated miners, which are responsible for sharing local model updates through a distributed ledger. Finally, local model updates of all the devices involved in learning are sent back by miners to their associated devices for the local models aggregation. Although BFL provides benefits of security and robustness, there exist significant challenge of computational and communication resources optimization to reach a consensus among all miners. Static miners can be implemented at the BS, whereas wireless mobile miners can be implemented using unmanned aerial vehicles (UAVs). However, UAVs based mobile miners pose more serious resource allocation challenges than static miners at the BS. B. Context-Aware Federated Learning How does one enable more specialized federated learning according to users contextual information? Context-awareness is the ability of a devices/system to sense, understand, and adopt its surrounding environment. To enable intelligent context-aware applications, federated learning is a viable solution. For instance, consider keyboard search suggestion in smartphones in which the use of federated learning is a promising solution. In such type of design, we must consider context-awareness for enhanced performance. Unique globally shared federated learning model must be used separately for regions with different languages to enable more effective operation. Therefore, the location of the global model must be considered near that region (i.e., micro data center) rather than a central cloud. C. Mobility-Aware Federated Learning How does one enable seamless communication of smart mobile devices with an edge server during the learning phase of a global federated learning model? A seamless connectivity of the devices with a centralized server during the training phase must be maintained. Mobility of devices must be considered during the device selection phase of federated learning protocol. Deep learning-based mobility prediction schemes can be used to ensure the connectivity of devices during the training phase of a globally shared global model. V. CONCLUSIONS AND FUTURE RECOMMENDATIONS In this paper, we have presented the key design aspects, incentive mechanism, and open research challenges, for enabling federated learning in edge networks. Finally, we present several recommendations for future research: • Generally, federated learning involves training of a global federated learning model via an exchange of learning model updates between a centralized server and geographically distributed devices. However, wireless devices will have heterogeneous energy and processing power (CPU-cycles/sec) capabilities. Moreover, some of the devices might have noisy local datasets. 
Therefore, there is a need for novel federated learning protocols that will provide criteria for the selection of a set of local devices having sufficient resources. The selection criteria of the devices must include long-lasting backup power, sufficient memory, accurate data, and higher processing power. • A set of densely populated devices involved in federated learning might not be able to have real-time access to the edge server located at the BS due to a lack of communication resources. To cope with this challenge, one can develop new federated learning protocols based on socially-aware device-to-device (D2D) communication. Socially-aware D2D communication has an advantage of reusing the occupied bandwidth by other users while protecting them by keeping the interference level below the maximum allowed limit. Initially, multiple clusters based on social relationships and the distance between devices should be created. Then, a cluster head is selected for every cluster based on its highest social relationship with other devices. Within every cluster, a sub-global federated learning model is trained iteratively by exchanging the learning model parameters between the cluster head and its associated devices. Then, the sub-global Fig. 1 : 1An overview of federated learning in enabling IoT-based smart applications velop an incentive mechanism for federated learning. The Stackelberg game-based interaction enables the clients to strategically set the number of local iterations to maximize their utility. On the other hand, the base station (BS) uses the best response strategies of the users to maximize the performance of federated learning by solving its utility maximization problem. Here, the BS's utility can be modeled as a function of key performance metrics such as the number of global iterations and global accuracy level in the federated learning setting. • Finally, we present some key open research challenges along with guidelines pertaining to federated learning in edge networks. Fig. 3 : 3Impact of (a) offered reward rate r on client's (UEs) iteration strategy for corresponding relative local accuracy, (b) communication time with relative accuracy. Nguyen H. Tran (S'10-M'11-SM'18) is currently working as a senior lecturer in School of Computer Science, The University of Sydney. He received the BS degree from Hochiminh City University of Technology and Ph.D. degree from Kyung Hee University, in electrical and computer engineering, in 2005 and 2011, respectively. He was an Assistant Professor with Department of Computer Science and Engineering, Kyung Hee University, Korea from 2012 to 2017. His research interest is to applying analytic techniques of optimization and game theory to cutting-edge applications. He received the best KHU thesis award in engineering in 2011 and best paper award at IEEE ICC 2016. He has been the Editor of IEEE Transactions on Green Communications and Networking since 2016, and served as the Editor of the 2017 Newsletter of Technical Committee on Cognitive Networks on Internet of Things. 
We propose a Stackelberg game-based approach to de-arXiv:1911.05642v1 [cs.DC] 6 Nov 2019Core Network Smart Healthcare Smart Transportation Augmented Reality Hospital Smart Grid Big Data ∑ Local Dataset Local Dataset Local Dataset Local Model Local Model Local Model Smart IoT Applications Remote Cloud Access Network Global Model Local Models Aggregation 6G, 5G, Z-Wave, 6 LowPAN, ZigBee, NFC, Wi-Fi, Bluetooth, LTE- Advanced, BLE, Visible Light Communication ∑ Global Model Local Models Aggregation Cloud-based Federated Learning Model Edge-based Federated Learning Model federated learning model parameters from all the cluster heads are sent to the BS for global federated learning model aggregation. Finally, the global federated learning model parameters are sent back to cluster heads which in turn disseminate the learning model parameters to their associated cluster devices. • Exchange of learning model updates via blockchain offers enhanced security. However, reaching consensus via traditional consensus algorithms among blockchain nodes can add more latency to the learning time. Therefore, it is recommended to design novel consensus algorithms with low latency. Latif U. Khan is pursuing his Ph.D. degree in Computer Engineering at Kyung Hee University (KHU), South Korea. He is working as a leading researcher in the intelligent Networking Laboratory under a project jointly funded by the prestigious Brain Korea 21st Century Plus and Ministry of Science and ICT, South Korea. He received his MS (Electrical Engineering) degree with distinction from University of Engineering and Technology (UET), Peshawar, Pakistan in 2017. He is the author of the best paper in the 15 t h IEEE International Conference on Advanced Communication Technology. His research interests include analytical techniques of optimization and game theory to edge computing and end-to-end network slicing.PLACE PHOTO HERE PLACE PHOTO HERE A vision of 6G wireless systems: Applications, trends, technologies, and open research problems. W Saad, M Bennis, M Chen, IEEE Network. to appearW. Saad, M. Bennis, and M. Chen, "A vision of 6G wireless systems: Applications, trends, technologies, and open research problems," IEEE Network, to appear, 2019. M S Elbamby, C Perfecto, C.-F Liu, J Park, S Samarakoon, X Chen, M Bennis, arXiv:1905.05316Wireless edge computing with latency and reliability guarantees. arXiv preprintM. S. Elbamby, C. Perfecto, C.-F. Liu, J. Park, S. Samarakoon, X. Chen, and M. Bennis, "Wireless edge computing with latency and reliability guarantees," arXiv preprint arXiv:1905.05316, 2019. Artificial neural networks-based machine learning for wireless networks: A tutorial. M Chen, U Challita, W Saad, C Yin, M Debbah, IEEE Communications Surveys & Tutorials. M. Chen, U. Challita, W. Saad, C. Yin, and M. Debbah, "Artificial neural networks-based machine learning for wireless networks: A tutorial," IEEE Communications Surveys & Tutorials, 2019. Communication-efficient learning of deep networks from decentralized data. H B E Moore, D Ramage, S Hampson, B A Arcas, arXiv:1602.05629arXiv preprintH. B. E. Moore, D. Ramage, S. Hampson, and B. A. Arcas, "Communication-efficient learning of deep networks from decentralized data," arXiv preprint arXiv:1602.05629, 2016. Adaptive federated learning in resource constrained edge computing systems. S Wang, T Tuor, T Salonidis, K K Leung, C Makaya, T He, K Chan, IEEE Journal on Selected Areas in Communications. 376S. Wang, T. Tuor, T. Salonidis, K. K. Leung, C. Makaya, T. He, and K. 
Chan, "Adaptive federated learning in resource constrained edge com- puting systems," IEEE Journal on Selected Areas in Communications, vol. 37, no. 6, pp. 1205-1221, June 2019. Artificial intelligence-based semantic internet of things in a user-centric smart city. K Guo, Y Lu, H Gao, R Cao, Sensors. 1851341K. Guo, Y. Lu, H. Gao, and R. Cao, "Artificial intelligence-based semantic internet of things in a user-centric smart city," Sensors, vol. 18, no. 5, p. 1341, April 2018. Caching in the sky: Proactive deployment of cache-enabled unmanned aerial vehicles for optimized quality-of-experience. M Chen, M Mozaffari, W Saad, C Yin, M Debbah, C S Hong, IEEE Journal on Selected Areas in Communications. 355M. Chen, M. Mozaffari, W. Saad, C. Yin, M. Debbah, and C. S. Hong, "Caching in the sky: Proactive deployment of cache-enabled unmanned aerial vehicles for optimized quality-of-experience," IEEE Journal on Selected Areas in Communications, vol. 35, no. 5, pp. 1046-1061, May 2017. . J Park, S Samarakoon, M Bennis, M Debbah, arXiv:1812.02858arXiv preprintWireless network intelligence at the edgeJ. Park, S. Samarakoon, M. Bennis, and M. Debbah, "Wireless network intelligence at the edge," arXiv preprint arXiv:1812.02858, 2018. In-edge ai: Intelligentizing mobile edge computing, caching and communication by federated learning, in press. X Wang, Y Han, C Wang, Q Zhao, X Chen, M Chen, IEEE Network. X. Wang, Y. Han, C. Wang, Q. Zhao, X. Chen, and M. Chen, "In-edge ai: Intelligentizing mobile edge computing, caching and communication by federated learning, in press," IEEE Network, 2019. A joint learning and communications framework for federated learning over wireless networks. M Chen, Z Yang, W Saad, C Yin, H V Poor, S Cui, arXiv:1909.07972arXiv preprintM. Chen, Z. Yang, W. Saad, C. Yin, H. V. Poor, and S. Cui, "A joint learning and communications framework for federated learning over wireless networks," arXiv preprint arXiv:1909.07972, 2019. A performance evaluation of federated learning algorithms. A Nilsson, S Smith, G Ulm, E Gustavsson, M Jirstrand, Proceedings of the Second Workshop on Distributed Infrastructures for Deep Learning. the Second Workshop on Distributed Infrastructures for Deep LearningNew York, USAA. Nilsson, S. Smith, G. Ulm, E. Gustavsson, and M. Jirstrand, "A per- formance evaluation of federated learning algorithms," in Proceedings of the Second Workshop on Distributed Infrastructures for Deep Learning, New York, USA, December 2018, pp. 1-8. Federated multi-task learning. V Smith, C.-K Chiang, M Sanjabi, A S Talwalkar, Proceedings of Advances in Neural Information Processing Systems. Advances in Neural Information Processing SystemsLong Beach, CA, USA30V. Smith, C.-K. Chiang, M. Sanjabi, and A. S. Talwalkar, "Federated multi-task learning," in Proceedings of Advances in Neural Information Processing Systems 30, Long Beach, CA, USA, May 2017, pp. 4424- 4434. Z Han, D Niyato, W Saad, T Başar, A Hjørungnes, Game theory in wireless and communication networks: theory, models, and applications. Cambridge university pressZ. Han, D. Niyato, W. Saad, T. Başar, and A. Hjørungnes, Game theory in wireless and communication networks: theory, models, and applications. Cambridge university press, 2012. A crowdsourcing framework for on-device federated learning. S R Pandey, N H Tran, M Bennis, Y K Tun, A Manzoor, C S Hong, arXiv:1911.01046arXiv preprintS. R. Pandey, N. H. Tran, M. Bennis, Y. K. Tun, A. Manzoor, and C. S. 
Hong, "A crowdsourcing framework for on-device federated learning," arXiv preprint arXiv: 1911.01046, 2019.
[]
[ "Ideal Bose Gas and Blackbody Radiation in the Dunkl Formalism", "Ideal Bose Gas and Blackbody Radiation in the Dunkl Formalism" ]
[ "F Merabtine ", "B Hamil ", "B C Lütfüoğlu ", "A Hocine ", "M Benarous ", "\nFaculty of Exact Sciences and Informatics\nDépartement de physique\nFaculté des Sciences Exactes\nLaboratory for Theoretical Physics and Material Physics\nHassiba Benbouali University of Chlef\nAlgeria\n", "\nDepartment of Physics\nUniversité Constantine 1\nConstantineAlgeria\n", "\nFaculty of Exact Sciences and Informatics\nLaboratory for Theoretical Physics and Material Physics\nUniversity of Hradec Králové\nRokitanského 62500 03Hradec KrálovéCzechia\n", "\nFaculty of Exact Sciences and Informatics\nLaboratory for Theoretical Physics and Material Physics\nHassiba Benbouali University of Chlef\nAlgeria\n", "\nHassiba Benbouali University of Chlef\nAlgeria\n" ]
[ "Faculty of Exact Sciences and Informatics\nDépartement de physique\nFaculté des Sciences Exactes\nLaboratory for Theoretical Physics and Material Physics\nHassiba Benbouali University of Chlef\nAlgeria", "Department of Physics\nUniversité Constantine 1\nConstantineAlgeria", "Faculty of Exact Sciences and Informatics\nLaboratory for Theoretical Physics and Material Physics\nUniversity of Hradec Králové\nRokitanského 62500 03Hradec KrálovéCzechia", "Faculty of Exact Sciences and Informatics\nLaboratory for Theoretical Physics and Material Physics\nHassiba Benbouali University of Chlef\nAlgeria", "Hassiba Benbouali University of Chlef\nAlgeria" ]
[]
Recently, deformed quantum systems gather lots of attention in the literature. Dunkl formalism differs from others by containing the difference-differential and reflection operator. It is one of the most interesting deformations since it let us discuss the solutions according to the even and odd solutions. In this work, we studied the ideal Bose gas and the blackbody radiation via the Dunkl formalism. To this end, we made a liaison between the coordinate and momentum operators with the creation and annihilation operators which allowed us to obtain the expressions of the partition function, the condensation temperature, and the ground state population of the Bose gas. We found that Dunkl-condensation temperature increases with increasing θ value. In the blackbody radiation phenomena, we found how the Dunkl formalism modifies total radiated energy. Then, we examined the thermal quantities of the system. We found that the Dunkl deformation causes an increase in entropy and specific heat functions as well as in the total radiation energy. However, we observed *
10.1088/1742-5468/acd106
[ "https://export.arxiv.org/pdf/2301.12236v2.pdf" ]
256,389,544
2301.12236
a50e699cea0e52446c607ddba0936a27204050c7
Ideal Bose Gas and Blackbody Radiation in the Dunkl Formalism 17 May 2023 May 18, 2023 F Merabtine B Hamil B C Lütfüoğlu A Hocine M Benarous Faculty of Exact Sciences and Informatics Département de physique Faculté des Sciences Exactes Laboratory for Theoretical Physics and Material Physics Hassiba Benbouali University of Chlef Algeria Department of Physics Université Constantine 1 ConstantineAlgeria Faculty of Exact Sciences and Informatics Laboratory for Theoretical Physics and Material Physics University of Hradec Králové Rokitanského 62500 03Hradec KrálovéCzechia Faculty of Exact Sciences and Informatics Laboratory for Theoretical Physics and Material Physics Hassiba Benbouali University of Chlef Algeria Hassiba Benbouali University of Chlef Algeria Ideal Bose Gas and Blackbody Radiation in the Dunkl Formalism 17 May 2023 May 18, 20231 a decrease in the Dunk-corrected Helmholtz free energy in this scenario. Finally, we found that the equation of state is invariant even in the considered formalism. Recently, deformed quantum systems gather lots of attention in the literature. Dunkl formalism differs from others by containing the difference-differential and reflection operator. It is one of the most interesting deformations since it let us discuss the solutions according to the even and odd solutions. In this work, we studied the ideal Bose gas and the blackbody radiation via the Dunkl formalism. To this end, we made a liaison between the coordinate and momentum operators with the creation and annihilation operators which allowed us to obtain the expressions of the partition function, the condensation temperature, and the ground state population of the Bose gas. We found that Dunkl-condensation temperature increases with increasing θ value. In the blackbody radiation phenomena, we found how the Dunkl formalism modifies total radiated energy. Then, we examined the thermal quantities of the system. We found that the Dunkl deformation causes an increase in entropy and specific heat functions as well as in the total radiation energy. However, we observed * Introduction It would not be wrong to say that the foundations of Dunkl formalism are based on an article, which was published in the middle of the last century [1]. In this article, Wigner handled two systems, namely the free case and the classical harmonic oscillator, and he tried to derive commutation relations from the equation of motion. Surprisingly, his calculations ended with an extra free constant which meant that the commutation relations cannot be obtained uniquely in the examined systems. In 1951, Yang reworked this issue by considering the quantum harmonic oscillator problem instead of the classical one under more precise definitions of Hilbert space and series expansions [2]. He found that adding a reflection operator to the momentum operator would remove the arbitrariness of the commutation relations. A few decades later, via a purely mathematical point of view, Dunkl contributed to the ongoing discussion on the relations between differential-difference and reflection operators by describing a new derivative operator that could be used instead of the usual partial derivative [3]. It is noteworthy that if the Dunkl momentum operator is defined using the Dunkl derivative, an operator similar to Yang's momentum operator is obtained. Therefore, Dunkl's derivative achieved great attention not only in mathematics [4], but also in physics . 
Initially, the interest of physicists was mainly based on the applicability of the formalism in the Calogero-Sutherland-Moser models [5,6]. In the last decade, relativistic and non-relativistic systems are being examined within Dunkl-formalism. The main motivation in these studies is the simultaneous derivation of the wave function solutions as odd and even functions, thanks to the reflection operator. For example, in 2013, Genest et al. studied the isotropic and anisotropic Dunkl-oscillator systems in the plane, respectively in [9] and [10]. The next year, the same authors discussed the Dunkl-isotropic system with algebraic methods in [11], and they investigated solutions to the Dunkl-anisotropic system in three dimensions [12]. In 2018, Sargolzaeipor et al. studied the Dirac oscillator within the Dunkl formalism [18]. The next year, Mota et al. solved the Dunkl-Dirac oscillator problem in two dimensions [21]. Recently, the Dunkl-Klein Gordon oscillator is examined in two and three dimensions in [25,26,32]. The Duffin-Kemmer-Petiau oscillator is also studied in the Dunkl formalism in [27]. The use of Dunkl's formalism in physics is not limited to these fields. For instance, two of the authors of this manuscript investigated the thermal properties of a graphene layer that is located in an external magnetic field within the Dunkl formalism in [33]. The nonrelativistic solution of a position-dependent mass model is examined with a Lie algebraic method in [34]. Recently, we observe papers that consider different generalization forms of Dunkl derivative in the literature [35][36][37]. Dunkl formalism is also employed in noncommutative phase space in [38]. In a quite interesting paper, Ubriaco studied the thermodynamics of bosonic systems associated with the Dunkl derivative from the statistical mechanical point of view [13]. He formulated a bosonic model on a Hamiltonian that is defined in terms of Dunkl-creation and annihilation operators with the help of su(1, 1) algebra and investigated the effect of the Wigner parameter on the thermodynamic properties. He observed that the Dunkl-critical temperature, the Dunkl-entropy, and the Dunkl-heat capacity differ from the ordinary ones. Moreover, these differences alter according to parity. In this paper, we intend to investigate ideal Bose gas and blackbody radiation with the Dunkl formalism. Both phenomena are of great importance in the observation of quantum effects in nature, and so it is clear that it should work in the Dunkl formalism as well. To this end we prepare the manuscript as follows: In sect. 2, we introduce the Dunkl formalism which will be used throughout the manuscript. Sect. 3 is devoted to the study of the ideal Bose gas in the Dunkl formalism, where we construct the partition function in the grand canonical ensemble and discuss the modifications to the critical temperature as compared to the non-deformed case. In sect. 4, we investigate several thermal quantities of blackbody radiation within the Dunkl formalism. Finally, we briefly summarize our findings. 
Dunkl Formalism In Dunkl-quantum mechanics, the momentum operators are substituted with the Dunkl-momentum operators [32,33] ,h i D j =h i d dx j + θ x j (1 − R j ) , j = 1, 2, 3.(1) where θ is the Wigner parameter, and R i is the reflection operator, R i = (−1) x i d dx i , that obeys R i f (x j ) = δ ij f (−x j ) , R i d dx j = −δ ij d dx j R i , R i R j = R j R i .(2) These substitutions lead to a modification of the Heisenberg algebra and commutation relations [D i , D j ] = 0, [x i , D j ] = δ ij (1 + 2θR δ ij ).(3) Upon introducing the following operators [13] φ i ↔ x i = a + i , and φ i ↔ D x i = a i + θ a + i (1 − R i ) ,(4) whereâ + i andâ i are the ordinary creation and annihilation operators, we find that they satisfy the following commutation relations φ i ,φ j = δ ij (1 + 2θR i ).(5) One can therefore express the action of the modified operators on the occupation states |n ≡ |n 1 , n 2 , n 3 , .. as follows:φ i |n i = √ n i + 1 |n i + 1 ,(6)φ i |n i = √ n i + θ √ n i (1 − (−1) n i ) |n i .(7) Using Eqs. (4) and (6), (7), the occupation number operatorN i can be written φ i φ i =N i ,(8)withN i |n i = [n i + θ (1 − (−1) n i )] |n i .(9) It is worth noting that, the eigenvalue of the modifid number operator admits two cases (n i : even or odd). For the even case, we getN i |n i = n i |n i ,(10) while for the odd case, we haveN i |n i = (n i + 2θ) |n i .(11) It's worthwhile to mention that the eigenvalues of the occupation number operatorN i separate into two sub-spaces according to their parity. The even parity sub-space is the standard case. The notable result is that only the odd eigenvalues are affected by the Wigner parameter. Ideal Bose gas in the Dunkl formalism Let us now consider the ideal Bose gas in the grand canonical ensemble. The partition function of the system in the Dunkl formalism is given by Z D = {N i } e −β i=0 N i (ε i −µ) ,(12) where µ is the chemical potential of the gas supposed at a temperature T . Here, 1/β = k B T , where k B denotes Boltzmann constant, and ε i represents the single-particle eigenenergy of the state "i". Using the definitions given in Eqs. (9) and (12), the partition function of the ideal Bose gas in the Dunkl formalism reads Z D = ∞ i=0 ∞ n i =0 e −β(ε i −µ) [n i +θ(1−(−1) n i )] .(13) After some algebra, equation (13) simplifies to the following expression Z D = ∞ i=0 1 + e −β(1+2θ)(ε i −µ) 1 − e −2β(ε i −µ)(14) where the limit θ −→ 0 is evidently the well-known partition function of the ideal Bose gas Z = ∞ i=0 1 1 − ze −βε i ,(15) where z = e βµ is the fugacity. With the partition function (14) in hand, let us investigate the total particle number. To this end, we use the thermodynamic definition: N = 1 β ∂ ∂µ log Z D T,V .(16) By substituting Eq. (14) into Eq. (16), we may write the total number of particles as N = N D 0 + N D e ,(17) where N D 0 = 2 z −2 − 1 + (1 + 2θ) z −(1+2θ) + 1 ,(18) denotes the occupation number of the ground state, and N D e = ∞ i=1 2 e 2βε i z −2 − 1 + (1 + 2θ) e β(1+2θ)ε i z −(1+2θ) + 1 ,(19) specifies the occupation number of the excited states. In the thermodynamic limit, where in a large volume (V → ∞) the number of particles in the system is assumed to be large (N → ∞) in such a way that the density is a constant N V = const. [40][41][42], we use the conversion relation ∞ i=1 → 2πV h (2m) 3/2 +∞ 0 √ εdε,(20) to transform Eq. 
(19) into N D e V = 2π h (2m) 3/2 √ εdε 2 e 2βε z −2 − 1 + (1 + 2θ) e β(1+2θ)ε z −(1+2θ) + 1 ,(21) where h is the Planck constant, m is the particle mass, and V is the volume of the system. A straightforward calculation yields N D e V = 1 λ 3 √ 2 g 3/2 z 2 − 2 1 + 2θ g 3/2 −z 1+2θ ,(22) where λ = 2πh 2 β m 1/2 is the de Broglie thermal wavelength, and g s (z) = 1 Γ (s) +∞ 0 x s−1 e x z −1 − 1 dx.(23) Thus, we can write the total number of particles as N = 2 z −2 − 1 + (1 + 2θ) z −(1+2θ) + 1 + V λ 3 √ 2 g 3/2 z 2 − 2 1 + 2θ g 3/2 −z 1+2θ .(24) By utilizing the following property g s (z) + g s (−z) = 2 1−s g s (z 2 ),(25) we rewrite Eq. (24) as N V = N D 0 V + g 3/2 (z, θ) λ 3 ,(26) where g 3/2 (z, θ) = g 3/2 (z) + g 3/2 (−z) − 1 √ 1 + 2θ g 3/2 −z 1+2θ .(27) We notice on (22) that N D 0 would be complex for 1 + 2θ < 0. This non-physical behavior imposes a strong condition on the Wigner parameter θ > −1/2 which in turn means that the Dunkl formalism can only describe ideal Bose gases in these regimes. Considering this condition, we depict g 3/2 (z, θ) versus z via different Wigner parameters in Fig. 1. We observe that the function takes greater values when the Wigner parameter takes negative values, and smaller values when it takes positive values. We also note that, in the limit of θ → 0, Eq. (26) reduces to the standard result: N V = N 0 V + g 3/2 (z) λ 3 .(28) To our best knowledge, two methods exist to compute the condensation (or critical) temperature. The first method is to take N D 0 = 0 in Eq. (26), and then, to work out T D c [43][44][45][46]. The second method is to calculate the temperature at which the specific heat of the system reaches its maximum value [47,48]. In this manuscript, we use the fist method. We get where N is the actual total particle number. Hence, the Dunkl-condensation temperature writes N V = g 3/2 (1) λ 3 c 1 + g 3/2 (−1) g 3/2 (1) 1 − 1 √ 1 + 2θ ,(29)T D c T 0 c = 1 + g 3/2 (−1) g 3/2 (1) 1 − 1 √ 1 + 2θ −2/3 ,(30) which clearly shows that the critical temperature is drastically modified by the Wigner parameter. Indeed, only in the limit θ → 0, do we obtain the standard condensation temperature T 0 c = 2πh 2 mk B N V g 3/2 (1) 2/3 . This effect is better visualized by plotting the temperature ratio versus the Wigner parameter (Fig. 2). We observe that the transition temperature is smaller than T 0 c when −1/2 < θ < 0. On the other hand, when θ takes on positive values, T D c becomes greater than T 0 c . Finally, below the transition temperature, one may use Eqs. (26) and (29) to derive the ground state population N D 0 N N D 0 N = 1 − T T D c 3/2 ,(31) In Fig. 3, we plot the ground state population Blackbody radiation In this section, we examine the blackbody radiation phenomena within the Dunkl formalism. We start by expressing the mean occupation number of energy state of photons, N ε , in the Dunkl formalism for the particular case of µ = 0. N ε = 2 e 2βε z −2 − 1 + (1 + 2θ) e β(1+2θ)ε z −(1+2θ) + 1 .(32) By assuming ε =hω, where ω is the frequency of the photon, we write the number of photons within the frequency range of ω and ω + dω via: dN ε = V π 2 c 3 ω 2 dω e 2βhω − 1 + V (1 + 2θ) 2π 2 c 3 ω 2 dω e β(1+2θ)hω + 1 .(33) Hence, the corresponding energy density to this radiation reads: dE D V =h 2π 2 c 3 2ω 3 e 2hω k B T − 1 + (1 + 2θ)ω 3 e (1+2θ)hω k B T + 1 dω.(34) After obtaining the generalized Planck radiation law in the framework of Dunkl-statistics we integrate it over all frequencies. 
We find the deformed total energy radiated by the cavity in the form of E D V = σ 2c T 4 1 + 7 (1 + 2θ) 3 ,(35) where σ is the Stefan-Boltzmann constant. We observe that the Dunkl-correction to the standard energy radiation arises with the second term of Eq. (35). We note that for θ = 0 the Dunkl-corrected energy reduces to its conventional form, E V = 4σ c T 4 . In order to observe the effect of the Dunkl formalism, we depict the radiated Dunkl energy by the cavity, E D V , versus the temperature according to different deformation parameters in Fig. 4. We see that the radiated energy increases with the increasing temperature as in the ordinary case. We also note that for a fixed value of temperature, the energy radiation decreases when the deformation parameter grows. Next, we investigate several thermal quantities of the blackbody. First, we study Dunkl-corrected Helmholtz free energy by employing F D = −T E D T 2 dT.(36) After we substitute Eq. (35) into Eq. (36), we arrive at, F D V = − σ 6c T 4 1 + 7 (1 + 2θ) 3 .(37) In the limit of θ → 0, we get the ordinary case result, F = − 4V 3c σT 4 . In Fig. 5, we demonstrate the Dunkl-Helmholtz free energy versus temperature. We observe that in all cases the Dunkl-Helmholtz free energy decreases monotonically versus the increasing temperature. We also see that, for a fixed value of T , the free energy function increases when the deformation parameter θ grows. The Dunkl-corrected entropy S = − ∂F ∂T is also of great interest. We indeed find the expression S D V = 2 3c σT 3 1 + 7 (1 + 2θ) 3 ,(38) which shows, upon gathering (37) and (35) that the thermodynamic functions satisfy the standard relation F D = E D − T S D .(39) The behavior of the Dunkl-corrected entropy versus temperature is depicted in Fig. We see that at low temperatures the effect of Dunkl formalism is not observable. At high temperatures, this effect not only becomes significant but there is also a saturation point for θ growing to infinity, where one recovers the standard T 3 law with a different coefficient. Next, we study the Dunkl-corrected specific heat function at constant volume, given by C D V = − ∂E D ∂T V = 2V c σT 4 1 + 7 (1 + 2θ) 3 .(40) In the limit of θ → 0, Eq. (40) gives the standard result. We depict the behavior of the Dunkl-corrected specific heat versus temperature in Fig. 7. We notice that the deformed specific heat function increases with increasing temperature. This increase is greater for negative Wigner parameter. Finally, we calculate the Dunkl-corrected pressure function of the model: P = − ∂F ∂V T .(41) After substituting (37) into (41), we obtain P D = σ 6c T 4 1 + 7 (1 + 2θ) 3 .(42) This yields the equation of state (EOS) in the form P D V = E D 3 ,(43) which shows that the EOS is invariant within Dunkl formalism. Conclusion In this manuscript, we employed Dunkl formalism to investigate two important phenomena in physics. The ideal Bose gas in the grand canonical ensemble, where we construct the partition function and derive the total number of particles and the condensation temperature. A rigorous constraint appears on the Wigner parameter, namely θ > −1/2. Moreover, we found that the deformed critical temperature becomes smaller or greater than the non-deformed one according to the negative and positive values of the deformation parameter. In a second illustration, we examined the blackbody radiation within the Dunkl formalism. 
We derived the total energy radiated and studied various thermodynamic functions such as the Helmholtz free energy, the entropy, the specific heat, and the pressure of the system. We observed first that the Dunkl formalism leads to the same temperature dependence laws as the non-deformed case but with coefficients depending on the deformation parameter. These coefficients show a characteristic dependence on θ. Finally, we noticed that the equation of state is invariant in the Dunkl formalism. Finally, it's worthwhile to mention that the occurrence of Bose-Einstein condensation in an ideal gas is an elementary illustration of a phase transition. A rigorous development of this phenomenon was given in Refs [49][50][51][52][53]. M. van den Berg et al. studied the Bose-Einstein condensation in the perfect boson gas due to the standard saturation mechanism in a prism of volume V with sides V α , V γ , V δ with α ≥ γ ≥ δ > 0 and α + γ + δ = 1 in [53] and showed that: I : if α < 1/2, above the critical density ρ c , the condensation is formed only in the ground states. II : if α = 1/2, the condensation is formed in an infinite number of single-particle states in a band with ρ > ρ c . III : if α > 1/2, non-extensive occupation is formed where ρ 0 = ρ − ρ c . Within the Dunkl formalism, the boson gas in rectangular parallelepiped of volume V with sides of length V α , V γ , V δ may follow from a finite critical density the same way as in the conventional ideal Bose gas case. This case requires a thorough discussion which is currently under consideration and will be the subject of another study. Figure 1 : 1The variation of g 3/2 (z, θ) via z for different values of θ. Figure 2 : 2Critical temperature ratio versus the Wigner parameter. Figure 3 : 3Ground state population vs. the temperature ratio T /T 0 c for varying θ. Figure 4 : 4Dunkl-corrected energy radiation per unit volume versus the temperature for different values of θ. Figure 5 : 5Dunkl Helmholtz free energy per unit volume versus temperature for different values of θ. Figure 6 : 6The Dunkl entropy per unit volume as a function of temperature for different values of θ. Figure 7 : 7Dunkl specific heat per unit volume as a function of temperature for different values of θ. Data Availability StatementsThe authors declare that the data supporting the findings of this study are available within the article. . E P Wigner, Phys. Rev. 77711E. P. Wigner, Phys. Rev. 77, 711 (1950). . L M Yang, Phys. Rev. 84788L. M. Yang, Phys. Rev. 84, 788 (1951). . C F Dunkl, T , Am. Math. Soc. 3111167C. F. Dunkl, T. Am. Math. Soc. 311(1), 167 (1989). . M Rösler, Lecture Notes in Mathematics. 1817SpringerM. Rösler, Lecture Notes in Mathematics 1817, 93 (Springer, Berlin) 2003. . L Lapointe, L Vinet, Commun. Math. Phys. 1782425L. Lapointe, L. Vinet, Commun. Math. Phys. 178(2), 425 (1996). . S Kakei, J. Phys. A. 29619S. Kakei, J. Phys. A 29, L619 (1996). . S M Klishevich, M S Plyushchay, M Rausch De Traubenberg, Nucl. Phys. B. 616419S. M. Klishevich, M. S. Plyushchay, M. Rausch de Traubenberg, Nucl. Phys. B 616, 419 (2001). . P A Hortváthy, M Plyushchay, M Valenzuela, Ann. Phys. 3251931P. A. Hortváthy, M. Plyushchay, M. Valenzuela, Ann. Phys. 325, 1931 (2010). . V Genest, M Ismail, L Vinet, A Zhedanov, J. Phys. A. 46145201V. Genest, M. Ismail, L. Vinet, A. Zhedanov, J. Phys. A 46, 145201 (2013). . V Genest, L Vinet, A Zhedanov, J. Phys. A. 46325201V. Genest, L. Vinet, A. Zhedanov, J. Phys. A 46, 325201 (2013). . V Genest, M Ismail, L Vinet, A Zhedanov, Commun. Math. 
Phys. 329999V. Genest, M. Ismail, L. Vinet, A. Zhedanov, Commun. Math. Phys. 329, 999 (2014). . V Genest, L Vinet, A Zhedanov, J. Phys. Conf. Ser. 51212010V. Genest, L. Vinet, A. Zhedanov, J. Phys. Conf. Ser. 512, 012010 (2014). . M R Ubriaco, Physica A. 414128M. R. Ubriaco, Physica A 414, 128 (2014). . V Genest, A Lapointe, L Vinet, Phys. Lett. A. 379923V. Genest, A. Lapointe, L. Vinet, Phys. Lett. A 379, 923 (2015). . E J Jan, S Park, W S Chung, J Kor, Phys. Soc. 683379E. J. Jan, S. Park, W. S. Chung, J. Kor. Phys. Soc. 68(3), 379 (2016). . M Salazar-Ramirez, D Ojeda-Guillén, V D Granados, Eur. Phys. J. Plus. 13239M. Salazar-Ramirez, D. Ojeda-Guillén, V. D. Granados, Eur. Phys. J. Plus 132, 39 (2017). . M Salazar-Ramirez, D Ojeda-Guillén, R D Mota, V D Granados, Mod. Phys. Lett. A. 33201850112M. Salazar-Ramirez, D. Ojeda-Guillén, R. D. Mota, V. D. Granados, Mod. Phys. Lett. A 33(20), 1850112 (2018). . S Sargolzaeipor, H Hassanabadi, W S Chung, Mod. Phys. Lett. A. 33251850146S. Sargolzaeipor, H. Hassanabadi, W. S. Chung, Mod. Phys. Lett. A 33(25), 1850146 (2018). . W S Chung, H Hassanabadi, Mod. Phys. Lett. A. 34241950190W. S. Chung, H. Hassanabadi, Mod. Phys. Lett. A 34(24), 1950190 (2019). . S Ghazouani, I Sboui, M A Amdouni, M B El Hadj Rhouma, J. Phys. A: Math. Theor. 52225202S. Ghazouani, I. Sboui, M. A. Amdouni, M. B. El Hadj Rhouma, J. Phys. A: Math. Theor. 52, 225202 (2019). . R D Mota, D Ojeda-Guillén, M Salazar-Ramírez, V D Granados, Ann. Phys. 411167964R. D. Mota, D. Ojeda-Guillén, M. Salazar-Ramírez, V. D. Granados, Ann. Phys. 411, 167964 (2019). . W S Chung, H Hassanabadi, Rev. Mex. Fis. 663308W. S. Chung, H. Hassanabadi, Rev. Mex. Fis. 66(3), 308 (2020). . Y Kim, W S Chung, H Hassanabadi, Rev. Mex. Fis. 664411Y. Kim, W. S. Chung, H. Hassanabadi, Rev. Mex. Fis. 66(4), 411 (2020). . D Ojeda-Guillén, R D Mota, M Salazar-Ramírez, V D Granados, Mod. Phys. Lett. A. 35312050255D. Ojeda-Guillén, R. D. Mota, M. Salazar-Ramírez, V. D. Granados, Mod. Phys. Lett. A 35(31), 2050255 (2020). . R D Mota, D Ojeda-Guillén, M Salazar-Ramírez, V D Granados, Mod. Phys. Lett. A. 36102150066R. D. Mota, D. Ojeda-Guillén, M. Salazar-Ramírez, V. D. Granados, Mod. Phys. Lett. A 36(10), 2150066 (2021). . R D Mota, D Ojeda-Guillén, M Salazar-Ramírez, V D Granados, Mod. Phys. Lett. A. 36232150171R. D. Mota, D. Ojeda-Guillén, M. Salazar-Ramírez, V. D. Granados, Mod. Phys. Lett. A 36(23), 2150171 (2021). . A Merad, M Merad, Few-Body Syst. 6298A. Merad, M. Merad, Few-Body Syst. 62, 98 (2021). . W S Chung, H Hassanabadi, Mod. Phys. Lett. A. 36182150127W. S. Chung, H. Hassanabadi, Mod. Phys. Lett. A 36(18), 2150127 (2021). . W S Chung, H Hassanabadi, Eur. Phys. J. Plus. 136239W. S. Chung, H. Hassanabadi, Eur. Phys. J. Plus. 136, 239 (2021). . H Hassanabadi, M De Montigny, W S Chung, Physica A. 580126154H. Hassanabadi, M. de Montigny, W. S. Chung, Physica A 580, 126154 (2021). . S H Dong, W H Huang, W S Chung, P Sedaghatnia, H Hassanabadi, EPL. 13530006S. H. Dong, W. H. Huang, W. S. Chung, P. Sedaghatnia, H. Hassanabadi, EPL 135, 30006 (2021). . B Hamil, B C Lütfüoğlu, Few-Body Syst. 6374B. Hamil, B. C. Lütfüoğlu, Few-Body Syst, 63, 74 (2022). . B Hamil, B C Lütfüoğlu, Eur. Phys. J. Plus. 137812B. Hamil, B. C. Lütfüoğlu, Eur. Phys. J. Plus 137, 812 (2022). . P Sedaghatnia, H Hassanabadi, W S Chung, B C Lütfüoğlu, S Hassanabadi, J Kriz, arxiv:2208.12416quanth-phP. Sedaghatnia, H. Hassanabadi, W. S. Chung, B. C. Lütfüoğlu, S. Hassanabadi, J. Kriz, arxiv:2208.12416 [quanth-ph]. . R D Mota, D Ojeda-Guillén, Mod. 
Phys. Lett. A. 37012250006R. D. Mota, D. Ojeda-Guillén, Mod. Phys. Lett. A 37(01), 2250006 (2022). . S Hassanabadi, J Kriz, B C Lütfüoğlu, H Hassanabadi, Phys. Scr. 97125305S. Hassanabadi, J. Kriz, B. C. Lütfüoğlu, H. Hassanabadi, Phys. Scr. 97, 125305 (2022). Under review in EPL. N Rouabhia, M Merad, B Hamil, N. Rouabhia, M. Merad, B. Hamil. Under review in EPL. . S Hassanabadi, P Sedaghatnia, W S Chung, B C Lütfüoğlu, J Kriz, H Hassanabadi, Eur. Phys. J. Plus. 138331S. Hassanabadi, P. Sedaghatnia, W. S. Chung, B. C. Lütfüoğlu, J. Kriz, H. Hassanabadi, Eur. Phys. J. Plus 138, 331 (2023). . L Vinet, A Zhedanov, Rev. Math. Phys. 34082250025L. Vinet, A. Zhedanov, Rev. Math. Phys. 34(08), 2250025, (2022). W Greiner, L Neise, H Stocker, Thermodynamics and Statistical Mechanics. New YorkSpringerVerlagW. Greiner, L. Neise and H. Stocker, Thermodynamics and Statistical Mechanics (SpringerVerlag, New York, 1995). . A Lavagno, P Narayana, Swamy, Phys. Rev. E. 6536101A. Lavagno, P. Narayana Swamy, Phys. Rev. E 65, 036101 (2002). . E H Lieb, R Seiringer, J Solovej, J Yngvason, arXiv:cond-mat/0610117E. H. Lieb, R. Seiringer, J. P/ Solovej, J. Yngvason, arXiv:cond-mat/0610117. . S Grossmann, M Holthaus, Phys. Lett. A. 208188S. Grossmann, M. Holthaus, Phys. Lett. A 208, 188 (1995). . S Grossmann, M Holthaus, Z. Naturforsch. A. 50921S. Grossmann, M. Holthaus, Z. Naturforsch. A 50, 921 (1995). . Q. -J Zeng, Y. -S Luo, Y. -G Xu, H Luo, Physica A. 398116Q. -J. Zeng, Y. -S. Luo, Y. -G. Xu, H. Luo, Physica A 398, 116 (2014). . Q. -J Zeng, Z Cheng, J. -H Yuan, Physica A. 391563Q. -J. Zeng, Z. Cheng, J. -H. Yuan, Physica A 391, 563 (2012). . K Kirsten, D J Toms, Phys. Rev. A. 544188K. Kirsten, D. J. Toms, Phys. Rev. A 54, 4188 (1996). . K Kirsten, D J Toms, Phys. Lett. A. 222148K. Kirsten, D. J. Toms, Phys. Lett. A 222, 148 (1996). . F Rocca, M Sirugue, D Testard, Commun. Math. Phys. 19119F. Rocca, M. Sirugue, D. Testard, Commun. Math. Phys. 19, 119 (1970). . J T Lewis, J V Pule, Commun. Math. Phys. 361J. T. Lewis, J. V. Pule, Commun. Math. Phys. 36, 1 (1974). . D W Robinson, Commun. Math. Phys. 5053D. W. Robinson, Commun. Math. Phys. 50, 53 (1976). . L J Landau, I F Wilde, Commun. Math. Phys. 7043L. J. Landau, I. F. Wilde, Commun. Math. Phys. 70, 43 (1979). . M Van Den, J T Berg, Lewis, Physica A. 110550M. van den Berg, J. T. Lewis, Physica A 110, 550 (1982).
[]
[ "THE EXPLICIT LOCAL LANGLANDS CORRESPONDENCE FOR GSp 4 , Sp 4 AND STABILITY with an application to modularity lifting", "THE EXPLICIT LOCAL LANGLANDS CORRESPONDENCE FOR GSp 4 , Sp 4 AND STABILITY with an application to modularity lifting" ]
[ "Kenta Suzuki ", "Yujie Xu " ]
[]
[]
We give a purely local proof of the explicit Local Langlands Correspondence for GSp 4 and Sp 4 . Moreover, we give a unique characterization in terms of stability of L-packets and other properties. Finally, in the appendix, we give an application of our explicit local Langlands correspondence to modularity lifting.
null
[ "https://export.arxiv.org/pdf/2304.02622v2.pdf" ]
257,952,649
2304.02622
5bc2958afd74b1c300f258b1db2a68ae366b1322
THE EXPLICIT LOCAL LANGLANDS CORRESPONDENCE FOR GSp 4 , Sp 4 AND STABILITY with an application to modularity lifting Kenta Suzuki Yujie Xu THE EXPLICIT LOCAL LANGLANDS CORRESPONDENCE FOR GSp 4 , Sp 4 AND STABILITY with an application to modularity lifting Dedicated to Professor George Lusztig, with admiration. We give a purely local proof of the explicit Local Langlands Correspondence for GSp 4 and Sp 4 . Moreover, we give a unique characterization in terms of stability of L-packets and other properties. Finally, in the appendix, we give an application of our explicit local Langlands correspondence to modularity lifting. Introduction Let F be a non-archimedean local field and G a connected reductive algebraic group over F . Let G ∨ be the group of C-points of the reductive group whose root datum is the coroot datum of G. The Local Langlands Conjecture predicts a surjective map 1 irred. smooth repres. π of G(F ) /iso. −→    L-parameters i.e. cont. homomorphisms ϕ π : W F × SL 2 (C) → G ∨ W F    /G ∨ -conj., where W F is the Weil group of F . The fibers of this map, called L-packets, are expected to be finite. In order to obtain a bijection between the group side and the Galois side, the above Conjecture was later enhanced (á la Deligne, Vogan, Lusztig etc.). On the Galois side, one considers enhanced L-parameters. Many cases of the Local Langlands Conjecture have been established, most notably: • for GL n (F ): [HT01,Hen00,Sch13]; • for SL n (F ): [HS12] for char(F ) = 0 and [ABPS16b] for char(F ) > 0 (see also [GK81,GK82]); 1 To avoid overunning the margins, we use abbreviations "irred." for "irreducible", "repres." for "representations", "iso." for "isomorphism", "cont." for "continuous" and "conj." for "conjugacy". For simplicity, we only state the conjecture for quasisplit p-adic groups in the introduction, which is sufficient for our current paper. • quasi-split classical groups for F of characteristic zero: [Art13,Moe11] etc. • exceptional group G 2 : [AX22a] For classical groups, the main methods in literature are either (1) to classify representations of these groups in terms of representations of the general linear groups via twisted endoscopy, and to compare the stabilized twisted trace formula on the general linear group side and the stabilized (twisted) trace formula on the classical group side, or (2) to use the theta correspondence. In [AX22b], the second author took a completely different approach to the construction of explicit Local Langlands Correspondences for p-adic reductive groups via reduction to LLC for supercuspidal representations of proper Levi subgroups. This strategy was then applied in [AX22a] to construct the explicit Local Langlands Correspondence for p-adic G 2 , which is the first known case in literature of Local Langlands Correspondence for exceptional groups. In [SX23], the authors uniquely characterize the Local Langlands Correspondence constructed in [AX22a] using an extension of the atomic stability property of L-packets as formulated by DeBacker, Kaletha etc. (see for example [Kal22, Conjecture 2.2]), which is a generalization of the stability property in [DR09]. To do this, we compute the coefficients of certain local character expansions building on methods in [HC99,DS00,BM97]. 
In this article, we apply this general strategy pioneered in [AX22a,SX23] and construct the explicit Local Langlands Correspondence for the symplectic groups GSp 4 and Sp 4 over an arbitrary non-archimedean local field of residual characteristic = 2, with explicit L-packets and explicit matching between the group and Galois sides. More precisely, we use a combination of the Langlands-Shahidi method, (extended affine) Hecke algebra techniques, Kazhdan-Lusztig theory and generalized Springer correspondence-in particular, the AMS Conjecture on cuspidal support [AMS18, Conjecture 7.8]. For intermediate series, i.e. Bernstein series with supercuspidal support "in between" a torus and G itself, we use our previous result on Hecke algebra isomorphisms and local Langlands correspondence for Bernstein series obtained in [AX22b]. For principal series (i.e. Bernstein series with supercuspidal support in a torus), we improve on previous works we use [Roc98,Ree02,ABPS16a,Ram03] to match the group and Galois sides. For supercuspidal representations, we make explicit the theory of [Kal19, Kal21] for the nonsingular supercuspidal representations and their L-packets. For singular 2 supercuspidal representations, which are not covered in loc.cit. , we use [AMS18, Conjecture 7.8] (see Property 8.1.19) to exhibit them in mixed L-packets with non-supercuspidal representations. These mixed L-packets are drastically different from the supercuspidal L-packets of [Kal19, Kal21]. Furthermore, our LLC satisfies several expected properties, including the expectation that Irr(S ϕ ) parametrizes the internal structure of the L-packet Π ϕ (G), where S ϕ is the component group of the centralizer of the (image of the) L-parameter ϕ. Moreover, we explicitly compute the coefficients of local character expansions of Harish-Chandra characters for certain non-supercuspidal representations (see §6), which allows us to give a unique characterization of our LLC using stability for L-packets. Finally, explicit Local Langlands Correspondences (e.g. explicit Kazhdan-Lusztig triples) have important applications to number theory, such as to the Taylor-Wiles methods and modularity lifting theorems. In Appendix A, we record such an application, following [BCGP21,Tho22,Whi22]. 1.1. Main results. We now state our main results. Let Irr s (G) be the Bernstein series attached to the inertial class s = [L, σ] (for more details, see [AX22a,(3.3.2)] ). Let Φ e (G) denote the set of G ∨ -conjugacy classes of enhanced L-parameters for G. Let Φ s ∨ e (G) ⊂ Φ e (G) be the Bernstein series on the Galois side, whose cuspidal support lies in s ∨ = [L ∨ , (ϕ σ , ρ σ )], i.e. the image under LLC for L of s (for more details, see [AX22a,§2.4] ). For any s = [L, σ] G ∈ B(G), the LLC for L given by σ → (ϕ σ , ρ σ ) is expected to induce a bijection (see [ [Roc98,Ree02,ABPS16a,AMS18]. Let G = GSp 4 (F ) or Sp 4 (F ), and p = 2. Combined with the detailed analysis in all of §3 through §6, we explicitly construct the Local Langlands Correspondence LLC : Irr(G) 1-1 − − → Φ e (G) π → (ϕ π , ρ π ), (1.1.2) and obtain the following result (see Theorem 8.2.8). Theorem 1.1.3. The explicit Local Langlands Correspondence (1.1.2) verifies Π ϕπ (G) ∼ − → Irr(S ϕπ ) for any π ∈ Irr(G), and satisfies (1.1.1) for any s ∈ B(G), where s ∨ = [L ∨ , (ϕ σ , ρ σ )] G ∨ , as well as a list of properties (see §8.1) that uniquely characterize our correspondence. 
In other words, (1) to each explicitly described π ∈ Irr(G), we attach an explicit L-parameter ϕ π and determine its enhancement ρ π explicitly; (2) to each ϕ ∈ Φ(G), we describe (the shape of) its L-packet Π ϕ (G), and give an internal parametrization in terms of ρ ∈ Irr(S ϕ ); (3) Moreover, for non-supercuspidal representations, we specify the precise parabolic induction that it occurs in. Acknowledgements. Y.X. was supported by NSF grant DMS 2202677. K.S. was partially supported by MIT-UROP. The authors would like to thank Jack Thorne for bringing their attention to the applications of explicit LLC to modularity lifting and for help with references. The authors would also like to thank Anne-Marie Aubert, Stephen DeBacker, Dick Gross, Ju-Lee Kim, George Lusztig, Maarten Solleveld, Loren Spice, Jack Thorne and Dmitri Whitmore for helpful conversations related to this project. The authors would like to thank George Lusztig and Wei Zhang for their continued interest and encouragement. The authors would like to thank Maarten Solleveld for helpful feedback on a previous version of this paper. The authors would like to thank MIT for providing an intellectually stimulating working environment. Preliminaries Let F be a nonarchimedean local field. Let J 2 := 1 1 and β := J 2 −J 2 . Consider the following groups Sp 4 := {g ∈ GL 4 (F ) : T gβg = β} GSp 4 := {g ∈ GL 4 (F ) : T gβg = µ(g)β, for some µ(g) ∈ F × }. In particular, there is an exact sequence 1 → Sp 4 (F ) → GSp 4 (F ) µ − → F × → 1. The Langlands dual groups are GSp ∨ 4 = GSpin 5 (C) and Sp ∨ 4 = PGSpin 5 (C) ∼ = SO 5 (C). Here GSpin 5 := (GL 1 × Spin 5 )/µ 2 where µ 2 is diagonally embedded as in [Asg02, Definition 2.3]. 2.1. Root datum. The following are the data for the root datum for Sp 4 , GSp 4 [Tad94, Asg02,AS06], of type C 2 . We also realize everything in terms of the torus T = {(a 1 , a 2 , b 2 , b 1 ) : a 1 b 1 = a 2 b 2 = µ}. • For Sp, the lattice is X * (T ) := Z{ 1 , 2 }, the roots are ∆ := {± 1 ± 2 , ±2 1 , ±2 2 }, and the simple roots are { 1 − 2 , 2 2 }. • For GSp, the lattice is X * (T ) := Z{ 0 , 1 , 2 }, the roots are ∆ := {± 1 ± 2 } ∪ {±( 0 − 2 1 ), ±( 0 − 2 2 ), ±( 0 − 1 − 2 )}, and the simple roots are { 1 − 2 , 2 2 − 0 }. Here, i (a 1 , a 2 , b 1 , b 2 ) = a i for i = 1, 2 and 0 (a 1 , a 2 , b 2 , b 1 ) = µ. The root groups are given by: U i − j = 1 + x1 ij 1 − x1 n+1−j,n+1−i U i + j = 1 x(1 i,n+1−j + 1 j,n+1−i ) 1 U 2 i = 1 x1 i,n+1−i 1 U − i − j = 1 x(1 n+1−i,j + 1 n+1−j,i ) 1 U −2 i = 1 x1 n+1−i,i 1 , where 1 ij is the matrix with a single one in the (i, j)-component. Letting α := 1 − 2 and β := 2 2 (or 2 2 − 0 , for GSp), and δ := −2 1 (or 0 − 2 1 for GSp) we obtain: Coroots are given by α ∨ := 2(α,−) (α,α) . For Sp 4 and GSp 4 , they are of type B 2 : • X * (T ) := Z{ ∨ 1 , ∨ 2 }, and the simple coroots are {α ∨ := ∨ 1 − ∨ 2 , β ∨ := ∨ 2 }. • X * (T ) := Z{ ∨ 0 , ∨ 1 , ∨ 2 }, and the simple coroots are {α ∨ := ∨ 1 − ∨ 2 , β ∨ := ∨ 2 }. Here, ∨ 0 (t 0 ) ∨ 1 (t 1 ) ∨ 2 (t 2 ) = (t 1 , t 2 , t 0 t −1 2 , t 0 t −1 1 ). The Dynkin diagram is: Remark 2.1.1. GSp 4 happens to be self-dual, under the following isomorphism: X * (T ) = Z{ 0 , 1 , 2 } → X * (T ) = Z{ ∨ 0 , ∨ 1 , ∨ 2 } 0 → −2 ∨ 0 − ∨ 1 − ∨ 2 (2.1.2) 1 → − ∨ 0 2 → − ∨ 0 − ∨ 2 , where α 1 → α ∨ 2 and α 2 → α ∨ 1 , and its inverse is given by ∨ 0 → − 1 , ∨ 1 → 1 + 2 − 0 , ∨ 2 → 1 − 2 . Remark 2.1.3. By the exceptional isomorphism B 2 = C 2 , we have the following description of nilpotent orbits in GSp 4 and Sp 4 (see [CM93, For later use (e.g. 
§6), we record the following table 1 for Weyl group conjugacy classes for GSp 4 and Sp 4 . We will also need the following picture of a C 2 -apartment in the building B(GSp 4 ). Figure 2. The apartment in B(GSp 4 ) 2.2. Levi subgroups. The Levi subgroups of GSp 4 (resp., Sp 4 ) are: 3π/4 α = 1 − 2 β = 2 2 −α α + β 2α + β −2α − β −α − β −β C2 3π/4 β ∨ = ∨ 2 α ∨ = ∨ 1 − ∨ 2 −β ∨ α ∨ + β ∨ α ∨ + 2β ∨ −α ∨ − 2β ∨ −α ∨ − β ∨ −α ∨ B2 Figure 1. Root diagram for B 2 = C 2 names cycle types e (1)(1) A 1 (1)(1) A 1 (2) A 1 × A 1 (1)(1) C 2 (2)A 1 A 1Ã 1 C 2 A 1 × A 1 C 2 e • GSp 4 (resp., Sp 4 ) • GL 2 × GSp 0 (resp., GL 2 × Sp 0 ). Explicitly, it is GSp 4 ∩ (GL 1 × GL 2 × GL 1 ). • GL 1 × GSp 2 (resp., GL 1 × Sp 2 ). Explicitly, it is GSp 4 ∩ (GL 2 × GL 2 ). • GL 1 × GL 1 × GSp 0 (resp., GL 1 × GL 1 × Sp 0 ), the maximal torus. Given representations π of GL 2 and characters χ 1 , χ 2 , χ 3 , we let π χ 1 , χ 1 π, and χ 1 × χ 2 χ 3 be the (normalized) parabolic induction from GL 2 × GSp 0 , GL 2 × GSp 2 , and GL 1 × GL 1 × GSp 0 , respectively, using notation from [ST93,§1]. GSp ∨ 4 ∼ = GSp 4 (GL 2 × GSp 0 ) ∨ ∼ = GL 1 × GSp 2 (GL 1 × GSp 2 ) ∨ ∼ = GL 2 × GSp 0 (GL 1 × GL 1 × GSp 0 ) ∨ ∼ = GL 1 × GL 1 × GSp 0 . Remark 2.2.2 (LLC for Levis of GSp 4 (F )). By Remark 2.1.1, the LLC for the maximal torus T is given as: hom(W F , T (C)) ∼ = Irr(T ) (χ 1 (w), χ 2 (w), χ 0 χ −1 2 (w), χ 0 χ −1 1 (w)) → χ −1 0 χ 1 χ 2 ⊗ χ 1 χ −1 2 ⊗ χ −1 1 . Similarly, the LLC for the Levi GL 2 (F ) × GSp 0 (F ) ⊂ GSp 4 (F ) is given by: hom(W F × SL 2 (C), GL 1 (C) × GSp 2 (C)) ∼ = Irr(GL 2 (F ) × GSp 0 (F )) (ρ ⊗ ϕ) → ( ρ ⊗ π ∨ ϕ ) ρ −1 , where π ϕ is the image of ϕ under the LLC for GL 2 (F ). Finally, the LLC for the Levi GL 1 (F ) × GSp 2 (F ) ⊂ GSp 4 (F ) is given by: hom(W F × SL 2 (C), GL 2 (C) × GSp 0 (C)) ∼ = Irr(GL 1 (F ) × GSp 2 (F )) (ϕ ⊗ ρ) → ( ρ −1 ω πϕ ) π ∨ ϕ , where ω πϕ = det(ϕ) is the central character of π ϕ . 2.3. Parahoric subgroups. Types of the reductive quotient of maximal parahoric subgroups are given by deleting a node from the extended Dynkin diagram. We fix a standard choice of parahoric subgroups, with roots as indicated by Figure 3. For GSp 4 (F ), the vertices β and δ are in the same orbit in the building: • Removing δ (or β) gives the Dynkin diagram C 2 , giving the parahoric subgroup GSp 4 (o F ) with reductive quotient GSp 4 (k). • Removing α gives the Dynkin diagram A 1 A 1 , giving the groups G α := GSp 4 (F ) ∩     o o o p −1 p o o o p o o o p p p o     ⊃ G α+ = GSp 4 (F ) ∩     1 + p o o o p 1 + p p o p p 1 + p o p 2 p p 1 + p     , with reductive quotient GSp 2,2 (k) := {(g, h) ∈ GSp 2 × GSp 2 : µ(g) = µ(h)}. Similarly, for Sp 4 (F ), we have: • Removing δ gives the Dynkin diagram C 2 , giving the parahoric subgroup Sp 4 (o F ) with reductive quotient Sp 4 (k). • Removing β gives the Dynkin diagram C 2 , giving the parahoric subgroup Sp 4 (F ) ∩ M 2 (o) M 2 (p −1 ) M 2 (p) M 2 (o) = −1 I 2 I 2 Sp 4 (o F ) I 2 I 2 with reductive quotient Sp 4 (k). Here the matrix diag( I 2 , I 2 ) is in GSp 4 (F ), but not Sp 4 (F ). • Removing α gives the Dynkin diagram A 1 A 1 , giving the group G α := Sp 4 (F ) ∩     o o o p −1 p o o o p o o o p p p o     ⊃ G α+ = Sp 4 (F ) ∩     1 + p o o o p 1 + p p o p p 1 + p o p 2 p p 1 + p     , with reductive quotient Sp 2 (k) × Sp 2 (k). However, note that the isomorphism of G α /G α+ with GSp 2,2 (k) (resp., Sp 2 (k) × Sp 2 (k)) above are non-canonical (i.e., depend on a choice of a uniformizer .) 
To make these isomorphisms Figure 3. Parahoric subgroups G α and G β more canonical, consider the endoscopic subgroup H := Z G (s) with s = diag(1, −1, −1, 1) which is isomorphic to GSp 2,2 (F ) (resp., Sp 2,2 (F )): o o p o p −1 p p o G α o p −1 o p −1 p −1 p p p G βGSp 2,2 (F ) ∼ − → H ( a 1 b 1 c 1 d 1 , a 2 b 2 c 2 d 2 ) →     a 2 b 2 a 1 b 1 c 1 d 1 c 2 d 2     Now there is a canonical isomorphism of G α /G α+ with the reductive quotient of the parahoric subgroup H α := {(g, h) ∈ M 2 (o) × o p −1 p o : det(g) = det(h) ∈ o × }. 3. The group side 3.1. Supercuspidal representations. 3.1.1. Depth-zero supercuspidal representations of Sp 4 , GSp 4 . 3.1.1. First we recall a few general facts on depth-zero supercuspidals. Let π be an irreducible depth-zero supercuspidal representation of G. Then there exists a vertex x ∈ B red (G, F ) and an irreducible cuspidal representation τ of G x (F q ), such that the restriction of π to G x,0 contains the inflation of τ (see [Mor96,[1][2] or [MP96, Proposition 6.6]). The normalizer N G (G x,0 ) of G x,0 in G is a totally disconnected group that is compact mod center, which by [BT84, proof of (5. π = c-Ind G G x,0 τ is fdeg(π) = q rk(G)/2 dim(τ unip ) |(Z G ∨ x,0 (s))(F q )| p , where | · | p denotes the coprime-to-p order. The following construction gives a special class of supercuspidals, i.e. depth-zero regular supercuspidal representations of G is as in [Kal19, Lem 3.4.12]: Definition 3.1.4. For S ⊂ G a maximally unramified elliptic maximal torus and θ : S(F ) → C × a regular character of depth zero, let π (S,θ) := c-Ind G(F ) S(F )G x,0 (θ ⊗ ±R θ S ) . One can generalize the above construction and consider a larger class of supercuspidals called "non-singular" supercuspidals, which are the largest class of supercuspidals living in purely supercuspidal L-packets (see for example [AX22a] for more exposition). 3.1.5. More concretely, depth-zero irreducible supercuspidal representations of G are parametrized by irreducible cuspidal representations of reductive quotients G x of maximal parahorics, which can be inflated to G x,0 , and (non-uniquely) extended to G [x] . Recall from the classical Deligne-Lusztig theory [DL76,§10] and [Lus84a,(8.4.4)], we have bijections (3.1.6) Irr(G x ) ∼ − → (s) E(G x (F q ), s) ∼ − → (s) E(Z G ∨ x (s), 1), where (s) runs through the conjugacy classes of semisimple elements of G ∨ x . Moreover, the bijections preserve cuspidality. We hope to see when H ∨ = Z G ∨ x (s) has a unipotent cuspidal representation. We will repeatedly use the following result: [Lus77,8.11]). Lemma 3.1.7 ([Lus78, Thm 3.22], • SO 2n+1 (F q ) has a unique unipotent cuspidal representation exactly when n = s 2 + s for some integer s ≥ 1, of dimension |SO 2n+1 (F q )| p q ( 2n 2 )+( 2n−2 2 )+··· 2 n (q + 1) 2n (q 2 + 1) 2n−1 · · · (q 2n + 1) . • SO 2n (F q ) has a unique unipotent cuspidal representation exactly when n = 4s 2 for some s ≥ 1. The non-split form SO − 2n (F q ) has a unique unipotent cuspidal representation exactly when n = (2s + 1) 2 for some s ≥ 1. • GL n has no unipotent cuspidal representations for any n ≥ 1. 3.1.8. For us, by §2.3 the reductive quotients G x are Sp 4 (k) or Sp 2 (k) × Sp 2 (k) for G = Sp 4 (F ) and either GSp 4 (k) or GSp 2,2 (k) := {(g, h) ∈ GSp 2 (k) × GSp 2 (k) : µ(g) = µ(h)} for G = GSp 4 (F ). Using (3.1.6) we classify the cuspidal representations of these groups: Lemma 3.1.9. 
Every cuspidal representations of GSp 2,2 (F q ) (defined in §2.3) is given by, for s = (g, h) ∈ GL 2 (F q ) × GL 2 (F q )/F × q where g has eigenvalues λ 1 , λ q 1 and h has eigenvalues λ 2 , λ q 2 where λ 1 , λ 2 ∈ F q 2 \F q : • if λ q−1 1 = −1 or λ q−1 2 = −1, then E(GSp 2,2 (F q ), s) ∼ = E(R F q 2 /Fq G m × R F q 2 /Fq G m /F × q , 1) = {1}. Denote such a cuspidal representation as ρ (α,β) . • if α q−1 = β q−1 = −1, then E(GSp 2,2 (F q ), s) ∼ = E(R F q 2 /Fq G m × R F q 2 /Fq G m /F × q µ 2 , 1) = {1, sgn}. Denote such cuspidal representations as ρ + (α,β) and ρ − (α,β) . Remark 3.1.10. The cuspidal representations ρ + (α,β) are characterized as the common irreducible constituent of Ind GL 2,2 SL 2 ×SL 2 (R α T R β T ) and the Gelfand-Graev representation Γ GL 2,2 O where O is the orbit of ( 1 1 1 , [Bon11, pg 55]'s notation. 1 1 1 ). The restriction of ρ + (α,β) to SL 2 (F q ) × SL 2 (F q ) is R + (θ 0 ) R + (θ 0 ) + R − (θ 0 ) R − (θ 0 ), in Lemma 3.1.11. The following are the cuspidal representations of GSp 4 (F q ): • The q − 1 twists of the unique unipotent cuspidal, i.e., in E(GSp 4 , s) where s ∈ Z(GSpin 5 ). • R θ T where T is an anisotropic maximal torus and θ is a regular character. Lemma 3.1.12. The following are the cuspidal representations of Sp 4 (F q ): • The unique unipotent cuspidal. • For any α ∈ µ q+1 \{±1} then for any s ∈ SO 5 (F q ) with eigenvalues 1, −1, −1, α ±1 , E(Sp 4 , s) ∼ = E(O 2 (F q ) × U 1 (F q ), 1) = {1, sgn}. There are (q − 1)/2 such conjugacy classes, giving rise to q − 1 representations. • For α = β ±1 ∈ µ q+1 \{±1} and s with eigenvalues 1, α ±1 , β ±1 , E(Sp 4 , s) ∼ = E(T, 1) = {1}. where T = R F q 2 /Fq G m × R F q 2 /Fq G m is an isotropic maximal torus. • For α ∈ µ q 2 +1 \{±1} and s with eigenvalues 1, α, α q , α q 2 , α q 3 , E(Sp 4 , s) ∼ = E(T, 1) = {1}, where T = {t ∈ F q 4 : Nm F q 4 /F q 2 t = 1} is an anisotropic maximal torus. As a consequence of Lemma 3.1.9, Lemma 3.1.11, and Lemma 3.1.12, we obtain the following classifications of depth-zero supercuspidals of GSp 4 and Sp 4 . 3.1.13. Firstly, we have the following classification of depth-zero supercuspidals of GSp 4 (F ). Proposition 3.1.14. The depth-zero supercuspidal representations π of G = GSp 4 (F ) are: (1) π = π (S,θ) for some maximally unramified elliptic maximal torus S and a regular character θ of depth zero. These are regular supercuspidals. (2) π β (θ 10 ⊗ χ) := c-Ind G G δ Z (θ 10 ⊗ χ) where θ 10 is inflated from the unique unipotent cuspidal θ 10 of GSp 4 (F q ) and χ is a character of Z such that χ(Z GSp 4 (o F ) ) = 1. This is F -singular. (3) π α (η 2 ; χ) := c-Ind GαZ (ω η 2 cusp ⊗ χ) which is a k F -singular hence F -singular supercuspidal, where: • η 2 is a ramified quadratic character and ∈ F is a uniformizer such that η 2 ( ) = 1 • ω η 2 cusp := (ρ + (λ,λ) ) (I 2 ,diag( ,1)) where λ q−1 = −1, and ρ + (λ,λ) is the representation of GSp 2,2 (F q ) defined in Lemma 3.1.9, which is viewed as a representation of G α /G α+ by conjugating by (I 2 , diag( , 1)). • χ is an unramified character of Z. (4) Induced representations π (S,θ θ⊗χ) where S = {(x, y) ∈ E × × E × : Nm E/F x = Nm E/F y} and θ is a character of E × giving rise to a character θ θ of S, and χ is a character of F × viewed as a character of S via Nm E/F . This is a F -singular but k F -nonsingular representation. Remark 3.1.15. 
By Remark 3.1.10, the representationρ(η 2 ) is characterized as the common irreducible constituent of the cuspidal R θ T with θ 2 = 1 and the Gelfand-Graev representation corresponding to the nilpotent orbit ( 1 1 1 , 1 1 ) of H α . By Lemma 3.1.3, the formal degree of the singular depth-zero supercuspidal π β (θ 10 ⊗ χ) is (3.1.16) fdeg(π β (θ 10 ⊗ χ)) = q 11/2 q(q − 1) 2 2(q − 1)(q 2 − 1)(q 4 − 1) = q 1/2 q 6 2(q + 1)(q 4 − 1) , since dim(θ 10 ) = q(q−1) 2 2 , dim(GSp 4 (F q )) = 11 and |GSp 4 (F q )| = (q − 1)q 4 (q 2 − 1)(q 4 − 1) by [Car93,p.75]. Note that the normalization of volumes given by [DR09] guarantees that there is a factor of q 1/2 in the formal degree formula for GSp 4 . To compute the formal degree of π α (η 2 ; χ): since dim R T = (q − 1) 2 , we have dim(ω η 2 cusp ) = 1 2 (q − 1) 2 . Note that |GSp 2,2 (F q )| = (q − 1)q 2 (q 2 − 1) 2 and dim GSp 2,2 (F q ) = 7. Therefore, we have (3.1.17) fdeg (π α (η 2 ; χ)) = 1 2 (q − 1) 2 (q − 1)q 2 (q 2 − 1) 2 q −7/2 = q 1/2 q 2(q + 1)(q 2 − 1) . The formal degree of π (S,θ θ⊗χ) is similar: (3.1.18) fdeg π (S,θ θ⊗χ) = (q − 1) 2 (q − 1)q 2 (q 2 − 1) 2 q −7/2 = q 1/2 q (q + 1)(q 2 − 1) . The remaining cases of formal degrees can be easily computed as they are non-singular supercuspidals. 3.1.19. The case of Sp 4 (F ) is given as follows. Proposition 3.1.20. The depth-zero supercuspidal representations of G = Sp 4 (F ) are given by: (1) π = π (S,θ) for some maximally unramified elliptic maximal torus S and a regular character θ of depth zero. These are regular supercuspidals. (2) Induced representations c-Ind G G β ρ and c-Ind G Gγ ρ, where ρ is one of the following representations of Sp 4 (F q ), inflated via G β , G γ → Sp 4 (F q ): (a) The unique cuspidal unipotent θ 10 of Sp 4 (F q ), which gives rise to F -singular representations π β (θ 10 ) := c-Ind G G β inf θ 10 and π γ (θ 10 ) := c-Ind G Gγ inf θ 10 coming from G β and G γ . These are k F -singular hence F -singular supercuspidals. (b) Corresponding to the characters 1, sgn of G s = O 2 (F q ) × U 2 (F q ) under (3.1.6); this gives rise to k F -nonsingular, and hence F -nonsingular. This gives a total of q − 1 nonsingular representations; (3) Induced representations π ± α (η 2 ) := c-Ind Sp 4 Gα (R ± (θ 0 ) × R ± (θ 0 ) diag( ,1) ) where R ± (θ 0 ) are representations of SL 2 (F q ) defined in [Bon11, §5.2] . This is k F -singular and hence Fsingular. (4) Induced representations π α (θ) := c-Ind Sp 4 Gα (R θ T (R θ T ) diag( ,1) ) where θ is a regular character of an anisotropic torus T of SL 2 (F q ). This is F -singular but k F -nonsingular. By Lemma 3.1.3, the formal degree of the singular supercuspidals π β (θ 10 ) and π γ (θ 10 ) is (3.1.21) fdeg(π β (θ 10 )) = fdeg(π γ (θ 10 )) = 1 2 q(q − 1) 2 q 4 (q 2 − 1)(q 4 − 1)q −10/2 = q 2 2(q + 1) 2 (q 2 + 1) since dim(θ 10 ) = 1 2 q(q − 1) 2 by [Lus77, Theorem 8.2], dim |Sp 4 (F q )| = 10 and |Sp 4 (F q )| = q 4 (q 2 − 1)(q 4 − 1). Note that π β (θ 10 ) and π γ (θ 10 ) live in the same L-packet Π ϕ(η) , mixed with two principal series representations as in §6 L-packet (5.2.2). To compute the formal degree of π ± α (η 2 ): since dim(R ± (θ 0 ) × R ± (θ 0 ) diag( ,1) ) = 1 4 (q − 1) 2 , dim(SL 2 × SL 2 ) = 6 and |SL 2 (F q ) × SL 2 (F q )| = q 2 (q 2 − 1) 2 , we have fdeg(π ± α (η 2 )) = 1 4 (q − 1) 2 q 2 (q 2 − 1) 2 q −6/2 = q 4(q + 1) 2 . These representations live in stable mixed L-packets as in Corollary 6.5.6. 
Similarly, the formal degree of π α (θ) is (3.1.23) fdeg(π α (θ)) = (q − 1) 2 q 2 (q 2 − 1) 2 q −6/2 = q (q + 1) 2 . This representation lives in the mixed L-packet in (5.2.6). 3.1.2. Positive-depth supercuspidal representations of Sp 4 , GSp 4 . 3.1.24. Type datum. Recall Yu's classification of arbitrary-depth supercuspidals in terms of type datum [Yu01] (which was later generalized in [KY17] to include non-supercuspidal types). Definition 3.1.25. A cuspidal G-datum is a tuple D := ( G, y, r, π 0 , φ) consisting of (1) a tamely ramified Levi sequence G = (G 0 ⊂ G 1 ⊂ · · · ⊂ G d = G) of twisted E-Levi subgroups of G, such that Z G 0 /Z G is anisotropic; (2) a point y in B(G 0 , F ) ∩ A(T, E), whose projection to the reduced building of G 0 is a vertex, where T is a maximal torus of G 0 (hence of G i ) that splits over E; (3) a sequence r = (r 0 , r 1 , . . . , r d ) of real numbers such that 0 < r 0 < r 1 < · · · < r d−1 ≤ r d if d > 0, and 0 ≤ r 0 if d = 0; (4) an irreducible depth-zero supercuspidal representation ρ 0 of K 0 = G 0 [y] whose restriction to G 0 y,0+ is trivial and such that the compact induction c- Ind G 0 K 0 ρ 0 is irreducible supercuspidal; (5) a sequence φ = (φ 0 , φ 1 , . . . , φ d ) of characters, where φ i is a character of G i which is trivial on G i y,r i + and nontrivial on G i y,r i for 0 ≤ i ≤ d − 1, such that • φ d is trivial on G d y,r d + and nontrivial on G d y,r d if r d−1 < r d , and φ d = 1 if r d−1 = r d (with r −1 defined to be 0). • Moreover, φ i is G i+1 -generic of depth r i relative to y in the sense of [Yu01, §9] for 0 ≤ i ≤ d − 1. Formal degrees of arbitrary-depth tame supercuspidal representations in the sense of [Yu01] can be computed as in [Sch21, Theorem A]. Let G be a semisimple F -group, and let D be a cuspidal G-datum with associated supercuspidal representation π. Let R i denote the absolute root system of G i , for the twisted Levi sequence (G i ) 0≤i≤d . Let exp q (t) := q t . Proposition 3.1.26. The formal degree of π is given by (3.1.27) fdeg(π) = dim ρ [G 0 [y] : G 0 y,0+ ] exp q 1 2 dim G + 1 2 dim G 0 y,0 + 1 2 d−1 i=0 r i (|R i+1 | − |R i |) . Remark 3.1.28. The Formal Degree Conjecture of [HII08], which describes the formal degree fdeg(π) of any irreducible smooth representation π of G in terms of adjoint gamma factor, has been proved for regular supercuspidal representations in [ • finite extensions F # 1 , . . . , F # r /F ; • 2-dimensionalétale F # i -algebras F i ; and such that n = r i=1 [F i : F ]. Then, W := r i=1 F i is a 2n-dimensional vector space over F with a symplectic form (3.1.31) q( r i=1 w i , r i=1 w i ) := r i=1 1 [F i : F ] tr F i /F (c i w i w i ), where elements c i ∈ F × i are such that c i = −c i , where · denotes the unique nontrivial automorphism of F i /F # i . Then there is a torus (whose conjugacy class depends only on the c i 's modulo N F i /F # i G m ) (3.1.32) T (1) F 1 /F # 1 ,...,Fr/F # r := r i=1 R F # i /F R (1) F i /F # i G m acting component-wise on W . Similarly, conjugacy classes of GSp 2n (F ) are given by the same data, giving rise to the torus (3.1.33) T F 1 /F # 1 ,...,Fr/F # r := {(x i ) ∈ r i=1 R F 1 /F G m : Nm F 1 /F # 1 (x 1 ) = · · · = Nm Fr/F # r (x r ) ∈ F × }. 
For Sp 4 (F ), the anistropic maximal tori are thus of the following form: • T (1) F 1 /F,F 2 /F (c 1 , c 2 ) = R (1) F 1 /F G m × R (1) F 2 /F G m , with F 1 , F 2 /F quadratic extensions, where c i ∈ F × /N F i /F (F × 1 ); • T (1) F # 1 ⊕F # 1 /F # 1 = {(x, y) ∈ R F # 1 /F G m ×R F # 1 /F G m : xy = 1} with F # 1 /F a quadratic extension; and • T (1) F 1 /F # 1 (c) = R F # 1 /F R (1) F 1 /F # 1 G m , with F 1 /F # 1 /F a tower of quadratic extensions, where c ∈ (F # 1 ) × /N F 1 /F # 1 (F × 1 ) . Twisted Levi subgroups are obtained as centralizers of coroots into these tori. For the torus T (1) F 1 /F,F 2 /F (c 1 , c 2 ) = R (1) F 1 /F G m × R (1) F 2 /F G m ⊂ SL 2 (F ) × SL 2 (F ) ⊂ Sp 4 (F ), its subtorus R (1) F 1 /F G m × 1 (resp., 1 × R (1) F 2 /F G m ) has centralizer R (1) F 1 /F G m × SL 2 (F ) (resp., SL 2 (F ) × R (1) F 2 /F G m ). When F 1 = F 2 the torus T (1) F 1 /F,F 2 /F (c 1 , c 2 ) also has the diagonal sub-torus ∆(R (1) F 1 /F G m ), which has centralizer U F 1 /F (c 1 , c 2 ), the unitary group of the hermitian space E ⊕ E with hermitian form h(w 1 ⊕ w 2 , w 1 ⊕ w 2 ) = 1 2 (c 1 w 1 w 1 + c 2 w 2 w 2 ). The torus T (1) F # 1 ⊕F # 1 /F # 1 has the sub-torus {(x, y) ∈ F × × F × : xy = 1}, which has centralizer GL 2 (F ) × Sp 0 (F ). The torus T (1) F 1 /F # 1 has no nontrivial F -rational sub-tori. Similarly, for GSp 4 (F ), the maximal tori which are anisotropic modulo center are thus of the following form: • T F 1 /F,F 2 /F (c 1 , c 2 ) = {(x, y) ∈ R F 1 /F G m × R F 2 /F G m : Nm F 1 /F x = Nm F 2 /F y} for quadratic field extensions F 1 , F 2 /F ; • T F # 1 ⊕F # 1 /F # 1 = {(x, y) ∈ R F # 1 /F G m × R F # 1 /F G m : xy ∈ F × } for a quadratic extension F # 1 /F ; and • T F 1 /F # 1 (c) := {x ∈ R F 1 /F G m : Nm F 1 /F # 1 x ∈ F × }, where F 1 /F # 1 is a quadratic field exten- sion. For the torus T F 1 /F,F 2 /F ⊂ {(x, y) ∈ GL 2 (F ) × GL 2 (F ) : det(x) = det(y)} ⊂ GSp 4 (F ), its subtorus {(x, y) ∈ R F 1 /F G m × F × : Nm F 1 /F x = y 2 } has centralizer {(x, y) ∈ R F 1 /F G m × GL 2 (F ) : Nm F 1 /F x = det(y)}. The base change to F 1 gives the Levi subgroup F × 1 × GL 2 (F 1 ). When F 1 = F 2 it also has the diagonal sub-torus ∆(R F 1 /F G m ), which has centralizer GU F 1 /F (2), whose base change to F 1 gives the Levi subgroup F 1 × GL 2 (F 1 ). Finally, the tori T F # 1 ⊕F # 1 /F # 1 and T F 1 /F # 1 have no interesting sub-tori. 3.1.34. Explicit type data for GSp 4 (F ) and Sp 4 (F ). For G = Sp 4 the type datum are: (pos-depth 1 ) G = (T (1) F 1 /F # 1 ) for a tower of quadratic extensions F 1 /F # 1 /F . Here G 0 is abelian, so dim ρ 0 = 1. The corresponding representation is nonsingular. (pos-depth 2 ) G = (T (1) F # 1 ⊕F # 1 /F # 1 ) for a quadratic extension F # 1 /F . Here G 0 is abelian, so dim ρ 0 = 1. The corresponding representation is nonsingular. (pos-depth 3 ) G = (T (1) F 1 /F,F 2 /F ), with F 1 , F 2 /F quadratic extensions. G 0 is abelian, so dim ρ 0 = 1. The corresponding representation is nonsingular. (pos-depth 4 ) G = (R (1) F 1 /F G m × SL 2 (F ) ⊂ G) for a quadratic extension F 1 /F . The positive-depth repre- sentation is nonsingular, since R (1) F 1 /F G m × SL 2 (F ) does not have any singular supercuspidal representations. (pos-depth 5 ) G = (U F 1 /F (c 1 , c 2 ) ⊂ G) for a quadratic extension F 1 /F . Here, the character φ 0 is trivial, since G 0 does not have any interesting characters. The unitary group G 1 = U F 1 /F (c 1 , c 2 ) is quasi-split if and only if the discriminant −c 1 c 2 ∈ N m F 1 /F (F × 1 ). 
Thus, G 1 has singular supercuspidals if and only if G 0 is quasi-split, which happens if −c 1 c 2 ∈ N m F 1 /F (F × 1 ). (pos-depth 6 ) G = (T (1) F 1 /F,F 2 /F ⊂ R (1) F 1 /F G m × SL 2 (F ) ⊂ G) for quadratic extensions F 1 , F 2 /F . Here, G 0 is abelian so dim ρ 0 = 1. The corresponding representation is nonsingular. (pos-depth 7 ) G = (T (1) F 1 /F,F 1 /F ⊂ U F 1 /F (c 1 , c 2 ) ⊂ G) for a quadratic extension F 1 /F . Here, G 0 is abelian so dim ρ 0 = 1. Moreover, G 1 has no interesting characters, so φ 1 = 1. The corresponding representation is nonsingular. (pos-depth 8 ) G = (T (1) F # 1 ⊕F # 1 /F # 1 ⊂ GL 2 (F ) × Sp 0 (F ) ⊂ G) , for a quadratic extension F # 1 /F 1 . Here, G 0 is abelian so dim ρ 0 = 1 and the representation is nonsingular. The possibilities for G = GSp 4 are: (pos-depth 1 ) G = (G 0 = T F 1 /F # 1 ⊂ G) for a tower of quadratic extensions F 1 /F # 1 /F . Since G 0 is abelian, dim ρ 0 = 1. The corresponding representation is nonsingular. (pos-depth 2 ) G = (G 0 = T F # 1 ⊕F # 1 /F # 1 ⊂ G) for a quadratic extension F # 1 /F . Since G 0 is abelian, dim ρ 0 = 1. The corresponding representation is nonsingular. (pos-depth 3 ) G = (G 0 = T F 1 /F,F 2 /F ⊂ G), with F 1 , F 2 /F quadratic extensions. Since G 0 is abelian, dim ρ 0 = 1. The corresponding representation is nonsingular. (pos-depth 4 ) G = (G 0 = {(x, y) ∈ R F 1 /F G m × GL 2 (F ) : Nm F 1 /F x = det(y)} ⊂ G 1 = G) for a quadratic extension F 1 /F . The positive-depth representation is nonsingular, since R F 1 /F G m ×GL 2 (F ) does not have any singular supercuspidal representations. (pos-depth 5 ) G = (G 0 = GU F 1 /F (c 1 , c 2 ) ⊂ G 1 = G) for a quadratic extension F 1 /F . The unitary group G 0 = GU F 1 /F (c 1 , c 2 ) is quasi-split if and only if the discriminant −c 1 c 2 ∈ N m F 1 /F (F × 1 ). Thus, G 0 has singular supercuspidals if and only if G 0 is quasi-split, which happens if −c 1 c 2 ∈ N m F 1 /F (F × 1 ). (pos-depth 6 ) G = (G 0 = T F 1 /F,F 2 /F ⊂ G 1 = {(x, y) ∈ R F 1 /F G m × GL 2 (F ) : Nm E/F x = det(y)} ⊂ G 2 = G) for quadratic extensions F 1 , F 2 /F . The corresponding representation is nonsingular. (pos-depth 7 ) G = (G 0 = T F 1 /F,F 1 /F ⊂ GU F 1 /F (2) ⊂ G 1 = G) for a quadratic extension F 1 /F . The corresponding representation is nonsingular. (pos-depth 8 ) G = (T (1) F # 1 ⊕F # 1 /F # 1 ⊂ GL 2 (F ) × GSp 0 (F ) ⊂ G) , for a quadratic extension F # 1 /F 1 . Here, G 0 is abelian so dim ρ 0 = 1 and the representation is nonsingular. Note that the trivial representation of the compact unitary group SU(2) is a singular supercuspidal, which is only visible on the level of Vogan packets; it mixes with the Steinberg of SL 2 (F ). 3.2. Reducibility of induced representations. = ± 1 2 , where ν denotes ν = | det()| for GL 2 (F ). The representation I(σ 1 ν 1/2 ⊗ χ) has a unique generic special subrepresentation and a unique irreducible preunitary non-tempered non-generic quotient. For 0 < s < 1/2, all the representations I(σ 1 ν s ⊗ χ) are in the complementary series and s = 1/2 is their end point. (b) Let G = Sp 4 (F ), the representation I(σ) is reducible if and only if σ ∼ = σ (thus ω 2 = 1) and ω = 1. Suppose ω = 1 so that I(σ) is irreducible. Then I(σν s ) is reducible if and only if s = ±1/2. The representation I(σν s ) has a unique generic special subrepresentation and a unique irreducible preunitary non-tempered non-generic quotient. For 0 < s < 1/2, all the representations I(σν s ) are in the complementary series and s = 1/2 is their end point. 
(c) The Plancherel measure µ(s α, σ) is given by the formula (3.2.2) µ(s α) = γ(G/P ) 2 q n(σ 1 ) (1−ω( )q −2s )(1−ω( ) −1 q 2s ) (1−ω( )q −1−2s )(1−ω( ) −1 q −1+2s ) if ω is unramified γ(G/P ) 2 q n(σ 1 )+n(ω) otherwise Here n(σ 1 ) and n(ω) are the conductors of σ 1 and ω, respectively. For a character χ of F × , let e(χ) := log q |χ( )| be the unique real number such that χ = ν e(χ) χ 0 where χ 0 is a unitary character. Lemma 3.2.3 ([ST93, Lem 3.2]). Let χ 1 , χ 2 , and θ be characters of F × . Then χ 1 × χ 2 θ is reducible if and only if χ 1 = ν ±1 , χ 2 = ν ±1 , or χ 1 = ν ±1 χ ±1 2 . We thus have the following theorem: Theorem 3.2.4. A representation of GSp 4 (F ) parabolically induced from a Levi L ⊂ G is not irreducible exactly in the following cases: (1) When L = T , the representation χ 1 × χ 2 θ is reducible when either: (a) if χ 1 × χ 2 θ is regular, i.e., χ 1 = 1, χ 2 = 1, χ 1 = χ ±1 2 : (i) χ 1 = νχ 2 where χ 2 2 = ν −2 , ν −1 , 1 and χ 2 = ν −2 , ν. Then νχ 2 × χ 2 θ has length 2 and in the Grothendieck ring νχ 2 × χ 2 θ = ν 1/2 χ 2 1 GL 2 θ + ν 1/2 χ 2 St GL 2 θ. The Langlands classification is ν 1/2 χ 2 St GL 2 θ =      J(ν 1/2 χ 2 St GL 2 ; θ) e(χ 2 ) > − 1 2 J(ν 1/2 χ 2 St GL 2 θ) e(χ 2 ) = − 1 2 J(ν −1/2 χ −1 2 St GL 2 ; νχ 2 2 θ) e(χ 2 ) < − 1 2 ν 1/2 χ 2 1 GL 2 θ =                    J(νχ 2 , χ 2 ; θ) e(χ 2 ) > 0 J(νχ 2 , χ 2 θ) e(χ 2 ) = 0 J(νχ 2 , χ −1 2 ; χ 2 θ) 0 > e(χ 2 ) ≥ − 1 2 J(χ −1 2 , νχ 2 ; νχ 2 θ) − 1 2 > e(χ 2 ) > −1 J(χ −1 2 ; ν −1 χ −1 2 νχ 2 2 θ) e(χ 2 ) = −1 J(χ −1 2 , ν −1 χ −1 2 ; νχ 2 2 θ) e(χ 2 ) < −1 (ii) χ 2 = ν and χ 1 = 1, ν ±1 , ν ±2 . Then χ 1 ×ν θ has length 2 and in the Grothendieck ring χ 1 × ν θ = χ 1 ν 1/2 θSt GSp 2 + χ 1 ν 1/2 θ1 GSp 2 . Then, χ 1 ν 1/2 θSt GSp 2 =      J(χ 1 ; ν 1/2 θSt GSp 2 ) e(χ 1 ) > 0 J(χ 1 ν 1/2 θSt GSp 2 ) e(χ 1 ) = 0 J(χ −1 1 ; ν 1/2 χ 1 θSt GSp 2 ) e(χ 1 ) < 0 χ 1 ν 1/2 θ1 GSp 2 =      J(χ 1 , ν; θ) e(χ 1 ) > 0 J(ν; χ 1 θ) e(χ 1 ) = 0 J(χ −1 1 , ν; χ 1 θ) e(χ 1 ) < 0 (iii) χ 1 = ν 2 and χ 2 = ν. Then ν 2 × ν θ has length 4, consisting of: ν 3/2 θSt GSp 4 , ν 3/2 θ1 GSp 4 , J(ν 2 ; ν 1/2 θSt GSp 2 ), J(ν 3/2 St GL 2 ; θ) (iv) χ 1 = νχ 2 and χ 2 of order 2. Then νχ 2 × χ 2 θ has length 4, with a unique essentially square-integrable subquotient denoted by δ([χ 2 , νχ 2 ], θ), as well as J(ν 1/2 χ 2 St GL 2 ; θ), J(ν 1/2 χ 2 St GL 2 ; χ 2 θ), J(νχ 2 ; χ 2 θ). (b) if χ 1 × χ 2 θ is not regular: (i) χ 1 = ν, χ 2 = 1 then ν × 1 θ has length 4 consisting of essentially tempered representations τ (S, θ) and τ (T, θ) such that 1 ν 1/2 θSt = τ (S, θ) + τ (T, θ), as well as J(ν; 1 F × θ) and J(ν 1/2 St GL 2 ; θ), where 1 θ1 GSp 2 = J(ν; 1 θ) + J(ν 1/2 St; θ) (ii) χ 1 = χ 2 = ν then ν × ν θ has length 2 consisting of ν ν 1/2 θ1 GSp 2 = J(ν; ν 1/2 θSt GSp 2 ) ν ν 1/2 θSt GSp 2 = J(ν, ν; θ). (iii) χ 1 = νχ 2 and χ 2 1 = ν, then νχ 2 × χ 2 θ has length 2 consisting of ν 1/2 χ 2 1 GL 2 θ, ν 1/2 χ 2 St GL 2 θ. Here, ν 1/2 χ 2 St GL 2 θ is tempered and ν 1/2 χ 2 1 GL 2 θ = J(νχ 2 , νχ 2 ; χ 2 θ). (2) When L = GL 2 ×GSp 0 , the representation ν β ρ χ, where β ∈ R, ρ is a unitary supercuspidal of GL 2 , and χ : F × → C × is reducible if and only if β = ±1/2 and ρ = ρ ∨ and ω ρ = 1. Moreover, ν 1/2 ρ χ has a unique generic special sub-representation and a unique irreducible preunitary nontempered non-generic quotient. 
(3) When L = GL 1 × GSp 2 , the representation χ ρ, where χ : F × → C × and ρ is a supercuspidal representation of GSp 2 , is reducible in the following cases: (a) χ = 1 F × , in which case 1 F × ρ splits into a sum of two tempered irreducible subrepresentations which are not equivalent. (b) χ = ν ±1 ξ o where ξ o : F × → C × is a character of order two such that ξ o ρ ∼ = ρ. Then νξ o ρ has a unique irreducible sub-representation which is square-integrable. Proof. Case (1) is from [ST93,§3] and Cases (2) and (3) are from [ST93,§4]. More precisely, Case 1(a)i is [ST93, Lemma 3.3], Case 1(a)ii is [ST93, Lemma 3.4], Case 1(a)iii is [ST93, Lemma 3.5], Case 1(a)iv is [ST93, Lemma 3.6], Case 1(b)i is [ST93, Lemma 3.8], Case 1(b)ii is [ST93, Lemma 3.9], Case 1(b)iii is [ST93, Lemma 3.7]. Let ξ have order 2 and write ξ 1 = T 1 ξ + T 2 ξ as a sum of irreducible representations of Sp 2 . Moreover, for any supercuspidal representation σ of SL 2 (F ), let F × σ := {a ∈ F × : σ diag(a,1) ∼ = σ}, which is really a subgroup of the finite group F × /(F × ) 2 . The analogue of Theorem 3.2.4 for Sp 4 is: Theorem 3.2.5. (1) When L = T , the representation χ 1 × χ 2 1 is reducible when: (a) The representations coming from irreducibles of GSp 4 , i.e., χ 1 = ν ±1 , χ 2 = ν ±1 , and χ 1 = ν ±1 χ 2 . Then χ 1 × χ 2 1 is reducible exactly when χ 1 or χ 2 has order 2. We may suppose without loss that χ 2 has order 2. (i) If χ 1 = χ 2 or χ 1 is not of order 2 then χ 1 T 1 χ 2 and χ 1 T 2 χ 2 are irreducible (ii) If χ 1 = χ 2 then both χ 1 T 1 χ 1 and χ 1 T 2 χ 1 have length two. (b) if χ 1 × χ 2 1 is regular, i.e., χ 1 = 1, χ 2 = 1, χ 1 = χ ±1 2 : (i) χ 1 = νχ 2 where χ 2 2 = ν −2 , ν −1 , 1 and χ 2 = ν −2 , ν. Then νχ 2 × χ 2 1 = ν 1/2 χ 2 1 GL 2 1 + ν 1/2 χ 2 St GL 2 1 has length two. (ii) χ 2 = ν and χ 1 = 1, ν ±1 , ν ±2 . Then χ 1 × ν 1 = χ 1 ν 1/2 St Sp 2 + χ 1 ν 1/2 1 Sp 2 has length two. (iii) χ 1 = ν 2 and χ 2 = ν. Then ν 2 × ν 1 has length 4, consisting of: ν 3/2 St Sp 4 , ν 3/2 1 Sp 4 , J(ν 2 ; ν 1/2 St Sp 2 ), J(ν 3/2 St GL 2 ; 1) (iv) χ 1 = νχ 2 and χ 2 of order 2. Then νχ 2 × χ 2 1 = ν 1/2 χ 2 1 GL 2 1 + ν 1/2 χ 2 St GL 2 1 where ν 1/2 χ 2 1 GL 2 1 and ν 1/2 χ 2 St GL 2 1 each have length three. Otherwise, there are no extra reducibilities. (c) if χ 1 × χ 2 1 is not regular: (i) χ 1 = ν, χ 2 = 1 then ν × 1 1 has length 4 consisting of essentially tempered representations τ and τ , as well as J(ν; 1 F × 1 Sp 2 ), J(ν 1/2 St GL 2 ; 1) (ii) χ 1 = χ 2 = ν then ν × ν 1 = ν ν 1/2 1 Sp 2 + ν ν 1/2 St Sp 2 . in the Grothendieck ring, where both ν ν 1/2 1 Sp 2 and ν ν 1/2 St Sp 2 are irreducible. (iii) χ 1 = νχ 2 and χ 2 1 = ν, then νχ 2 × χ 2 1 has length 2 consisting of ν 1/2 χ 2 1 GL 2 1, ν 1/2 χ 2 St GL 2 1. Otherwise, there are no extra reducibilities. (2) When L = GL 2 × Sp 0 = GL 2 , the representation ν β ρ 1, where β ∈ R and ρ is a unitary supercuspidal of GL 2 is reducible if and only if ρ is self-dual and: (a) β = ±1/2 and ω ρ = 1 F × ; or (b) β = 0 and ω ρ = 1 F × ,. (3) When L = GL 1 × Sp 2 , the representation ν β χ ρ, where χ is a unitary character and β ∈ R and ρ is a supercuspidal representation of Sp 2 , is reducible in the following cases: (a) χ = 1 F × and β = 0, (b) χ has order two and nontrivial on F × σ and β = 0. (c) χ has order two and trivial on F × σ and β = ±1. Proof. See [ST93, Section 5]. 
The Galois side We are concerned with L-parameters of G = Sp 4 , GSp 4 , i.e, homomorphisms ϕ : W F × SL 2 (C) → G ∨ such that ϕ(w) is semisimple for any w ∈ W F , and the restriction ϕ| SL 2 (C) is a morphism of complex algebraic groups. Lemma 4.0.1. If G • ϕ is abelian, then members of the L-packet for ϕ are representations with support Z G ∨ (Z • Gϕ ) ∨ . Proof. Let ρ ∈ Irr(S ϕ ). Since G • ϕ is abelian, the cuspidal support L ϕ of (u ϕ , ρ), which is a quasi-Levi of G ϕ in the sense of [AMS18, pg 5], must be Z Gϕ (G • ϕ ). Thus the cuspidal support of (ϕ, ρ) must be Z G ∨ (Z • L ϕ ) = Z G ∨ (G • ϕ ) . By Property 8.1.19 the members of the L-packet of ϕ has support Z G ∨ (Z • Gϕ ) ∨ . Let G = Sp 4 (F ) and ϕ : W F × SL 2 → G ∨ = SO 5 (C) be an L-parameter. Consider ϕ| W F as a 5-dimensional representation of W F with an invariant symmetric inner product. We use the following notation from §8.1: G ϕ = Z SO 5 (C) (ϕ(W F )) and S ϕ = π 0 (Z SO 5 (C) (ϕ(W F ))). The cuspidal support map Sc : Φ e (G) → L∈L(G) Φ e,cusp (L)/W G (L) is defined via the Springer correspondence 3 for G ϕ , so we conduct case-work on the shape of the L-parameter ϕ. There are the following cases, depending on how the W F -representation U decomposes (parameterized by partitions of 5). (1) U it is irreducible, so G ϕ = 1 and S ϕ = 1. This is a supercuspidal singleton packet. (2) U = V ⊕ χ where dim V = 4 with a symmetric form V ⊗ V → C and χ 2 = 1. Here G ϕ = µ 2 and S ϕ = µ 2 . Here, L ϕ = µ 2 so Z G ∨ (Z • Lϕ ) = G ∨ . Thus this is a purely supercuspidal packet of size 2. (3) U = V 1 ⊕ V 2 where dim V 1 = 3 and dim V 2 = 2, both self-dual with invariant symmetric forms. Here G ϕ = µ 2 and S ϕ = µ 2 . Again, this is a purely supercuspidal packet of size 2. (4) U = V ⊕ χ 1 ⊕ χ 2 where dim V = 3 and V is self-dual with an invariant symmetric form. Either: (a) χ 1 = χ 2 so χ 2 1 = χ 2 2 = 1 since χ 1 ⊕ χ 2 must be self-dual. Now G ϕ = S(µ 2 × O 2 (C)) ∼ = O 2 (C), since an automorphism of U must act by scalars on V and by an orthogonal transformation on χ 1 ⊕ χ 2 . Since G ϕ has no unipotents, ϕ| SL 2 (C) is trivial and S ϕ = µ 2 . Here L ϕ = 1 × SO 2 (C) so the cuspidal support is Z G ∨ (Z • Lϕ ) = Z G ∨ (1 × SO 2 (C)) = GL 1 (C) × SO 3 (C). Since supercuspidal L-parameters of SO 3 (C) ∼ = PGL 2 (C) have trivial unipotent, by Property 8.1.5 (and the observation that ϕ| SL ), we have ϕ = λ ϕ = ι GL 1 ×SO 3 • λ ϕv = ι GL 1 ×SO 3 • ϕ v . Thus the packet consists of sub-quotients of the parabolic induction χ 1 π V where π V is the representation of Sp 2 (F ) corresponding to V under the LLC for Sp 2 (F ) ∼ = SL 2 (F ) (this is well-defined, since V corresponds to a singleton packet). (b) χ 1 = χ 2 and χ 2 1 = χ 2 2 = 1 then G ϕ = µ 2 2 , so ϕ| SL 2 (C) is trivial and S ϕ = µ 2 2 . By Lemma 4.0.1, this is a purely supercuspidal packet of size 4. (c) χ 1 = χ 2 and χ 1 = χ −1 2 then χ 1 ⊕ χ 2 carries the symmetric form (a 1 , b 1 ), (a 2 , b 2 ) := a 1 b 2 + a 2 b 1 so G ϕ = C × and ϕ| SL 2 (C) = 1 and S ϕ = 1. Again by Lemma 4.0.1 the support of the unique member of the L-packet is Z SO 5 (G • ϕ ) ∨ = F × × Sp 2 (F ). By the same argument as in case 4a, the member of the L-packet is χ 1 π V . (5) U = V 1 ⊕ V 2 ⊕ χ where dim V 1 = dim V 2 = 2, and χ 2 = 1. Either: (a) V 1 ∼ = V 2 and V 1 has an invariant symmetric form so G ϕ = C × and S ϕ = 1. By Lemma 4.0.1, this is a purely supercuspidal singleton packet. w 1 ). Then χ = 1 and G ϕ = Sp 2 (C). The Springer correspondence for Sp 2 ∼ = SL 2 is shown on Table 5b. 
Thus the Levi subgroup L ϕ ⊂ G ϕ is either T or Sp 2 (C) and • When ϕ| SL 2 = 1 then S ϕ = 1 so the L-packet is {π V 1}. Here, since V is an L-parameter into SL 2 , we have ω ρ = 1, so by Theorem 3.2.5 the representation π V 1 is irreducible. • When ϕ| SL 2 is nontrivial, then S ϕ = µ 2 so the L-packet has size 2. This packet is determined in Section 5. Concretely, the second L-parameter can be considered the W F ×SL 2 (C)-representation U = M 2 (C) ⊕ C where W F acts on M 2 (C) by left multiplication via the representation V 1 , and SL 2 (C) acts on M 2 (C) by right multiplication. (c) V 1 V 2 and both have an invariant symmetric form, then χ ∼ = det(V 1 ) ⊗ det(V 2 ). Here G ϕ = µ 2 2 and S ϕ = µ 2 2 . By Lemma 4.0.1 this is a purely supercuspidal packet of size four. (d) V 1 V 2 and V 1 ∼ = V ∨ 2 then G ϕ = C × and S ϕ = 1. By Lemma 4.0.1 the member of the singleton L-packet is π V 1, supported in GL 2 (F ) × Sp 0 (F ). The representation π V 1 is irreducible by Theorem 3.2.5(2), since π V is not self-dual. (b) V 1 ∼ = V 2 and V 1 has an invariant symplectic form ω then V 1 ⊕ V 1 carries the symmetric form v 1 ⊕ v 2 , w 1 ⊕ w 2 := ω(v 1 , w 2 ) − ω(v 2 ,Z G ∨ (Z • Lϕ ) is either GL 2 (C) × SO 1 (C) or G ∨ ,(6) U = V ⊕ χ 1 ⊕ χ 2 ⊕ χ 3 where dim V = 2 with V self-dual with an invariant symmetric form V ⊗ V → C. Either: Unipotent pairs Representations of W = µ 2 ([1 2 ], 1) 1 ([2], 1) sgn Table 3. Springer Correspondence for SO 3 (C) (a) χ 1 = χ 2 = χ 3 with χ 2 1 = 1 then G ϕ = SO 3 (C) × µ 2 , and χ 1 = det(V ). The Springer correspondence for SO 3 (C) ∼ = PGL 2 (C) is given in Table 6a, where all local systems are supported in the torus. Thus L ϕ = µ 2 × C × ⊂ G ϕ . Now Z G ∨ (Z • Lϕ ) = C × × SO 3 (C) and the members of the L-packet are supported in GL 1 (F ) × Sp 2 (F ). Explicitly, the restriction ϕ| SL 2 (C) is either: (i) trivial, so S ϕ = µ 2 . The W F -representation V ⊕ χ 1 can be viewed as an Lparameter W F → SO 3 (C), which then corresponds to representations π 1 , π 2 of Sp 2 (F ) under LLC for Sp 2 (F ) (the packet has size 2). The L-packet is { χ 1 π 1 , χ 1 π 2 }, which are irreducible by Theorem 3.2.5(3). (ii) nontrivial. Then S ϕ = µ 2 , and by Property 8.1.5 the L-packet is {ν χ 1 π 1 , ν χ 1 π 2 }, which are irreducible by Theorem 3.2.5(3). (b) χ 1 = χ 2 = χ 3 then χ 2 1 = χ 2 3 = 1 and χ 3 = det(V ) and G ϕ = µ 2 × S(O 2 (C) × µ 2 ) with S ϕ = µ 2 × µ 2 . By Lemma 4.0.1 the members of the size four L-packet are supported in GL 1 (F ) × Sp 2 (F ). By the LLC for Sp 2 (F ) the W F -representation V ⊕ χ 3 viewed as an L-parameter W F → SO 3 (C) gives an L-packet {π 1 , π 2 }. Now, each of the representations χ 1 π 1 and χ 1 π 2 have length two by Theorem 3.2.5(3), so they decompose into, say τ 11 + τ 12 and τ 21 + τ 22 , respectively. Then the L-packet for ϕ is {τ 11 , τ 12 , τ 21 , τ 22 }. (c) χ 1 = χ 2 = χ 3 and χ 2 1 = χ 2 2 = χ 2 3 = 1 then G ϕ = µ 2 × S(µ 2 × µ 2 × µ 2 ) and S ϕ ∼ = µ 3 2 . This is a purely supercuspidal packet by Lemma 4.0.1. (d) χ 1 = χ 2 = χ 3 and χ 2 1 = 1 and χ 2 = χ −1 3 but χ 2 2 = 1. Here G ϕ = µ 2 × C × and S ϕ ∼ = µ 2 . The members of the L-packet are supported in GL 1 (F ) × Sp 2 (F ). Letting {π 1 , π 2 } be the L-packet under the LLC for Sp 2 (F ) corresponding to the W F -representation V ⊕χ 1 viewed as a L-parameter W F → SO 3 (C), the L-packet for ϕ is {χ 2 π 1 , χ 2 π 2 }, which is irreducible by Theorem 3.2.5(3). (7) U = 1 ⊕ χ 1 ⊕ χ −1 1 ⊕ χ 2 ⊕ χ −1 2 . (a) χ 1 = χ 2 = 1 then G ϕ = SO 5 (C). 
The Springer correspondence of G ϕ is [Lus84b, §10.6]: Unipotent pairs Representations of W = µ 2 2 S 2 ([5], 1) (∅, [1 2 ]) ([3, 1 2 ], 1) ([1], [1]) ([3, 1 2 ], −1) (∅, [2]) ([2 2 , 1], 1) ([1 2 ], ∅) ([1 5 ], 1) ([2] , ∅) where we identify representations of the semidirect product (Z/2) 2 S 2 via Lemma 4.0.2 (see also, [CM84, Theorem 10.1.2]). All of the representations are principal series. By Property 8.1.5, the cuspidal support of the L-parameter is ϕ v (w) = λ ϕv (w) = λ ϕ (w) = ϕ(diag( w 1/2 , w −1/2 )). By Remark 2.1.3 all nilpotent orbits in G ∨ are induced from some regular nilpotent orbit in a Levi subgroup L ∨ ⊂ G ∨ . Thus ϕ( w 1/2 , w −1/2 ) is dual to the modulus character δ B L \L . Thus, by Remark 2.2.2, we have ϕ v = χ −1 1 δ B L \L . Thus the L-packet contains an irreducible subquotient of i G P (St L ). (i) If ϕ| SL 2 is [4], then the L-packet member is a subquotient of i G B (δ B\G ), which is square-integrable modulo center, by Property 8.1.20. Thus the L-packet is {St GSp 4 }. (ii) If ϕ| SL 2 is [2 2 ] then S ϕ = µ 2 , then the L-packet members are irreducible constituents of 1 St GL 2 . This is case 1(c)i and the L-packet is {τ ( (1 ⊗ 1, 1) = 1 W (00, −1) S, ν −1/2 χ −1 1 ), τ (T, ν −1/2 χ −1 1 )}. (iii) If ϕ| SL 2 is [2, 1 2 ]. The L-packet members is St GL 2 1, which is case 1(c)iii. (iv) If ϕ| SL 2 is trivial, then the L-packet is {1 × 1 1}, (1 ⊗ 1, sgn) (0e, 1) = (e0, 1) (1 ⊗ sgn, 1) (ee, (1, 1)) (sgn ⊗ sgn, 1) = sgn W (ee, (1, −1)) (sgn ⊗ sgn, sgn) (ee, (−1, 1)) cusp (ee, (−1, −1)) cusp Here on the right 0 and e denote the unipotent classes of SL 2 , which induce unipotent classes on SO 4 = (SL 2 × SL 2 )/µ 2 , and on the left are representations of the Weyl group W = µ 2 2 µ 2 parameterized via Lemma 4.0.2. Thus L ϕ ⊂ G • ϕ = SO 4 (C) is either the maximal torus or SO 4 (C). When L ϕ = SO 4 (C), we have Z G ∨ (Z • Lϕ ) = G ∨ , which corresponds to a supercuspidal member in the Lpacket for ϕ. When L ϕ is a maximal torus, we have Z G ∨ (Z • Lϕ ) is also a torus, which gives rise to a principal series representation in the L-packet for ϕ. Moreover, since the L-parameter is bounded, by Property 8.1.20 the representations are tempered. Either ϕ| SL 2 is: (i) trivial. Here, S ϕ ∼ = µ 2 . The L-packet consists of irreducible constituents of χ 1 × χ 1 1. This is case 1(a)i and the L-packet is { χ 1 T 1 χ 1 , χ 1 T 2 χ 1 }. (ii) the embedding into the first copy of SL 2 (C). Here, S ϕ = 1 and the L-packet consists of an irreducible constituent of ν 1/2 χ 1 × ν −1/2 χ 1 1, which is Theorem 3.2.5(1b). Since the member is tempered, the packet is {χ 1 St GL 2 1}. (iii) the diagonal embedding of SL 2 (C). Here, S ϕ ∼ = µ 2 2 . Concretely, the L-parameter ϕ may be viewed as the W F × SL 2 (C)-representation U = M 2 (C) ⊕ C where W F acts on M 2 (C) by χ 1 and SL 2 (C) acts on M 2 (C) by conjugation. The symmetric form is the trace pairing on M 2 (C). Thus in case 7 (b)iii the members of the size four L-packet consists of two supercuspidals and two principal series. The L-packet is determined in Section 5. (c) χ 2 = 1 and χ 1 is of order 2. We have G ϕ = S(O 3 × O 2 ) ∼ = SO 3 × O 2 . Since both the Springer correspondence for SO 3 and O 2 do not have any nontrivial cuspidal supports (by Table 6a), the members of L-packets are principal series. Moreover, again the L-packet is bounded, so by Property 8.1.20 the representations are tempered. (i) if ϕ| SL 2 = 1, then S ϕ = µ 2 and the packet consists of irreducible constituents of χ 1 × 1 1. 
This is case 1(a)i, so the L-packet is {1 T 1 χ 1 , 1 T 2 χ 1 }. (ii) if ϕ| SL 2 is non-trivial, then S ϕ = µ 2 and the packet consists of irreducible constituents of χ 1 ×ν 1/2 1. This is case 1(a)i and the L-packet is {ν 1/2 T 1 χ 1 , ν 1/2 T 2 χ 1 }. (d) χ 2 = 1 and χ 2 1 = 1. Here G ϕ = SO 3 (C) × SO 2 (C). By Table 6a the unipotent pairs are all supported in the torus, so the L-packets are singletons consisting of a principal series. The restriction ϕ| SL 2 (C) is either: (i) trivial, then the packet is {χ 1 × 1 1}, where χ 1 × 1 1 is irreducible by Theorem 3.2.5(1a). (ii) nontrivial, then the packet is {χ 1 × ν 1/2 1}, where χ 1 × ν 1/2 1 is irreducible by Theorem 3.2.5(1a). (e) χ 1 = χ 2 are distinct order 2 characters. Here G ϕ = S(O 2 (C) × O 2 (C) × µ 2 ) ∼ = O 2 (C) 2 . Here S ϕ = µ 2 2 and by Lemma 4.0.1 the L-packet members are principal series. The L-packet consists of the irreducible constituents of χ 1 × χ 2 1, which has length 4 by Theorem 3.2.5(1(a)ii). (f) χ 1 = χ −1 2 = 1 and χ 2 1 = 1. Here G ϕ = GL 2 (C) and S ϕ = 1. Here L ϕ ⊂ GL 2 (C) is the maximal torus, so the L-packet consists of principal series representations. (i) if ϕ| SL 2 is trivial, then the L-packet is {χ 1 × χ 1 1}, where irreducibility is by Theorem 3.2.5(1a). (ii) if ϕ| SL 2 is nontrivial, then the member is a irreducible constituent of ν 1/2 χ 1 × ν −1/2 χ 1 1. If χ 1 = ν ±3/2 and χ 2 1 = ν ±1 then the L-packet is {χ 1 St GL 2 1}. Otherwise, if χ 1 = ν ±3/2 then ν ±3/2 St GL 2 1 has length two, since we are in case 1(b)iii. By Property 8.1.3 the L-packet is {J(ν ±3/2 St GL 2 ; 1)}. If χ 1 = ν ±1/2 then ν ±1/2 St GL 2 1 has length two, since we are in case 1(c)i. If χ 1 = ν ±1/2 ξ 1 for some order 2 character ξ 1 then ν ±1/2 ξ 1 St GL 2 1 has length three, and the L-packet is {J(ν 1/2 ξ 1 St GL 2 , 1)}. (g) If χ ±1 1 and χ ±1 2 are all distinct, then G ϕ = C × × C × and ϕ| SL 2 (C) = 1 and S ϕ = 1. By Lemma 4.0.1 the L-packet is a singleton {χ 1 × χ 2 1}, which is reducible by Theorem 3.2.5(1a). In particular, the only mixed packets occur in cases 5b and 7(b)iii. We also use the following well-known fact: where H\A * denotes the set of H-orbits in A * = hom(A, C × ) and H χ is the stabilizer of χ. A pair (χ, ρ) corresponds to the irreducible G-representation Ind G A H χ ( χ ⊗ ρ), where χ(ah) := χ(a) for a ∈ A and h ∈ H χ . Now let G = GSp 4 (F ) and ϕ : W F × SL 2 (C) → G ∨ ∼ = GSp 4 (C) an L-parameter. Now ϕ| W F can be considered a 4-dimensional W F -representation U with a invariant symplectic form ω : U ⊗U → ξ, where ξ is the similitude character. Now U decomposes into irreducible representations according to partitions [4], [2 2 ], [2, 1 2 ], or [1 4 ] (the partition [3, 1] is impossible since the attached bilinear form is necessarily symmetric). Then, G ϕ is the group of W F -representation endomorphisms g : U → U such that the following diagram commutes for some constant c ∈ C × (the similitude): U ⊗ U ξ U ⊗ U ξ. ω g⊗g c ω Thus there are the following cases: (1) U is irreducible with U ∼ = ξU ∨ and the unique pairing U ⊗ U → ξ is anti-symmetric. Here G ϕ = C × and S ϕ = 1 so the packet is a singleton supercuspidal. (2) U = V 1 ⊕ V 2 where V 1 and V 2 are irreducible of dimension 2. Either: (a) V 1 ∼ = V 2 , with an invariant anti-symmetric form ω : V 1 ⊗ V 1 → ξ. Here ξ = det(V 1 ). Then U carries the symplectic form ω (v 1 ⊕ w 1 , v 2 ⊕ w 2 ) = ω(v 1 , w 2 ) + ω(w 1 , v 2 ). Thus, G ϕ = GO 2 (C) ∼ = (C × ) 2 µ 2 , embedded as aI 2 bI 2 cI 2 dI 2 ∈ GSp 4 (C) and S ϕ = µ 2 . 
By Remark 4.0.1, the L-parameter is supported in GL 2 (C) × GSp 0 (C), so the representations are supported in GL 1 (F ) × GSp 2 (F ). The cuspidal support of ϕ is V 1 and ξ viewed as an L-parameter W F → GL 2 (C) × GSp 0 (C). By Remark 2.2.2 to the representation ξ −1 det(π V 1 ) π ∨ V 1 = 1 π ∨ V 1 of GL 1 (F ) × GSp 2 (F ), which is the cuspidal support of ϕ. Here, π V 1 is the representation of GSp 2 (F ) corresponding to V 1 under LLC for GSp 2 (F ). Thus the members of the L-packet are the two irreducible constituents of 1 π ∨ V 1 (this is case 3a). (b) V 1 ∼ = V 2 , with an invariant symmetric form −, − : V 1 ⊗ V 1 → ξ. Here, ξ = det(V 1 ). Then ω(v 1 ⊕ w 1 , v 2 ⊕ w 2 ) = v 1 , w 2 − v 2 , w 1 . Thus, G ϕ = GL 2 (C) embedded as diag(g, J T g −1 J −1 ) ∈ GSp 4 (C) and S ϕ = 1. Letting T ⊂ G ϕ be a maximal torus the (trivially) enhanced L-parameters are supported in Z G ∨ (T ) = GL 1 C×GSp 2 C, so the members of packets are supported in GL 2 F ×GSp 0 F . (i) If ϕ(SL 2 ) = 1 then the cuspidal support of ϕ is ξ and V viewed as a L-parameter W F → GL 1 C × GSp 2 C. By Remark 2.2.2, the member of the L-packet is an irreducible constituent of ( ξ ⊗ π ∨ V 1 ) ξ −1 . We are in case 2. Since V 1 ∼ = ξV ∨ 1 we have π V 1 ∼ = ξ ⊗ π ∨ V 1 . Thus if ξ = ν β ξ for a unitary character ξ and β ∈ R then π V 1 ξ −1 is irreducible as long as β = ±1. In this case the L-packet is {π V 1 ξ −1 }. Otherwise since the L-parameter ϕ is not (essentially) bounded the singleton L-packet consists of the unique essentially tempered subquotient of π V 1 ξ −1 . (ii) If ϕ| SL 2 is nontrivial then the cuspidal support of ϕ is νξ and ν 1/2 V viewed as a Lparameter W F → GL 1 ×GSp 2 (C). By Remark 2.2.2, the member of the L-packet is an irreducible constituent of (ν 1/2 ξ⊗π ∨ V 1 ) ν −1 ξ −1 ∼ = ν 1/2 π V 1 ν −1 ξ −1 . Letting ξ = ν β ξ as above, if β / ∈ {0, −2} then the singleton L-packet consists of the unique essentially tempered subquotient of ν 1/2 π V 1 ν −1 ξ −1 , by Property 8.1.20. (c) V 1 V 2 then V 1 ∼ = ξ ⊗ V ∨ 2 and so G ϕ = C × × C × and S ϕ = 1. Here, ξ = det(V 1 ). By Lemma 4.0.1 the L-parameter is supported in GL 2 (C) × GSp 0 (C), given by (V 1 , ξ) viewed as an L-parameter W F → GL 2 (C) × GSp 0 (C). Thus by Remark 2.2.2 the Lpacket member is an irreducible constituent of 1 π ∨ V 1 , where π V 1 is the supercuspidal representation of GL 2 (F ) corresponding to V 1 under the LLC for GL 2 (F ). (3) U = V ⊕ χ 1 ⊕ χ 2 where V is irreducible of dimension 2 and χ 1 , χ 2 are characters of W F . There is an anti-symmetric pairing ω : V ⊗ V → ξ, where ξ = det(V ). Moreover, χ 1 χ 2 = ξ and there is an anti-symmetric pairing ω on χ 1 ⊕χ 2 given by ω (a 1 ⊕b 1 , a 2 ⊕b 2 ) = a 1 b 2 −a 2 b 1 . Either: (a) χ 1 = χ 2 , then G ϕ = {(z, g) ∈ C × × GL 2 (C) : z 2 = det(g)} ∼ = C × × SL 2 (C). By Table 5b there are two cases: (i) ϕ| SL 2 = 1, in which case the unipotent pair is supported in C × × T . Then S ϕ = 1 and the L-parameter is supported in GL 1 (C)×GSp 2 (C). The support is V and χ 1 viewed as an L-parameter W F → GL 1 (C)×GSp 2 (C). Thus by Remark 2.2.2, the packet is {( χ 1 ⊗π ∨ V ) χ −1 1 }. Here, ( χ 1 ⊗π ∨ V ) χ −1 1 is irreducible by Theorem 3.2.4, since det(χ 1 ⊗ V ∨ ) = 1 implies the representation χ −1 1 is unitary. (ii) ϕ| SL 2 is regular unipotent, in which case the unipotent pair is supported in either C × × T or C × × SL 2 (C). Thus the L-packet is of size 2, with an intermediate series supported in GL 2 (F ) × GSp 0 (F ) and a supercuspidal representation. 
This packet is determined in Section 5. (b) χ 1 = χ 2 and χ 1 χ 2 = ξ then G ϕ = {(z, g) ∈ C × × T : z 2 = det(g)} ∼ = C × × C × , embedded as     a z z b     ∈ GSp 4 (C) where ab = z 2 . Here S ϕ = 1 and the enhanced L-parameter is supported in GL 1 (C) × GSp 2 (C), given by χ 1 and V viewed as an Lparameter W F → GL 1 (C) × GSp 2 (C). Thus the L-packet member is an irreducible constituent of ( χ 1 ⊗ π ∨ V ) χ −1 1 . We are in case 2 of Theorem 3.2.4. Let β = e(χ 1 χ −1 2 ) := log q (χ 1 χ −1 2 ( )). Then ( χ 1 ⊗ π ∨ V ) χ −1 1 is irreducible unless β ∈ {±1}. If β ∈ {±1} then the L-packet member is the unique essentially non-tempered subquotient of ( χ 1 ⊗ π ∨ V ) χ −1 1 , since the L-parameter ϕ is not bounded. (4) U = χ 1 ⊕ χ 2 ⊕ χ 3 ⊕ χ 4 where χ i are characters of W F . Either: (a) χ 1 = χ 2 = χ 3 = χ 4 and χ 2 1 = ξ, then G ϕ = G ∨ . The Springer correspondence of G ∨ = GSp 4 (C) is (by the classification in Remark 2. ). Since all the unipotent pairs are supported in the torus, all representations here are principal series. By Property 8.1.5, the cuspidal support of the L-parameter is ϕ v (w) = λ ϕv (w) = λ ϕ (w) = χ 1 (w)ϕ(diag( w 1/2 , w −1/2 )). By Remark 2.1.3 all nilpotent orbits in G ∨ are induced from some regular nilpotent orbit in a Levi subgroup L ∨ ⊂ G ∨ . Thus ϕ( w 1/2 , w −1/2 ) is dual to the modulus character δ B L \L . Thus, by Remark 2.2.2, we have ϕ v = χ −1 1 δ B L \L . Thus the L-packet contains an irreducible subquotient of χ −1 1 i G P (St L ). (i) If ϕ| SL 2 is [4], then the L-packet member is a subquotient of χ −1 1 i G B (δ B\G ) , which is square-integrable modulo center, by Property 8.1.20. Thus the L-packet is given by (χ 1 , χ 3 , χ 3 , χ 1 ). The Springer correspondence for G ϕ is: Unipotent pairs Representations of W = µ 2 2 (00, 1) 1 ⊗ 1 (0e, 1) 1 ⊗ sgn (e0, 1) sgn ⊗1 (ee, 1) sgn ⊗ sgn (ee, −1) cuspidal In all cases the image ϕ(W F ) is compact modulo center, so by Property 8.1.20 the representations in the L-packets are essentially tempered. Either: { χ −1 1 St GSp 4 }. (ii) If ϕ| SL 2 is [2 2 ] then S ϕ = µ 2 , then the L-packet members are irreducible con- stiuents of 1 χ −1 1 St GL 2 . This is case 1(b)i and the L-packet is {τ (S, ν −1/2 χ −1 1 ), τ (T, ν −1/2 χ −1 1 )}. (iii) If ϕ| SL 2 is [2, 1 2 ]. The L-packet members is St GL 2 χ −1 1 , which is case 1(a)i. (iv) If ϕ| SL 2 is trivial, then the L-packet is {1 × 1 χ −1 1 }, where 1 × 1 χ −1 1 is irreducible by Lemma 3.2.3. (b) χ 1 = χ 2 = χ 3 = χ 4 and χ 2 1 = χ 2 3 = ξ, then G ϕ = {(g, h) ∈ GSp 2 × GSp 2 : µ(g) = µ(h)}. Thus W F → T ∨ ⊂ GSp 4 (C) is (i) If ϕ(SL 2 (C)) = 1, then S ϕ = 1. The L-parameter is supported in χ 1 ⊗χ 3 ⊗ξ, so by Remark 2.2.2, the member is an irreducible constituent of χ −1 1 χ 3 × χ −1 1 χ 3 χ −1 1 . By [ST93, Lem 3.2] this is irreducible. (ii) If ϕ| SL 2 (C) is the embedding to the first factor of G ϕ , then S ϕ = 1 and the Lparameter is supported in χ 3 ⊗ ν 1/2 χ 1 ⊗ ξ. Thus by Remark 2.2.2 the member is an irreducible constituent of ν 1/2 χ 1 χ −1 3 × ν −1/2 χ 1 χ −1 3 χ −1 1 . This is case 1(b)iii, so the L-packet is { χ 1 St GL 2 χ −1 1 }. (iii) If ϕ| SL 2 (C) is the embedding to the second factor of G ϕ , swap the role of χ 1 and χ 3 and we are in the case above. (iv) If ϕ| SL 2 (C) is regular we have S ϕ = µ 2 , and the corresponding unipotent pairs have support in either T ∨ or G ϕ . Thus the packet is of size 2 consisting of a principal series and a supercuspidal. The L-packet is determined in Section 5. 
(c) χ 1 = χ 2 = χ 3 = χ 4 and χ 1 χ 3 = ξ, then G ϕ is the Levi GL 2 (C) × GSp 0 (C). Here S ϕ = 1 and the L-packet members are principal series, since the unipotent pairs are supported in T ∨ . Moreover, since the L-parameter factors through the Levi GL 2 (C) × GSp 0 (C), the L-packet is not discrete, and hence by Property 8.1.20 the members are not squareintegrable modulo center. (i) if ϕ(SL 2 ) = 1, then the L-parameter has support χ 1 ⊗ χ 1 ⊗ ξ. Thus L-packet is { χ −1 3 χ 1 × 1 χ −1 1 }, where irreducibility is by Lemma 3.2.3. (ii) if ϕ| SL 2 is nontrivial, then the L-parameter has support ν 1/2 χ 1 ⊗ ν −1/2 χ 1 ⊗ ξ, so the L-packet member is an irreducible constituent of χ −1 3 χ 1 × ν ν −1/2 χ −1 1 . If χ −1 3 χ 1 / ∈ {1, ν ±1 , ν ±2 } then this is case 1(a)ii and the L-packet is { χ −1 3 χ 1 χ −1 1 St GSp 2 } by Property 8.1.3. If χ −1 3 χ 1 = ν ±1 then we are in case 1 (b)ii and the L-packet must be {ν 3/2 St GL 2 ; ν −1/2 χ −1 1 }. Otherwise, χ −1 3 χ 1 = ν ±2 and we are in case 1(a)iii. By Property 8.1.3 the Lpacket is {J(ν 2 ; χ −1 1 St GSp 2 )}. (d) χ 1 = χ 2 = χ 3 = χ 4 and χ 2 1 = χ 3 χ 4 = ξ then the L-parameter ϕ| W F : W F → T ∨ → G ∨ is given by (χ 3 , χ 1 I 2 , χ 4 ). Here G ϕ is the Levi GL 1 (C)×GSp 2 (C), so by Property 8.1.20 the L-packet members are not square-integrable modulo center. Here S ϕ = 1 and the L-packet members are principal series. (i) if ϕ(SL 2 ) = 1 then the L-parameter has support χ 1 ⊗ χ 3 ⊗ ξ. The L-packet consists of a subquotient of χ −1 1 χ 3 × χ 1 χ −1 3 χ −1 1 . There are several cases: • If ( χ −1 1 χ 3 ) 2 = ν ±1 , then we are in case 1(a)i and the L-packet is {ν ∓1/2 χ −1 1 χ 3 1 GL 2 χ −1 1 }. • If χ −1 1 χ 3 = ν ±1 then we are in case 1 (b)ii and the L-packet is {ν ν −1/2 χ −1 1 1 GSp 2 }. • Otherwise by Lemma 3.2.3 the packet is { χ −1 1 χ 3 × χ 1 χ −1 3 χ −1 1 }. (e) χ 1 = χ 2 = χ 3 = χ 4 with χ 1 χ 4 = χ 2 χ 3 then G ϕ is the maximal torus. Thus S ϕ = 1 and the L-parameter is supported in χ 1 ⊗ χ 2 ⊗ ξ. The L-packet member is an irreducible subquotient of χ 1 χ −1 3 × χ 1 χ −1 2 χ −1 1 . If χ i χ −1 j is not of the form ν ±1 for any i = j then this is irreducible by [ST93, Lem 3.2]. Otherwise: • if χ 1 χ −1 2 = ν and χ 1 χ −1 3 / ∈ {1, ν ±1 , ν ±2 } then we are in case 1(a)ii and the L-packet is { χ 1 χ −1 3 ν 1/2 χ −1 1 1 GSp 2 }. • if χ 1 χ −1 2 = χ 2 χ −1 3 = ν then we are in case 1(a)iii and the L-packet is {ν 3/2 χ −1 1 1 GSp 4 }. The mixed packets are cases 3(a)ii and 4b. Mixed packets Denote the three order 2 characters of F × as η, η 2 , η 2 , where η(x) := (−1) v F (x) is unramified and η 2 and η 2 are ramified quadratics. 5.1. The GSp 4 case. The mixed packet for GSp 4 occurs in: (1) case 3(a)ii Proof. In case 3(a)ii, let ϕ v = (χ 1 , χ 1 ϕ u ) : W F → GL 1 (C) × GSp 2 (C) be the cuspidal support of the intermediate series, where ϕ v | SL 2 = 1 by Remark 5.2.8 and det(ϕ u ) = 1. By Property 8.1.5 we have ϕ v (w, x) = λ ϕv (w) = λ ϕ (w). Here, λ ϕ (w) = diag( w 1/2 χ 1 (w), χ 1 (w)ϕ u (w), w −1/2 χ 1 (w)) so ϕ v (w) = w 1/2 χ 1 (w) ⊗ χ 1 (w)ϕ u (w) . By Remark 2.2.2 this corresponds to the representation ν 1/2 π u ν −1/2 χ −1 1 where π u is the self-dual supercuspidal representation of PGL 2 (F ) corresponding to ϕ u under the LLC for PGL 2 (F ). Thus the intermediate series member of the L-packet is an irreducible subquotient of ν 1/2 π u ν −1/2 χ −1 1 . By Theorem 3.2.4 (2) it has a unique irreducible sub-representation δ(ν 1/2 π u ν −1/2 χ −1 1 ), which is square-integrable. Thus δ(ν 1/2 π u ν −1/2 χ −1 1 ) ∈ Π ϕ . 
• when the PGL 2 (F )-representation π u has depth zero, it is classified by a regular depthzero character θ : E × /F × → C × , where E/F is the unramified quadratic extension. (5.1.1) Π ϕ(θ) := δ ν 1/2 π (E × ,θ) ν −1/2 χ −1 1 , π (S,θ θ⊗ χ −1 1 ) , where the supercuspidal π (S,θ θ⊗ χ −1 1 ) is defined in Lemma 3.1.14. • when the GL 2 -representation π u has positive depth, the L-packet is of the form (5.1.2) Π ϕ := {δ(ν 1/2 π u ν −1/2 χ −1 1 ), π(π u ) ⊗ χ −1 1 }, where: π u is a supercuspidal representation of GL 2 (F ), which corresponds to a nontrivial representation JL(π u ) of D × /F × under the Jacquet-Langlands correspondence, for D/F the quaternion algebra. The Kim-Yu type is given by a twisted Levi sequence (G 0 ⊂ · · · ⊂ G d = D × /F × ). π(π u ) has Kim-Yu type given by the twisted Levi sequence (G 0 ⊂ · · · ⊂ G d = D × /F × ⊂ GSp 4 (F )). (2) case 4(b)iv Proof. In case 4b, let ϕ v : W F → T ∨ be the cuspidal support of the principal series, where since T ∨ has no unipotents, we have ϕ v | SL 2 = 1. By Property 8.1.5 we have ϕ v (w, x) = λ ϕv (w) = λ ϕ (w). Here, λ ϕ (w) = diag( w 1/2 χ 1 (w), w 1/2 χ 3 (w), w −1/2 χ 3 (w), w −1/2 χ 1 (w)). Under the isomorphism of Remark 2.1.1. the L-parameter ϕ corresponds to an irreducible subquotient of νθ ×θ ν −1/2 χ −1 3 where θ := χ 1 χ −1 3 is an order 2 character of F × . By [ST93, Lemma 3.6] the representation νθ × θ ν −1/2 χ −1 1 has a unique essentially square integrable subquotient δ([θ, νθ], ν −1/2 χ −1 1 ). Thus by Property 8.1.20, we have δ([θ, νθ], ν −1/2 χ −1 1 ) ∈ Π ϕ . Here θ ∈ {η, η 2 , η 2 }. The only singular supercuspidal from Theorem 3.1.14 that's unipotent (up to twisting) is π β (θ 10 ⊗ 1). Therefore it must be in the L-packet Π ϕ (1) . There are three L-packets, with notation from Proposition 3.1.14. Π ϕ (1) : = {δ([η, νη], ν −1/2 χ −1 1 ), π δ (θ 10 ⊗ χ −1 1 )} Π ϕ (2) : = {δ([η 2 , νη 2 ], ν −1/2 χ −1 1 ), π α (η 2 ; χ −1 1 )} Π ϕ (3) : = {δ([η 2 , νη 2 ], ν −1/2 χ −1 1 ), π α (η 2 ; χ −1 1 )}. Here the L-packets Π ϕ (2) and Π ϕ (3) are assembled in Proposition 6.5.5 via stability of characters. Note that the twist χ −1 3 can be recovered as the central character of the representations. We now compute the formal degree of δ([η 2 , νη 2 ], ν −1/2 χ −1 1 ): By [Roc98], we have (5.1.5) d(St H GL 2 ×GL 2 /Gm ) = 1 2 · 1 q 2 − 1 · q − 1 q 2 − 1 · q 3/2 = q 3/2 2(q + 1) 2 . Thus we have (5.1.6) fdeg(δ([η 2 , νη 2 ], ν −1/2 χ −1 1 )) = q 3/2 2(q + 1)(q 2 − 1) , which agrees with the formal degree for the singular supercuspidal computed in (3.1.17). 5.2. The Sp 4 case. The mixed packets for Sp 4 occur in: (1) case 7 (b)iii, when the packet is of size 4, consisting of two supercuspidals and two principal series the irreducible constituents of ν 1/2 χ 1 St GL 2 1. The L-packets consist of principal series from case 1(b)iv, and depth-zero supercuspidals from Theorem 3.1.20. Proof. To each χ 1 = η, η 2 , η 2 , we denote by ϕ(χ 1 ) the corresponding L-parameter, as in case 7 (b)iii. Concretely, ϕ(χ 1 ) : W F → SO 5 (C) corresponds to the W F × SL 2 (C)representation U = M 2 (C) ⊕ C where W F acts on M 2 (C) by χ 1 and SL 2 (C) acts on M 2 (C) by conjugation. In particular, the L-packet Π ϕ(η) is a unipotent L-packet. The principal series members π 1 ( χ 1 ), π 2 ( χ 1 ) ∈ Π ϕ(χ 1 ) have unipotent pairs (ee, (−1, ±1)) on O 4 , by the discussion in case 7 (b)iii. Let ϕ v (χ 1 ) : W F → T ∨ be the cuspidal support, where ϕ v (χ 1 )(SL 2 ) = 1 since T ∨ does not have unipotents. 
Then by Property 8.1.5 we have ϕ v (χ 1 )(w, x) = λ ϕv(χ 1 ) (w) = λ ϕ(χ 1 ) (w) = ϕ(χ 1 )(w, w 1/2 w −1/2 ). This acts on M 2 (C) as: λ ϕ(χ 1 ) (w)(e 11 ) = χ 1 (w)e 11 λ ϕ(χ 1 ) (w)(e 12 ) = w χ 1 (w)e 12 λ ϕ(χ 1 ) (w)(e 21 ) = w −1 χ 1 (w)e 21 λ ϕ(χ 1 ) (w)(e 22 ) = χ 1 (w)e 22 , so ϕ v (χ 1 ) = det χ 1 ⊗ χ 1 ⊗ 1. Now π 1 (χ 1 ) and π 2 (χ 1 ) are subquotients of ν χ 1 × χ 1 1 = ν 1/2 χ 1 1 GL 2 1+ν 1/2 χ 1 St GL 2 1. Moreover, since π 1 (χ 1 ) and π 2 (χ 1 ) are square-integrable by Property 8.1.20, they must be subquotients of ν 1/2 χ 1 St GL 2 1. By [ST93, Lemma 3.6] over GSp 4 the representation ν χ 1 × χ 1 1 F × contains a unique square integrable subquotient δ([ χ 1 , ν χ 1 ], 1 F × ). This splits into two irreducible representations when restricted to Sp 4 by [ST93, Prop 5.4], and these are exactly the square-integrable subquotients of the Sp 4representation ν χ 1 × χ 1 1. Thus, in the Grothendieck group (5.2.1) δ([ χ 1 , ν χ 1 ], 1 F × )| Sp 4 (F ) = π 1 (χ 1 ) + π 2 (χ 1 ). For the supercuspidals in Π ϕ(η) , there are only two unipotent supercuspidals π β (θ 10 ) and π γ (θ 10 ) coming from Theorem 3.1.20(2a). Therefore these two must be in the L-packet Π ϕ(η) . Note that this agrees with the unipotent L-packet in [LS20]. Moreover, [LS20, Example 9.4] says that Π ϕ(η 2 ) and Π ϕ(η 2 ) contains the depth-zero representations inflated from SL 2 (F q ) × SL 2 (F q ), i.e. the ones in Theorem 3.1.20(3). In summary, we have three L-packets Π ϕ(η) : = {π 1 (η), π 2 (η), π β (θ 10 ), π γ (θ 10 )} (5.2.2) Π ϕ(η 2 ) : = {π 1 (η 2 ), π 2 (η 2 ), π + α (η 2 ), π − α (η 2 )} (5.2.3) Π ϕ(η 2 ) : = {π 1 (η 2 ), π 2 (η 2 ), π + α (η 2 ), π − α (η 2 )}. (5.2.4) The choices between Π ϕ(η 2 ) and Π ϕ(η 2 ) are pinned down in Corollary 6.5.6 via stability of characters. Similar computations as in (5.1.6) shows that the formal degrees of π i (η 2 ) and π ± α (η 2 ) agree. Remark 5.2.5. The L-packets Π ϕ(η 2 ) and Π ϕ(η 2 ) are those in [LS20, Ex 9.4]. (2) case (5b), where the packet is of size 2 consisting of a supercuspidal and an intermediate series. Proof. Let π ∈ Π ϕ be the intermediate series member. By Property 8.1.5 we have λ ϕ = ι GL 2 •λ ϕv up to SO 5 -conjugacy. For the intermediate series representation, since ϕ v : W F → GL 2 (C) is cuspidal, by Remark 5.2.8 we have ϕ v (w, x) = ϕ(w, w 1/2 w −1/2 ), which acts on U = V 2 ⊕ 1 as   w 1/2 ϕ(w) 1 w −1/2 ϕ(w)   . Thus, the L-parameter of the cuspidal support is det 1/2 ϕ. Let ϕ correspond to the unitary representation σ of GL 2 (F ) under the LLC for GL 2 , so ν 1/2 σ is the image of det 1/2 ϕ under the LLC for GL 2 . Thus, π := π(σ) is an irreducible sub-representation of the induced representation ν 1/2 σ 1, which is the unique square-integrable subquotient by [ST93, Prop 5.6(iv)]. It must be the member by Property 8.1.20. In summary, • when ϕ has depth zero, the L-packet is of the form (5.2.6) Π ϕ := {π(σ), π α (η)}, where π α (η) (for η = τ 1 , τ 2 ) is the (singular) depth-zero supercuspidal from Theorem 3.1.20(3). There are q−1 2 such depth-zero L-packets, which agrees with the number of depth-zero supercuspidals of PGL 2 (F ). • when ϕ has positive depth, let π(σ) be the intermediate series representation with σ a positive-depth supercupsidal of PGL 2 corresponding to the character ψ(σ) : E × /F × → C × . The LLC for GL 2 (hence PGL 2 ) gives us a canonical identification E × /F × ∼ − → R (1) E/F G m which identifies ψ(σ) : E × /F × → C × with a character χ(σ) : R (1) E/F G m → C × . 
Let π χ be the corresponding positive-depth singular supercuspidal. The L-packet in this case is of the form (5.2.7) Π ϕ := {π(σ), π χ(σ) }. Remark 5.2.8. Let ϕ : W F → GL n (C) be a cuspidal L-parameter for GL n . Then ϕ(SL 2 ) = 1. 6. Stability of L-packets 6.1. Parahoric invariants for the GSp 4 (F ) case. Via twisting by the character ν 1/2 χ 3 • µ of GSp 4 , we may focus our attention on δ([η 2 , νη 2 ], 1). It is characterized as the intersection of the sub-representations ν 1/2 η 2 St GL(2) 1 and ν 1/2 η 2 St GL(2) η 2 of νη 2 × η 2 1. We calculate the invariants of δ([η 2 , νη 2 ], 1) with respect to G x+ , where x is a vertex of the Bruhat-Tits building (i.e., α or δ). 6.1.1. Calculating δ([η 2 , νη 2 ], 1) G α+ . Definition 6.1.1. Let H α be the parahoric subgroup of GSp 2,2 (F ) defined in §2.3, which contains the subgroup (6.1.2) H 0 α := {(g, h) ∈ M 2 (o) × o p −1 p o : det(g) = det(h) = 1}. For a ramified quadratic character η 2 of F × , let ∈ F be a uniformizer such that η 2 ( ) = 1 (unique up to (o × F ) 2 ). We define the following irreducible representations of G β /G β+ ∼ = H β /H β+ : where GSpin ∨ 4 ∼ = (GL 2 × GL 2 )/G m , and det i (g 1 , g 2 ) := det(g i ) are well-defined homomorphisms ω η 2 princ := Ind G β G 0 β Z (R + (α 0 ) R + (α 0 ) diag( ,1) ) (6.1.3) ω η 2 cusp := Ind G β G 0 β Z (R + (θ 0 ) R + (θ 0 ) diag( ,GSpin ∨ 4 (F ) → F × /(F × ) 2 . Under these isomorphisms δ([η 2 , νη 2 ], 1) corresponds to η 2 • det i ⊗ St GSpin ∨ 4 . By the Mackey formula, we have an isomorphism of representations of G α /G α+ ∼ = GSp 2,2 (F q ) (6.1.8) (νη 2 × η 2 1) G α+ ∼ = w∈B\G 2 /Gα Ind Gα/G α+ G β ∩wBw −1 /(G α+ ∩wBw −1 ) ( ⊗ ⊗ 1) w , where (6.1.9) B\G 2 /G α ∼ = W (G 2 )/W (GSp 2,2 ) = W/ s β , s 2α+β = {1, s α }. Therefore, the G α+ -invariants of (νη 2 × η 2 1) G α+ gives (6.1.10) (νη 2 ⊗ η 2 1) G α+ Ind GSp 2,2 B ( ⊗ 1 ⊗ ⊗ 1) 2 Likewise, computing the G α+ -invariants gives us the following (ν 1/2 η 2 St 1) G α+ (ν 1/2 η 2 St η 2 ) G α+ Ind GSp 2,2 B ( ⊗ 1 ⊗ ⊗ 1). (6.1.11) We pin down the G β+ -invariants of π(η 2 ) in Corollary 6.1.13. Proposition 6.1.12. The I + -invariants of δ([η 2 , νη 2 ], 1) is δ([η 2 , νη 2 ], 1) I + ∼ = ⊗ ⊗ 1 + ⊗ ⊗ . Proof. A priori we know δ([η 2 , νη 2 ], 1) I + → (νη 2 × η 2 1) I + = w∈W ( ⊗ ⊗ 1) w = ( ⊗ ⊗ 1) 4 + ( ⊗ ⊗ ) 4 . By Lemma 6.1.5, the multiplicity of ⊗ ⊗ 1 in δ([η 2 , νη 2 ], 1), which is the same as the multiplicity of • det 1 in the representation η 2 St SO 4 , is one. Thus the same holds for all Weyl group orbits of the character. Corollary 6.1.13. There is an isomorphism of G α /G α+ -representations δ([η 2 , νη 2 ], 1) G α+ ∼ = ω η 2 princ Proof. The argument is the same as in the proof of Corollary 3.0.8 in [SX23] . By Proposition 6.1.12 we conclude δ([η 2 , νη 2 ], 1) G β+ must be an irreducible component of Ind GSp 2,2 B ( ⊗ 1 ⊗ ⊗ 1), i.e., ω η 2 princ or ω η 2 princ . Together with Lemma 6.1.5 we conclude δ([η 2 , νη 2 ], 1) G α+ ∼ = ω η 2 princ . 6.1.2. Calculating δ([η 2 , νη 2 ], 1) G δ+ . Again by a Mackey theory calculation, we have: (νη 2 × η 2 1) G δ+ ∼ = Ind GSp 4 (Fq) B(Fq) ( ⊗ ⊗ 1) (6.1.14) (ν 1/2 η 2 St GL 2 1) G δ+ ∼ = Ind GSp 4 (Fq) Pα ( St GL 2 ⊗ 1) (6.1.15) (ν 1/2 η 2 St GL 2 η 2 ) G δ+ ∼ = Ind G 2 (Fq) Pα ( St GL 2 ⊗ ), (6.1.16) where P α is a parabolic subgroup of GSp 4 (F q ). Thus, δ([η 2 , νη 2 ], 1) G δ+ is the intersection of Ind Pα(Fq) ( St GL 2 ⊗ ), denoted ω princ . 
In terms of Lusztig's equivalence [Lus84a,Theorem 4.23], if s ∈ GSpin 5 (F q ) is of order 2 such that its image in SO 5 (F q ) is diag(−1, −1, 1, −1, −1) then Z GSpin 5 (Fq) (s) = GSpin 4 (F q ) ∼ = GSp 2,2 (F q ): (6.1.17) E(GSp 4 (F q ), s) ∼ = E(GSp 2,2 (F q ), 1) = {St GSp 2,2 , 1 GSp 2 , GSp 2 1, 1 GSp 2,2 }, and ω princ corresponds to St GSp 2,2 (Fq) . Thus, in conclusion: GαZ (ω η 2 cusp ), where ω η 2 cusp := (ρ + (λ,λ) ) (I 2 ,diag( ,1)) is a cuspidal representation of G α /G α+ . We may readily calculate the G x+ -invariants of the supercuspidal representation π α (η 2 ; 1), for various vertices x in the Bruhat-Tits building: Lemma 6.2.2. Let π α (η 2 ; 1) be as defined in (6.2.1). We have π α (η 2 ; 1) G α+ ∼ = ω η 2 cusp (6.2.3) π α (η 2 ; 1) G δ+ = 0 (6.2.4) Proof. For each vertex x, by Mackey theory we have π α (η 2 ; 1) G x+ ∼ = g∈Gα\G 2 /Gx Ind Gx Gx∩g −1 Gαg ((ω η 2 cusp ) g ) G x+ ∩g −1 Gαg (6.2.5) = g∈Gα\G 2 /Gx Ind Gx Gx∩G g −1 α ((ω η 2 cusp ) g ) G x+ ∩G g −1 α . (6.2.6) Here, ((ω η 2 cusp ) g ) G x+ ∩G g −1 α ∼ = (ω η 2 cusp ) Gα∩G gx+ , which is 0 unless α = gx since otherwise G β ∩ G gx+ will contain the unipotent radical of some parabolic subgroup of G α , so (ω η 2 cusp ) Gα∩G gx+ = 0 since ω η 2 cusp is cuspidal. 6.3. Stable distributions on GSp 4 and Sp 4 . For this section alone, we switch notation for k to denote the non-archimedean local field, as we reserve the notation F for the facets. First we recall from [DeB02,DeB06,DK06] the general theory of invariant distributions associated to unramified tori. We now recall a few more precise results for later use. Let J(g) be the space of invariant distributions on g. Let J(N ) be the span of the nilpotent orbital integrals. For each Weyl group conjugacy class [w] of G, consider pairs (F, Q F w ) consisting of a facet F ∈ B(G) and the toric Green function Q F w (see for example [Car93,§7.6]) associated to the torus S w in G F corresponding to [w]. Let S be a maximal K-split k-torus in G lifting the pair (F, S w ). Let X Sw ∈ Lie(S)(k) ⊂ g F be a regular semisimple element for which the centralizer in G F of the image of X Sw in Lie(G F ) is S w . Since G der is simply-connected, the number of rational conjugacy classes in G(K) X Sw ∩ g is in bijection with the group of torsion points of X * (T )/(1 − w)X * (T ) for the maximal torus T of G. The following Table 4 is the analogue of [DK06, Table 5] for GSp 4 (note that the analogous table for Sp 4 is calculated in [Wal01], although we do not need it), which records the number of relevant rational conjugacy classes. class of w tor[X * (T )/(1 − w)X * (T )] 1 0 A 1 0 A 1 0 A 1 × A 1 Z/2 C 2 0κ(λ) · µ X λ Sw , where X λ Sw belongs to the G-conjugacy class in G(K) X Sw ∩ g indexed by λ. Note that T w (1) is stable for any reductive group G. On the other hand, the rational classes in G(K) X that intersect Lie(S)(k) are parameterized by the quotient We record the cardinality of the above quotient in the following table: class of w vertex |N (F, S w )| A 1 × A 1 C 2 1 A 1 × A 1 A 1 × A 1 1 In general, consider the set I c := {(F, G)} (see for example [DK06,§4.3]) of pairs consisting of facet F and a cuspidal generalized Green function on Lie(G F )(κ k ), which is endowed with an equivalence relation ∼ as in Definition 4.1.2 loc.cit. Let g 0 be the set of compact elements in g, and J(g 0 ) ⊂ J(g) the subspace of distrubitions with support in g 0 . 
Let D 0 be the invariant version of the Lie algebra analogue of the Iwahori-Hecke algebra, and let D 0 0 be the subalgebra spanned over facets contained in (the closure of) a fixed alcove F ∅ . We recall the following homogeneity result due to DeBacker and Waldspurger. We have the following list of stable distributions: 6.4. Characters on a neighborhood of 1. In this section, we express δ[η 2 , νη 2 ] G x+ in terms of generalized Green functions, for x = α, δ. D st C 2 := D (F C 2 ,Q F C 2 S C 2 ) D st A 1 := D (F A 1 ,Q F A 1 S A 1 ) D st A 1 := D(F A 1 , Q F A 1 S A 1 ) D st e := D (Fe,Q Fe Se ) D st A 1 ×A 1 := D F C 2 ,Q F C 2 S A 1 ×A 1 + D (F A 1 ×A 1 ,Q F A 1 ×A 1 S A 1 ×A 1 ) D unst A 1 ×A 1 := D F C 2 ,Q F C 2 S A 1 ×A 1 − D (F A 1 ×A 1 ,Q F A 1 ×A 1 S A 1 ×A 1 ) D st F A 1 (1) When F = F C 2 corresponds to the vertex δ, we have that δ([η 2 , νη 2 ], 1) G δ+ ∼ = ω princ corresponds to St GSp 2,2 (Fq) under Lusztig's equivalence (6.1.17). By [DL76], the character of Steinberg is (6.4.1) Ch St GSp 2,2 = 1 4 R A 1 ×A 1 A 1 ×A 1 − R A 1 ×A 1 A 1 ×1 − R A 1 ×A 1 1×A 1 + R A 1 ×A 1 1×1 . Since Lusztig's equivalence (6.1.17) preserves multiplicities, we have (6.4.2) Ch π princ = 1 4 R C 2 A 1 ×A 1 − 2R C 2 A 1 + R C 2 1 . Restricting to the unipotent locus, we have (6.4.3) Ch π princ (u) = 1 4 Q F C 2 A 1 ×A 1 − 2Q F C 2 A 1 + Q F C 2 1 . (2) When F = F A 1 ×A 1 corresponds to the vertex α, we have that δ([η 2 , νη 2 ], 1) G α+ ∼ = ω η 2 princ . The character formula can be computed in the same way as [SX23,(3.4.5)] and we have (6.4.4) Ch π η 2 princ = 1 2 (Q F A 1 × A 1 1 ± q * G sgn ). (3) When F = F A 1 , since δ([η 2 , νη 2 ], 1) G F + is the Jacquet restriction r A 1 ×A 1 A 1 (δ([η 2 , νη 2 ], 1)), thus by (6.1.3) on the unipotent locus we have Ch(δ([η 2 , νη 2 ], 1) G F + ) = Q F A 1 1 ; (4) When F = F A 1 , we have Ch(δ([η 2 , νη 2 ], 1) G F + ) = Q F A 1 1 ; (5) When F = F e , we have Ch(δ([η 2 , νη 2 ], 1) G F + ) = 2. Similarly, we have for F = F A 1 ×A 1 , (6.4.5) Ch(π α (η 2 ; 1) G F + ) = 1 2 (Q F A 1 ×A 1 A 1 ×A 1 ± q * G sgn ). Therefore, we have the following Proposition 6.4.6. For any (possibly equal) ramified quadratic characters η 2 , η 2 , the sum δ([η 2 , νη 2 ], )+ π α (η 2 ; ρ) has a stable character on the topologically unipotent elements. Proof. As remarked at the beginning of §6.1, it suffices to work with the case = 1 in the notation δ([η 2 , νη 2 ], ). From the discussions above, we see that for some explicitly computable constants c i , Ch δ([η 2 ,νη 2 ],1) = c 1 · 1 2 (D st A 1 ×A 1 − D unst A 1 ×A 1 ) ± c 2 · D st (F A 1 ×A 1 ,Gsgn) + cD st e Ch πα(η 2 ;1) = c 1 · 1 2 (D st A 1 ×A 1 + D unst A 1 ×A 1 ) ± c 2 · D st (F A 1 ×A 1 ,Gsgn) Thus by Lemma 6.3.6, the sum is always stable. 6.5. Characters on a neighborhood of s. Let s =     1 −1 −1 −1     ∈ GSp 4 (F ) be order 2 such that Z GSp 4 (s) = GSp 2,2 . By the construction in [AK07,§7], the distributions Ch δ([η 2 ,νη 2 ], ) and Ch πα(η 2 ; ) on GSp 4 induce distributions Θ δ([η 2 ,νη 2 ], ) and Θ πα(η 2 ; ) on (GSp 2,2 ) 0+ , the topologically unipotent elements in GSp 2,2 , such that the attached locally constant functions are compatible (see [AK07,Lemma 7.5]). We shall see when the sum Θ δ([η 2 ,νη 2 ], ) + Θ πα(η 2 ; ) is a stable distribution on (GSp 2,2 ) 0+ . We now look at the characters on an element of the form su for u topologically unipotent. They follow from computations in the previous section §6.4. 
(1) When F = F C 2 , by [DL76,Theorem 4.2], we have Ch ω princ (su) = 1 4 R S A 1 ×A 1 (su) − 2R S A 1 (su) + R S 1 (su) = (−1) q−1 2 1 2 Q A 1 ×A 1 S A 1 ×A 1 (u) − Q A 1 ×A 1 S A 1 ×1 (u) − Q A 1 ×A 1 S 1×A 1 (u) + Q A 1 ×A 1 S 1 (u) . (6.5.1) (2) When F = F A 1 ×A 1 , we have Ch δ([η 2 ,νη 2 ],1) F + (su) = (−1) q−1 2 · 1 2 Q F A 1 ×A 1 1 (u) ± q * G sgn (u) (6.5.2) Ch πα(η 2 ;1) F + (su) = (−1) q+1 2 · 1 2 Q F A 1 ×A 1 A 1 ×A 1 (u) ± q * G sgn (u) (6.5.3) The following lemma is an analogue of [SX23, Lemma 3.5.1]. Lemma 6.5.4. The distribution D (F A 1 ×A 1 ,Gsgn) on GSp 2,2 is not stable. Proof. A distribution on GSp 2,2 (F ) is stable if and only if it is stable under conjugation by GL 2 (F )× GL 2 (F ). Thus all stable distributions on GSp 2,2 must be restricted from invariant distributions on GL 2 (F ) × GL 2 (F ). But the only invariant distributions on GL 2 (F ) × GL 2 (F ) are spanned by semisimple orbital integrals, and D (F A 1 ×A 1 ,Gsgn) is linearly independent from them (as can be seen by evaluating against G sgn ). Proposition 6.5.5. Let G = GSp 4 (F ). For ramified quadratic characters η 2 and η 2 , the character Ch δ([η 2 ,νη 2 ], ) + Ch πα(η 2 ; ) is stable in a neighborhood of s if and only if η 2 = η 2 . Thus, {δ([η 2 , νη 2 ], ), π α (η 2 ; )} is an L-packet, as dictated by Property 8.1.27, for each ramified quadratic character η 2 . Proof. This follows from the above computations (6.5.1), (6.5.2) and (6.5.3), as well as Lemma 6.5.4 that D (F A 1 ×A 1 ,Gsgn) is a non-stable distribution on GSp 2,2 . Now, by Property 8.1.26, functoriality for Sp 4 → GSp 4 , we obtain the following corollary of Proposition 6.5.5. Let π i (η 2 ) be as defined in (5.2.1). Let π ± α (η 2 ) be as defined in Proposition 3.1.20(3). Corollary 6.5.6. Let G = Sp 4 (F ). The character Ch π 1 (η 2 ) + Ch π 2 (η 2 ) + Ch π + α (η 2 ) + Ch π − α (η 2 ) is stable in a neighborhood of s, for each ramified quadratic character η 2 . Thus we have the following explicit L-packets, as dictated by Property 8.1.27: Π ϕ(η 2 ) := {π 1 (η 2 ), π 2 (η 2 ), π + α (η 2 ), π − α (η 2 )}, for each ramified quadratic character η 2 . Proof. Indeed, by definition we have δ([ χ 1 , ν χ 1 ], 1)| Sp 4 (F ) = π 1 (χ 1 ) + π 2 (χ 1 ) and π α (η 2 ; 1)| Sp 4 (F ) = c-Ind Sp 4 Gα (ω η 2 cusp ) (6.5.7) = c-Ind Sp 4 Gα (R + (θ 0 ) (R + (θ 0 )) diag( ,1) + R − (θ 0 ) (R − (θ 0 )) diag( ,1) ) (6.5.8) = π + α (η 2 ) + π − α (η 2 ). (6.5.9) The claim now follows from Proposition 6.5.5. Explicit L-parameters We construct L-parameters for each reducible induced representation in Theorem 3.2.4. For representations that are not essentially tempered, we give explicit Langlands classifications, so by Property 8.1.3 we have explicit L-parameters (since LLC is known for Levis of GSp 4 ). We only give the L-parameters for GSp 4 , but those for Sp 4 follows by functoriality, Property 8.1.26. 7.1. Principal series for GSp 4 . We proceed by considering Bernstein blocks: let s = [T, χ 1 ⊗ χ 2 ⊗ θ]. Then by Remark 2.2.2 the dual of χ 1 ⊗ χ 2 ⊗ θ is the homomorphism F × → T ∨ (C) given by θ −1 diag(1, χ −1 2 , χ −1 1 , χ −1 1 χ −1 2 ), whose restriction c s to o × F is well-defined. Let J s = Z G ∨ (Im(c s )) and let J s be the Langlands dual group. Then [Roc98] gives a (non-canonical) isomorphism between H(G//J χ , χ 1 ⊗ χ 2 ⊗ θ) and H(J s //I s , 1 I s ), where I s is an Iwahori subgroup of J s . There are the following cases (up to Weyl group conjugates): (J1) If χ 1 = χ 2 = 1 then J s = G ∨ . 
Representations of the Iwahori-Hecke algebra are classified in [Ram03, Table 5.1]. (J2) If χ 1 = 1 and χ 2 = 1 then J s = GL 2 × GSp 0 so J s = GL 1 × GSp 2 . (J3) If χ 1 = χ −1 2 = 1 and χ 2 1 = 1 then J s = {(g, h) ∈ GL 2 (C) : det(g) = det(h)}. Here J s = GL 2 (F ) × GL 2 (F )/F × . Representations of the Iwahori-Hecke algebra are classified in [Ram03, Table 2.1]. (J4) If χ 1 = χ −1 2 and χ 2 1 = 1 on o × F then J s = GL 1 ×GSp 2 so J s = GL 2 ×GSp 0 . Representations of the Iwahori-Hecke algebra are classified in [Ram03, Table 2.1]. We have the following cases: • In case 1(a)i the only essentially tempered representation is ν 1/2 χ 2 St GL 2 θ where e(χ 2 ) = − 1 2 . if χ 2 is unramified, we are in case (J1). This is case t e in Table 5.1 of [Ram03] so the enhanced L-parameter is: (ϕ σ,[1 4 ] , 1), (ϕ σ,[2 2 ] , 1). -In case (J3), when χ 2 2 is unramified but χ 2 is not, we have J s of type A 1 × A 1 . This is case t a × t o in the notation of Table 2.1 of [Ram03] since the induced representation is of length 2 with a tempered subquotient. Thus the enhanced L-parameter is (ϕ σ,[1 4 ] , 1), (ϕ σ,[2 2 ] , 1). -In case (J4), when χ 2 2 is ramified, we have J s = GL 2 × GSp 0 , of type A 1 , which is case t a in [Ram03, Table 2.1] so the L-parameter is (ϕ σ,[1 4 ] , 1), (ϕ σ,[2 2 ] , 1) • In case 1(a)ii the only essentially tempered representation is χ 1 ν 1/2 θSt GSp 2 for e(χ 1 ) = 0. Here s = [χ 1 , 1, θ]. -In case (J1), when χ 1 is unramified, we have J ∫ = G ∨ . This is case t e in Table 5.1 of [Ram03] so the enhanced L-parameters are: (ϕ σ,[1 4 ] , 1), (ϕ σ,[2 2 ] , 1). -In case (J2), when χ 1 is ramified, we have J ∫ = GL 1 ×GSp 2 . This is case t a in [Ram03, Table 2.1] so the L-parameters are (ϕ σ,[1 4 ] , 1), (ϕ σ,[2 2 ] , 1) • In case 1(a)iii the Steinberg representation corresponds to (ϕ σ, [4] , 1), with the regular unipotent. • In case 1(a)iv the representation δ([χ 2 , νχ 2 ], θ) is essentially square-integrable, living in the . -In case (J1), when χ 2 is the unramified quadratic character, we have J s = G ∨ . This is case t a or t c in [Ram03, ,1 , 1), with trivial unipotent. -In case (J4), when χ 2 is ramified, we have J s of type A 1 × A 1 . This is case t a × t a in the notation of [Ram03, Here there is a slight abuse of notation; the two unipotents [2, 1 2 ] are embedded into G ϕ in different ways. • In case 1(b)i, where s = [T, 1 ⊗ 1 ⊗ θ], we have J s = G ∨ . Here, there are two essentially tempered subquotients so we are in case t b of [Ram03, Table 5.1]: Indexing triple nilpotent orbit representation (t b , 0, 1) e α 1 +β , 1) [2, 1 2 ] τ We again used that St GL 2 corresponds to the regular unipotent under LLC for GL 2 . [1 4 ] J(ν; 1 F × θ) (t b , e β , 1) [2 2 ] J(ν 1/2 St GL 2 ; θ) (t b , e α 1 +β , −1) [2, 1 2 ] τ (t b , • In case 1(b)iii the representation ν 1/2 χ 2 St GL 2 θ is essentially tempered. where s = [T, χ 1 ⊗ χ 1 ⊗ θ], with χ 2 1 = 1, either: -In case (J1), when χ 1 = 1, we have J s = G ∨ . Then we are in case t e of [Ram03, Table 5.1] so the L-parameters are (ϕ [1 4 ] , 1) and (ϕ [2 2 ] , 1). -In case (J4), when χ 1 = 1, we have J s of type A 1 × A 1 . This is of type t a × t o in the notation of [Ram03, Proof. Suppose otherwise, that ϕ| I F = ζ 1 ⊕ ζ 2 for some characters ζ i of I F . Since W F acts trivially on I ab F ∼ = o × F , the group W F intertwines ζ 1 ⊕ ζ 2 . 
Thus if ζ 1 = ζ 2 then ϕ also splits into two distinct characters, a contradiction, and if ζ 1 = ζ 2 then ϕ(w) for w ∈ W F such that |w| = 1 can be diagonalized, which provides a splitting of ϕ. 7.2.1. When L = GL 2 × GSp 0 , i.e., case 2. Let s = [L, π ⊗ χ], where we assume ω π = 1. By Remark 2.2.2, local Langlands for the Levi gives an L-parameter χ −1 ⊗ χ −1 ϕ ∨ π = χ −1 (1⊗ϕ ∨ π ) : W F → GL 1 (C) × GSp 2 (C), whose restriction c s to I F is well-defined. The centralizer J s := Z G ∨ (Im(c s )) is independent of χ. When J s is connected we have the bijection Irr s (G) ∼ = Irr(H(J s //I s )), where the group of F -rational points on the Langlands dual of J s and I s is an Iwahori subgroup. By Lemma 7.2.1, the restriction ϕ| I F remainds irreducible, so J s = {(z, g) ∈ C × × GSp 2 (C) : det(g) = z 2 } ∼ = C × × SL 2 (C) so J s = F × × PGL 2 (F ). Since the induced representation is of length 2, we are in case t a of [Ram03, Table 2.1], and the L-parameter for the tempered sub-representation is (ϕ σ,[2,1 2 ] , 1). 7.2.2. When L = GL 1 × GSp 2 , i.e., case 3. Let s = [L, χ ⊗ π]. By Remark 2.2.2, local Langlands for the Levi gives an L-parameter ϕ ∨ π ⊗ det(ϕ ∨ π ) χ −1 : W F → GL 2 (C) × GSp 0 (C) , whose restriction c s to I F is well-defined. The centralizer J s := Z G ∨ (Im(c s )) is independent of χ. That is, ϕ ∨ π ⊗ det(ϕ ∨ π ) χ −1 (w) = ϕ ∨ π (w) χ −1 (w)ϕ ∨ π (w) . The induced representation χ π is irreducible only when a) χ = 1 F × or b) χ = ν ±1 ξ o where ξ o is of order two and ξ o π ∼ = π. In either case χϕ π = ϕ π , so the I F -representation c s is simply diag(ϕ ∨ π (w), ϕ ∨ π (w)). Here, in the notation of [AX22b, §2.1], X nr (M, π) := {ξ ∈ X nr (M ) : ξ ⊗ π ∼ = π} has order 1 or 2, since ξ ⊗ π ∼ = π implies ξ 2 ω π = ω π . Moreover, W (M, O) is order 2, since the Weyl group acts by χ ⊗ π → χ −1 ⊗ χπ. Thus, W (M, π, X nr (M )) is of order 2 or 4, and by [Sol22], there is a bijection Irr s (G) Irr(C[X nr (M )] C[W (M, π, X nr (M ))]). The Kazhdan-Lusztig triples can be computed by following the commutative diagram in Property 8.1.19. Main Theorem 8.1. Properties of LLC. We assume for the rest of this paper that p does not divide the order of the Weyl group. We now state a compatibility property of the LLC with supercuspidal supports. Definition 8.1.1. [Vog93] The infinitesimal parameter of an L-parameter ϕ for G is λ ϕ : W F → G ∨ defined by, for w ∈ W F , (8.1.2) λ ϕ (w) := ϕ w, ||w|| 1/2 0 0 ||w|| −1/2 for any w ∈ W F . Property 8.1.3. Let (P, π, ν) be a standard triple for G. We have ϕ J(P,π,ν) = ι L ∨ • ϕ π⊗χν . Property 8.1.5. Let P ⊂ G be a parabolic subgroup with Levi subgroup L, and σ a supercuspidal representation of L. For any irreducible constituent π of Ind G P σ, the infinitesimal L-parameters λ ϕπ and ι L ∨ • λ σ are G ∨ -conjugate. We set the following notations (8.1.8) Z G ∨ (ϕ) := Z G ∨ (ϕ(W F )) and G ϕ := Z G ∨ (ϕ(W F )). We also consider the following component groups (8.1.9) A ϕ := Z G ∨ (ϕ)/Z G ∨ (ϕ) • and S ϕ : = Z G ∨ (ϕ)/Z G ∨ · Z G ∨ (ϕ) • . Recall that A Gϕ (u ϕ ) denotes the component group of Z Gϕ (u ϕ ). By [Mou17, § 3.1], (8.1.10) A ϕ A Gϕ (u ϕ ), where u ϕ := ϕ (1, ( 1 1 0 1 )). Let (ϕ, ρ) be an enhanced L-parameter for G. Recall that u ϕ := ϕ (1, ( 1 1 0 1 )). Then u ϕ is a unipotent element of the (possibly disconnected) complex reductive group G ϕ defined in (8.1.8), and ρ ∈ Irr(A Gϕ (u ϕ )) by (8.1.10). Let t ϕ := (L ϕ , (v ϕ , ϕ )) denote the cuspidal support of (u ϕ , ρ), i.e. (8.1.11) (L ϕ , (v ϕ , ϕ )) := Sc Gϕ (u ϕ , ρ). 
In particular, (v ϕ , ϕ ) is a cuspidal unipotent pair in L ϕ . Upon conjugating ϕ with a suitable element of Z G • ϕ (u ϕ ), we may assume that the identity component of L ϕ contains ϕ 1, z 0 0 z −1 for all z ∈ C × . Recall that by the Jacobson-Morozov theorem (see for example [Car93,§ 5.3]), any unipotent element v of L ϕ can be extended to a homomorphism of algebraic groups (8.1.12) j v : SL 2 (C) → L ϕ satisfying j v ( 1 1 0 1 ) = v. Moreover, by [Kos59,Theorem 3.6], this extension is unique up to conjugation in Z L ϕ (v) • . We shall call a homomorphism j v satisfying these conditions to be adapted to ϕ. By [AMS18, Lemma 7.6], up to G ∨ -conjugacy, there exists a unique homomorphism j v : SL 2 (C) → L ϕ which is adapted to ϕ, and moreover, the cocharacter (8.1.13) χ ϕ,v : z → ϕ 1, z 0 0 z −1 · j v z −1 0 0 z has image in Z • L ϕ . We define an L-parameter ϕ v : W F × SL 2 (C) → Z G ∨ (Z • L ϕ ) by (8.1.14) ϕ v (w, x) := ϕ(w, 1) · χ ϕ,v (||w|| 1/2 ) · j v (x) for any w ∈ W F and any x ∈ SL 2 (C). Remark 8.1.15. Let w ∈ W F and x w := ||w|| 1/2 0 0 ||w|| −1/2 . By (8.1.2), we have (8.1.16) λ ϕv (w) = ϕ v (w, x w ) = ϕ(w, 1) · χ ϕ,v (||w|| 1/2 ) · j v (x w ) = ϕ(w, 1) · ϕ(1, x w ) · j v (x −1 w ) · j v (x w ) = ϕ(w, x w ) = λ ϕ (w). (1) ϕ is bounded if and only if one element (equivalently any element) of Π ϕ (G) is tempered; (2) ϕ is discrete if and only if one element (equivalently any element) of Π ϕ (G) is squareintegrable modulo center; (3) ϕ is supercuspidal if and only if all the elements of Π ϕ (G) are supercuspidal. Property 8.1.21. [Sha90] The quantity fdeg(π) dim(ρ) is constant in an L-packet. Property 8.1.22. [Sha90, Conjecture 9.4] If ϕ is bounded, then the L-packet Π ϕ (G) is w-generic for some Whittaker datum w. Moreover, the conjectural bijection ι w : Π ϕ (G) → Irr(S ϕ ) maps the w-generic representation to the trivial representation of S ϕ . Here, the left vertical arrow is a correspondence defined by the subset of Π(GSp 2n ) × Π(Sp 2n ) consisting of pairs (π, ) such that is a constituent of the restriction of π to Sp 2n . Property 8.1.27 (Stability). Let ϕ be a discrete L-parameter. There exists a non-zero C-linear combination (8.1.28) SΘ ϕ := π∈Πϕ z π Θ π , for z π ∈ C, which is stable. In fact, one can take z π = dim(ρ π ) where ρ π is the enhancement of the L-parameter. Moreover, no proper subset of Π ϕ has this property. Main Result. Construction of the Local Langlands Correspondence LLC : Irr(G) 1-1 − − → Φ e (G) π → (ϕ π , ρ π ). • Suppose that M g is conjugate to a Levi subgroup of the Klingen parabolic subgroup GL 2 (C) × GSp 0 (C). In this case, we claim that π cannot be conjugate to a representation of the form χSt GL 2 χ for some smooth characters χ and χ , otherwise (π p 1 ) n 1 = 0. This can be seen by first applying the geometric lemma in [BZ77] along with [Whi22,Lemma 5.15]. Then by our classification §4 Case (4c), we have N π = 0. • Suppose that M g is conjugate to a Levi subgroup of the Siegel parabolic GL 1 (C)×GSp 2 (C). In this case, L is conjugate to a Levi subgroup of the Klingen parabolic GL 1 (F ) × GSp 0 (F ). We claim that π cannot be conjugate to a representation χ χ St GSp 2 ; otherwise, similar to the previous bullet point, we get (π p 1 ) n 1 = 0 which is a contradiction. Then by §4 Case (4d), we have N π = 0. • The remaining case is when L = G. By §4 Case (4a), we have N π = 0. The following proposition is an analogue of Proposition A.0.2 for representations with nonzero localized p-invariants (instead of π 1 -invariants). 
Proposition A.0.3 (Whitmore). Let π be an admissible irreducible Q p [G(F v )]-module such that (π p ) n 0 = 0. Then (1) π is a subquotient of a parabolically induced representation i G B χ for some tamely ramified smooth character χ : T (F v ) → Z Proof. Representations with Iwahori-fixed vectors are classified in §7.1, and we attach explicit Lparameters. Proposition A.0.2 is then applied in [Whi22, Theorem 7.7] to a certain π v for some cuspidal automorphic representation π of GSp 4 (A f ) and v ∈ Q a Taylor-Wiles place, where Q is part of a Taylor-Wiles datum (Q, {(T v ,B v )} v∈Q ) as in [Whi22, Definition 3.9], thus giving the existence of Galois representations associated to a classical weight cuspidal automorphic representation π. Combined with the patching criterion of [BCGP21, Proposition 7.10.1], one can then construct the patched modules as in [BCGP21] and [Whi22,7.11] to deduce modularity lifting theorems for abelian surfaces. ∼ = GSp 4 of Remark 2.1.1 gives the identifications between the dual Levi subgroups: Proposition 3.2.1 ([Sha91, Prop 6.1]). (a) Let G = GSp 4 (F ) for F a non-archimedean field. Let α and β be the short and long simple roots of G, respectively. Let P = MN be the maximal parabolic subgroup such that M is generated by α and M ∼ = GL 2 × GL 1 . Fix an irreducible unitary supercuspidal representation σ = σ 1 ⊗ χ of M = M(F ), where σ 1 is a supercuspidal unitary representation of GL 2 (F ) with central character ω and χ is a unitary character of F * . Then I(σ) is always irreducible. The representation I(σ 1 ν s ⊗ χ) is reducible if and only if ω = 1 and s where 1 × 1 1 is irreducible by Theorem 3.2.5(1a). (b) χ 1 = χ 2 = 1 then χ 1 has order 2 and G ϕ = S(O 4 (C) × µ 2 ) ∼ = O 4 (C). The Springer correspondence for O 4 is (see [CM93, §10.1, p. 166]): Unipotent pairs Representations of W = Lemma 4 .0. 2 ( 42Mackey's little groups method, [Ser77, §8.2]). Let G = A H be a finite group, where A is abelian. Then, there is a bijection Irr(G) ∼ = {χ ∈ H\A * , ρ ∈ Irr(H χ )}, Irr(H(J s , 1)), under which δ([η 2 , νη 2 ], ν −1/2 χ −1 1 ) corresponds to St GL 2 ×GL 2 /Gm . By [Roc98, Theorem 10.7], up to normalization factors of volumes, we have (5.1.4) fdeg(π(η 2 )) = d(St H GL 2 ×GL 2 /Gm ). Now by [CKK12, Theorem 4.1], we have independent of the choice of the uniformizer . By [SX23, Lemma 2.0.1], we have: Lemma 6.1.5. There are canonical support-preserving Hecke algebra isomorphisms H(GSp 4 //I, ⊗ ⊗ 1) ∼ = H(GSpin ∨ 4 //J, • det 1 ) (6.1.6) H(GSp 4 //I, ⊗ ⊗ ) ∼ = H(GSpin ∨ 4 //J, • det 2 ) (6.1.7) GSp 4 ( 4Fq) Pα(Fq) ( St GL 2 ⊗ 1) and Ind GSp 4 (Fq) For each character κ of tor[X * (T )/(1 − w)X * (T )], one can associate (6.3.2) N (F, S w ) := [N G(K) (S(K))/S(K)] Gal(K/k) /[N G (S(k))/S(k)] Theorem 6.3.3 (Waldspurger, DeBacker). We have (1) res D 0 J(g 0 ) = res D 0 J(N ).(2) Suppose D ∈ J(g 0 ). We have res D 0 D = 0 if and only if res D 0 0 D = 0 As a corollary, one has the following.Corollary 6.3.4. Let D ∈ J(g 0 ). We have res D 0 D = 0 if and only if D(Ĝ F ) = 0 for all (F, G) ∈ I c / ∼ ×A 1 , 1Gsgn := D (F A 1 ×A 1 ,Gsgn) (6.3.5) Finally, we record the following result from [DK06, Lemma 6.4.1] (see also [Wal01, Théoréme IV.13]) for later use. Let B st be the set of stable distributions in the above list (6.3.5). Lemma 6.3.6 (Waldspurer, DeBacker-Kazhdan). The elements of the set {res D 0 D|D ∈ B st } form a basis for res D 0 J(g 0 ) ∩ res D 0 J st (g). 8. 1 . 6 . 16The following Property 8.1.19 generalizes Property 8.1.5. 
Let L(G) be a set of representatives for the conjugacy classes of Levi subgroups of G. By [ABPS17a, Proposition 3.1], for any L ∈ L(G) there is a canonical isomorphism (8.1.7) W G (L) ∼ − → W G ∨ (L ∨ ). Definition 8.1.17.[AMS18, Definition 7.7] The cuspidal support of (ϕ, ρ) is(8.1.18) Sc(ϕ, ρ) := (Z G ∨ (Z • L ϕ ), (ϕ v ϕ , ϕ )).Property 8.1.19. [AMS18, Conjecture 7.8] The following diagram is commutative: Irr(G) Φ e (G) L∈L(G) Irr scusp (L)/W G (L) L∈L(G) Φ e,cusp (L)/W G (L) Property 8.1.20. [Bor79, §10.3] Let ϕ be an L-parameter for G. Conjecture 8.1.23. [AMS18, Conjecture 2] For any s = [L, σ] G ∈ B(G), the LLC for L given byσ → (ϕ σ , ρ σ ) Φ s ∨ e (G), where s ∨ = [L ∨ , (ϕ σ , ρ σ )] G ∨ .Conjecture 8.1.23 is proved for split classical groups [Mou17, §5.3], for GL n (F ) and SL n (F ) [ABPS16b, Theorems 5.3 and 5.6], for principal series representations of split groups [ABPS17b, §16]. For the group G 2 , a bijection between Irr s (G) and Φ s ∨ e (G) has been constructed in [AX22b, Theorem 3.1.19]. For GSp 4 (F ) and Sp 4 (F ), one can easily verify the axioms in the Main Theorem of [AX22b], and thus we have an isomorphism (8.1.25) Irr s (G) ∼ − → Φ s ∨ e (G) Irr s (G) and Φ e (G) = s ∨ ∈B ∨ (G) characters through which O[T /T (O Fv )] W L acts on π p are W G -conjugates of χ and there exists w ∈ W G such that wχ lifts χ. (3) The localized invariants (π p ) n 0 are 1-dimensional and the action of O[T /(T (O Fv ))] W L is through wχ. (4) Finally, if LLC p (π) = (V π , N π ) is the Weil-Deligne representation associated to π under the Local Langlands Correspondence (1.1.2), then N π = 0 and (5) there is an isomorphism of O[T /T (O Fv )] W G -modules (π p ) n 0 ∼ − → π g . For the group GSp 4 and Sp 4 , by [AX22b, Main Theorem], we have such a bijection (1.1.1) for each Bernstein series Irr s (G) of intermediate series. On the other hand, the analogous bijection to (1.1.1) holds for principal series Bernstein blocks thanks toAMS18, Conjecture 2] and Conjecture 8.1.23): (1.1.1) Irr s (G) ∼ − → Φ s ∨ e (G). Table 1 . 1Weyl group conjugacy classes Many properties of the representation π is already visible from the representation τ of G [x] : Lemma 3.1.3. [AX22a, Prop 3.2.4] The formal degree of the depth-zero representation2.8)] coincides with the fixator G [x] of [x] under the action of G on the reduced building of G. Then π is compactly induced from a representation of N G (G x,0 ), i.e. (3.1.2) π = c-Ind G G [x] (τ ). correspondingly. Thus: The Springer correspondence for SL 2 [Lus84b, §10.3]Unipotent pairs Representations of W = µ 2 ([1 2 ], 1) 1 ([2], 1) sgn ([2], −1) cusp Table 2. Here, again the representations of W are parametrized by Lemma 4.0.2 (see also, [CM84, Theorem 10.1.2] 41.3): Unipotent pairs Representations of W = µ 2 2 S 2 ([4], 1) (∅, [1 2 ]) ([2 2 ], 1) ([1], [1]) ([2 2 ], −1) (∅, [2]) ([2, 1 2 ], 1) ([1 2 ], ∅) ([1 4 ], 1) ([2], ∅) Table 4 . 4 Table 5.1]. To see which case we're in, note thatδ([η 2 , νη 2 ], θ) G δ+ corresponds to St GSpin 4 under Lusztig's equivalence E(GSp 4 , ⊗ ⊗ θ) ∼ = E(Z GSpin 5 (s), 1) = E(GSpin 4 , 1). Thus, dim δ([η 2 , νη 2 ], θ) I = δ([η 2 , νη 2 ], θ) G δ+ , R 1 T = St GSpin 4 , R 1 T = 1,and we are in case t a of [Ram03,Table 5.1]. Thus the L-parameter of δ([χ 2 , νχ 2 ], θ) is (ϕ σ Table 2.1]. Thus the L-parameters are:(ϕ σ,[1 4 ] , 1), (ϕ σ,[2,1 2 ] , 1), (ϕ σ,[2,1 2 ] , 1), (ϕ σ,[2 2 ] , 1). Table 2.1] so the L-parameters are (ϕ [1 4 ] , 1) and (ϕ [2 2 ] , 1). 7.2. Intermediate series for GSp 4 . Lemma 7.2.1. 
Let ϕ be a 2-dimensional irreducible semisimple representation of W F . Then ϕ| I F remains irreducible. Property 8.1.4. ([Art06, §2], and [Kal16, Conjecture B]) The elements of Π ϕ (G) are in bijection with Irr(S ϕ ). The following property is [Vog93, Conjecture 7.18], or equivalently [Hai14, Conjecture 5.2.2]. for eachBernstein series Irr s (G) of intermediate series. On the other hand, the bijection (8.1.25) holds for principal series blocks thanks to [Roc98, Ree02, ABPS16a, AMS18]. Property 8.1.26 (Functoriality). There is a commutative diagramΠ(GSp 2n ) Φ(GSp 2n ) Π(Sp 2n ) Φ(Sp 2n ) LLC std LLC which we define to be simply the ones that are not non-singular in the sense of [Kal21] There exist in literature different ways to normalize the Springer correspondences, see for example[CM84]; for constructing LLC, the normalization used sends the regular nilpotent orbit to the sign representation of W . Note that our normalization of the Springer correspondence differs with[CM84] by a sgn-twist. we certainly expect this property to hold for positive-depth L-packets as well. By case 4(b)iv we have G ϕ GSp 2,2 (C) and S ϕ µ 2 . By the discussion in §5, we have ϕ(η 2 ; χ) = ϕ δ([η 2 ,νη 2 ],ν −1/2 χ) . (c) Let π be a non-unipotent depth-zero singular supercuspidal representation of G. As recalled in (3.1.2), we have π = c-Ind G G[x]τ , where x is a vertex of the Bruhat-Tits building of G and τ is inflated from a representation in the Lusztig series E(G x , s) with s = 1. By Proposition 3.1.14, We have two cases, where x = α:• From §3.1.1 Proposition 3.1.14(3), the reductive quotient G δ ∼ = GSp 2,2 (F q ) := {(g, h) ∈ GL 2 (F q ) × GL 2 (F q ) : det(g) = det(h)} has a rational Lusztig series E(G x 1 , s), where s = (λ, λ) for some λ ∈ F q 2 such that λ q−1 = −1, with singular cuspidal representations ω η 2 cusp . Let π(η 2 ; χ) denote the compact induction c-IndGαZ (ω η 2 cusp ⊗ χ), for each unramified character χ of F × . There are two (depth-zero) ramified cubic characters η 2 and η 2 of F × . Define the following L-parameter with unipotent [2 2 ]:By case 4(b)iv we have G ϕ GSp 2,2 (C), the unipotent element u is regular in G ϕ , and S ϕ µ 2 . By the discussion in § 5, we have ϕ(η 2 ; χ) = ϕ δ([η 2 ,νη 2 ],ν −1/2 χ) , where δ([η 2 , νη 2 ], ν −1/2 χ) is the unique discrete series subquotient of νη 2 × η 2 ν −1/2 χ. By Proposition 6.5.5, we obtain two L-packets of size 2, for each i = 1, 2, 3,is an anisotropic maximal torus and θ is a character of T such that θ 2 is regular. This gives rise to the singular supercuspidal π (S,θ θ) , where θ is a regular character of E × , for an unramified quadratic extension E/F (see Definition 3.1.4). Let ϕ θ be the L-parameter which is χ 2 ⊕ Ind W F W E (θ) as a W F -representation, with unipotent SL 2 (C) acting on χ 2 .Then by the discussion in §5, the L-packet isLet π be a positive-depth singular supercuspidal representation of G. As in §5, such a singular supercuspidal representation necessarily arises from a self-dual supercuspidal representation π u of PGL 2 (F ), via the following recipe:• π u is a supercuspidal representation of GL 2 (F ), which corresponds to a nontrivial representation JL(π u ) of D × /F × under the Jacquet-Langlands correspondence, for D/F the quaternion algebra. The Kim-Yu type is given by a twisted Levi sequence (G 0 ⊂ · · · ⊂ G d = D × /F × ). • π has Kim-Yu type given by the twisted Levi sequence (G 0 ⊂ · · · ⊂ G d = D × /F × ⊂ GSp 4 (F )). 
It lives in a mixed L-packet together with δ(ν 1/2 π u ν −1/2 χ −1 ), the essentially tempered sub-representation of ν 1/2 π u ν −1/2 χ −1 . Letting ϕ be the L-parameter χ 2 ⊕ V where V is the W F -representation corresponding to ϕ u under the LLC for PGL 2 (F ), with unipotent [2, 1 2 ]. Then (8.2.7) Π ϕ (G) = {π, δ(ν 1/2 π u ν −1/2 χ −1 )} Let G be the group of F -rational points of the groups Sp 4 and GSp 4 . We suppose that the residual characteristic of F is different from 2.Appendix A. Applications to the Taylor-Wiles methodIn this appendix, we adopt notations consistent with standard literature on this topic, though these notations may differ slightly from our main text.We apply the theory developed in[Whi22], which gives a generalized Taylor-Wiles method (see for example[Tho22]) using input from (explicit) Local Langlands Correspondences (e.g.[RS07]), except that we are now equipped with our explicit Local Langlands Correspondence (1.1.2) LLC SX : π → (V π , N π ). (A.0.1)Here we switch to the notation (V π , N π ) loc.cit. instead of our original notations in (1.1.2). We work with Q p -coefficients by fixing an isomorphism ι : C ∼ − → Q p compatible with the choice of q 1/2 v as loc.cit. As in[BCGP21], we view LLC as sending an equivalence class of a smooth irreducible Q p -valued representation of GSp 4 (F v ) to a Weil-Deligne representation of W Fv valued in GSp(Q p ).Let g ∈T (k) for a split maximal torusT contained in a Borel subgroupB ofĜ. Let M g := ZĜ k (g) be the scheme-theoretic centralizer of g.Suppose that q v ≡ 1 mod p. Our explicit LLC gives the following "local lemmas" [Whi22, Propositions 5.18, 5.19], which are analogues for GSp 4 of [Tho22, Proposition 3.13].Proposition A.0.2 (Whitmore). Let π be an admissible irreducible Q p [G(F v )]-module such that (π p 1 ) n 1 = 0. Then (1) π is a subquotient of a parabolically induced representation i G B χ for some tamely ramified smooth character χ :(2) The characters through which O[T /T ∩p 1 ] W L acts on π p 1 are W G -conjugates of χ and there exists w ∈ W G such that wχ lifts χ. (3) The localized invariants (π p 1 ) n 1 are 1-dimensional and the action of O[T /(T ∩ p 1 )] W F is through wχ. (4) Finally, if LLC p (π) = (V π , N π ) is the Weil-Deligne representation associated to π under the Local Langlands Correspondence (1.1.2), then N π = 0.Proof. Statements (1)-(3) follow from [Whi22, Lemma 5.16]. To verify (4), one works case by case according to M g up to conjuacy.• Suppose that g is regular semisimple. In this case, L is a maximal torus and π is an irreducible principal series χ 1 × χ 2 σ. Then by §4 Case (4e), we have N π = 0. Let ϕ σ : W F → L ∨ be the L-parameter attached to σ by the Local Langlands Correspondence for L (see [BH06, LL79]). The L ∨ -conjugacy class of ϕ σ is uniquely determined by σ, and one can easily check that ϕ (χ•det)⊗σ = ϕ σ ⊗ ϕ χ (see for example. Recall from §2.2, L is conjugate to GL 1 × GL 1 × GSp 0 (resp. GL 1 × GL 1 × Sp 0 ), GL 2 × GSp 0 (resp. GL 2 × Sp 0 ) and GL 1 × GSp 2 (resp. GL 1 × Sp 2 ). Kal21, Proposition 3.4.6]), i.e. [AX22b, Property 3.12(1)] holds. This allows us to define (8.2.3) s ∨ := [L ∨ , (ϕ σ , 1)] G ∨When π ∈ Irr(G) is not supercuspidal, we have s = [L, σ] G where L is a proper Levi subgroup of G. Recall from §2.2, L is conjugate to GL 1 × GL 1 × GSp 0 (resp. GL 1 × GL 1 × Sp 0 ), GL 2 × GSp 0 (resp. GL 2 × Sp 0 ) and GL 1 × GSp 2 (resp. GL 1 × Sp 2 ). 
Let ϕ σ : W F → L ∨ be the L-parameter attached to σ by the Local Langlands Correspondence for L (see [BH06, LL79]). The L ∨ -conjugacy class of ϕ σ is uniquely determined by σ, and one can easily check that ϕ (χ•det)⊗σ = ϕ σ ⊗ ϕ χ (see for example [Kal21, Proposition 3.4.6]), i.e. [AX22b, Property 3.12(1)] holds. This allows us to define (8.2.3) s ∨ := [L ∨ , (ϕ σ , 1)] G ∨ . We have given explicit Kazhdan-Lusztig triples and L-packets in §7. We consider now the case where π is supercuspidal. Hence we have s = [G, π] G for π an irreducible supercuspidal representation of G. (a) When π is non-singular supercuspidal. − → Φ S ∨ E, for intermediate series) and in [ABPS16a] (for principal series). we define (ϕ π , ρ π ) to be the enhanced L-parameter constructed in [Kal19, Kal21− → Φ s ∨ e (G), established in [AX22b, Main Theorem] (for intermediate series) and in [ABPS16a] (for principal series). We have given explicit Kazhdan-Lusztig triples and L-packets in §7. We consider now the case where π is supercuspidal. Hence we have s = [G, π] G for π an irreducible supercuspidal representation of G. (a) When π is non-singular supercuspidal, we define (ϕ π , ρ π ) to be the enhanced L-parameter constructed in [Kal19, Kal21]. When π is a unipotent supercuspidal representation of G, we define (ϕ π , ρ π ) to be the enhanced L-parameter. constructed in [Lus95], [Mor96, § 5.6] and [Sol18] (see also [Sol23])When π is a unipotent supercuspidal representation of G, we define (ϕ π , ρ π ) to be the enhanced L-parameter constructed in [Lus95], [Mor96, § 5.6] and [Sol18] (see also [Sol23]). the reductive quotient G δ ∼ = GSp 4 (F q ) has a unique unipotent cuspidal representation θ 10 , giving unipotent supercuspidals π δ (θ 10 ⊗ χ) for each character χ. Define the following L-parameter ϕ(η; χ) with unipotent [2 2 ]: ϕ(η; χ) := diag( η χ, χ, χ, η χ). ∈ B(G). • x = δ: From §3.1.1 Proposition 3.1.14(2. where s ∨ = [L ∨ , (ϕ σ , ρ σ )] G ∨ , and also satisfies Properties 8.1.3, 8.1.4, 8.1.19, 8.1.20, 8.1.22. Moreover, we have Property 8.1.21 for depth-zero L-packets. 5• x = δ: From §3.1.1 Proposition 3.1.14(2), the reductive quotient G δ ∼ = GSp 4 (F q ) has a unique unipotent cuspidal representation θ 10 , giving unipotent supercuspidals π δ (θ 10 ⊗ χ) for each character χ. Define the following L-parameter ϕ(η; χ) with unipotent [2 2 ]: ϕ(η; χ) := diag( η χ, χ, χ, η χ). ∈ B(G), where s ∨ = [L ∨ , (ϕ σ , ρ σ )] G ∨ , and also satisfies Properties 8.1.3, 8.1.4, 8.1.19, 8.1.20, 8.1.22. Moreover, we have Property 8.1.21 for depth-zero L-packets. 5 . Moreover, Properties 8.1.3, 8.1.4, 8.1.5, 8.1.19, and 8.1.20 (and Property 8.1.26 for Sp 4 ) uniquely characterize our correspondenceMoreover, Properties 8.1.3, 8.1.4, 8.1.5, 8.1.19, and 8.1.20 (and Property 8.1.26 for Sp 4 ) uniquely characterize our correspondence. For GSp 4 , since the L-packets of the representations of the proper Levi subgroups of G are all singletons, the L-packet Π ϕπ (G) is a singleton. Hence, by Property 8.1.4, we have ρ π = 1. Thus the map (8.2.1) is uniquely characterized for non-tempered representations. This finishes the case of non-discrete series tempered representations. Property 8.1.20 holds for supercuspidal L-packets by. For the mixed L-packets, this can be seen directly from §8.2 and the lists loc.cit., where we specify which member in a given L-packet is generic. Since we have already treated the discrete series in 8.2, we are done. For Sp 4 (F ), this follows from Property 8.1.26. 
About the asymptotic behaviour of the martingale associated with the Vertex Reinforced Jump Process on trees and Z^d

V. Rapenne (Institut Camille Jordan)

Abstract. We study the asymptotic behaviour of the martingale (ψ_n(o))_{n∈N} associated with the Vertex Reinforced Jump Process (VRJP). We show that it is bounded in L^p for every p > 1 on trees and uniformly integrable on Z^d in all the transient phase of the VRJP. Moreover, when the VRJP is recurrent on trees, we have good estimates on the moments of ψ_n(o) and we can compute the exact decreasing rate τ such that n^{-1} ln(ψ_n(o)) ∼ −τ almost surely, where τ is related to standard quantities for branching random walks. Besides, on trees, at the critical point, we show that n^{-1/3} ln(ψ_n(o)) ∼ −ρ_c almost surely, where ρ_c can be computed explicitly. Furthermore, at the critical point, we prove that the discrete process associated with the VRJP is a mixture of positive recurrent Markov chains. Our proofs use properties of the β-potential associated with the VRJP and techniques coming from the domain of branching random walks.

1 Introduction and first definitions

Let (V, E) be a locally finite graph. Let W > 0. In [DV04], Davis and Volkov introduced a continuous self-reinforced random walk (Y_s)_{s≥0} known as the Vertex Reinforced Jump Process (VRJP), which is defined as follows: the VRJP starts from some vertex i_0 ∈ V and, conditionally on the past before time s, it jumps from a vertex i to one of its neighbours j at rate W L_j(s), where

$$L_j(s) = 1 + \int_0^s \mathbf{1}\{Y_u = j\}\,du.$$

In [ST15], Sabot and Tarrès defined the time change D such that for every s ≥ 0,

$$D(s) = \sum_{i\in V} \big(L_i(s)^2 - 1\big).$$

Then, they introduced the time-changed process (Z_t)_{t≥0} = (Y_{D^{-1}(t)})_{t≥0}. If V is finite, this process is easier to analyse than Y because it is a mixture of Markov processes whose mixing field has a density which is known explicitly. The density of the mixing field of Z was already known as a hyperbolic supersymmetric sigma model. This supersymmetric model was first studied in [DSZ10] and [DS10], and Sabot and Tarrès combined these previous works with their own results in order to make important progress in the knowledge of the VRJP. However, their formula for the density of the environment of the VRJP holds only on finite graphs. This difficulty was solved in [STZ17] and [SZ19], where Sabot, Tarrès and Zeng introduced a β-potential with distribution ν^W_V which allows one to represent the environment of the VRJP on infinite graphs. Thanks to this β-potential, Sabot and Zeng introduced a positive martingale (ψ_n(o))_{n∈N} which converges toward some random variable ψ(o). A remarkable fact is that ψ(o) = 0 if and only if the VRJP is recurrent. Moreover, they proved a 0-1 law for transitive graphs: on these graphs, the VRJP is either almost surely recurrent or almost surely transient.
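The jump mechanism just described is easy to simulate exactly: while the walk sits at i, only the local time L_i grows, so the rates W L_j toward the neighbours stay constant during each sojourn. The following minimal Python sketch illustrates this on a hypothetical 4-cycle with W = 1; it is only an illustration of the definition, not part of the arguments below.

import random

W = 1.0
# hypothetical example graph: a 4-cycle, vertices 0..3
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}

def simulate_vrjp(i0=0, n_jumps=10_000, seed=0):
    rng = random.Random(seed)
    L = {v: 1.0 for v in nbrs}          # local times, L_j(0) = 1
    i, visits = i0, {v: 0 for v in nbrs}
    for _ in range(n_jumps):
        rates = [W * L[j] for j in nbrs[i]]   # constant during the sojourn at i
        L[i] += rng.expovariate(sum(rates))   # exponential sojourn; only L_i grows
        i = rng.choices(nbrs[i], weights=rates)[0]
        visits[i] += 1
    return visits

print(simulate_vrjp())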
We can study the VRJP on any locally finite graph V. However, in this paper, we focus on the two most important cases:

• First, we can consider the case where V = Z^d. In this case, when d ∈ {1, 2}, the VRJP is always recurrent (see [SZ19], [Sab21] and [KP21]). On the contrary, when d ≥ 3, Sabot and Tarrès proved in [ST15] that the time-changed VRJP is recurrent for small W and transient for large W. Further, in [Pou19], thanks to a clever coupling of ψ_n(o) for different weights, Poudevigne proved that there is a unique transition point W_c(d) between recurrence and transience on Z^d if d ≥ 3.

• Another interesting case for the VRJP is when V is a tree. In this case, the environment of the VRJP is easy to describe thanks to independent Inverse Gaussian random variables. Using this representation of the environment, in [CZ18], Chen and Zeng proved that there is a unique phase transition between recurrence and transience on supercritical Galton-Watson trees for the time-changed VRJP. (This result was already proved in [BS12], but the proof of [BS12] was very different and did not use the representation of the VRJP as a mixture of Markov processes.) Furthermore, the transition point W_c(µ) can be computed explicitly and depends only on the mean of the offspring law µ of the Galton-Watson tree.

Therefore, if V is a Galton-Watson tree or Z^d with d ≥ 3, the following dichotomy is known: there exists W_c ∈ R*_+ (depending on V) such that

• if W < W_c, then a.s. ψ(o) = 0, i.e. the VRJP is recurrent;
• if W > W_c, then a.s. ψ(o) > 0, i.e. the VRJP is transient.

The recurrence of the VRJP can be regarded as a form of "strong disorder". Indeed, if W is small, the reinforcement, i.e. the disorder of the system compared to a simple random walk, is very strong. Therefore, the martingale (ψ_n(o))_{n∈N} associated with the system vanishes only when there is strong disorder. This situation is reminiscent of directed polymers in random environment; one can refer to [Com17] for more information on this topic. In the case of directed polymers, there is a positive martingale (M_n)_{n∈N} which converges toward a random variable M_∞. (M_n)_{n∈N} and (ψ_n(o))_{n∈N} play analogous roles in different contexts. Indeed, M_∞ > 0 a.s. if and only if the system exhibits "weak disorder", exactly as for ψ(o). However, on Z^d or on trees, it is possible that M_∞ > 0 a.s. while (M_n)_{n∈N} is not bounded in L^2 (see [CC09] and [BPP93]). Therefore, a natural question regarding (ψ_n(o))_{n∈N} is to know when it is bounded in L^p for a fixed value of p > 1. Moreover, as shown in the proof of Theorem 3 in [SZ19], L^p boundedness of the martingale (ψ_n(o))_{n∈N*} on Z^d for sufficiently large p implies the existence of a diffusive regime for the VRJP, i.e. the VRJP satisfies a central limit theorem. We would like to know whether this diffusive regime coincides with the transient regime or not. This gives another good reason to study the moments of (ψ_n(o))_{n∈N}.

Using [DSZ10], [SZ19] and [Pou19], one can prove that, on Z^d with d ≥ 3, for any p > 1, there exists a threshold W^(p)(d) such that (ψ_n(o))_{n∈N} is bounded in L^p for every W > W^(p)(d). However, we do not know whether W^(p)(d) = W_c(d) for every p > 1 or not. In this paper, we will prove that (ψ_n(o))_{n∈N} is uniformly integrable on Z^d as soon as the VRJP is transient. Moreover, we will prove that (ψ_n(o))_{n∈N} is bounded in L^p for any p > 1 as soon as W > W_c(µ) on trees. Furthermore, we will also look at the rate of convergence toward 0 of (ψ_n(o))_{n∈N} on trees when W < W_c(µ), under mild assumptions. We have an L^p version and an almost sure version of the estimate of the decay of (ψ_n(o))_{n∈N} toward 0. Finally, a natural question consists in finding the behaviour of the VRJP at the critical point W_c. On Galton-Watson trees, it was proved in [CZ18] and [BS12] that the time-changed VRJP is a mixture of recurrent Markov processes at the critical point.
In this paper, we prove that it is even a mixture of positive recurrent Markov processes. However, the asymptotic behaviour of the VRJP at the critical point on Z^d remains unknown. We will also compute the rate of convergence of (ψ_n(o))_{n∈N} on trees when W = W_c(µ).

2 Context and statement of the results

2.1 General notation

Let (V, E) be a locally finite countable graph with non-oriented edges. We assume that V has a root o. We write i ∼ j when {i, j} ∈ E. For every n ∈ N, we define V_n := {x ∈ V, d(o, x) ≤ n}, where d is the graph distance on (V, E). For every n ∈ N*, we denote the boundary of V_n, that is {i ∈ V_n, ∃ j ∈ V_n^c such that {i, j} ∈ E}, by ∂V_n. Let us denote by E_n the set of edges of V_n. If M is a matrix (or possibly an operator) with indices in a set A × B, then for every A' ⊂ A and B' ⊂ B, the restriction of M to A' × B' is denoted by M_{A',B'} = (M(i,j))_{(i,j)∈A'×B'}. If M is a symmetric matrix, we write M > 0 when M is positive definite.

In this article, we make extensive use of the Inverse Gaussian distribution. Recall that an Inverse Gaussian random variable with parameters (a, λ) ∈ (R*_+)² has density

$$\mathbf{1}\{x>0\}\left(\frac{\lambda}{2\pi x^3}\right)^{1/2}\exp\left(-\frac{\lambda(x-a)^2}{2a^2x}\right)dx. \tag{2.1}$$

The law of the Inverse Gaussian distribution with parameters (a, λ) ∈ (R*_+)² is denoted by IG(a, λ). For W > 0 and t ∈ R, if A ∼ IG(1, W), we write Q(W, t) = E[A^t]. A well-known property of the Inverse Gaussian distribution states that Q(W, t) = Q(W, 1 − t).

2.2 The β-potential and the martingale (ψ_n)_{n∈N}

Let (V, E) be an infinite countable graph with non-oriented edges. In this paper, the graph (V, E) will always have a special vertex o called the root. Actually, in our results, V is a rooted tree or Z^d with root 0. Let W > 0. In [SZ19], the authors introduced a random potential (β_i)_{i∈V} on V with distribution ν^W_V such that for every finite subset U ⊂ V and every (λ_i)_{i∈U} ∈ R^U_+,

$$\int \exp\Big(-\sum_{i\in U}\lambda_i\beta_i\Big)\nu^W_V(d\beta) = \exp\Bigg(-\frac{1}{2}\sum_{\substack{i\sim j\\ i,j\in U}} W\big(\sqrt{(1+\lambda_i)(1+\lambda_j)}-1\big) - \sum_{\substack{i\sim j\\ i\in U,\,j\notin U}} W\big(\sqrt{1+\lambda_i}-1\big)\Bigg)\prod_{i\in U}\frac{1}{\sqrt{1+\lambda_i}}. \tag{2.2}$$

Looking at the Laplace transform in (2.2), we see that (β_i)_{i∈V} is 1-dependent, that is, if U_1 and U_2 are finite subsets of V which are not connected by an edge, then (β_i)_{i∈U_1} and (β_i)_{i∈U_2} are independent under ν^W_V. Moreover, the restriction of this potential to finite subsets has a density which is known explicitly; we give the expression of this density in subsection 3.1. Furthermore, for every (β_i)_{i∈V}, let us introduce the operator H_β on V which satisfies

$$\forall (i,j)\in V^2,\quad H_\beta(i,j) = 2\beta_i\,\mathbf{1}\{i=j\} - W\,\mathbf{1}\{i\sim j\}.$$

By Proposition 1 in [SZ19], the support of ν^W_V is D^W_V = {β ∈ R^V, (H_β)_{U,U} is positive definite for all finite subsets U ⊂ V}. Therefore, under ν^W_V, for every n ∈ N, (H_β)_{V_n,V_n} is positive definite; in particular, it is invertible. We denote by Ĝ_n the inverse of (H_β)_{V_n,V_n}. Moreover, for n ∈ N and β ∈ D^W_V, let us define (ψ_n(i))_{i∈V} as the unique solution of the equation

$$\begin{cases} (H_\beta\,\psi_n)(i) = 0 & \forall i\in V_n,\\ \psi_n(i) = 1 & \forall i\in V_n^c.\end{cases} \tag{2.3}$$

The idea behind the definition of (ψ_n)_{n∈N} is to create an eigenstate of H_β when n goes to infinity. We can make n go to infinity thanks to the following proposition:

Proposition A (Theorem 1 in [SZ19]). For any i, j ∈ V, (Ĝ_n(i,j))_{n∈N*} is increasing ν^W_V-a.s. In particular, there exists a random variable Ĝ(i,j) such that Ĝ_n(i,j) → Ĝ(i,j) as n → +∞, ν^W_V-a.s. Further, for any i, j ∈ V, Ĝ(i,j) < +∞, ν^W_V-a.s.
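For a concrete view of definition (2.3) and Proposition A: the restriction of ψ_n to V_n solves (H_β)_{V_n,V_n} ψ_n = η^{(n)}, where η^{(n)}_i is W times the number of neighbours of i outside V_n. The following Python sketch solves this system on a segment of Z with a hypothetical deterministic potential β_i = 1 + W (diagonally dominant, hence in D^W_V); one can observe numerically that Ĝ_n(o,o) increases with n, as in Proposition A.

import numpy as np

W = 0.8

def psi_and_G(n, beta):
    # V_n = {-n, ..., n} viewed as a segment of Z; beta assumed in the support D_W^V
    m = 2 * n + 1
    H = 2 * np.diag(beta)
    for k in range(m - 1):                 # edges of the segment
        H[k, k + 1] = H[k + 1, k] = -W
    G = np.linalg.inv(H)                   # \hat G_n
    eta = np.zeros(m)
    eta[0] = eta[-1] = W                   # one outside neighbour at each end
    return G @ eta, G                      # psi_n restricted to V_n, and \hat G_n

for n in (2, 4, 8):
    beta = np.full(2 * n + 1, 1.0 + W)     # hypothetical admissible potential
    psi, G = psi_and_G(n, beta)
    print(n, psi[n], G[n, n])              # index n is the root o of the segment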
Moreover, (ψ_n)_{n∈N} is a vectorial martingale with positive components. In particular, for every i ∈ V, the martingale (ψ_n(i))_{n∈N} has an almost sure limit which is denoted by ψ(i). Besides, (Ĝ_n)_{n∈N} is the bracket of (ψ_n)_{n∈N} in the sense that, for every i, j ∈ V, (ψ_n(i)ψ_n(j) − Ĝ_n(i,j))_{n∈N} is a martingale.

This martingale (ψ_n)_{n∈N} is crucial in order to study the asymptotic behaviour of the VRJP. One reason for this is that a representation of the environment of the discrete random walk associated with the VRJP starting from i_0 is given by (W G(i_0,j)G(i_0,i))_{{i,j}∈E}, where for every (i,j) ∈ V²,

$$G(i,j) = \hat G(i,j) + \frac{1}{2\gamma}\,\psi(i)\psi(j),$$

γ being a random variable with distribution Γ(1/2, 1) which is independent of the random potential β. We will say more about the link between the VRJP and (ψ_n)_{n∈N} in Proposition B. Before this, let us give some notation.

We say that (ψ_n(o))_{n∈N} is bounded in L^p if sup_{n∈N} E_{ν^W_V}[ψ_n(o)^p] < +∞. We say that (ψ_n(o))_{n∈N} is uniformly integrable if lim_{K→+∞} sup_{n∈N} E_{ν^W_V}[ψ_n(o) 1{ψ_n(o) ≥ K}] = 0. We denote by (Z̄_n)_{n∈N} the discrete-time process associated with the VRJP, that is, the VRJP taken at jump times. We will see that it is a mixture of discrete random walks. Let us introduce the probability measure P^{VRJP}_{V,W} under which (Z̄_n)_{n∈N} is the discrete-time process associated with the VRJP on a graph V with constant weights W starting from o.

2.3.2 Notation for the VRJP on trees

If V is a rooted tree, there is a natural genealogical order ≤ on V. For u ∈ V, the parent of u is denoted by ⃗u and the generation of u is denoted by |u|. If (x,u) ∈ V² with x ≤ u, then |u|_x = |u| − |x|. If V is a Galton-Watson tree with offspring law µ, let us denote by GW_µ the law of V. Then, let us define the probability measure P_{µ,W} under which we first choose randomly the graph V with distribution GW_µ and then we choose randomly the potential (β_i)_{i∈V} with distribution ν^W_V. Moreover, we define P^{VRJP}_{µ,W} under which we first choose randomly the graph V with distribution GW_µ and then we choose randomly a trajectory on V with distribution P^{VRJP}_{V,W}. We write E_{µ,W}(·) and E^{VRJP}_{µ,W}(·) when we integrate with respect to P_{µ,W} and P^{VRJP}_{V,W} respectively.

2.4 The phase transition

The martingale ψ is very important in order to understand the recurrence or transience of the VRJP, as explained by the following proposition:

Proposition B ([ST15], [SZ19], [Pou19] and [CZ18]). Let us assume that (V,E) is Z^d. Then there exists W_c(d) > 0 depending only on d such that:
• If W < W_c(d), ν^W_d-a.s., for every i ∈ Z^d, ψ(i) = 0 and the VRJP is recurrent.
• If W > W_c(d), ν^W_d-a.s., for every i ∈ Z^d, ψ(i) > 0 and the VRJP is transient.
Moreover, if V is a Galton-Watson tree with offspring law µ, there exists W_c(µ) > 0, depending only on the mean of µ, such that:
• If W ≤ W_c(µ), P_{µ,W}-a.s., for every i ∈ V, ψ(i) = 0 and the VRJP is recurrent.
• If W > W_c(µ), P_{µ,W}-a.s., for every i ∈ V, ψ(i) > 0 and the VRJP is transient.

For now, on Z^d, we are not able to estimate the moments of the martingale (ψ_n(o))_{n∈N} in the transient phase. However, when d ≥ 3, we can prove uniform integrability of this martingale in the transient phase.

Theorem 1. We assume that V = Z^d with d ≥ 3 and that W > W_c(d). Then the martingale (ψ_n(o))_{n∈N} is uniformly integrable.

2.5 Results on Galton-Watson trees

Let µ be a probability measure on N. In this paper, we use the following hypotheses for Galton-Watson trees:
• Hypothesis A_1: µ(0) = 0 and m := Σ_{k=1}^{+∞} k µ(k) > 1.
• Hypothesis A_2: µ(1) = 0.
• Hypothesis A_3: there exists δ > 0 such that Σ_{k=1}^{+∞} k^{1+δ} µ(k) < +∞.

Our first theorem on trees states that, if V is a Galton-Watson tree, (ψ_n(o))_{n∈N} is bounded in L^p as soon as the VRJP is transient.
Theorem 2. Let V be a Galton-Watson tree with offspring law µ satisfying hypothesis A_1. Let W > W_c(µ). Then, for every p ∈ ]1, +∞[, the martingale (ψ_n(o))_{n∈N} is bounded in L^p, GW_µ-a.s.

In the recurrent phase, we already know that ψ_n(o) → 0 a.s. on any graph as n goes to infinity. Thanks to the theory of branching random walks and the representation of the VRJP with the β-potential, we are able to be much more accurate on trees. Let us introduce some notation related to branching random walks in order to give the precise asymptotics of (ψ_n(o))_{n∈N}. For every m > 1 and W > 0, we define

$$f_{m,W}\colon \mathbb R \to \mathbb R,\qquad t \mapsto \ln\big(m\,Q(W,t)\big).$$

Moreover, we will prove in Step 1 of the proof of Theorem 3 that there exists a unique t*(m, W) > 0 such that

$$f'_{m,W}(t^*(m,W)) = \frac{f_{m,W}(t^*(m,W))}{t^*(m,W)}. \tag{2.4}$$

Then, we define τ(m, W) = −f'_{m,W}(t*(m, W)). Thanks to these quantities, we are able to describe the asymptotics of (ψ_n(o))_{n∈N} in the two following results. First, we can estimate the moments of (ψ_n(o))_{n∈N}.

Theorem 3. Let V be a Galton-Watson tree with offspring law µ satisfying hypotheses A_1, A_2 and A_3. Let W < W_c(µ). Then we have the following moment estimates:
(i) ∀p > 0, E_{µ,W}[ψ_n(o)^{−p}] = E_{µ,W}[ψ_n(o)^{1+p}] = e^{npτ(m,W)+o(n)};
(ii) ∀p ∈ ]1 − t*(m,W), 1[, E_{µ,W}[ψ_n(o)^p] = E_{µ,W}[ψ_n(o)^{1−p}] ≤ e^{−n(1−p)τ(m,W)+o(n)},
with τ(m,W) > 0 and 0 < t*(m,W) < 1/2.

Remark 2.1. In Theorem 3, remark that we cannot estimate all the moments of (ψ_n(o))_{n∈N}. This is due to the non-integrability of high moments of some quantities related to branching random walks. We will be more precise in Proposition K.

The previous theorem gives good estimates of the moments of (ψ_n(o))_{n∈N}. Moreover, it is also possible to give the exact almost sure decreasing rate of (ψ_n(o))_{n∈N} if W < W_c(µ).

Theorem 4. Let V be a Galton-Watson tree with offspring law µ satisfying hypotheses A_1 and A_3. Let W < W_c(µ). Then it holds that, P_{µ,W}-a.s.,

$$\lim_{n\to+\infty} \frac{\ln(\psi_n(o))}{n} = -\tau(m,W),$$

with τ(m,W) > 0.

The following proposition gives an estimate of the behaviour of the decreasing rate τ(m,W) near the critical point W_c(µ).

Proposition 2.1. Let V be a Galton-Watson tree with offspring law µ satisfying hypothesis A_1. In the neighbourhood of the critical point W_c(µ),

$$\tau(m,W) \underset{W\to W_c(\mu)}{\sim} \alpha(m)\,(W_c(\mu)-W)\quad\text{where}\quad \alpha(m) = 2 + \frac{1}{W_c(\mu)} - 2m\,\frac{K_1(W_c(\mu))}{K_{1/2}(W_c(\mu))} > 0,$$

where K_α is the modified Bessel function of the second kind with index α.

Following basically the same lines as in the proofs of the previous estimates on (ψ_n(o))_{n∈N}, we deduce information on the asymptotic behaviour of the VRJP when W < W_c(µ). More precisely, we can estimate the probability for the VRJP to reach generation n before coming back to the root o when W < W_c(µ). Remind that (Z̄_k)_{k∈N} is the discrete-time process associated with the VRJP on the rooted tree V starting from o. We define τ^+_o = inf{k ∈ N*, Z̄_k = o} and, for every n ∈ N*, τ_n = inf{k ∈ N*, |Z̄_k| = n}. Recall that the probability measure P^{VRJP}_{µ,W} is defined in paragraph 2.3.2.

Proposition 2.2. Let V be a Galton-Watson tree with offspring law µ satisfying hypotheses A_1, A_2 and A_3. Let W < W_c(µ). Then we have the following estimate:

$$-2\tau(m,W) \le \liminf_{n\to+\infty} \frac{\ln P^{VRJP}_{\mu,W}(\tau^+_o > \tau_n)}{n} \quad\text{and}\quad \limsup_{n\to+\infty} \frac{\ln P^{VRJP}_{\mu,W}(\tau^+_o > \tau_n)}{n} \le -\tau(m,W)\times t^*(m,W),$$

where 0 < t*(m,W) < 1/2.
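The quantities t*(m,W) and τ(m,W) can be evaluated numerically. A sketch follows, using the closed form Q(W,t) = e^W √(2W/π) K_{t−1/2}(W), which follows from the density (2.1) and the integral representation of the Bessel function K, together with a hypothetical mean m = 2; W_c(µ) is computed from the characterization m Q(W_c, 1/2) = 1 of Proposition H below. In accordance with Proposition 2.1, the printed τ(m,W) vanishes linearly as W approaches W_c(µ).

import numpy as np
from scipy.optimize import brentq
from scipy.special import kv

m = 2.0                                     # hypothetical mean offspring number

def Q(W, t):                                # E[A^t] for A ~ IG(1, W)
    return np.exp(W) * np.sqrt(2 * W / np.pi) * kv(t - 0.5, W)

def f(W, t):                                # f_{m,W}(t) = ln(m Q(W,t))
    return np.log(m * Q(W, t))

def fprime(W, t, h=1e-6):
    return (f(W, t + h) - f(W, t - h)) / (2 * h)

def t_star(W):                              # unique root of t f'(t) - f(t) = 0, cf. (2.4)
    return brentq(lambda t: t * fprime(W, t) - f(W, t), 1e-6, 0.5)

W_c = brentq(lambda W: f(W, 0.5), 1e-4, 50.0)   # m Q(W_c, 1/2) = 1
for W in (0.5 * W_c, 0.9 * W_c, 0.99 * W_c):
    ts = t_star(W)
    print(W, ts, -fprime(W, ts))            # t*(m,W) < 1/2 and tau(m,W) -> 0 at W_c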
Now, let us look at the behaviour of the martingale (ψ_n(o))_{n∈N} at the critical point W_c(µ).

Theorem 5. Let V be a Galton-Watson tree with offspring law µ satisfying hypotheses A_1 and A_3. We assume that W = W_c(µ). Then, under P_{µ,W},

$$\frac{\ln(\psi_n(o))}{n^{1/3}} \xrightarrow[n\to+\infty]{a.s.} -\rho_c,\quad\text{where}\quad \rho_c = \frac{1}{2}\left(\frac{3\pi^2\sigma^2}{2}\right)^{1/3}\ \text{with}\ \sigma^2 = 16m\int_0^{+\infty} \frac{\sqrt{W_c(\mu)}\,\ln(x)^2}{\sqrt{2\pi x}}\, e^{-\frac{W_c(\mu)}{2}\left(x+\frac 1x-2\right)}\,dx.$$

Remark 2.3. At the critical point, we are not able to obtain precise L^p bounds for ψ_n(o). Indeed, in the subcritical phase, we have subexponential bounds for some functionals associated with branching random walks. At the critical point, we would need to be more accurate.

The recurrence of the VRJP on trees at the critical point W_c(µ) was already known. The following theorem states that the VRJP on trees is even positive recurrent at the critical point. This result is of a different kind than the previous ones; however, its proof requires the same tools as before.

Theorem 6. Let V be a Galton-Watson tree with offspring law µ satisfying hypotheses A_1 and A_3. We assume that W = W_c(µ). Then the discrete-time VRJP (Z̄_n)_{n∈N} associated with (Z_t)_{t≥0} is a mixture of positive recurrent Markov chains.
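The constants in Theorem 5 are explicit and can be evaluated numerically. A sketch follows, with the integrand of σ² taken exactly as displayed above, a hypothetical mean m = 2, and W_c(µ) obtained from m Q(W_c, 1/2) = 1 written through the Bessel function K_0.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import k0

m = 2.0                                             # hypothetical mean offspring number
Q_half = lambda W: np.exp(W) * np.sqrt(2 * W / np.pi) * k0(W)   # Q(W, 1/2)
W_c = brentq(lambda W: m * Q_half(W) - 1.0, 1e-4, 50.0)

integrand = lambda x: np.sqrt(W_c / (2 * np.pi * x)) * np.log(x) ** 2 \
                      * np.exp(-0.5 * W_c * (x + 1.0 / x - 2.0))
sigma2 = 16 * m * quad(integrand, 0, np.inf)[0]
rho_c = 0.5 * (3 * np.pi ** 2 * sigma2 / 2) ** (1.0 / 3.0)
print(W_c, sigma2, rho_c)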
3 Background

3.1 Marginals and conditional laws of the β-potential

The law ν^W_V introduced in Section 1 was originally defined on finite graphs in [STZ17] with general weights. More precisely, on a finite set S, we can define a β-potential with law ν̃^{P,η}_S for every (η_i)_{i∈S} ∈ R^S_+ and every P = (W_{i,j})_{(i,j)∈S²} ∈ R^{S²}_+. One can remark that the weights in the matrix P are not assumed to be constant anymore. Moreover, we allow loops, that is, W_{i,i} can be non-zero for every i ∈ S. The term η is a boundary term which represents the weights of some edges relating S to virtual vertices outside S. The probability measure ν̃^{P,η}_S is defined in the following way: by Lemma 4 in [SZ19], the function

$$\beta \mapsto \mathbf{1}\{H^{(S)}_\beta > 0\}\left(\frac{2}{\pi}\right)^{|S|/2} e^{-\frac{1}{2}\langle 1, H^{(S)}_\beta 1\rangle - \frac{1}{2}\langle \eta, (H^{(S)}_\beta)^{-1}\eta\rangle + \langle \eta, 1\rangle}\,\frac{1}{\sqrt{\det H^{(S)}_\beta}} \tag{3.1}$$

is a density, where H^{(S)}_β is the matrix on S × S defined by H^{(S)}_β(i,j) = 2β_i 1{i=j} − W_{i,j} 1{i∼j}, and 1 stands for the vector (1, ..., 1) in R^S in the expression (3.1). Then, we can define a probability measure with the density (3.1) and we denote it by ν̃^{P,η}_S(dβ). Besides, the Laplace transform of ν̃^{P,η}_S can be computed, and it is very similar to the Laplace transform of ν^W_V. Indeed, for any λ ∈ R^S_+,

$$\int e^{-\langle\lambda,\beta\rangle}\,\tilde\nu^{P,\eta}_S(d\beta) = e^{-\langle\eta,\sqrt{\lambda+1}-1\rangle - \frac{1}{2}\sum_{i\sim j} W_{i,j}\left(\sqrt{(1+\lambda_i)(1+\lambda_j)}-1\right)}\prod_{i\in S}(1+\lambda_i)^{-1/2},$$

where √(1+λ) is the vector (√(1+λ_i))_{i∈S}. Further, the family of distributions of the form ν̃^{P,η}_S behaves very well with respect to marginals and conditional laws: both are still of the form ν̃^{P,η}_S. The following lemma gives the corresponding formulas:

Lemma C (Lemma 5 in [SZ19]). Let S be a finite set. Let U ⊂ S be a subset of S. Let (η_i)_{i∈S} ∈ R^S_+ and P = (W_{i,j})_{(i,j)∈S²} ∈ R^{S²}_+. Under ν̃^{P,η}_S,
(i) β_U has law ν̃^{P_{U,U}, η̂_U}, where for every i ∈ U, η̂_i = η_i + Σ_{j∈U^c} W_{i,j};
(ii) conditionally on β_U, β_{U^c} has distribution ν̃^{P̌, η̌_{U^c}}, where P̌ and η̌ are defined in the following way:

$$\forall (i,j)\in U^c\times U^c,\quad \check P(i,j) = \check W_{i,j} = W_{i,j} + \sum_{k\sim i,\,k\in U}\ \sum_{l\sim j,\,l\in U} W_{i,k}\,W_{j,l}\,(H_\beta)^{-1}_{U,U}(k,l),$$

$$\forall i\in U^c,\quad \check\eta_i = \eta_i + \sum_{k\sim i,\,k\in U}\ \sum_{l\in U} W_{i,k}\,(H_\beta)^{-1}_{U,U}(k,l)\,\eta_l.$$

In [SZ19], the infinite potential ν^W_V is defined thanks to a sequence of potentials of the form ν̃^{P,η}_{V_n} on the exhausting sequence (V_n)_{n∈N}, which is shown to be compatible. More precisely, the restrictions of ν^W_V are given by the following lemma:

Lemma D. Let n ∈ N*. Let (β_i)_{i∈V} be a random potential following ν^W_V. Then (β_i)_{i∈V_n} is distributed as ν̃^{P̃^{(n)}, η̃^{(n)}}_{V_n}, where
• for every i, j ∈ V_n, P̃^{(n)}(i,j) = W 1{i ∼ j};
• for every i ∈ V_n, η̃^{(n)}_i = Σ_{j∼i, j∉V_n} W.
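Lemmas C and D are purely linear-algebraic recipes and can be checked mechanically. Here is a short sketch computing the marginal and conditional parameters on a hypothetical 4-vertex path; all numerical values are illustrative.

import numpy as np

W = 1.0
P = np.zeros((4, 4))                     # S = {0,1,2,3} on a path 0-1-2-3, no loops
for k in range(3):
    P[k, k + 1] = P[k + 1, k] = W
eta = np.array([0.3, 0.0, 0.0, 0.7])
U, Uc = [1, 2], [0, 3]
beta_U = np.array([1.4, 1.6])            # some admissible values on U

H_UU = 2 * np.diag(beta_U) - P[np.ix_(U, U)]
G_UU = np.linalg.inv(H_UU)               # (H_beta)^{-1}_{U,U}

eta_hat = eta[U] + P[np.ix_(U, Uc)].sum(axis=1)          # Lemma C (i)
P_check = P[np.ix_(Uc, Uc)] + P[np.ix_(Uc, U)] @ G_UU @ P[np.ix_(U, Uc)]   # Lemma C (ii)
eta_check = eta[Uc] + P[np.ix_(Uc, U)] @ G_UU @ eta[U]
print(eta_hat, P_check, eta_check)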
3.2 Warm-up about the VRJP

Recall that (Z_t)_{t≥0} := (Y_{D^{-1}(t)})_{t≥0} is a time-changed version of the VRJP with constant weights W on the graph V. As explained before, (Z_t)_{t≥0} is easier to analyse than (Y_t)_{t≥0} because it is a mixture of Markov processes. In the particular case of finite graphs, Sabot and Tarrès gave an explicit description of the density of a random field associated with the environment.

Proposition E (Theorem 2 in [ST15]). Let (V,E) be a finite graph. Let W > 0. Then the time-changed VRJP (Z_t)_{t≥0} on V with constant weights W > 0 starting from i_0 ∈ V is a mixture of Markov processes. Moreover, it jumps from i to j at rate W e^{U_j − U_i}, where the field (U_i)_{i∈V} has the following density on the set {(u_i)_{i∈V} ∈ R^V, u_{i_0} = 0}:

$$\frac{1}{\sqrt{2\pi}^{\,|V|-1}}\exp\Bigg(-\sum_{i\in V} u_i - W\sum_{\{i,j\}\in E}\big(\cosh(u_i-u_j)-1\big)\Bigg)\sqrt{D(W,u)}\prod_{i\in V\setminus\{i_0\}} du_i,$$

with D(W,u) = Σ_{T∈\mathcal T} Π_{{i,j}∈T} W e^{u_i+u_j}, where \mathcal T is the set of spanning trees of (V,E).

This density was originally studied in [DSZ10] in order to study random band matrices. Remark that the distribution of U has no obvious compatibility property, so it was not possible to extend the field U to a general infinite graph directly. However, in [STZ17], Sabot, Tarrès and Zeng introduced a smart change of variable which relates the field U and the β-potential. More precisely, if (V,E) is a finite graph, then the field U of Proposition E rooted at i_0 is distributed as (G^{(V)}(i_0,i)/G^{(V)}(i_0,i_0))_{i∈V}, where G^{(V)} is the inverse of H^{(V)}_β, the operator associated with the potential β with distribution ν̃^{P,0}_V, where P(i,j) = W 1{i∼j}. In order to obtain a representation of the environment of the VRJP on infinite graphs, Sabot and Zeng extended the β-potential to infinite graphs thanks to the measure ν^W_V, and they proved the following result:

Proposition F (Theorem 1 in [SZ19]). If V is Z^d with d ≥ 1 or an infinite tree, then the time-changed VRJP (Z_t)_{t≥0} on V with constant weights W > 0 is a mixture of Markov processes. Moreover, the associated random environment can be described in the following way: if the VRJP started from i_0, it jumps from i to j at rate (1/2) W G(i_0,j)/G(i_0,i), where for every (i,j) ∈ V²,

$$G(i,j) = \hat G(i,j) + \frac{1}{2\gamma}\,\psi(i)\psi(j),$$

γ being a random variable with law Γ(1/2,1) which is independent of the β-potential with distribution ν^W_V.

In [Ger20], Gerard proved that, in the case of trees, in the transient phase, there are infinitely many different representations of the environment of the VRJP. In this paper, we often use a representation which is not the one given in Proposition F; let us now describe it.

3.3 Specificities of the tree

In the density given in Proposition E, if the graph is a tree, one can observe that the random variables U_i − U_⃗i are i.i.d. and distributed as the logarithm of an Inverse Gaussian random variable. This comes from the fact that the determinant term in the density becomes a product. Therefore, when the graph (V,E) is an infinite tree with a root o, it is natural to define an infinite version of the field U in the following way: for every i ∈ V,

$$e^{U_i} := \prod_{o < u \le i} A_u,$$

where (A_i)_{i∈V\setminus\{o\}} is a family of independent Inverse Gaussian random variables with parameters (1, W). This representation directly implies the following result:

Proposition G (Theorem 3 in [CZ18]). If V is a tree with root o, the discrete-time VRJP (Z̄_n)_{n∈N} associated with (Z_t)_{t≥0} is a random walk in random environment whose random conductances are given by

$$c(i,\vec i) = W e^{U_i + U_{\vec i}} = W A_i \prod_{o<u\le \vec i} A_u^2 \quad\text{for every } i \in V\setminus\{o\}.$$

This representation of the environment of the VRJP on trees is particularly useful because the conductances are almost products of i.i.d. random variables along a branch of the tree. This situation is very close to branching random walks, and this observation is crucial for the proofs in this paper. In particular, thanks to this representation and its link with branching random walks, it is much easier to compute the critical point on Galton-Watson trees.

Proposition H (Theorem 1 in [CZ18] or Theorem 1 in [BS12]). Let V be a Galton-Watson tree with offspring law µ satisfying hypothesis A_1. Then the VRJP on V with constant weights W is recurrent if and only if m Q(W, 1/2) ≤ 1, where m is the mean of µ. In particular, the critical point W_c(µ) is the only solution of the equation m Q(W, 1/2) = 1.

Now, remind that our goal is to study the martingale (ψ_n(o))_{n∈N}, which is defined through the potential β. If V is an infinite tree with a special vertex o called the root, we can couple the field U and the potential β in the following way: for every i ∈ V, we define

$$\tilde\beta_i := \frac{W}{2}\sum_{i\sim j} e^{U_j - U_i} = \frac{W}{2}\Bigg(\sum_{\vec u = i} A_u + \mathbf{1}\{i\ne o\}\,\frac{1}{A_i}\Bigg). \tag{3.2}$$

For every i ∈ V, β̃_i can be interpreted as the total jump rate of the VRJP at i. The potential β̃ is very important for our purposes; one reason for that is Lemma 4.4, which makes a link between the effective resistance associated with the VRJP and some quantity defined through (β̃_i)_{i∈V}. Now, let γ be a Gamma random variable with parameters (1/2, 1) which is independent of (A_i)_{i∈V\setminus\{o\}}. Then, let us define

$$\beta = \tilde\beta + \mathbf{1}\{\cdot = o\}\,\gamma. \tag{3.3}$$

Lemma 3.1. Let us assume that V is a tree. Let W > 0. Then the potential (β_i)_{i∈V} defined by (3.3) has law ν^W_V.

Proof of Lemma 3.1. This is a direct consequence of Theorem 3 in [CZ18] and Corollary 2 in [STZ17].

From now on, when we work on a tree V, we always assume that, under ν^W_V, the potential (β_i)_{i∈V} is defined by (3.2) and (3.3). This coupling between the field U and the potential (β_i)_{i∈V} is very important in order to relate our questions regarding the martingale (ψ_n(o))_{n∈N} to tractable questions about branching random walks, and it allows us to apply techniques coming from the area of branching random walks in order to study (ψ_n(o))_{n∈N}.
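The coupling (3.2)-(3.3) gives a direct way to sample the potential β on a tree. A sketch follows on a hypothetical deterministic binary tree (a Galton-Watson tree with µ = δ_2); note that on the truncated tree the last stored generation lacks its children, so its β̃ values are incomplete and should only be used as boundary data.

import numpy as np
rng = np.random.default_rng(0)

W, m, depth = 1.0, 2, 6                 # hypothetical binary tree, truncated at depth 6
N = sum(m ** k for k in range(depth + 1))
parent = [None] + [(v - 1) // m for v in range(1, N)]
children = [[] for _ in range(N)]
for v in range(1, N):
    children[parent[v]].append(v)

A = rng.wald(1.0, W, size=N)            # A_i ~ IG(1, W); A[0] is unused
U = np.zeros(N)
for v in range(1, N):
    U[v] = U[parent[v]] + np.log(A[v])  # e^{U_i} = product of the A_u along the branch

beta = np.zeros(N)                      # (3.2); last generation is incomplete here
for v in range(N):
    beta[v] = 0.5 * W * (sum(A[u] for u in children[v])
                         + (1.0 / A[v] if v > 0 else 0.0))
beta[0] += rng.gamma(0.5)               # the Gamma(1/2,1) twist at the root, cf. (3.3)
print(beta[:5])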
3.4 β-potential and path expansions

In this subsection, we explain how Ĝ can be interpreted as a sum over a set of paths; this representation of Ĝ will be very useful in the sequel of this paper. A path from i to j in the graph (V,E) is a finite sequence σ = (σ_0, ..., σ_m) in V such that σ_0 = i, σ_m = j and σ_k ∼ σ_{k+1} for every k ∈ {0, ..., m−1}. Let us denote by P^V_{i,j} the set of paths from i to j in V. Let us also introduce P̄^V_{i,j}, the set of paths from i to j which never hit j before the end of the path; more precisely, it is the set of paths σ = (σ_0, ..., σ_m) such that σ_0 = i, σ_m = j and σ_k ≠ j for every k ∈ {0, ..., m−1}. For any path σ = (σ_0, ..., σ_m), we denote its length by |σ| = m. For any path σ in V and any β ∈ D^W_V, let us write

$$(2\beta)_\sigma = \prod_{k=0}^{|\sigma|}(2\beta_{\sigma_k}),\qquad (2\beta)^-_\sigma = \prod_{k=0}^{|\sigma|-1}(2\beta_{\sigma_k}).$$

Then, the following lemma stems directly from Proposition 6 in [SZ19]:

Lemma I (Proposition 6 in [SZ19]). Let (V,E) be any locally finite graph. Let W > 0. Let i, j ∈ V. For any β ∈ D^W_V,

$$\hat G(i,j) = \sum_{\sigma\in P^V_{i,j}} \frac{W^{|\sigma|}}{(2\beta)_\sigma},\qquad \frac{\hat G(i,j)}{\hat G(i,i)} = \sum_{\sigma\in \bar P^V_{j,i}} \frac{W^{|\sigma|}}{(2\beta)^-_\sigma}.$$

In the special case of trees, we can combine this property with the construction given in subsection 3.3 in order to obtain the following lemma.

Lemma 3.2. Let V be a Galton-Watson tree with a root o and an offspring law µ satisfying hypothesis A_1. Let us assume that W ≤ W_c(µ). Then, P_{µ,W}-a.s., for every i ∈ V,

$$\frac{\hat G(o,i)}{\hat G(o,o)} = e^{U_i}.$$

Proof of Lemma 3.2. Let us assume that the β-potential is constructed as in subsection 3.3. Let us consider the Markov chain (Z̄_k)_{k∈N} on V with conductances given by c(i,⃗i) = W A_i^{-1} Π_{o<u≤i} A_u² = W e^{U_i + U_⃗i} for every i ∈ V. Actually, by Proposition G, Z̄ is the discrete-time process associated with the VRJP. Let us remark that for every i ∈ V,

$$\pi_i := \sum_{j\sim i} c(i,j) = e^{2U_i}\,2\tilde\beta_i.$$

We denote by P_{c,i} the probability measure associated with this Markov chain Z̄ starting from i with random conductances c. Let us introduce the stopping time τ_o = inf{n ∈ N, Z̄_n = o}. If σ is a path, we write {Z̄ ∼ σ} to mean that Z̄_0 = σ_0, Z̄_1 = σ_1, etc. Then it holds that, P_{µ,W}-a.s., for every i ∈ V,

$$P_{c,i}(\tau_o<+\infty) = \sum_{\sigma\in\bar P^V_{i,o}} P_{c,i}(\bar Z\sim\sigma) = \sum_{\sigma\in\bar P^V_{i,o}} \prod_{k=0}^{|\sigma|-1} \frac{W e^{U_{\sigma_k}+U_{\sigma_{k+1}}}}{\pi_{\sigma_k}} = \sum_{\sigma\in\bar P^V_{i,o}} \prod_{k=0}^{|\sigma|-1} \frac{W e^{U_{\sigma_{k+1}}-U_{\sigma_k}}}{2\tilde\beta_{\sigma_k}}. \tag{3.4}$$

There is a telescoping product in (3.4). Consequently, we deduce that P_{µ,W}-a.s., for every i ∈ V,

$$P_{c,i}(\tau_o<+\infty) = e^{-U_i} \sum_{\sigma\in\bar P^V_{i,o}} \prod_{k=0}^{|\sigma|-1} \frac{W}{2\tilde\beta_{\sigma_k}}. \tag{3.5}$$

In identity (3.5), remark that σ_k is always different from o. Therefore, β̃ can be replaced by β, and we obtain that P_{µ,W}-a.s., for every i ∈ V,

$$P_{c,i}(\tau_o<+\infty) = e^{-U_i} \sum_{\sigma\in\bar P^V_{i,o}} \prod_{k=0}^{|\sigma|-1} \frac{W}{2\beta_{\sigma_k}}. \tag{3.6}$$

In (3.6), one can observe the same quantity as in Lemma I. Therefore, P_{µ,W}-a.s., for every i ∈ V,

$$P_{c,i}(\tau_o<+\infty) = e^{-U_i}\,\frac{\hat G(o,i)}{\hat G(o,o)}. \tag{3.7}$$

However, we assumed W ≤ W_c(µ). Thus, by Propositions G and B, we know that P_{c,i}(τ_o < +∞) = 1, P_{µ,W}-a.s. Together with (3.7), this concludes the proof.
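Lemma I can be verified numerically on a small graph: writing T(i,l) = W 1{i∼l}/(2β_i), the path expansion reads Ĝ = (Σ_{k≥0} T^k) diag(2β)^{-1}, so truncating the sum at a large K should reproduce the matrix inverse. A sketch with hypothetical values:

import numpy as np

W = 0.7
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-vertex path
beta = np.array([1.0, 1.2, 0.9])       # chosen so that H_beta is positive definite
H = 2 * np.diag(beta) - W * adj

T = (W * adj) / (2 * beta)[:, None]    # T(i,l) = W 1{i~l} / (2 beta_i)
D_inv = np.diag(1.0 / (2 * beta))

S, Tk = np.eye(3), np.eye(3)
for _ in range(200):                   # sum over paths of length <= 200
    Tk = Tk @ T
    S += Tk
print(S @ D_inv)                       # truncated path expansion of \hat G
print(np.linalg.inv(H))                # matches up to the truncation error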
Conditionally on the system's survival, we have lim sup n→+∞ ln (W n,β ) ln(n) = − β 2 a.s, (3.11) lim inf n→+∞ ln (W n,β ) ln(n) = − 3β 2 a.s. (3.12) Proposition K (Theorem 1.6 in [HS09]). Assume hypotheses (3.8), (3.9) and (3.10) and let β > 1. For any r ∈]0, 1/β[, E W r n,β = n −3rβ/2+o(1) . In many situations, hypothesis (3.10) is not satisfied. However, in most cases, we can transform the branching random walk in order to be reduced to hypothesis (3.10). Indeed, if there exists t * > 0 such that t * f ′ (t * ) = f (t * ), then (S(u)) u∈V := (t * S(u) + f (t * )|u|) u∈V is a branching random walk satisfying (3.10). However, one still has to check that such a t * > 0 does exist. Proposition L (Proposition 7.2, Chapter 3 in [Jaf10]). Let us assume that for every M ∈ R, P(L(] − ∞, −M ]) ̸ = ∅) > 0. Then, there exists t * > 0 such that t * f ′ (t * ) = f (t * ). Remark 3.1. Be careful when you look at reference [Jaf10]. The result is wrongly stated but the proof (of the corrected statement) is correct. Moreover, this is possible to know the sign of f (t * ) and whether t * is unique or not. Proposition 3.3. Let us assume that f (0) > 0 and that there exists t * > 0 such that t * f ′ (t * ) = f (t * ). We assume also that f is strictly convex and that there exists a point t min such that f is strictly decreasing on [0, t min ] and strictly increasing on [t min , +∞[. Then t * is the unique solution in R * + of the equation tf ′ (t) = f (t) and sgn(f (t * )) = sgn(f (t min )). In this subsection, V is a deterministic countable graph with constant weights W > 0. For every n ∈ N, we introduce the sigma-field G n : Moreover, t * < t min if f (t min ) < 0 and t * > t min if f (t min ) > 0. Proof of Proposition 3.3. Let us introduce the function Φ : t → tf ′ (t) − f (t). As f is stricly convex, for every t ∈ R * + , Φ ′ (t) = tf ′′ (t) > 0. Therefore, Φ is stricly increasing on R + . Thus, t * must be unique. Moreover, Φ(t min ) = tf ′ (t min ) − f (t min ) = −f (t min ). Thus, if f (t min ) < 0, then Φ(t min ) > 0. Furthermore, Φ(0) = −f (0) < 0. Therefore, as t * is the unique zero of Φ, t * must be in ]0, t min [. In particular, f (t * ) = t * f ′ (t * ) < 0 because f is strictly decreasing on [0, t min ].= σ (β i ) i∈Vn\{o} . (Recall that V n = {x ∈ V, d(o, x) ≤ n}.) Moreover, for every n ∈ N, let us introduce D n := 1 2 o∼j WĜ n (o, j) G n (o, o) . Then, it is remarkable that ψ n (o) has an Inverse Gaussian distribution conditionally on G n . L(β o |G n ) = D n + 1 2 × IG Ĝ n(o,o) ψn(o) , 1 (ii) L (ψ n (o)|G n ) = IG 1, ψ n (o) G n (o, o) where we recall that IG(a, λ) stands for an Inverse Gaussian distribution with parameters a and λ. The computation achieved in the following proof is basically the same as Proposition 3.4 in [CZ21] but we use it in a different way. with: k) whereĜ Vn\{o} is the inverse of (H β ) Vn\{o},Vn\{o} . • W o,o = o∼j o∼k W 2Ĝ Vn\{o} (j,•η = o∼j k∈Vn\{o} WĜ Vn\{o} (j, k)η (n) k . Nevertheless, reasonning on path-expansions (see Lemma I), one remarks that for every k ∈ V n \{o}, o∼j WĜ Vn\{o} (j, k) =Ĝ n (o, k) G n (o, o) . (4.1) Consequently, by definition of D n and ψ n (o), it holds that • W o,o = o∼k WĜ n(o,k) Gn(o,o) = 2D n . •η = k∈Vn\{o}Ĝ n(o,k) Gn(o,o)η (n) k = 1 Gn(o,o) × k∈∂VnĜ n (o, k)η (n) k = ψn(o) Gn(o,o) . Moreover D n and ψn(o) Gn(o,o) are G n measurable. Indeed D n = 1 2 o∼k WĜ n (o, k) G n (o, o) and ψ n (o) G n (o, o) = k∈∂VnĜ n (o, k)η (n) k G n (o, o) . 
Further, for every k ∈ V n ,Ĝ n(o,k) Gn(o,o) does not depend on β o by (4.1) and, thus, it is G n measurable. Therefore, by (3.1), conditionally on G n , the law of β o is given by the density 1{β > D n } 1 π(β − D n ) e −(β−Dn) e − 1 4(β−Dn) ψn(o) 2 Gn(o,o) 2 e ψn(o) Gn(o,o) . We can recognise the reciprocal of an Inverse Gaussian distribution. More precisely, L (β o |G n ) = D n + 1 2 × IG Ĝ n(o,o) ψn(o) , 1 . Besides, asĜ n is the inverse of (H β ) Vn,Vn , β o −D n = 1 2Ĝn(o,o) . Consequently, as D n is G n measurable, this yields L Ĝ n (o, o)|G n = IG Ĝ n (o, o) ψ n (o) , 1 . Moreover for every positive numbers (t, a, b), one can check that tIG(a, b) law = IG(ta, tb). Further- moreĜ n(o,o) ψn(o) is G n measurable. Thus, it holds that L (ψ n (o)|G n ) = IG 1, ψ n (o) G n (o, o) . Moreover, we can pass to the limit in Lemma 4.1. Let us define G ∞ := σ (β i ) i∈Z d \{o} . Let us recall that (Ĝ n (i, j)) n∈N converges toward some finite limitĜ(i, j) for every (i, j) ∈ V 2 . Then, we (o, o) . introduce D = 1 2 o∼j WĜ (o,j) G(o,o) .L (β o |G ∞ ) = D + 1 2 × IG Ĝ (o,o) ψ(o) , 1 . (ii) L (ψ(o)|G ∞ ) = IG 1, ψ(o) GProofE ν W V F (β o )1{(β i ) i∈Λ ∈ A} = E ν W V   +∞ 0 F (β + D n ) 1 √ πβ e − 1 4β ψn(o) Gn(o,o) −2β 2 dβ1{(β i ) i∈Λ ∈ A}   . (4.2) Moreover, the function (x, y) → +∞ 0 F (β + x) 1 √ πβ e − 1 4β (y−2β) 2 dβ is clearly continuous and uniformly bounded on (R * + ) 2 . Therefore, as D n , ψ n (o) G n (o, o) a.s − −−−− → n→+∞ D, ψ(o) G(o, o) , by means of the dominated convergence theorem, we can take the limit in (4.2) which implies the first point of our lemma. Then, the second point of Lemma 4.2 stems from the first point, exactly in the same way as in the proof of Lemma 4.1. Now we are able to prove Theorem 1. Proof of Theorem 1. By Lemma 4.2, we know that L (ψ(o)|G ∞ ) = IG 1, ψ(o) G(o, o) . In particular, E ν W V [ψ(o)] = E ν W V IG 1, ψ(o) G(o, o) = 1 (4.3) Thus for every n ∈ N * , E ν W V [ψ n (o)] = E ν W V [ψ(o)] = 1. Moreover, ψ n (o) a.s − −−−− → n→+∞ ψ(o). Thus, by Scheffé's lemma, ψ n (o) L 1 − −−−− → n→+∞ ψ(o). Therefore (ψ n (o)) n∈N is uniformly integrable. Besides, Lemma 4.1 implies the following useful result: Lemma 4.3. Let p ∈ R. For every n ∈ N, E ν W V [ψ n (o) p ] = E ν W V ψ n (o) 1−p . Proof of Lemma 4.3. Let us define Y n = ψn(o) Gn(o,o) . Then, by Lemma 4.1, E ν W V [ψ n (o) p ] = E ν W V Y 1/2 n (2π) −1/2 x p−3/2 exp − Y n (x − 1) 2 /(2x) dx = E ν W V Y 1/2 n (2π) −1/2 x −p+3/2 x −2 exp − Y n x(1/x − 1) 2 /2 dx = E ν W V Y 1/2 n (2π) −1/2 x (−p+1)−3/2 exp − Y n (x − 1) 2 /(2x) dx = E ν W V ψ n (o) 1−p . Resistance formula on a tree In this subsection we assume that V is a tree. Let n ∈ N. Let us define the matrixH n on V n × V n such that for every (i, j) ∈ V n × V n ,H n (i, j) = 2β i 1{i = j} − W 1{i ∼ j}. We assume that the potentialsβ and β are constructed as in (3.2) and (3.3). We also introduce D (n) U which is the diagonal matrix on V n × V n with diagonal entries D (n) U (i, i) = e U i for every i ∈ V n . We can observe that D (n) UH n D (n) U = M n where for every (i, j) ∈ V n × V n , M n (i, j) = k∼i W e U i +U k 1{i = j} − W e U i +U j 1{i ∼ j}. M n is almost a conductance matrix with conductances W e U i +U j between two neighbouring vertices i and j. However, if i ∈ ∂V n , M n (i, i) = k∼i W e U i +U k > k∼i,k∈Vn W e U i +U k . Therefore, M n is strictly larger than a conductance matrix (for the order between symmetric matrices). Moreover conductance matrices are non-negative. Thus, M n andH n are symmetric positive definite matrices. 
Then, we are allowed to define the inverseG n ofH n . Moreover, for every n ∈ N, we construct a wired version (Ṽ n ,Ẽ n ) of (V n , E n ) in the following way: Ṽ n = V n ∪ {δ n } E n = E n ∪ {(δ n , i), i ∈ ∂V n } where δ n is a new vertex. For every (i, j) ∈ E, recall from the notation of Proposition G that c(i, j) = W e U i +U j . The conductances c are the environment of the VRJP. Now, let us introduce a family of conductances c n onẼ n .    ∀(i, j) ∈ E n , c n (i, j) = c(i, j) ∀i ∈ ∂V n , c n (δ n , i) = j∼i,j∈V c n c(i, j) We denote by R(o ←→ δ n ) the effective resistance between o and δ n in (Ṽ n ,Ẽ n , c n ). Then, we have the following key identity: and h(δ n ) = 0. We are going to prove that h is harmonic everywhere excepted at o and δ n where h(o) = 1 and h(δ n ) = 0. Let i ∈ V n \{o}. Then, it holds that, i∼j c n (i, j)h(j) = i∼j,j∈Vn W e U i +U j ×G n (o, j)e −U j G n (o, o) = e U ĩ G n (o, o) i∼j,j∈Vn WG n (o, j). (4.4) By definitionG n =H −1 n . Together with (4.4), this yields i∼j c n (i, j)h(j) = e U ĩ G n (o, o) × 2β iGn (o, i). (4.5) Then, by definition of U i andβ i , we infer that i∼j c n (i, j)h(j) =G n (o, i) G n (o, o) × i∼j W e U j =G n (o, i)e −U ĩ G n (o, o) ×   c n (i, δ n ) + i∼j,j∈Vn c n (i, j)   = h(i) × i∼j c n (i, j). Consequently, h is harmonic. Therefore, by identity (2.3) in [LP16], R(o ←→ δ n ) = 1 o∼j c n (o, j)(1 − h(j)) . (4.6) Besides, it holds that, o∼j c n (o, j)(1 − h(j)) = o∼j W e U j × 1 −G n (o, j)e −U j G n (o, o) =G n (o, o) −1 o∼j W e U jG n (o, o) −G n (o, j) (4.7) HoweverG n is the inverse ofH n . Therefore, o∼j WG n (o, j) = −1 + 2β 0Gn (o, o). Moreover, o∼j W e U j = 2β 0 . Together with (4.7), this yields o∼j c n (o, j)(1 − h(j)) =G n (o, o) −1 2β 0Gn (o, o) − −1 + 2β 0Gn (0, 0) =G n (o, o) −1 . (4.8) Combining (4.6) and (4.8) concludes the proof. By means of Lemma 4.4, one can prove the following lemma which shall be useful later in this paper. Lemma 4.5. Let V be a Galton-Watson tree whose offspring law satisfies hypothesis A 1 . Burkholder-Davis-Gundy inequality As (ψ n (o)) n∈N is a martingale, there is a relation between its moments and the moments of its bracket (Ĝ n (o, o)) n∈N under mild assumptions. This relation is known as the BDG inequality. This inequality is not always true for discrete martingales. (See [BG70].) However, this is always true for continuous martingales. Fortunately, by [SZ20], for every n ∈ N, ψ n (o) can be obtained as the limit of some continuous martingale. That is why we can prove the following lemma: Lemma 4.6. Let V be a locally finite graph. Let W > 0. Let p > 1. Then, there exist positive constants C 1,p and C 2,p which do not depend on V and W such that for every n ∈ N, C 1,p E ν W V Ĝ n (o, o) p/2 ≤ E ν W V [|ψ n (o) − 1| p ] ≤ C 2,p E ν W V Ĝ n (o, o) p/2 . Proof of Lemma 4.6. By [SZ20], for every n ∈ N, there exists a continuous non-negative martingale (ψ n (o, t)) t≥0 such that, where ⟨· · · , · · · ⟩ is the bracket for semimartingales. For t ≥ 0, let us introduce ψ * n (o, t) = sup s≤t |ψ n (o, s)− 1|. Then, if p > 1, by BDG inequality for continuous martingales (see Theorem 4.1 in [RY98]), there exist positive constants κ 1,p and κ 2,p such that for every n ∈ N, for every t ≥ 0, κ 1,p E ν W V ⟨ψ n (o, t), ψ n (o, t)⟩ p/2 ≤ E ν W V [ψ * n (o, t) p ] ≤ κ 2,p E ν W V ⟨ψ n (o, t), ψ n (o, t)⟩ p/2 . 
By means of Lemma 4.4, one can prove the following lemma, which shall be useful later in this paper.

Lemma 4.5. Let V be a Galton-Watson tree whose offspring law satisfies hypothesis A_1. Then:
(i) for every W ∈ ]0, W_c(µ)], lim_{n→+∞} G̃_n(o,o) = +∞, P_{µ,W}-a.s.;
(ii) for every W ∈ ]W_c(µ), +∞[, lim_{n→+∞} G̃_n(o,o) =: G̃(o,o) < +∞, P_{µ,W}-a.s.

4.3 Burkholder-Davis-Gundy inequality

As (ψ_n(o))_{n∈N} is a martingale, there is a relation between its moments and the moments of its bracket (Ĝ_n(o,o))_{n∈N} under mild assumptions. This relation is known as the BDG inequality. This inequality is not always true for discrete martingales (see [BG70]). However, it is always true for continuous martingales. Fortunately, by [SZ20], for every n ∈ N, ψ_n(o) can be obtained as the limit of some continuous martingale. That is why we can prove the following lemma:

Lemma 4.6. Let V be a locally finite graph. Let W > 0. Let p > 1. Then, there exist positive constants C_{1,p} and C_{2,p}, which do not depend on V and W, such that for every n ∈ N,

C_{1,p} E_{ν_V^W}[ Ĝ_n(o,o)^{p/2} ] ≤ E_{ν_V^W}[ |ψ_n(o) − 1|^p ] ≤ C_{2,p} E_{ν_V^W}[ Ĝ_n(o,o)^{p/2} ].

Proof of Lemma 4.6. By [SZ20], for every n ∈ N, there exists a continuous non-negative martingale (ψ_n(o,t))_{t≥0} such that

lim_{t→+∞} ψ_n(o,t) = ψ_n(o)  and  lim_{t→+∞} ⟨ψ_n(o,t), ψ_n(o,t)⟩ = Ĝ_n(o,o),  (4.9)

where ⟨·,·⟩ is the bracket for semimartingales. For t ≥ 0, let us introduce ψ*_n(o,t) = sup_{s≤t} |ψ_n(o,s) − 1|. Then, if p > 1, by the BDG inequality for continuous martingales (see Theorem 4.1 in [RY98]), there exist positive constants κ_{1,p} and κ_{2,p} such that for every n ∈ N and every t ≥ 0,

κ_{1,p} E_{ν_V^W}[ ⟨ψ_n(o,t), ψ_n(o,t)⟩^{p/2} ] ≤ E_{ν_V^W}[ ψ*_n(o,t)^p ] ≤ κ_{2,p} E_{ν_V^W}[ ⟨ψ_n(o,t), ψ_n(o,t)⟩^{p/2} ].  (4.10)

As p > 1, by Doob's martingale inequality, there exist C_{1,p} > 0 and C_{2,p} > 0 such that for every n ∈ N and every t ≥ 0,

C_{1,p} E_{ν_V^W}[ ⟨ψ_n(o,t), ψ_n(o,t)⟩^{p/2} ] ≤ E_{ν_V^W}[ |ψ_n(o,t) − 1|^p ] ≤ C_{2,p} E_{ν_V^W}[ ⟨ψ_n(o,t), ψ_n(o,t)⟩^{p/2} ].  (4.11)

Let us define ψ*_n(o) as the increasing limit of ψ*_n(o,t) when t goes toward infinity. By the monotone convergence theorem in (4.10), for every n ∈ N,

E_{ν_V^W}[ ψ*_n(o)^p ] ≤ κ_{2,p} E_{ν_V^W}[ Ĝ_n(o,o)^{p/2} ] < +∞.  (4.12)

Moreover, for any fixed value of n, (|ψ_n(o,t) − 1|^p)_{t≥0} is dominated by ψ*_n(o)^p, which is integrable by (4.12). Therefore, by the dominated convergence theorem, we can let t go to infinity in (4.11), which concludes the proof.

4.4 Link between Ĝ_n and G̃_n

Let us recall that (Ĝ_n(o,o))_{n∈N} is the bracket of the martingale (ψ_n(o))_{n∈N}, whose moments we are seeking an upper bound for. Therefore, it would be very interesting for our purpose to be able to control the moments of Ĝ_n(o,o) for n ∈ N. The following lemma shows there is a relation between the moments of Ĝ_n(o,o) and the moments of G̃_n(o,o) for n ∈ N. Remind that G̃_n(o,o) has been defined in subsection 4.2. For every x > 0, let us define

F_p(x) = ∫_0^{+∞} ( x^p / (1 + 2yx)^p ) ( e^{−y} / √(πy) ) dy.

Lemma 4.7. We assume that V is a deterministic graph. Then, for every n ∈ N* and for every p > 1/2,

E_{ν_V^W}[ Ĝ_n(o,o)^p ] = E_{ν_V^W}[ F_p( G̃_n(o,o) ) ].

Moreover, F_p(x) ~ a_p x^{p−1/2} as x → +∞, with a_p = ∫_0^{+∞} dy / ( (πy)^{1/2} (1+2y)^p ).

Proof of Lemma 4.7. By Cramer's formula, we have the key equality

Ĝ_n(o,o) = G̃_n(o,o) / ( 1 + 2γ G̃_n(o,o) ).  (4.13)

Remind that γ is a Gamma random variable with parameters (1/2, 1) which is independent of β̃. Together with (4.13), this implies directly the link between the moments of Ĝ_n(o,o) and G̃_n(o,o). We only have to look at the asymptotic behaviour of F_p. By a change of variable, for every x > 0,

F_p(x) = x^{p−1/2} ∫_0^{+∞} e^{−y/x} / ( (1 + 2y)^p (πy)^{1/2} ) dy.  (4.14)

Then, by the dominated convergence theorem, if p > 1/2,

∫_0^{+∞} e^{−y/x} / ( (1 + 2y)^p (πy)^{1/2} ) dy → a_p  as x → +∞.  (4.15)
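The asymptotics F_p(x) ~ a_p x^{p−1/2} can be observed numerically as well. The short sketch below is illustrative only; the substitution y = u² merely removes the 1/√y singularity so that a simple quadrature suffices, and the grid sizes are ad-hoc choices.

```python
import numpy as np

SQRT_PI = np.sqrt(np.pi)
u = np.linspace(0.0, 40.0, 2_000_000)   # y = u**2 removes the 1/sqrt(y) singularity

def F(p, x):
    # F_p(x) = (2/sqrt(pi)) * Int_0^inf x^p (1 + 2 u^2 x)^(-p) e^(-u^2) du
    g = x ** p / (1 + 2 * u ** 2 * x) ** p * np.exp(-(u ** 2))
    return 2 / SQRT_PI * np.trapz(g, u)

def a(p):
    # a_p = (2/sqrt(pi)) * Int_0^inf (1 + 2 u^2)^(-p) du   (finite for p > 1/2)
    v = np.linspace(0.0, 2000.0, 2_000_000)
    return 2 / SQRT_PI * np.trapz((1 + 2 * v ** 2) ** -p, v)

p = 1.0
for x in (10.0, 1e2, 1e4):
    # The ratio of the two columns tends to 1 as x grows, matching (4.14)-(4.15).
    print(f"x={x:>8g}:  F_p(x)={F(p, x):.6f}   a_p*x^(p-1/2)={a(p) * x ** (p - 0.5):.6f}")
```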
5 The transient phase

We are now ready to prove Theorem 2. Let us explain quickly the strategy of the proof.

Strategy of the proof: The idea is to find an upper bound for the moments of Ĝ_n(o,o). Indeed, this is enough for us because (Ĝ_n(o,o))_{n∈N} is the bracket of (ψ_n(o))_{n∈N}. Consequently, by Lemma 4.7, it is enough to find an upper bound for G̃_n(o,o), which is also the effective resistance until level n associated with the environment of the VRJP, according to Lemma 4.4. Thus, we only need to show that the global effective resistance R(o ↔ ∞) has moments of order p for every p > 0. By standard computations, the effective resistance of the VRJP on a tree satisfies the equation in law

R(x) = [ Σ_{i: ⃗i = x} A_i² / ( W A_i + W R(i) ) ]^{−1},

where the random variables R(i) for ⃗i = x are i.i.d. copies of R(x). We will analyse this equation in law in order to bound the moments of the effective resistance.

Proof of Theorem 2. Step 1: The potential (β_i)_{i∈V} on V is constructed as in (3.2). For every x ∈ V, recall that e^{U_x} = Π_{o<u≤x} A_u. For every x ∈ V, let us define the subtree V_x := {u ∈ V, x ≤ u}. Moreover, for any neighbouring i, j ∈ V_x, let us define c_x(i,j) = W e^{U_i + U_j − 2U_x}. Then, for every x ∈ V, let R(x) be the electrical resistance between x and ∞ in the tree V_x with conductances c_x. Remark that, under P_{µ,W}, (R(x))_{x∈V} is a family of identically distributed random variables. Furthermore, by Proposition G, as W > W_c(µ), R(x) is finite for every x ∈ V, P_{µ,W}-a.s. Figure 1 below explains the situation from an electrical point of view.

Figure 1: Electrical network on a subtree. In this situation, the vertex x has three children, u_1, u_2, u_3. On each edge the resistance in V_x is written.

By standard computations on electrical networks, we infer that for every x ∈ V,

R(x) = [ Σ_{i: ⃗i = x} A_i² / ( W A_i + W R(i) ) ]^{−1}.

For the sake of convenience, we define R̃(x) = W R(x) for every x ∈ V. Therefore, it holds that for every x ∈ V,

R̃(x) = [ Σ_{i: ⃗i = x} A_i² / ( A_i + R̃(i) ) ]^{−1}.  (5.1)

Step 2: The following lines are inspired by the proof of Lemma 2.2 in [Aid10]. For every n ∈ N, the leftmost vertex in generation n of V is denoted by v_n. We denote by B(v_n) the set of "brothers" of v_n. Remark that this set is possibly empty if µ(1) ≠ 0. Let C > 0 and α > 0. We define c_α = 1 if α ≤ 1 and c_α = 2^{α−1} otherwise. For every n ∈ N*, let us introduce the event

E_n = { ∀k ∈ {1,···,n}, ∀u ∈ B(v_k), c_α/A_u^α + c_α R̃(u)^α/A_u^{2α} > C }.

By convention we write 1{E_0} := 1. Now, let us prove the following key inequality: for every n ∈ N*, P_{µ,W}-a.s.,

R̃(o)^α ≤ C Σ_{k=0}^{n−1} 1{E_k} Π_{i=1}^{k} ( c_α / A_{v_i}^{2α} ) + Σ_{k=1}^{n} 1{E_k} A_{v_k}^{α} Π_{i=1}^{k} ( c_α / A_{v_i}^{2α} ) + 1{E_n} Π_{i=1}^{n} ( c_α / A_{v_i}^{2α} ) R̃(v_n)^α.  (5.2)

Let us prove it for n = 1. By (5.1), we can observe that for every child u of o,

R̃(o)^α ≤ ( (A_u + R̃(u)) / A_u² )^α ≤ c_α/A_u^α + c_α R̃(u)^α/A_u^{2α}.  (5.3)

If E_1 is satisfied, then we can apply (5.3) with u = v_1, which implies

R̃(o)^α ≤ 1{E_1} ( c_α/A_{v_1}^α + c_α R̃(v_1)^α/A_{v_1}^{2α} ).  (5.4)

If E_1 is not satisfied, then we can apply (5.3) with a brother of v_1, which implies

R̃(o)^α ≤ C.  (5.5)

Therefore, combining (5.4) and (5.5), we infer

R̃(o)^α ≤ C + 1{E_1} ( c_α/A_{v_1}^α + ( c_α/A_{v_1}^{2α} ) R̃(v_1)^α ),  (5.6)

which is inequality (5.2) with n = 1. Remark that inequality (5.6) is true even if v_1 is the only child of o. The proof of (5.2) for any n is obtained by induction, iterating the inequality (5.6). Moreover, by construction, the events

( { ∀u ∈ B(v_k), c_α/A_u^α + c_α R̃(u)^α/A_u^{2α} > C } )_{k∈N*}

are P_{µ,W}-independent. In addition, the probability of each of these events is the same, and it is strictly less than 1 because R̃(u) < +∞ for every u ∈ V, as W > W_c(µ). Therefore, P_{µ,W}-a.s., there exists N ∈ N* such that 1{E_n} = 0 for every n ≥ N. That is why we can let n go to infinity in (5.2), which implies, P_{µ,W}-a.s.,

R̃(o)^α ≤ C Σ_{k=0}^{+∞} 1{E_k} Π_{i=1}^{k} ( c_α / A_{v_i}^{2α} ) + Σ_{k=1}^{+∞} 1{E_k} A_{v_k}^{α} Π_{i=1}^{k} ( c_α / A_{v_i}^{2α} ).  (5.7)

Now, let us introduce the random set A = {i ∈ N*, B(v_i) ≠ ∅} and, for every k ∈ N*, the random variable Γ_k = |A ∩ {1,···,k}|. Under GW_µ, the sequence (Γ_k)_{k∈N} is a random walk whose increments are independent Bernoulli random variables with parameter 1 − µ(1). Further, A can be written as {J_1 ≤ J_2 ≤ J_3 ≤ ···}. For every i ∈ N*, there exists a brother L_i of v_{J_i}. The situation is summarized by Figure 2 below.

Figure 2

By construction, conditionally on the underlying Galton-Watson tree, the random variables ( 1{∀u ∈ B(v_k), c_α/A_u^α + c_α R̃(u)^α/A_u^{2α} > C} )_{k∈N*} and (A_{v_k})_{k∈N*} are mutually independent. Therefore, together with (5.7), this implies that, GW_µ-a.s.,

E_{ν_V^W}[ R̃(o)^α ] ≤ C + ( C + Q(W,−α)/Q(W,−2α) ) Σ_{k=1}^{+∞} ( c_α Q(W,−2α) )^k Π_{i=1}^{Γ_k} ν_V^W( c_α/A_{L_i}^α + c_α R̃(L_i)^α/A_{L_i}^{2α} > C ),  (5.8)

where we recall that Q(W,t) is the moment of order t of an Inverse Gaussian random variable with parameters (1, W). Remark that, under GW_µ, conditionally on (Γ_k)_{k∈N*},

(P_k)_{k∈N*} := ( ν_V^W( c_α/A_{L_k}^α + c_α R̃(L_k)^α/A_{L_k}^{2α} > C ) )_{k∈N*}

is an i.i.d. sequence.
Therefore, by the strong law of large numbers, GW µ -a.s, Γ k i=1 P i = exp (Γ k + o(Γ k ))E GW µ [ln (P 1 )] . Moreover, by the strong law of large numbers applied with (Γ k ) k∈N * , GW µ -a.s, Γ k i=1 P i = exp (1 − µ(1))(k + o(k))E GW µ [ln (P 1 )] . (5.9) Besides, as W > W c (µ), we know thatR(u) < +∞ for every u ∈ V , P µ,W a.s. Consequently, by monotone convergence theorem, −E GW µ [ln(P 1 )] = −E GW µ ln ν W V c α A α L 1 + c αR (L 1 ) α A 2α L 1 > C can be made as large as we want by making C go toward infinity. Therefore, there exists C(α) > 0 such that ln (c α Q(W, −2α)) + (1 − µ(1))E GW µ [ln(P 1 )] < 0. (5.10) Hence, for every α > 0, using (5.10) and (5.9) in (5.8) with C = C(α) implies that, GW µ -a.s, I α := E ν W V R (o) α < +∞. (5.11) Step 3: By (5.11), we can control any moment ofR(o). Together with Lemma 4.4, this implies that for every α > 0, for every n ∈ N * , GW µ -a.s, E ν W V G n (o, o) α = E ν W V [R(0 ←→ δ n ) α ] ≤ W α E ν W V R (o) α = W α I α < +∞. (5.12) Let p > 1. By Lemma 4.7, for every n ∈ N * , GW µ -a.s, E ν W V Ĝ n (o, o) p/2 = E ν W V F p/2 (G n (o, o)) where F p/2 (x) ∼ a p/2 x p/2−1/2 . Therefore, together with (5.12), this shows there exists positive constants K 1 and K 2 such that for every n ∈ N * , GW µ -a.s, E ν W V Ĝ n (o, o) p/2 ≤ K 1 + K 2 E ν W V G n (o, o) (p−1)/2 ≤ K 1 + K 2 W I (p−1)/2 . (5.13) By Lemma 4.6, it implies that, GW µ -a.s, sup n∈N * E ν W V [ψ n (o) p ] < +∞. Remark 5.1. In the proof of Theorem 2, identity (5.1) shows that the distribution ofĜ(o, o) is directly linked to the solution of the equation in law R(o) = 1 ⃗ i=o A 2 i A i +R(i) . A non-trivial solution to this equation must exist in the transient phase. However, we do not know how to express this solution with standard distributions and if it is even possible. 6 The subcritical phase 6.1 Proof of Theorem 3 In the study of the transient phase, we used the fact that the asymptotic behaviour of (ψ n (o)) n∈N is related to the effective resistance associated with the environment of the VRJP. We will also use this crucial property in the recurrent phase. In order to study the effective resistance of the VRJP between o and the level n, we will use techniques coming from the area of branching random walks. Indeed the fact that the environment of the VRJP on trees can be expressed as products of independent Inverse Gaussian random variables along branches of the tree makes our situation very similar to branching random walks. Proof of Theorem 3. Step 1: For every vertex x in the Galton-Watson tree V , let us define S(x) = − o<u≤x ln(A u ). We recall that f m,W (t) = ln (mQ(W, t)) for every t ∈ R. f m,W is the Laplace transform associated with the branching random walk {(x, S(x)), x ∈ V }. In particular, remark that {(x, S(x)), x ∈ V } satisfies (3.9). By assumption A 3 , it satisfies also (3.8). Remark that f m,W (0) = ln(m) > 0 because m > 1 by assumption A 1 . Moreover, this is easy to check that f m,W is stricly convex, strictly decreasing on [0, 1/2] and strictly increasing on [1/2, +∞[. In addition, the support of the point process L which is associated with {(x, S(x)), x ∈ V } is R because the support of an Inverse Gaussian distribution is R * + . Therefore, by Lemma L and Lemma 3.3, there exists a unique t * (m, W ) > 0 such that −τ (m, W ) := f ′ m,W (t * (m, W )) = f m,W (t * (m, W )) t * (m, W ) . For every x ∈ V , we definẽ S(x) := t * (m, W )S(x) + f m,W (t * (m, W ))|x| = t * (m, W ) S(x) − τ (m, W )|x| . 
By definition of t*(m,W), the branching random walk {(x, S̃(x)), x ∈ V} satisfies (3.10). Consequently, with the branching random walk S̃, we are allowed to use the results of Hu and Shi, that is, Propositions J and K. Moreover, W < W_c(µ). By Proposition H, this is equivalent to saying that Q(W, 1/2) < 1/m. Therefore, f_{m,W}(1/2) < 0. Thus, by Proposition 3.3, t*(m,W) < 1/2 and τ(m,W) > 0. Now, we are ready to estimate the moments of (ψ_n(o))_{n∈N}. By Lemma 4.3, we only have to control E_{µ,W}[ψ_n(o)^p] when p > 1 or p ∈ ]0, τ(m,W)[.

Step 2: lower bound in (i). By Lemma 4.4, we know that for every n ∈ N, G̃_n(o,o) = R(o ↔ δ_n), where R(o ↔ δ_n) is the effective resistance between o and δ_n with conductances c. Recall that if i ∈ V\{o}, then c(i, ⃗i) = W A_i^{−1} Π_{o<u≤i} A_u². By the Nash-Williams inequality (see 2.15 in [LP16]), for every n ∈ N*, P_{µ,W}-a.s.,

G̃_n(o,o) ≥ (1/W) ( Σ_{|x|=n} A_x^{−1} Π_{o<y≤x} A_y² )^{−1}.  (6.1)

Let p > 0. It holds that, for every n ∈ N*,

E_{µ,W}[ G̃_n(o,o)^{p/2} ] ≥ W^{−p/2} E_{µ,W}[ ( Σ_{|x|=n} A_x^{−1} Π_{o<y≤x} A_y² )^{−p/2} ]
 ≥ W^{−p/2} E_{µ,W}[ min_{|x|=n} A_x^{p/2} × ( Σ_{|x|=n} Π_{o<y≤x} A_y² )^{−p/2} ]
 = W^{−p/2} E_{µ,W}[ min_{|x|=n} A_x^{p/2} × ( Σ_{|x|=n} e^{−2S(x)} )^{−p/2} ]
 = W^{−p/2} e^{pτ(m,W)n} E_{µ,W}[ min_{|x|=n} A_x^{p/2} × W_{n,2/t*(m,W)}^{−p/2} ]  (6.2)

(recall that W_{n,β} = Σ_{|x|=n} e^{−βS̃(x)}). Moreover, by Proposition J, P_{µ,W}-a.s., lim sup_{n→+∞} ln( W_{n,2/t*(m,W)} ) / ln(n) = −1/t*(m,W). Therefore, P_{µ,W}-a.s.,

W_{n,2/t*(m,W)}^{−p/2} ≥ n^{p/(2t*(m,W)) + o(1)}.  (6.3)

Moreover, for every n ∈ N*,

P_{µ,W}( min_{|x|=n} A_x < n^{−2} ) = P_{µ,W}( ∪_{|x|=n} {A_x < n^{−2}} ) ≤ E_{GW_µ}[Z_n] ν_V^W( A < n^{−2} ),

where A has an Inverse Gaussian distribution with parameters (1, W) and Z_n = Σ_{|x|=n} 1. In addition, the cumulative distribution function of an Inverse Gaussian random variable decreases exponentially fast at 0. Therefore, there exists λ > 0 such that for every n ∈ N*,

P_{µ,W}( min_{|x|=n} A_x < n^{−2} ) ≤ e^{−λn²} E_{GW_µ}[Z_n] ≤ m^n e^{−λn²},  (6.4)

which is summable. Therefore, by the Borel-Cantelli lemma, P_{µ,W}-a.s.,

min_{|x|=n} A_x^{p/2} ≥ n^{−p+o(1)}.  (6.5)

Consequently, using (6.5), (6.3) and Fatou's lemma, we infer that

E_{µ,W}[ min_{|x|=n} A_x^{p/2} × W_{n,2/t*(m,W)}^{−p/2} ] ≥ n^{p/(2t*(m,W)) − p + o(1)}.  (6.6)

Then (6.6) and (6.2) imply that

E_{µ,W}[ G̃_n(o,o)^{p/2} ] ≥ e^{pτ(m,W)n + o(n)}.  (6.7)

Together with Lemma 4.6 and Lemma 4.7, this yields

E_{µ,W}[ ψ_n(o)^{1+p} ] ≥ e^{pτ(m,W)n + o(n)}.  (6.8)

Step 3: upper bound in (i). This part of the proof is partially inspired by [FHS12]. For every n ∈ N*, let us denote by C(o ↔ δ_n) the effective conductance between o and δ_n with respect to the conductances c_n. (See subsection 4.2 for the definition of the conductances c and c_n.) By Lemma 4.4, for every n ∈ N*,

C(o ↔ δ_n) = G̃_n(o,o)^{−1}.  (6.9)

Now, we introduce (Z_k)_{k∈N*}, a Markov chain on V with conductances c starting from o (which is actually the discrete-time process associated with the VRJP). When we want to integrate only with respect to this Markov chain, we use the notations P_{c,o} and E_{c,o}. By definition of the effective conductance, we know that

C(o ↔ δ_n) = W ( Σ_{i: ⃗i=o} A_i ) × P_{c,o}( τ_n < τ_o^+ ) ≥ W ( Σ_{i: ⃗i=o} A_i ) × max_{|x|=n} P_{c,o}( τ_x < τ_o^+ ),  (6.10)

where τ_n = inf{k ∈ N, |Z_k| = n}, τ_x = inf{k ∈ N, Z_k = x} and τ_o^+ = inf{k ∈ N*, Z_k = o}. For every x ∈ V\{o}, we define x_1 as the unique child of o which is an ancestor of x. By standard computations, for every n ∈ N* and every x such that |x| = n,

W ( Σ_{i: ⃗i=o} A_i ) × P_{c,o}( τ_x < τ_o^+ ) = ( Σ_{i: ⃗i=o} A_i ) A_{x_1}^{−1} ( Σ_{o<u≤x} c(u, ⃗u)^{−1} )^{−1} ≥ ( Σ_{o<u≤x} c(u, ⃗u)^{−1} )^{−1}.
(6.11) By (6.11) and the expression of c, we infer that W ⃗ i=o A i × P c,o τ x < τ + 0 ) ≥ W o<u≤x A u o<v≤u A −2 v ≥ W o<u≤x A u e 2S(u) ≥ W e −2Sm(x) n × min |z|≤n A −1 z (6.12) where S m (x) = max o<u≤x S(u). Therefore, combining identities (6.12), (6.10) and (6.9), we get for every n ∈ N * , P µ,W -a.s,G n (o, o) ≤ n W × max |z|≤n A z × e 2 min |x|=n Sm(x) . (6.13) Moreover, as τ (m, W ) > 0, it holds that for every x ∈ V , S m (x) = max o<u≤x S(u) = max o<u≤xS (u)/t * (m, W ) + τ (m, W )|u| ≤ τ (m, W )|x| + (1/t * (m, W )) max o<u≤xS (u) = τ (m, W )|x| + (1/t * (m, W ))S m (x) (6.14) whereS m (x) = max o<u≤xS (u). Combining (6.13) and (6.14), it holds that for every n ∈ N * , P µ,W -a.s, . (6.16) G n (o, o) ≤ n W × max |z|≤n A z × e 2τ If we show that (a) and (b) have a subexponential growth, it gives the good upper bound for E µ,W G n (o, o) p/2 . In order to majorize (a), let us introduce a function h p on R + which is increasing, convex, bijective and such that there exists γ p > 0 such that h p (x) = e (W/4)x 1/p for every x > γ p . Such a function does clearly exist. By Jensen's inequality, for every n ∈ N * , it holds that h p E µ,W max |z|≤n A p z ≤ E µ,W max |z|≤n h p (A p z ) ≤ h p (γ p ) + E µ,W max |z|≤n e (W/4)Az ≤ h p (γ p ) + E µ,W   |z|≤n e (W/4)Az   ≤ h p (γ p ) + (m − 1) −1 m n+1 E µ,W e (W/4)A where A is an Inverse Gaussian distribution with parameters (1, W ). Remark that E µ,W e (W/4)A < +∞. Thus, there exist positive constants C 1 and C 2 such that for every n big enough, E µ,W max |z|≤n A p z ≤ h −1 p (C 1 + C 2 m n ) ≤ 4 W ln (C 1 + C 2 m n ) p . (6.17) Consequently, (a) in (6.16) has a subexponential growth. Now, let us look at (b) in (6.16). Let us define a * := 2p/t * (m, W ). Let ε > 0. Then, remark that for every n ∈ N * , By the branching property, for every n ∈ N * and hypothesis A 2 , (b) ≤ e na * ε + E µ,W e a * minP µ,W ∀z, |z| = ⌊δn⌋, min |x|z=⌊(1−δ)n⌋ max z<u≤xS z (u) ≥ εn/2 ≤ P µ,W min |x|=⌊(1−δ)n⌋ max o<u≤xS (u) ≥ εn/2 2 ⌊δn⌋ . Therefore, using inequality (2.12) in [FHS12], there exists η > 0 such that for every integer n which is large enough, If we take t large enough and δ small enough, we get an exponential decay with a decreasing rate which is as large as we want. Therefore, combining (6.21), (6.20) and (6.19), we know that (c) in (6.18) decreases faster than any exponential function. Consequently, by (6.18), (b) has a subexponential growth. Moreover, we also proved that (a) has subexponential growth. Step 4: upper bound in (ii). For every x ∈ V , let us denote by ν x the number of children of x. For every n ∈ N * , by definition of ψ n (o) we know that ψ n (o) = W |x|=nĜ n (o, x)ν x . Moreover, for every x ∈ V , for every n ∈ N * ,Ĝ n (o, x) ≤Ĝ(o, x). This can be proved thanks to path expansions. (See Lemma I.) Consequently, for every n ∈ N * , ψ n (o) ≤ W |x|=nĜ (o, x)ν x . (6.24) As W < W c (µ), by Lemma 3.2, for every n ∈ N * , P µ,W -a.s, it holds that ψ n (o) ≤ WĜ(o, o) |x|=n e Ux ν x = WĜ(o, o) |x|=n o<u≤x A u ν x . (6.25) Together with the notation introduced in step 1 of this proof, we get that for every n ∈ N * , P µ,W -a.s, By identity (4.13) and Lemma 4.5, as W < W c (µ), it holds thatĜ(o, o) = 1 2γ . Together with (6.26) this implies that for every n ∈ N * , P µ,W -a.s, ψ n (o) ≤ W 1 2γ e −τ (m,W )n |x|=n e −S(x)/t * (m,W ) ν x . 
(6.27) Nevertheless, by the construction of the β-potential introduced in subsection 3.1, we know that γ, (S(x)) |x|=n and (ν x ) |x|=n are independent and γ has a Gamma distribution with parameters (1/2, 1). Consequently, for every p ∈]0, t * (m, W )[, for every n ∈ N * , it holds that E µ,W [ψ n (o) p ] ≤ W p e −pτ (m,W )n +∞ 0 x −p−1/2 √ 4 p π e −x dx × E µ,W     |x|=n e −S(x)/t * (m,W ) ν x   p   . (6.28) For every p ∈]0, 1/2[, we denote κ p = W p +∞ 0 x −p−1/2 √ 4 p π e −x dx < +∞. As t * (m, W ) < 1/2 < 1, we are allowed to use concavity in (6.28) which implies that for every p ∈]0, t * (m, W )[, for every n ∈ N * , E µ,W [ψ n (o) p ] ≤ κ p e −pτ (m,W )n × E µ,W      |x|=n e −S(x)/t * (m,W ) ν x   t * (m,W )    p/t * (m,W ) ≤ κ p e −pτ (m,W )n × E µ,W   |x|=n e −S(x) ν t * (m,W ) x   p/t * (m,W ) . (6.29) However (S(x)) |x|=n and (ν x ) |x|=n are independent. Therefore, for every n ∈ N * and for every p ∈]0, t * (m, W )[, E µ,W [ψ n (o) p ] ≤ κ p e −pτ (m,W )n × E µ,W [W n ] p/t * (m,W ) × E µ,W ν t * (m,W ) p/t * (m,W ) (6.30) where ν has distribution µ and W n = |x|=n e −S(x) . Therefore, as W n is a martingale with mean 1, we get that for every n ∈ N * and for every p ∈]0, t * (m, W )[, E µ,W [ψ n (o) p ] ≤ κ p × E µ,W ν t * (m,W ) p/t * (m,W ) × e −pτ (m,W )n In order to conclude the proof, we need the same estimate for p ∈]1 − t * (m, W ), 1[. This stems from Lemma 4.3. Proof of Theorem 4 First, we need the following lemma which establishes a link "in law" between ψ n (o) and the effective resistance associated with the VRJP. Now, let us define a potential β ′ on the wired graphṼ n with distributionνP n,0 Vn whereP n is the adjacency matrix of the weighted graphṼ n . We can associate a matrix H β ′ with the potential β ′ in the usual way and the inverse of H β ′ is denoted by G ′ . We define γ ′ = 1/ (2G ′ (o, o)) and β ′ = β ′ − 1{· = o}γ ′ . By Theorem 3 in [STZ17], γ ′ is distributed as Γ(1/2, 1) and is independent ofβ ′ . Let us define the matrixH β ′ in the same way as H β ′ but we replace 2β ′ o by 2β ′ o . Moreover, we defineĜ ′ n andG ′ n as the inverse of (H β ′ ) Vn,Vn and (H β ′ ) Vn,Vn respectively. Further, let us write ψ ′ n =Ĝ ′ nη (n) . Then, by Proposition 8 in [SZ19], it holds that 1 2γ ′ = G ′ (o, o) =Ĝ ′ n (o, o) + G ′ (δ n , δ n )ψ ′ n (o) 2 . (6.32) The equality (6.32) can be proved by means of the results about path expansions given by Lemma I. By (6.32), we get ψ ′ n (o) 2 1/(2γ ′ ) −Ĝ ′ n (o, o) = 1 G ′ (δ n , δ n ) . (6.33) Besides, by Cramer's formula, 1 2γ ′ −Ĝ ′ n (o, o) = 1 2γ ′ −G ′ n (o, o) 1 + 2γ ′G′ n (o, o) = 1 2γ ′ (1 + 2γ ′G′ n (o, o)) . Together with (6.33), this yields ψ ′ n (o) 2 × 2γ ′ × (1 + 2γ ′G′ n (o, o)) = 1 G ′ (δ n , δ n ) . (6.34) Further, with the same function F n as in (6.31), it holds that (ψ ′ n (o),G ′ n (o), 2γ ′ ) = F n (β ′ Vn , γ ′ ). (6.35) Moreover, the joint law of (β ′ Vn , γ ′ ) is the same as the joint law of (β Vn , γ). It stems from the restriction properties in Lemma C and Lemma D. Therefore, combining this with (6.31), (6.35) and (6.34), we obtain that ψ n (o) 2 × 2γ × (1 + 2γG n (o, o)) law = ψ ′ n (o) 2 × 2γ ′ × (1 + 2γ ′G′ n (o, o)) = 1 G ′ (δ n , δ n ) . By Theorem 3 in [STZ17], 1/G ′ (δ n , δ n ) law = 2Γ(1/2, 1) and by Proposition 4.4,G n (o, o) = R(o ←→ δ n ). This concludes the proof. Now, we are ready to prove Theorem 4. Proof of Theorem 4. For every n ∈ N, it holds that ψ n (o) 2 = 1 2γ(1 + 2γR(0 ←→ δ n )) × Φ n (6.36) where Φ n = ψ n (o) 2 × 2γ(1 + 2γR(o ←→ δ n )). 
By Lemma 6.1, we know that for every n ∈ N, Φ n law = 2Γ(1/2, 1). Therefore for every n ∈ N, P µ,W (Φ n < 2/n 4 ) = That is why, in order to conclude, we only have to prove that, P µ,W -a.s, R(o ←→ δ n ) = e 2τ (m,W )n+o(n) . Remark that the identity (6.2) is also true without the expectation and remember from Lemma 4.4 that R(o ←→ δ n ) =G n (0, 0). Therefore, for every n ∈ N. R(o ←→ δ n ) ≥ 1 W e 2τ (m,W )n × min |x|=n A x × W −1 n,2/t * (m,W ) . (6.38) First, min |x|=n A x has at most polynomial decay P µ,W -a.s. This can be shown exactly as in (6.5). Furthermore, by Proposition J, W −1 n,2/t * (m,W ) has also polynomial asymptotics. Consequently, this proves the lower bound of R(o ←→ δ n ). More precisely, P µ,W almost surely, R(o ←→ δ n ) ≥ e 2τ (m,W )n+o(n) . Now, let us prove the upper bound. By (6.15), it holds that W, t)). R(0 ←→ δ n ) ≤ n W × max |z|≤n A z × e 2τ Obviously, F ∈ C ∞ R * + × R * + . We introduce another function G defined by G(W, t) = F (W, t) − t ∂F ∂t (W, t) for every (t, W ) ∈ R * + × R * + . Moreover, by step 1 in the proof of Theorem 3, we know that for every W > 0, there exists a unique t * (m, W ) > 0 such that G(W, t * (m, W )) = 0. Further, for every (t, W ) ∈ R * + × R * + , ∂G ∂t (W, t) = −t ∂ 2 F ∂t 2 (W, t) = −t E µ,W A t E µ,W ln(A) 2 A t − E µ,W ln(A)A t 2 E µ,W [A t ] 2 (6.40) where A is an Inverse Gaussian distribution with parameters (1, W G(W c (µ), 1/2) = F (W c (µ), 1/2) − (1/2) ∂F ∂t (W c (µ), 1/2) = ln (mQ(W c (µ), 1/2)) = 0. Therefore, t * (m, W c (µ)) = 1/2. (6.43) Thus, by Taylor expansion in a neighborhood of W c (µ), it holds that, F (W, t * (m, W )) = F (W c (µ), 1/2) + (W − W c (µ)) ∂F ∂W (W c (µ), 1/2) + (t * (m, W ) − 1/2) ∂F ∂t (W c (µ), 1/2) + o W c (µ) − W, t * (m, W ) − 1/2 = (W − W c (µ)) ∂F ∂W (W c (µ), 1/2) + o(W c (µ) − W ) (6.44) where in the last equality, we used the fact that F (W c (µ), 1/2) = 0 and (6.42). Moreover o(W c (µ) − W, t * (m, W ) − 1/2) becomes o(W c (µ) − W ) in the last equality because t * (m, W ) − 1/2 = t * (m, W ) − t * (m, W c (µ)) = O(W c (µ) − W ) as t * (m, ·) is a smooth function. Besides, τ (m, W ) = −F (W, t * (m, W ))/t * (m, W ) ∼ −2F (W, t * (m, W )) in the neighborhood of W c (µ) because t * (m, W c (µ)) = 1/2. Together with (6.44), it yields τ (m, W ) ∼ W →Wc(µ) 2 ∂F ∂W (W c (µ), 1/2) (W c (µ) − W ) ((x + 1/x − 2)(2π) −1/2 x −1 e −(W/2)(x+1/x−2) dx +∞ 0 (2π) −1/2 x −1 e −(W/2)(x+1/x−2) dx = 1 2W − 1 2 Q(W, 3/2) + Q(W, −1/2) − 2Q(W, 1/2) Q(W, 1/2) = 1 + 1 2W − Q(W, 3/2) Q(W, 1/2) . (6.47) In the last equality, we used the fact that Q(W, 3/2) = Q(W, −1/2). Moreover, remark that for every W > 0, Q(W, 3/2) = +∞ 1 W 2π (x + 1/x) x e −(W/2)(x+1/x−2) dx = 2W π +∞ 0 cosh(u)e −W (cosh(u)−1) du = 2W π e W K 1 (W ) = K 1 (W ) K 1/2 (W ) (6.48) where K α is the modified Bessel function of the second kind with index α. Besides, recall that mQ(W c (µ), 1/2) = 1. Now, let us evaluate (6.47) at W = W c (µ). Together with (6.48), this implies ∂F ∂W (W c (µ), 1/2) = 1 + 1 2W c (µ) − m K 1 (W c (µ)) K 1/2 (W c (µ)) . (6.49) Moreover, we still have to prove that ∂F ∂W (W c (µ), 1/2) > 0. Actually, it is enough to prove that for every W > 0, 1 + 1 2W − Q(W, 3/2) Q(W, 1/2) > 0. Exactly as in (6.48), one can prove that Q(W, 1/2) = K 0 (W ) K 1/2 (W ) . Therefore, we have to prove that for every W > 0, 1 + 1 2W > K 1 (W ) K 0 (W ) . Nevertheless, it is exactly Corollary 3.3 in [CY17]. Proof of Proposition 2.2 Proof of Proposition 2.2. 
Recall from Proposition G that the measure P V RJP µ,W is defined as follows: • First, under measure P µ,W , we choose randomly a Galton-Watson tree V and the random conductances c on V which are given by Proposition G. • Secondly, we choose randomly a trajectory on V for the discrete-time process (Z n ) n∈N with distribution P c,o where P c,o is the law of a random walk on the tree (V, E) starting from o with conductances c. Step 1: proof of the lower bound. Let n ∈ N * . By Jensen's inequality, it holds that 1 P V RJP µ,W (τ + o > τ n ) = 1 E µ,W P c,o (τ + o > τ n ) ≤ E µ,W 1 P c,o (τ + o > τ n ) . (6.50) However, by definition of the effective resistance, we know that 1 P c,o (τ + o > τ n ) = W   ⃗ i=o A i   × R(o ←→ δ n ). Therefore, by Proposition 4.4 1 P c,o (τ + o > τ n ) = W   ⃗ i=o A i   ×G n (o, o). Combining this with (6.50) and Cauchy-Schwarz inequality, there exists a positive constant C such that 1 P V RJP µ,W (τ + o > τ n ) ≤ C E µ,W G n (o, o) 2 . (6.51) Combining (6.22) and (6.51), we obtain 1 P V RJP µ,W (τ + o > τ n ) ≤ e 2τ (m,W )n+o(n) . This is exactly the lower bound in Proposition 2.2. Step 2: proof of the upper bound. Let α ∈]0, t * (m, W )/2[. Remark that t * (m, W )/2 < 1/4 because W < W c (µ). Let n ∈ N * . It holds that P V RJP µ,W (τ + o > τ n ) = E µ,W P c,o (τ + o > τ n ) ≤ E µ,W P c,o (τ + o > τ n ) α . (6.52) Furthermore, by definition of the effective conductance C(o ←→ δ n ) between o and level n of the tree, we know that P c,o (τ + o > τ n ) = C(o ←→ δ n ) W ⃗ i=o A i . (6.53) Let ε > 0 such that (1 + 2ε)α < t * (m, W )/2. Combining Hölder inequality, (6.52) and (6.53), there exists C > 0 such that P V RJP µ,W (τ + o > τ n ) ≤ CE µ,W C(o ←→ δ n ) (1+ε)α 1/(1+ε) . (6.54) However,G n (o, o) −1 = C(o ←→ δ n ). Consequently, following exactly the same lines as in (6.2), we get C(o ←→ δ n ) ≤ W e −2τ (m,W )n × max |x|=n A −1 x × W n,2/t * (m,W ) . Combining this with (6.54), it yields One can prove that the first term in (6.56) has at most polynomial growth by following exactly the same lines as for the proof of (6.17). Moreover, the second term in (6.56) decreases with a polynomial decay by Proposition K because α(1 + 2ε) < t * (m, W )/2. Together with (6.55), as α can be taken as close from t * (m, W )/2 as we want, this concludes the proof. P V RJP µ,W (τ + o > τ n ) ≤ Ce − 7 The critical point 7.1 Proof of Theorem 5 Now, we are going to prove Theorem 5 which describes the asymptotic behaviour of (ψ n (o)) n∈N at the critical point. Proof of Theorem 5. For simplicity of notation, we write W = W c (µ) in the entirety of this proof. Exactly as in the proof of Theorem 4, by using Lemma 6.1, we only need to find the almost sure behaviour of C(o ←→ δ n ), the effective conductance associated with the VRJP, in order to get the asymptotics of ψ n (o) 2 . Remember that the local conductance from any vertex x ∈ V \{o} to ⃗ x is W A −1 x   o<u≤x A 2 u   which is not exactly the effective conductance associated with a branching random walk. Remark that for every n ∈ N, W min |z|≤n A −1 z ϱ n ≤ C(o ←→ δ n ) ≤ W max |z|≤n A −1 z ϱ n (7.1) where ϱ n is the effective conductance from o to level n when the local conductance from any vertex x ∈ V \{o} to ⃗ x is given by   o<u≤x A 2 u   . As usual, min |z|≤n A −1 z and max |z|≤n A −1 z have polynomial asymptotics almost surely. Thus, we only need to focus on the behaviour of (ϱ n ) n∈N . For every x ∈ V , let us denotê S(x) = −2 o<u≤x ln(A u ). 
We writeψ(t) = ln E µ,W |x|=1 e −tŜ(x) = ln E µ,W |x|=1 A 2t x . As we are at the critical point and thanks to Proposition H,ψ strictly decreases on [0, 1/4] and increases strictly on [1/4, 1],ψ(1/4) = 0 andψ ′ (1/4) = 0. Our ϱ n is exactly the same as the one defined in [FHS12] with the branching random walkŜ. By the proof of Theorem 1.2 in [FHS12], we get that, P µ,W -a.s, lim n→+∞ ln(ϱ n ) n 1/3 = − 3π 2 2 × 4 ×ψ ′ (1/4) 1/3 = −   24π 2 E µ,W   |x|=1 A 1/2 x ln(A x ) 2     1/3 . This concludes the proof. Positive recurrence at the critical point Now, let us prove Theorem 6. Proof of Theorem 6. We want to prove the positive recurrence of the discrete process (Z n ) n∈N associated with (Z t ) t≥0 . By Proposition G, (Z n ) n∈N is a Markov chain in random conductances with conductances given by c(x, ⃗ x) = W e Ux+U ⃗ x = W A x o<u≤ ⃗ x A 2 u for every x ∈ V \{o}. For every x ∈ V , let us definẽ S(x) = − 1 2 o<y≤x ln(A u ). We assumed that W = W c (µ), that is, mQ(W, 1/2) = 1 by Proposition H. Therefore, {(x,S(x)), x ∈ V } is a branching random walk which satisfies hypothesis (3.10). This is easily checked that it satisfies also (3.9). Moreover it satisfies hypothesis (3.8) by hypothesis A 3 . Therefore, we are allowed to use the results of Hu and Shi (Propositions K and J.) with this branching random walk. Following the notations of Hu and Shi, we define In order to prove Theorem 6, this is enough to prove that for some r ∈]0, 1[, where ν has the same distribution as ν y for any y ∈ V . The last equality comes from the fact that (W n ) n∈N is a martingale because the branching random walkS satisfies hypothesis (3.10). Combining identities (7.3), (7.4) and (7.5), in order to make E µ,W [Λ r n ] summable, we need n 3r/2 E µ,W W r n,4 and E µ,W ν 1/4 1 ν≥n 3/2 4r to be summable. Moreover, recall we assumed that r < 1/4. By Proposition K, we know that n 3r/2 E µ,W W r n,4 = n 3r/2 × n −6r+o(1) = n −9r/2+o(1) . Moreover by Hölder's inequality with p = 4, E µ,W ν 1/4 1 ν≥n 3/2 4r ≤ n −9r/2 . In order to conclude, we only need to choose r between 2/9 and 1/4 which is possible because 2/9 < 1/4. Acknowledgments I would like to thank my Ph.D supervisors Christophe Sabot and Xinxin Chen for suggesting working on this topic and for their very useful pieces of advice. takes values in N and each point ρ i is in R. At time 0, there is a unique ancestor called the root o. We define S(o) = 0. At time n, each individual u generates independently a point process L u := {ρ u i , 1 ≤ i ≤ N u } with the same law as L. Each point in L u stands for a child of u. The positions of the children of u are given by the point process {ρ The case where f (t min ) > 0 can be treated in the same way. ψ n (o) as a mixture of Inverse Gaussian distributions and proof of Theorem 1 Lemma 4. 1 . 1For every n ∈ N, under ν W V ,(i) Proof of Lemma 4.1. By Lemma D, (β i ) i∈Vn has lawνP(n) ,η n ,i∼j W for every i ∈ V n andP (n) (i, j) = W 1{i ∼ j} for every i, j ∈ V n . Further, by Lemma C, the law of β o conditionally on G n isν Wo,o,η {o} o, o) = +∞, P µ,W − a.s. (ii) ∀W ∈]W c (µ), +∞[, lim n→+∞G n (o, o) :=G(o, o) < +∞, P µ,W − a.s. Proof of Lemma 4.5. By Propositions G and H, W ≤ W c (µ) if and only if the random walk with conductances (c i,j ) (i,j)∈E is recurrent almost surely. By Theorem 2.3 in [LP16], this is equivalent to say that lim n→+∞ R(o ←→ δ n ) = +∞.Therefore, Lemma 4.4 concludes the proof. ψ n (o) and ⟨ψ n (o, t), ψ n (o, t)⟩ ≤ e na * ε + P µ,+ fast when n goes toward infinity. 
Therefore we only have to prove that (c) decreases faster than any exponential function. Let δ > 0. The crucial point is to remark that for every n ∈ N * , P µ,W ∀z, |z| = ⌊δn⌋,min |x|z=⌊(1−δ)n⌋ max z<u≤xS z (u) +S(z) ≥ εn ∩S(z) ≤ εn/2 whereS z (u) =S(u) −S(z). Therefore, for every n ∈ N * , P µ,W min |x|=n max o<u≤xS (u) ≥ εn ≤ P µ,W max |z|=⌊δn⌋ max o<u≤zS (u) ≥ εn/2 + P µ,W ∀z, |z| = ⌊δn⌋, min |x|z=⌊(1−δ)n⌋ max z<u≤xS z (u) ≥ εn/2 . (6.19) Pr µ,W ∀z, |z| = ⌊δn⌋, min |x|z=⌊(1−δ)n⌋ max z<u≤xS z (u) ≥ εn/2 ≤ 1 − e −ηn 1/3 2 ⌊δn⌋ (6.20)which decreases faster than any exponential function. Now, let t > 0. By Markov inequality, for every n ∈ N * ,P µ,W max |z|=⌊δn⌋ max o<u≤zS (u) ≥ εn/2 ≤ e −nεt(t) k where r(t) = E µ,W |x|=1e tS(x) . Consequently, there exists a constant C > 0 such that for every n ∈ N * , P µ,W max |z|=⌊δn⌋ max o<u≤zS (u) ≥ εn/2 ≤ C exp (n (δ ln(r(t)) − tε/2)) . (6.21) ψ n (o) ≤ WĜ(o, o)e −τ (m,W )n |x|=n e −S(x)/t * (m,W ) ν x (6.26) Lemma 6 . 1 . 61Let V be a rooted tree with root o. Let W > 0. Then, under ν W V , it holds that for everyn ∈ N * , ψ n (o) 2 × 2γ × (1 + 2γR(o ←→ δ n )) law = 2Γ(1/2, 1)where γ is the Γ(1/2, 1) random variable which was used to define the potential β on a tree (see identity (3.3)) and R(o ←→ δ n ) is the effective resistance from o to δ n associated with the conductances c defined in Proposition G.Proof of Lemma 6.1. Let n ∈ N. The proof is based on a coupling with a potential on the wired graphṼ n . (See subsection 4.2 for the definition of the wired graph.) Recall that, under ν W V , thanks to (3.3), the potential β can be decomposed as β =β + 1{· = o}γ where γ andβ are independent. For every i ∈ V n , we writeη(n) i = j∼i,j / ∈Vn W . Then, recall that ψ n (o) =Ĝ nη (n) . In particular,there exists a deterministic function F n from R |Vn|+1 into R 3 such that (ψ n (o),G n (o), 2γ) = F n (β Vn , γ). (6.31) Further, for every n ∈ N * , let us defineΛ n := |x|=n c(x, ⃗ x). a.s, for every i ∈ Z d , ψ(i) > 0 and the VRJP is transient.Moreover, W c (d) < +∞ if and only if d ≥ 3. Now let us assume that (V,E)is a supercritical Galton Watson tree with offspring law µ such that µ(0) = 0. Then there exists W c (µ) ∈ R * + depending only on the mean of µ such that: W -a.s, for every i ∈ V , ψ(i) > 0 and the VRJP is transient. 2.5 Statement of the results 2.5.1 Results on Z d of Lemma 4.2. Let Λ be a finite subset of V including o. Let us defineΛ = Λ\{o}. Let A be a borelian set of RΛ. Let F be a bounded continuous function of R d . Then, by Lemma 4.1, for every n large enough, Proof of Lemma 4.7. Let n ∈ N. Recall that (H β ) Vn,Vn =H n + 2γE o,o where E o,o is the matrix which has only null coefficients, excepted at (o, o) where it has coefficient 1. Then, by Cramer's formula, we have the following key-equality: which is summable. Moreover, for every n ∈ N,P µ,W (Φ n > 2n) =which is summable. Consequently, by Borel-Cantelli lemma, P µ,W -a.s, for n large enough,1/n 4 0 e −y √ πy dy ≤ 1 √ π 1/n 4 0 dy √ y = 2 √ πn 2 +∞ n e −y √ πy dy ≤ 1 √ πn e −n 2 n 4 ≤ Φ n ≤ 2n. (6.37) (m,W )n × eIn the same way as in (6.5), max {A z : |z| ≤ n} has at most polynomial growth P µ,W -a.s. Moreover, by Theorem 1.4 in[FHS12], there exists some constant c > 0 such that min {S m (x) : |x| = n} ∼ cn 1/3 P µ,W -a.s. This concludes the proof.Proof of Proposition 2.1. Let m > 1. For every W > 0 and for every t > 0, let us define F (W, t) = ln(mQ(2/t * (m,W ) min |x|=nS m(x) . (6.39) 6.3 Proof of Proposition 2.1 ). 
From (6.40) and Cauchy-Schwarz inequality, we deduce that for every (t, W ) ∈ R * + × R * + , Therefore, we can apply the implicit function theorem which implies that W → t * (m, W ) is smooth. By Proposition H, W c (µ) is the unique W > 0 such that mQ(W, 1/2) = 1. Moreover, for every W ∈ R * + , because the minimum of t → Q(W, t) is achieved for t = 1/2. Consequently,∂G ∂t (W, t) < 0. (6.41) ∂F ∂t (W, 1/2) = 0 (6.42) 6.45) Therefore, we only have to compute ∂F ∂W (W c (µ), 1/2) in order to conclude the proof. Let us recall that for every W > 0,F (W, 1/2) = ln(m) + 1 2 ln(W ) + ln +∞ 0 e −(W/2)(x+1/x−2) √ 2πx dx . (6.46) Differentiating (6.46), we get ∂F ∂W (W, 1/2) = 1 2W − 1 2 +∞ 0 2ατ (m,W )n E µ,W max|x|=n A −(1+ε)α x × W (1+ε)α n,2/t * (m,W ) 1/(1+ε) . (6.55) Moreover, by Hölder inequality, we get E µ,W max |x|=n A −(1+ε)α x × W (1+ε)α n,2/t * (m,W ) ≤ E µ,W max |x|=n A −α(1+ε)(1+2ε)/ε x ε/(1+2ε) × E µ,W W (1+2ε)α n,2/t * (m,W ) 1/(1+2ε) (6.56) Let n ∈ N * and r ∈]0, 1[. r shall be made precise later in the proof. First, let us remark that,E µ,W [Λ r n ] ≤ E µ,W For every y ∈ V ,let us define the random variable which is the number of children of y. Then, it holds that, ≤ n 3r/2 E µ,W [W r n−1,4 ] + E µ,W Moreover, by Jensen's inequality, if r < 1/4, we get, (b) ≤ E µ,W 4r = E µ,W [W n−1 ] 4r E µ,W ν 1/4 1 ν≥n 3/2+∞ n=1 E µ,W [Λ r n ] < +∞. (7.2)     |x|=n   o<u≤x A 2 u   A −1 x 1 Ax≥1   r   + E µ,W     |x|=n   o<u≤x A 2 u   A −1 x 1 Ax≤1   r   ≤ E µ,W W r n,4 + E µ,W     |x|=n   o<u≤x A 2 u   A −1 x 1 Ax≤1   r   (a) . (7.3) ν y = ⃗ x=y 1 (a) = E µ,W     |y|=n−1   o<u≤y A 2 u   ⃗ x=y A x 1 Ax≤1   r   ≤ E µ,W     |y|=n−1   o<u≤y A 2 u   ν y   r       |y|=n−1   o<u≤y A 2 u   ν y 1 νy≥n 3/2   r   (b) . (7.4)      |y|=n−1   o<u≤y A 2 u   ν y 1 νy≥n 3/2   1/4    4r ≤ E µ,W   |y|=n−1   o<u≤y A 1/2 u     4r E µ,W ν 1/4 1 ν≥n 3/2 4r = E µ,W ν 1/4 1 ν≥n 3/2 4r (7.5) Large deviations for transient random walks in random environment on a Galton-Watson tree. E Aidékon, Ann. Inst. Henri Poincaré. 461Probab. Stat.E. Aidékon. Large deviations for transient random walks in random environment on a Galton-Watson tree. Ann. Inst. Henri Poincaré, Probab. Stat., 46(1):159-189, 2010. Extrapolation and interpolation of quasi-linear operators on martingales. D L Burkholder, R F Gundy, Acta Mathematica. 124D. L. Burkholder and R. F. Gundy. Extrapolation and interpolation of quasi-linear opera- tors on martingales. Acta Mathematica, 124:249 -304, 1970. Directed polymers on trees: a martingale approach. E Buffet, A Patrick, J V Pule, Journal of Physics A: Mathematical and General. 268E. Buffet, A. Patrick, and J. V. Pule. Directed polymers on trees: a martingale approach. Journal of Physics A: Mathematical and General, 26(8):1823-1834, 1993. A Basdevant, A Singh, Continuous-time vertex reinforced jump processes on Galton-Watson trees. 22A. Basdevant and A. Singh. Continuous-time vertex reinforced jump processes on Gal- ton-Watson trees. The Annals of Applied Probability, 22(4):1728 -1743, 2012. The critical temperature of a directed polymer in a random environment. A Camanes, P Carmona, Markov Process. Relat. Fields. 151A. Camanes and P. Carmona. The critical temperature of a directed polymer in a random environment. Markov Process. Relat. Fields, 15(1):105-116, 2009. Directed Polymers in Random Environments. F Comets, SpringerF. Comets. Directed Polymers in Random Environments. Springer, 2017. 
On approximating the modified Bessel function of the second kind. Y.-M Chu, Z.-H Yang, J. Inequal. Appl. 841Y.-M. Chu and Z.-H. Yang. On approximating the modified Bessel function of the second kind. J. Inequal. Appl., pages Paper No. 41, 8, 2017. Speed of vertex-reinforced jump process on Galton-Watson trees. X Chen, X Zeng, J. Theor. Probab. 312X. Chen and X. Zeng. Speed of vertex-reinforced jump process on Galton-Watson trees. J. Theor. Probab., 31(2):1166-1211, 2018. A note on recurrence of the Vertex reinforced jump process and fractional moments localization. A Collevecchio, X Zeng, Electronic Journal of Probability. 2663A. Collevecchio and X. Zeng. A note on recurrence of the Vertex reinforced jump process and fractional moments localization. Electronic Journal of Probability, 26:63, 2021. Anderson localization for a supersymmetric sigma model. M Disertori, T Spencer, Commun. Math. Phys. 3003M. Disertori and T. Spencer. Anderson localization for a supersymmetric sigma model. Commun. Math. Phys., 300(3):659-671, 2010. Quasi-diffusion in a 3D Supersymmetric Hyperbolic Sigma Model. M Disertori, T Spencer, M R Zirnbauer, Communications in Mathematical Physics. 3002M. Disertori, T. Spencer, and M. R. Zirnbauer. Quasi-diffusion in a 3D Supersymmetric Hyperbolic Sigma Model. Communications in Mathematical Physics, 300(2):435-486, 2010. Vertex-reinforced jump processes on trees and finite graphs. B Davis, S Volkov, 128Probab. Theory Relat. FieldsB. Davis and S. Volkov. Vertex-reinforced jump processes on trees and finite graphs. Probab. Theory Relat. Fields, 128(1):42-62, 2004. Almost sure convergence for stochastically biased random walks on trees. G Faraud, Y Hu, Z Shi, 154Probab. Theory Relat. FieldsG. Faraud, Y. Hu, and Z. Shi. Almost sure convergence for stochastically biased random walks on trees. Probab. Theory Relat. Fields, 154(3-4):621-660, 2012. Representations of the vertex reinforced jump process as a mixture of Markov processes on Z d and infinite trees. T Gerard, Electron. J. Probab. 25108T. Gerard. Representations of the vertex reinforced jump process as a mixture of Markov processes on Z d and infinite trees. Electron. J. Probab., 25:45, 2020. Id/No 108. Minimal position and critical martingale convergence in branching random walks, and directed polymers on disordered trees. Y Hu, Z Shi, The Annals of Probability. 372Y. Hu and Z. Shi. Minimal position and critical martingale convergence in branching ran- dom walks, and directed polymers on disordered trees. The Annals of Probability, 37(2):742 -789, 2009. Marches aléatoires avec branchement et absorption. Theses. B , Université Pierre et Marie Curie -Paris VIB. Jaffuel. Marches aléatoires avec branchement et absorption. Theses, Université Pierre et Marie Curie -Paris VI, 2010. Power-law decay of weights and recurrence of the two-dimensional VRJP. G Kozma, R Peled, Electron. J. Probab. 2682G. Kozma and R. Peled. Power-law decay of weights and recurrence of the two-dimensional VRJP. Electron. J. Probab., 26:19, 2021. Id/No 82. Probability on Trees and Networks. R Lyons, Y Peres, of Cambridge Series in Statistical and Probabilistic Mathematics. New YorkCambridge University Press42R. Lyons and Y. Peres. Probability on Trees and Networks, volume 42 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, New York, 2016. Monotonicity and phase transition for the vrjp and the errw. preprint. R Poudevigne, R. Poudevigne. Monotonicity and phase transition for the vrjp and the errw. 
preprint., 2019. Grundlehren der mathematischen Wissenschaften. D Revuz, M Yor, SpringerContinuous martingales and Brownian motion. 3rd editionD. Revuz and M. Yor. Continuous martingales and Brownian motion. Grundlehren der mathematischen Wissenschaften. Springer, 3rd edition, 1998. Polynomial localization of the 2d-vertex reinforced jump process. C Sabot, Electron. Commun. Probab. 269C. Sabot. Polynomial localization of the 2d-vertex reinforced jump process. Electron. Commun. Probab., 26:9, 2021. Id/No 1. Branching random walks. Z Shi, Springer2151École d'Été de Probabilités de Saint-Flour XLII -2012Z. Shi. Branching random walks. École d'Été de Probabilités de Saint-Flour XLII -2012, volume 2151. Springer, 2015. Edge-reinforced random walk, vertex-reinforced jump process and the supersymmetric hyperbolic sigma model. C Sabot, P Tarrès, J. Eur. Math. Soc. 179JEMS)C. Sabot and P. Tarrès. Edge-reinforced random walk, vertex-reinforced jump process and the supersymmetric hyperbolic sigma model. J. Eur. Math. Soc. (JEMS), 17(9):2353-2378, 2015. The vertex reinforced jump process and a random schrödinger operator on finite graphs. The Annals of Probability. C Sabot, P Tarrès, X Zeng, 45C. Sabot, P. Tarrès, and X. Zeng. The vertex reinforced jump process and a random schrödinger operator on finite graphs. The Annals of Probability, 45:3967-3986, 2017. A random Schrödinger operator associated with the vertex reinforced jump process on infinite graphs. C Sabot, X Zeng, J. Am. Math. Soc. 322C. Sabot and X. Zeng. A random Schrödinger operator associated with the vertex reinforced jump process on infinite graphs. J. Am. Math. Soc., 32(2):311-349, 2019. Hitting times of interacting drifted Brownian motions and the vertex reinforced jump process. C Sabot, X Zeng, The Annals of Probability. 483C. Sabot and X. Zeng. Hitting times of interacting drifted Brownian motions and the vertex reinforced jump process. The Annals of Probability, 48(3):1057 -1085, 2020.
DRAPS: Dynamic and Resource-Aware Placement Scheme for Docker Containers in a Heterogeneous Cluster

22 May 2018 · arXiv:1805.08598 · doi:10.1109/pccc.2017.8280474

Ying Mao, Jenna Oak, Anthony Pompili, and Daniel Beer (Department of Computer Science, The College of New Jersey); Tao Han (Department of Electrical & Computer Engineering, University of North Carolina at Charlotte); Peizhao Hu (Department of Computer Science, Rochester Institute of Technology)

Abstract—Virtualization is a promising technology that has facilitated cloud computing to become the next wave of the Internet revolution. Adopted by data centers, millions of applications that are powered by various virtual machines improve the quality of services. Although virtual machines are well isolated from each other, they suffer from redundant boot volumes and slow provisioning time. To address these limitations, containers were born to deploy and run distributed applications without launching entire virtual machines. As a dominant player, Docker is an open-source implementation of container technology. When managing a cluster of Docker containers, the management tool, Swarmkit, does not take the heterogeneities of both physical nodes and virtualized containers into consideration. The heterogeneity lies in the fact that different nodes in the cluster may have various configurations, concerning resource types and availabilities, etc., and the demands generated by services are varied, such as CPU-intensive (e.g., clustering services) as well as memory-intensive (e.g., web services). In this paper, we investigate Docker container clusters and develop DRAPS, a resource-aware placement scheme that boosts system performance in a heterogeneous cluster.

I. INTRODUCTION

In the past few decades, we have witnessed a spectacular information explosion over the Internet. Hundreds of thousands of users are consuming the Internet through various services, such as websites, mobile applications, and online games. The service providers, at the back-end side, are supported by state-of-the-art infrastructures on the cloud, such as Amazon Web Services [1] and Microsoft Azure [2]. Focusing on providing services at scale, virtualization is one of the emerging technologies used in data centers and cloud environments to improve both hardware and development efficiency. At the system level, the virtual machine is a widely adopted virtualization method [3], which isolates CPU, memory, block I/O, network resources, etc. [4]. In a large-scale system, however, providing services through virtual machines would mean that users are probably running many duplicate instances of the same OS and many redundant boot volumes [5]. Recent research shows that virtual machines suffer from noticeable performance overhead, large storage requirements, and limited scalability [6]. To address these limitations, containers were designed for deploying and running distributed applications without launching entire virtual machines. Instead, multiple isolated service units of the application, called containers, share the host operating system and physical resources. The concept of container virtualization is yesterday's news; Unix-like operating systems have leveraged the technology for over a decade, and modern big data processing platforms utilize containers as a basic computing unit [7]-[9].
However, new containerization platforms, such as Docker, have brought it into the mainstream of application development. Building on previously available open-source technologies (e.g., cgroups), Docker introduces a way of simplifying the tooling required to create and manage containers. On a physical machine, containers are essentially just regular processes that, from the system's point of view, enjoy a virtualized resource environment: not only CPU and memory, but also bandwidth, ports, disk I/O, etc. A Docker container is started on a physical machine with the "docker run image" command. In addition to the disk image that we would like to initiate, users can specify a few options, such as "-m" and "-c", to limit a container's access to resources. While these options set a maximum amount, resource contention still happens among containers on every host machine. Upon receiving "docker run" commands from clients, the cluster, as a first step, should select a physical machine to host those containers. The default container placement scheme, named Spread, uses a bin-pack strategy and tries to assign a container to the node with the fewest running containers. While Spread aims to distribute tasks equally among all nodes, it omits two major characteristics of the system. First of all, the nodes in a cluster do not necessarily have to be identical with each other. It is a common setting to have multiple node types, in terms of total resources, in the cluster. For example, a cutting-edge server can easily run more processes concurrently than an off-the-shelf desktop. Secondly, the resource demands from containers are different. Starting from various images, the services provided by containers are varied, which leads to diverse resource demands. For instance, a clustering service, e.g., K-means, may need more computational power, and a logging service, e.g., Logstash, may request more bandwidth. In this project, we propose a new container placement scheme, DRAPS, a Dynamic and Resource-Aware Placement Scheme. Different from the default Spread scheme, DRAPS assigns containers based on the currently available resources in a heterogeneous cluster and the dynamic demands from containers of various services. First, DRAPS identifies the dominant resource type of a service by monitoring the containers that offer this service. It then places containers with complementary needs on the same machine in order to balance the resource usage on the nodes. Finally, if one type of resource becomes a bottleneck in the system, it migrates the resource-intensive containers to other nodes. Our main contributions are as follows:
• We introduce the concept of the dominant resource type, which considers the dynamic demands from different services.
• We propose a complete container placement scheme, DRAPS, which assigns tasks to appropriate nodes and balances resource usage in a heterogeneous cluster.
• We implement DRAPS in the popular container orchestration tool, Swarmkit, and conduct experiments with 18 services of 4 types. The evaluation demonstrates that DRAPS outperforms the default Spread and reduces usage by as much as 42.6% on one specific node.

II. RELATED WORK

Virtualization serves as one of the fundamental technologies in cloud computing systems. As a popular application, virtual machines (VMs) have been studied for decades. In reality, however, VMs suffer from noticeable performance overhead, large storage requirements, and limited scalability [6].
More recently, containerization, a lightweight virtualization technique, has been drawing increasing popularity from different aspects and on different platforms [10]-[21]. The benefits and challenges of containerized systems have been studied in many respects. A comprehensive performance study is presented in [22], which explores traditional virtual machine deployments and contrasts them with the use of Linux containers. The evaluation focuses on overheads and on experiments that show containers' resulting performance to be equal or superior to that of VMs. Although containers outperform VMs, the research in [23] shows that container startup latency is considerably larger than expected. This is due to a layered and distributed image architecture, in which copying package data accounts for most of the container startup time. The authors propose Slacker, which can significantly reduce the startup latency. While Slacker reduces the amount of copying and transferring of packages, the startup could be even faster if the image is locally available. CoMICon [24] addresses the problem by sharing the image in a cooperative manner. From a different angle, SCoPe [25] tries to manage the provisioning time for large-scale containers. It presents a statistical model, used to guide the provisioning strategy, that characterizes the provisioning time in terms of system features. Besides investigations of standalone containers, clusters of containers are another important aspect in this field. Docker Swarmkit [26] and Google Kubernetes [27] are the dominant cluster management tools on the market. The authors of [28] first conduct a comparative study of scalability under both of them; then, Firmament is proposed to achieve low latency in large-scale clusters by using multiple min-cost max-flow algorithms. On the other hand, focusing on workload scheduling, the paper [29] describes an Ant Colony Optimization algorithm for a cluster of Docker containers. However, the algorithm does not distinguish between containers, which usually have diverse requirements. In this paper, we investigate container orchestration from the perspective of resource awareness. While users can set limits on resources, containers still compete for resources on a physical machine. Starting from different images, containers target various services, which results in different resource requirements. By analyzing the dynamic resource demands, our work studies a node placement scheme that balances the resource usage in a heterogeneous cluster.

III. BACKGROUND AND MOTIVATION

A. Docker Containers

A Docker worker machine runs a local Docker daemon. New containers may be created on a worker by sending commands to its local daemon, such as "docker run -it ubuntu bash". A Docker container image is a lightweight, standalone, executable package of a piece of software that includes everything needed to run it: code, run-time, system tools, system libraries, and settings. In general, each container targets a specific service of an application. If the application needs to scale up this particular service, it initiates duplicated containers by using the same image. One physical machine can host many applications with various services in a standalone mode.
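The "docker run" interface above also exposes the per-container resource limits mentioned in the introduction. As a minimal sketch (ours, not from the paper; the image, command, and the concrete limit values are arbitrary, and it assumes a running Docker daemon plus the Docker SDK for Python), a container capped at 512 MB of memory and one CPU can be launched as follows:

```python
import docker

client = docker.from_env()

# nano_cpus is the SDK counterpart of the --cpus CLI flag (1e9 = one CPU);
# mem_limit mirrors the -m flag. Both values here are illustrative.
container = client.containers.run(
    "ubuntu",
    "sleep 60",
    detach=True,
    mem_limit="512m",
    nano_cpus=1_000_000_000,
)
print(container.id)
```

Even with such caps in place, containers on the same host still contend for the shared resources, which is the contention DRAPS aims to balance.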
B. Container Orchestration

When deploying applications into a production environment, it is difficult to achieve resilience and scalability on a single container host. Typically, a multi-node cluster is used to provide the infrastructure for running containers at scale. Introduced by Docker, SwarmKit is an open-source toolkit for container orchestration in the cluster environment. There are two types of nodes in a cluster that is running SwarmKit: worker nodes and manager nodes. Worker nodes are responsible for running tasks; manager nodes, on the other hand, accept specifications from the user and are responsible for reconciling the desired state with the actual cluster state. A Docker container can be initiated with specific requirements (e.g., memory and CPU) and user-defined labels. The scheduler that runs on a manager combines the user-input information with the state of each node to make various scheduling decisions, such as choosing the best node to perform a task. Specifically, it utilizes filters and scheduling strategies to assign tasks. There are four filters available. ReadyFilter: checks that the node is ready to schedule tasks. ResourceFilter: checks that the node has enough resources available to run the task. PluginFilter: checks that the node has a specific volume plugin installed. ConstraintFilter: selects only nodes that match certain labels. If multiple nodes pass the filtering process, SwarmKit supports three scheduling strategies: spread (currently available), binpack, and random (under development based on Swarm mode). Spread strategy: places a container on the node with the fewest running containers. Binpack strategy: places a container onto the most packed node in the cluster. Random strategy: randomly places the container in the cluster. The default spread strategy, which attempts to schedule a service task based only on the number of active containers on each node, takes neither the capacity differences among nodes nor the diverse resource demands of services into account, which motivates the design of DRAPS.

IV. DRAPS SYSTEM

A. Framework of Manager and Worker Nodes

As described in the previous section, there are multiple managers and workers in the system. A manager has six hierarchical modules. The Client API accepts the commands from clients and creates service objects. The Orchestrator handles the lifecycle of service objects and manages the mechanics for service discovery and load balancing. The Allocator provides network-model-specific allocation functionality and allocates IP addresses to tasks. The Scheduler assigns tasks to worker nodes. The Dispatcher communicates with worker nodes, checks their states, and collects the heartbeats from them. A worker node, on the other hand, manages the Docker containers and sends their states back to managers through periodic heartbeat messages. An executor is used to run the tasks that are assigned to the containers on this worker.

B. DRAPS Modules

To simplify the implementation, we integrate the DRAPS components into the current framework. As shown in Fig. 2, it mainly consists of three parts: a container monitor that resides on the worker nodes, and a worker monitor and a DRAPS scheduler that are implemented on the manager nodes.

Container Monitor: a container monitor collects the runtime resource-usage statistics of the Docker containers on worker nodes. At the application level, the monitored resources comprise memory, CPU percentage, block I/O, and network I/O. The average usage report of the top users in a given time window is injected into the DRAP-Heartbeat messages and sent back to the managers. At the host system level, the tracked information includes I/O wait and the remaining percentages of available memory, CPU, and bandwidth. This information is used by a worker node to conduct a self-examination to identify its own bottlenecks. If a bottleneck is found, a DRAP-Alert message is produced and sent back to the managers.
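As a concrete illustration of the collection step, the sketch below (ours, not the paper's implementation; the field separator and record layout are assumptions) samples per-container usage with the same "docker stats" command that the evaluation section relies on:

```python
import subprocess

# Go-template placeholders supported by `docker stats --format`.
FMT = "{{.Name}}|{{.CPUPerc}}|{{.MemUsage}}|{{.NetIO}}|{{.BlockIO}}"

def sample_container_stats():
    # One non-streaming snapshot of every running container on this worker.
    out = subprocess.check_output(
        ["docker", "stats", "--no-stream", "--format", FMT], text=True)
    records = []
    for line in out.strip().splitlines():
        name, cpu, mem, net, blk = line.split("|")
        records.append({
            "name": name,
            "cpu_pct": float(cpu.rstrip("%")),
            "mem": mem,        # e.g. "210MiB / 7.5GiB"; parse further as needed
            "net_io": net,     # e.g. "1.2kB / 640B"
            "block_io": blk,
        })
    return records
```

Repeated snapshots over a time window could then be averaged and embedded in the DRAP-Heartbeat payload.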
Work Monitor: a worker monitor processes the messages from worker nodes. It maintains a table for each worker and the corresponding containers. Through analyzing the data, it will generate tasks, such as migrating a resource-intensive container to another host. DRAP-Scheduler: the DRAP-Scheduler assigns a task to a specific node based on the current available resources. For a duplicated Docker container, DRAP-Scheduler checks its characteristics on resource consumption, such as memory intensity, through the records of the previous containers in the same services. The DRAPS scheduler aims to optimize the container placement such that the available resources on each worker node are maximized. In this paper, we assume that a container requires multiple resources such as memory, CPU, bandwidth, and I/O for running its services. Since the services and their workloads in a container change over time, the resource requirements in a container also exhibit temporal dynamics. Therefore, we formulate the resource requirements of a container as a function of time. Denote r k i (t) as the kth resource requirement of the ith container at time t. Let x i,j = {0, 1} be the container placement indicator. If x i,j = 1, the ith container is placed in the jth work node. Denote W k j as the total amount of the kth resource in the jth work node. Let C, N , K be the set of containers, work nodes, and the resources, respectively. The utilization ratio of the k resource in the jth work node can be expressed as u k j (t) = i∈C x i,j r k i (t) W k j (1) We assume that the utilization ratio of the jth work node is defined by its highest utilized resource. Then, the utilization ratio of the jth work node is max k∈K u k j (t). The highest resource utilization among all the work nodes can be identified as ν = max j∈N max k∈K u k j (t).(2) Since our objective when designing the DRAPS scheduler is to maximize the available resources in each worker node, the DRAPS scheduling problem can be formulated as max xi,j ν (3) s.t. j x i,j = 1; ∀i ∈ C; (4) u k j (t) ≤ 1, ∀k ∈ K, ∀j ∈ N .(5) The constraint in E.q. (4) requires that each container should be placed in one worker node. The constrain in E.q. (5) enforces that the utilization ratio of any resource in a worker is less than one. Lemma 1. The DRAPS scheduling problem is NP-hardness. Proof: In proving the Lemma, we consider a simple case of the DRAPS scheduling problem in which the resource requirements of each container are constant over time. The simplified DRAPS scheduling problem equals to the multidimensional bin packing problem which is NP-hard [30]- [32]. Hence, the lemma can be proved by reducing any instance of the multidimensional bin packing to the simplified DRAPS scheduling problem. For the sake of simplicity, we omit the detail proof in the paper. VI. DRAPS IN A HETEROGENEOUS CLUSTER Previously, we discussed the different modules in DRAPS and their major responsibilities. We also formulated the DRAPS scheduling problem and proved that the problem is NP-hard. In this section, we present the detailed design of DRAPS with heuristic container placement and migration algorithms, in a heterogeneous cluster, which aims to increase resource availability on each worker node and boost the service performance by approximating the optimal solution of the DRAPS scheduling problem. To achieve the objectives, DRAPS system consists of three strategies: 1) Identify dominant resource demands of containers; 2) Initial container placement; 3) Migrate a container A. 
Identify Resource Demands from Containers Before improving the overall resource efficiency, the system needs to understand the dynamic resource demands of various containers. A container is, usually, focused on providing a specific service, such as web browsing, data sorting, and database querying. Different algorithms and operations will be applied to the services, which result in diverse resource demands. As an intuitive example, we conducted the experiments on NSF Cloudlab [33] (M400 node hosted by University of Utah). The containers are initiated by using the following four images and the data is collected through "docker stats" command. 1) MySQL: the relational database management system. Tested workloads: scan, select, count, join. 2) Tomcat: provides HTTP web services with Java. Tested workloads: HTTP queries at 10/second and 20/second of a HelloWorld webpage. 3) YUM: a software package manager that installs, updates, and removes packages. Tested workload: download and install "vim" package. 4) PI: a service to calculate PI. Tested workload: top 3,000 digits with single thread, top 7,500 digits with two threads. Figs. 3a to 3d plots the dynamic resource demands under different workloads on the above four Docker containers. The figures illustrate very diverse usage patterns on four types of resources: CPU, memory, network I/O, and block I/O. For example, without workload, container PI consumes very limited resources. However, when the jobs arrive at 10th and 38th second, the CPU usage jumps to 100% for a single thread job and 200% for a two-threads job. The usages of the other three types of resources still remain at very low levels. For MySQL service container, with tested operations, the CPU usage shows a burst when clients submit a request. At time 84, a "join" operation that involves 3 tables is submitted, and we can find CPU usage jumps, as well as memory usage. This is because the join operation needs a lot of computation and copies of tables in memory. Different usage trends are found on YUM and Tomcat services, where YUM uses less CPU and memory, but more network I/O and block I/O to download and install packages. On the other hand, Tomcat consumes a very small amount of network I/O and block I/O due to the size of a tested HelloWorld page, but more than 200MB of memory is used to maintain the service. To balance the resource usage, it's crucial to place the containers with complementary demands on the same worker. As shown on the graphs, there is a dominant resource demand of a service in a given period despite multiple types of resources. In DRAPS , we need to identify the dominant resource demand for each service. A manager, in the system, can monitor all of the containers' resource usage and group them by their associated service ID. Suppose the service s i ∈ S contains m running containers that store in a set, RC si . The resources consumed by c i ∈ RC si is denoted by a vector, R ci , where each attribute, r i , in the vector represents a type of resources, such as memory and CPU. If there are q types of resources in the system, the average resource cost of s i is a vector, R si , R si = ci∈RCs i R ci =< ci∈RCs i r 1 /m, ci∈RCs i r 2 /m, ..., ci∈RCs i r q /m > On the worker nodes, there is a limited amount of resources in each type. The resource limit is a vector that contains q attributes, < l 1 , l 2 , ..., l q >. The limit of a system, < L 1 , L 2 , ..., L q >, is obtained by from the sum of vectors from workers. 
Therefore, R si can be represented by a percentage of the total resources in the system, for the i th type, the container cost for s i in on average is B. Initial Container Placement To use a SwarmKit cluster, clients need to execute the command "docker run" to start a new container. Therefore, the first task for the cluster is to choose a worker node to host the container. As discussed in section III, the default container placement strategy fails to take dynamic resource contention into consideration. This is because the managers in SwarmKit do not have a mechanism that can monitor the current available resource. DRAPS, on the other hand, addresses the problem by introducing DRAPS-Heartbeat. DRAPS-Heartbeat is an enhanced heartbeat message that not only the states of worker node, but also the containers' resource usage over a given time window, the usage includes memory, CPU, bandwidth, and block I/O. On the manager side, the data will be organized into a table that keeps tracking the current available resource on each worker and its corresponding containers' resource usages. Running on managers, Algorithm 1 assigns a container initialization task to a specific worker. Firstly, each manager maintains a known service set that records dockers' characteristics, such as the usage of memory, CPU, bandwidth, and block i/o (line 1). The initial candidate worker are all running workers (line 2). When a new container starting task arrives, the algorithm applies all filters that the user specified to shrink the candidate work set, W cand (line 3-6). Then, it checks whether the container belongs to a known service (line 7). If it is, the S dom parameter will be used to store the container's dominant resource attribute (line 8). In DRAPS, we consider four types, memory, CPU, bandwidth, and block i/o. The W cand set will be sorted according to the dominant resource attribute and return the W id with the highest available resource in S dom type (line 9-10). If the service cannot be found in {KS}, W id with the highest available resource on average will be chosen (line [11][12][13]. C. Migrating a Container In a Swarmkit cluster, resource contention happens on every worker. The container conitor, which is a module of DRAPS, runs on each worker to record resource usages of hosting containers. In addition, the worker keeps tracking available resources on itself. Whenever it finds a draining type of resources becomes a bottleneck, it sends to managers a Return w id with highest r SDOM 11: else 12: Sort i=q i=0 r i /m for w id ∈ W cand 13: Return w id with highest average available resource DRAPS alert message that contains the bottleneck type and the most costly container of this type. Upon receiving the DRAPS alert message, the manager needs to migrate this container to an appropriate worker and kill it on the worker to release the resources. Algorithm 2 presents the procedure to process an alert message from w i . It first builds a candidate set W cand , which includes all running workers expect w i that sends the alert (line 1). Then, the manager extracts the resource type, r i that causes the bottleneck and finds the corresponding S id for the C id (lines 2-4). With W cand and S id , the algorithm can decide whether this S id is a global service (line 5). If S id is a global service and it is in the known service set, {KS}, the algorithm returns w id that is included in W cand , with the highest available r SDOM . 
On the other hand, it returns w id with the highest available r i if S id is not in {KS} and S DOM on unknown (lines [6][7][8][9][10][11][12]. When S id is not a global service, we want to increase the reliability of S id by placing its containers to different workers as much as possible. In this situation, we have a similar process expect a different W cand , where W cand contains all running workers that do not hosting any containers for S id (lines 13 -23). VII. PERFORMANCE EVALUATION A. Implementation, Testbed and Workloads We implement our new container placement scheme, DRAPS, on Docker Community Edition (CE) v17. As described in Section IV, the major modules in DRAPS are integrated into the existing Docker Swarmkit framework. Return w id with highest r i 13: else 14: for w id ∈ W cand do 15: if S id ∈ w id then 16 Return w id with highest r i To evaluate DRAPS, we build a heterogeneous cluster on Alibaba Cloud [34], which supports multiple types of computing nodes. Specifically, we use three different types of instances, small (1 CPU core and 4G memory), medium (4 CPU cores and 8G memory) and large (8 CPU cores and 16G memory). In the small-scale testing, we setup a cluster with 3 nodes, one of each type, and configure it with 1 manager and 3 worker (1 of the 3 physical nodes hosts both manager and worker). In experiments on scalability, we configure the cluster with 1 manger and 9 workers that consist 3 instances of each type. The main objective of DRAPS is to understand resource demands of services and place them on appropriate worker nodes. As we discussed in Section VI-A, characteristics of services are varied. Therefore, workloads for the cluster are images of various services. In the evaluation, we select 18 different images in 4 types from Docker Hub B. Evaluation Results 1) Idle containers: In this subsection, we present the result of a cluster with idle containers. If a container is in a running state but does not serve any clients, we call it a idle container. Idle container is an important concept since every node, right after initialization will act as an idle container. Understanding the resource demands of an idle container will help us select its host. In these experiments, we first randomly choose 14 images form the pool, and each image will be used to initiate 10 containers. Therefore, there are 140 containers in the cluster. Those containers are started one by one with 5 seconds interval. This is because previous containers will result in different available resources on worker nodes, which we can utilize to test DRAPS. Fig 4 illustrates a comparison of memory and CPU usages between Spread, a Swarmkit default placement scheme, with DRAPS. As we can see from the subfigures, most of the CPU usage happens from 0 to 500s. This is caused by submission pattern that used to initiate containers. The percentage grows continuously from 0 to 500s since we have 100 containers and the submission interval is 5 seconds. While in both systems, the usage of CPU stays at a low level on average. However, the memory usage keeps increasing along with the number of containers on each worker. Due to the idle container setting, the utilization of memory is stable after 500s (all the containers have successfully initiated). There are some jitters on the curve of CPU, because some supporting programs, such as the Docker engine and service daemon, are running on the same worker and, of course, they are consuming resources. 
Comparing the memory usage rates after 500s, DRAPS significantly reduces rate on worker 1, from 80.5% to 46.7%. On worker 2, Spread and DRAPS achieve similar performance on memory, 39.1% verse 40.6%. On worker 3, Spread results in 23.6% and DRAPS consumes 33.3%. The DRAPS outperforms Spread by considering the heterogeneity in the cluster and various resource demands of services. When a task arrives at the system, it selects a worker based on the service demands and current available resources. Fig 5 shows the number of containers on workers. For Swarmkit with Spread, it uses a bin-pack strategy and tries to equally distribute the containers to every worker, which results in 34, 33, 33 containers for worker 1, 2, 3. While in DRAPS, worker 3 has more power than others and hosts more containers than worker 1, which has limited resource. While DRAPS achieves better performance, it introduces more data transfers between managers and workers through heartbeat messages. Fig 6 plots the network consumption of Swarmkit and DRAPS on worker 3, which hosts both a manager and a worker. As expected, DRAPS consumes more bandwidth than Swarmkit due to the enhanced heartbeat messages includes more statistical information resource usages of containers. Considering the distributed architecture, the system can have multiple managers and each of them in charge of controllable number of workers, the increase of bandwidth consumption that brought by DRAPS is reasonable. Next, we conduct the same experiments with 40% more containers to test the scalability of DRAPS. Fig 7 plots the system performance with 140 Docker containers. Comparing the figures, the first impression is that on Fig 7a, the usages suddenly drop from 95.2% to 11.1% for memory and 100% to 0 for CPU. The reason lies in the fact that, at time 726, the memory becomes bottlenecked on work 1 with Spread scheme. However, the manager does not award this situation on worker 1, and assign a new container to it. Worker 1 fails to start the new container, and drains the memory, which results in the death of all containers on it. The Docker engine decides to kill them all when it can not communicate with them. On the other hand, DRAPS considers dynamic resources usages on workers, and it stops assigning task to a worker if it has already overwhelming. It is shown on Fig 7d that the usages of memory and CPU remains at 46.3% and 18.8% for worker 1 with DRAPS. While worker 2 with Spread still runs smoothly at the end of the testing, its memory usage is at a high level, 76.6%, comparing to work 2 with DRAPS the value is 54.1%. 2) Loaded containers: Besides idle containers, we set up a mix environment that includes both idle and loaded containers. If clients are generating workloads to the services on the running containers, we call it loaded containers. Evidenced by Fig. 3, we know that loaded containers consume more resources than idle ones. In addition, the usage pattern of a loaded container changes along with the workload. Fig 8 plots the memory usage and number of containers on Worker-1. For the experiments running with Spread, it drains the memory at time 825s that the memory usage drops from 98.5% to 11.9%. Simultaneously, the number of running containers on worker-1 drops from 44 to 9 and then, to 0 at time 825s and 837s. This is because the docker engine kills all containers when the memory is not enough to maintain the system itself. Due to less containers on worker-1 with DRAPS (44 v.s 24), it runs normally throughout the entire experiments. 
Fig 9 shows the value of I/O wait in percentage, which measures the percent of time the CPU is idle, but waiting for an I/O to complete. It shows a similar trend that at time 849s the value drops to 0 for Spread, while DRAPS maintains stable performance. VIII. CONCLUSION This paper studies the container placement strategy in a heterogeneous cluster. We target on distributing containers to the worker nodes with the best available resources. In this paper, we develop DRAPS, which considers various resource demands from containers and current available resources on each node. We implemented DRAPS on Docker Swarmkit platform and conducted extensive experiments. The results show a significant improvement on the system stability and scalability when comparing with the default Spread strategy. Fig. 2 : 2Docker Framework with DRAPS Implemention V. PROBLEM FORMULATION Fig. 3 : 3ci∈RCs i r i /m ÷ L i . With the analysis, we define the dominant function, DOM (s i ) = max{ ci∈RCs i r i /m ÷ L i } Function DOM (s i ) returns the Resource demonds under different workloads on four services, MySQL, Tomcat, YUM, PI. type of a dominant resource demand of service s i within a given time period. The value of DOM (s i ) changes along with the system depending on the running containers for s i and the current cost of them. a known characteristics service set {KS} 2: {W cand } = All running W id ; 3: Function ContainerPlacement(S ID ) 4: for w id ∈ {W cand } do 5:if ! F ilters(w id ) then6: Remove w id from {W cand } 7: if S ID ∈ {KS} then 8: S DOM = DOM (S ID ) 9:Sort W cand according to r SDOM 10: Fig. 4 : 4Memory and CPU resources usage comparison between Spread and DRAPS placement scheme (100 containers) Algorithm 2 Process DRAPS Alert Message from w i 1: {W cand } = All running workers expect w i ; 2: Function ReceiveAlertMsg(C id ) 3: Extract the bottleneck type r i 4: Find corresponding S id for C id 5: if ∀w id ∈ W cand → S id ∈ w id then 6: if S id ∈ {KS} then 7: S DOM = DOM (S id ) [35] to build our image pool. Database Services: MongoDB, MySQL, Postgres, Cassandra, RethinkDB. Storage/Caching Services: Registry, Memcached. Web Services: Tomcat, Httpd, Redis, HAProxy, Jetty, Nginx, GlassFish. Message Services: Rab-bitMQ, Apache ZooKeeper, ActiveMQ, Ghost. Fig. 6 : 6Network consumption comparison on worker 3 Fig. 7 :Fig. 9 : 79Memory and CPU resources usage comparison between Spread and DRAPS placement scheme (Value of I/O Wait on worker1 : Remove w id from W cand 17: if S id ∈ {KS} then 18:S DOM = DOM (S id )19: Sort W cand according to r SDOM 20: Return w id with highest r SDOM 21: else 22: Sort W cand according to r i 23: Amazon web service. Amazon web service. https://aws.amazon.com/. Virtual machine isolation. R Jithin, P Chandran, Communications in Computer and Information Science. 420R. Jithin and P. Chandran. Virtual machine isolation. Communications in Computer and Information Science, 420:91-102, 2014. A survey of migration mechanisms of virtual machines. V Medina, J M García, ACM Computing Surveys (CSUR). 46330V. Medina and J. M. García. A survey of migration mechanisms of virtual machines. ACM Computing Surveys (CSUR), 46(3):30, 2014. Managing performance overhead of virtual machines in cloud computing: A survey, state of the art, and future directions. F Xu, F Liu, H Jin, A V Vasilakos, Proceedings of the IEEE. 1021F. Xu, F. Liu, H. Jin, and A. V. Vasilakos. Managing performance overhead of virtual machines in cloud computing: A survey, state of the art, and future directions. 
Proceedings of the IEEE, 102(1):11-31, 2014. Fresh: Fair and efficient slot configuration and scheduling for hadoop clusters. J Wang, Y Yao, Y Mao, B Sheng, N Mi, IEEE 7th International Conference on. IEEECloud Computing (CLOUD)J. Wang, Y. Yao, Y. Mao, B. Sheng, and N. Mi. Fresh: Fair and efficient slot configuration and scheduling for hadoop clusters. In Cloud Computing (CLOUD), 2014 IEEE 7th International Conference on, pages 761-768. IEEE, 2014. Omo: Optimize mapreduce overlap with a good start (reduce) and a good finish (map). J Wang, Y Yao, Y Mao, B Sheng, N Mi, Computing and Communications Conference (IPCCC). IEEEIEEE 34th International PerformanceJ. Wang, Y. Yao, Y. Mao, B. Sheng, and N. Mi. Omo: Optimize mapreduce overlap with a good start (reduce) and a good finish (map). In Computing and Communications Conference (IPCCC), 2015 IEEE 34th International Performance, pages 1-8. IEEE, 2015. Seina: A stealthy and effective internal attack in hadoop systems. J Wang, T Wang, Z Yang, Y Mao, N Mi, B Sheng, Computing, Networking and Communications (ICNC. IEEEJ. Wang, T. Wang, Z. Yang, Y. Mao, N. Mi, and B. Sheng. Seina: A stealthy and effective internal attack in hadoop systems. In Computing, Networking and Communications (ICNC), 2017 International Confer- ence on, pages 525-530. IEEE, 2017. Secpod: a framework for virtualization-based security systems. X Wang, Y Chen, Z Wang, Y Qi, Y Zhou, Proceedings of the 2015 USENIX Annual Technical Conference. the 2015 USENIX Annual Technical ConferenceX. Wang, Y. Chen, Z. Wang, Y. Qi, and Y. Zhou. Secpod: a framework for virtualization-based security systems. In Proceedings of the 2015 USENIX Annual Technical Conference, pages 347-360, 2015. Interface for performance environment autoconfiguration framework. L Men, B Hadri, H You, High Performance Computing, Networking, Storage and Analysis (SCC), 2012 SC Companion. IEEEL. Men, B. Hadri, and H. You. Interface for performance environ- ment autoconfiguration framework. In High Performance Computing, Networking, Storage and Analysis (SCC), 2012 SC Companion:, pages 1356-1356. IEEE, 2012. Efficiently handling skew in outer joins on distributed systems. L Cheng, S Kotoulas, T E Ward, G Theodoropoulos, Cluster, Cloud and Grid Computing (CCGrid). IEEEL. Cheng, S. Kotoulas, T. E. Ward, and G. Theodoropoulos. Efficiently handling skew in outer joins on distributed systems. In Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on, pages 295-304. IEEE, 2014. Qbdj: A novel framework for handling skew in parallel join processing on distributed memory. L Cheng, S Kotoulas, T E Ward, G Theodoropoulos, High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing (HPCC EUC). IEEEIEEE 10th International Conference onL. Cheng, S. Kotoulas, T. E. Ward, and G. Theodoropoulos. Qbdj: A novel framework for handling skew in parallel join processing on dis- tributed memory. In High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing (HPCC EUC), 2013 IEEE 10th International Conference on, pages 1519-1527. IEEE, 2013. Efficient data redistribution to speedup big data analytics in large systems. L Cheng, T Li, IEEE 23rd International Conference on. High Performance Computing (HiPC)L. Cheng and T. Li. Efficient data redistribution to speedup big data analytics in large systems. In High Performance Computing (HiPC), 2016 IEEE 23rd International Conference on, pages 91-100. 
Efficient parallel dictionary encoding for rdf data. L Cheng, A Malik, S Kotoulas, T E Ward, G Theodoropoulos, Proc. 17th Int. Workshop on the Web and Databases. 17th Int. Workshop on the Web and DatabasesCiteseerL. Cheng, A. Malik, S. Kotoulas, T. E. Ward, and G. Theodoropoulos. Efficient parallel dictionary encoding for rdf data. In in Proc. 17th Int. Workshop on the Web and Databases. Citeseer, 2014. Edos: Edge assisted offloading system for mobile devices. H H Harvey, Y Mao, Y Hou, B Sheng, 26th International Conference on Computer Communication and Networks (ICCCN. H. H. Harvey, Y. Mao, Y. Hou, and B. Sheng. Edos: Edge assisted offloading system for mobile devices. In 2017 26th International Conference on Computer Communication and Networks (ICCCN), 2017. Understanding performance of i/o intensive containerized applications for nvme ssds. J Bhimani, J Yang, Z Yang, N Mi, Q Xu, M Awasthi, R Pandurangan, V Balakrishnan, Performance Computing and Communications Conference (IPCCC). IEEEJ. Bhimani, J. Yang, Z. Yang, N. Mi, Q. Xu, M. Awasthi, R. Pan- durangan, and V. Balakrishnan. Understanding performance of i/o intensive containerized applications for nvme ssds. In Performance Computing and Communications Conference (IPCCC), 2016 IEEE 35th International, pages 1-8. IEEE, 2016. Accelerating big data applications using lightweight virtualization framework on enterprise cloud. J Bhimani, Z Yang, M Leeser, N Mi, J. Bhimani, Z. Yang, M. Leeser, and N. Mi. Accelerating big data applications using lightweight virtualization framework on enterprise cloud. Gpu accelerated convex hull computation. M Tang, J.-Y Zhao, R Tong, D Manocha, Computers & Graphics. 365M. Tang, J.-Y. Zhao, R.-f. Tong, and D. Manocha. Gpu accelerated convex hull computation. Computers & Graphics, 36(5):498-506, 2012. Gpu accelerated real-time collision handling in virtual disassembly. P Du, J.-Y Zhao, W.-B Pan, Y.-G Wang, Journal of Computer Science and Technology. 303P. Du, J.-Y. Zhao, W.-B. Pan, and Y.-G. Wang. Gpu accelerated real-time collision handling in virtual disassembly. Journal of Computer Science and Technology, 30(3):511-518, 2015. Mesh segmentation for parallel decompression on gpu. J Zhao, M Tang, R Tong, Computational Visual Media. J. Zhao, M. Tang, and R. Tong. Mesh segmentation for parallel decompression on gpu. Computational Visual Media, pages 83-90, 2012. An updated performance comparison of virtual machines and linux containers. W Felter, A Ferreira, R Rajamony, J Rubio, Performance Analysis of Systems and Software (ISPASS). IEEEIEEE International Symposium OnW. Felter, A. Ferreira, R. Rajamony, and J. Rubio. An updated performance comparison of virtual machines and linux containers. In Performance Analysis of Systems and Software (ISPASS), 2015 IEEE International Symposium On, pages 171-172. IEEE, 2015. Slacker: Fast distribution with lazy docker containers. T Harter, B Salmon, R Liu, A C Arpaci-Dusseau, R H Arpaci-Dusseau, FAST. 16T. Harter, B. Salmon, R. Liu, A. C. Arpaci-Dusseau, and R. H. Arpaci- Dusseau. Slacker: Fast distribution with lazy docker containers. In FAST, volume 16, pages 181-195, 2016. Comicon: A co-operative management system for docker container images. S Nathan, R Ghosh, T Mukherjee, K Narayanan, 2017 IEEE International Conference on. IEEECloud Engineering (IC2ES. Nathan, R. Ghosh, T. Mukherjee, and K. Narayanan. Comicon: A co-operative management system for docker container images. In Cloud Engineering (IC2E), 2017 IEEE International Conference on, pages 116-126. IEEE, 2017. 
Scope: A decision system for large scale container provisioning management. A Hegde, R Ghosh, T Mukherjee, V Sharma, IEEE 9th International Conference on. IEEECloud Computing (CLOUD)A. Hegde, R. Ghosh, T. Mukherjee, and V. Sharma. Scope: A decision system for large scale container provisioning management. In Cloud Computing (CLOUD), 2016 IEEE 9th International Conference on, pages 220-227. IEEE, 2016. Containers and cloud: From lxc to docker to kubernetes. D Bernstein, IEEE Cloud Computing. 13D. Bernstein. Containers and cloud: From lxc to docker to kubernetes. IEEE Cloud Computing, 1(3):81-84, 2014. Firmament: Fast, centralized cluster scheduling at scale. I Gog, M Schwarzkopf, A Gleave, R N Watson, S Hand, UsenixI. Gog, M. Schwarzkopf, A. Gleave, R. N. Watson, and S. Hand. Firmament: Fast, centralized cluster scheduling at scale. Usenix, 2016. Improvement of container scheduling for docker using ant colony optimization. C Kaewkasi, K Chuenmuneewong, Knowledge and Smart Technology (KST), 2017 9th International Conference on. IEEEC. Kaewkasi and K. Chuenmuneewong. Improvement of container scheduling for docker using ant colony optimization. In Knowledge and Smart Technology (KST), 2017 9th International Conference on, pages 254-259. IEEE, 2017. Improved approximation algorithms for multidimensional bin packing problems. N Bansal, A Caprara, M Sviridenko, 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06). N. Bansal, A. Caprara, and M. Sviridenko. Improved approximation algorithms for multidimensional bin packing problems. In 2006 47th An- nual IEEE Symposium on Foundations of Computer Science (FOCS'06), pages 697-708, Oct 2006. Online Multidimensional Load Balancing. A Meyerson, A Roytman, B Tagiku, Berlin, HeidelbergA. Meyerson, A. Roytman, and B. Tagiku. Online Multidimensional Load Balancing, pages 287-302. Berlin, Heidelberg, 2013. Tight bounds for online vector scheduling. S Im, N Kell, J Kulkarni, D Panigrahi, Proceedings of the 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), FOCS '15. the 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), FOCS '15S. Im, N. Kell, J. Kulkarni, and D. Panigrahi. Tight bounds for online vector scheduling. In Proceedings of the 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS), FOCS '15. . Alibaba Cloud, Alibaba cloud. https://www.alibabacloud.com/.
[]
[ "A Coordinated X-ray and Optical Campaign on the Nearest Massive Eclipsing Binary, δ Ori Aa. I. Overview of the X-ray Spectrum", "A Coordinated X-ray and Optical Campaign on the Nearest Massive Eclipsing Binary, δ Ori Aa. I. Overview of the X-ray Spectrum" ]
[ "M F Corcoran \nCRESST and X-ray Astrophysics Laboratory\nNASA/Goddard Space Flight Center\n20771GreenbeltMDUSA\n\nUniversities Space Research Association\n7178 Columbia Gateway Dr. Columbia21046MDUSA\n", "J S Nichols \nHarvard-Smithsonian Center for Astrophysics\n60 Gar-den Street, MS 3402138CambridgeMAUSA\n", "H Pablo \nDépartement de physique and Centre de Recherche en Astrophysique du Québec (CRAQ)\nUniversité de Montréal\nSucc. Centre-Ville\nC.P. 6128, H3C 3J7MontréalQuébecCanada\n", "T Shenar \nInstitut für Physik und Astronomie\nUniversität Pots-dam\nKarl-Liebknecht-Str. 24/25, Ger-many 6 European Space Agency, XMM-Newton Science Op-erations CentreD-14476Potsdam\n\nEuropean Space Astronomy Centre\nApartado 78E-28691Villanueva de la CañadaSpain\n", "A M T Pollock ", "W L Waldron ", "A F J Moffat \nDépartement de physique and Centre de Recherche en Astrophysique du Québec (CRAQ)\nUniversité de Montréal\nSucc. Centre-Ville\nC.P. 6128, H3C 3J7MontréalQuébecCanada\n", "N D Richardson \nDépartement de physique and Centre de Recherche en Astrophysique du Québec (CRAQ)\nUniversité de Montréal\nSucc. Centre-Ville\nC.P. 6128, H3C 3J7MontréalQuébecCanada\n", "C M P Russell ", "K Hamaguchi \nCRESST and X-ray Astrophysics Laboratory\nNASA/Goddard Space Flight Center\n20771GreenbeltMDUSA\n", "D P " ]
[ "CRESST and X-ray Astrophysics Laboratory\nNASA/Goddard Space Flight Center\n20771GreenbeltMDUSA", "Universities Space Research Association\n7178 Columbia Gateway Dr. Columbia21046MDUSA", "Harvard-Smithsonian Center for Astrophysics\n60 Gar-den Street, MS 3402138CambridgeMAUSA", "Département de physique and Centre de Recherche en Astrophysique du Québec (CRAQ)\nUniversité de Montréal\nSucc. Centre-Ville\nC.P. 6128, H3C 3J7MontréalQuébecCanada", "Institut für Physik und Astronomie\nUniversität Pots-dam\nKarl-Liebknecht-Str. 24/25, Ger-many 6 European Space Agency, XMM-Newton Science Op-erations CentreD-14476Potsdam", "European Space Astronomy Centre\nApartado 78E-28691Villanueva de la CañadaSpain", "Département de physique and Centre de Recherche en Astrophysique du Québec (CRAQ)\nUniversité de Montréal\nSucc. Centre-Ville\nC.P. 6128, H3C 3J7MontréalQuébecCanada", "Département de physique and Centre de Recherche en Astrophysique du Québec (CRAQ)\nUniversité de Montréal\nSucc. Centre-Ville\nC.P. 6128, H3C 3J7MontréalQuébecCanada", "CRESST and X-ray Astrophysics Laboratory\nNASA/Goddard Space Flight Center\n20771GreenbeltMDUSA" ]
[ "M. Leutenegger" ]
We present an overview of four deep phase-constrained Chandra HETGS X-ray observations of δ Ori A. Delta Ori A is actually a triple system which includes the nearest massive eclipsing spectroscopic binary, δ Ori Aa, the only such object that can be observed with little phasesmearing with the Chandra gratings. Since the fainter star, δ Ori Aa2, has a much lower X-ray luminosity than the brighter primary (δ Ori Aa1), δ Ori Aa provides a unique system with which to test the spatial distribution of the X-ray emitting gas around δ Ori Aa1 via occultation by the photosphere of, and wind cavity around, the X-ray dark secondary. Here we discuss the X-ray spectrum and X-ray line profiles for the combined observation, having an exposure time of nearly 500 ks and covering nearly the entire binary orbit. The companion papers discuss the X-ray variability seen in the Chandra spectra, present new space-based photometry and ground-based radial velocities obtained simultaneous with the X-ray data to better constrain the system parameters, and model the effects of X-rays on the optical and UV spectra. We find that the X-ray emission is dominated by embedded wind shock emission from star Aa1, with little contribution from the tertiary star Ab or the shocked gas produced by the collision of the wind of Aa1 against the surface of Aa2. We find a similar temperature distribution to previous X-ray spectrum analyses. We also show that the line half-widths are about 0.3 − 0.5 times the terminal velocity of the wind of star Aa1. We find a strong anti-correlation between line widths and the line excitation energy, which suggests that longer-wavelength, lower-temperature lines form farther out in the wind. Our analysis also indicates that the ratio of the intensities of the strong and weak lines of Fe XVII and Ne X are inconsistent with model predictions, which may be an effect of resonance scattering.
10.1088/0004-637x/809/2/132
[ "https://arxiv.org/pdf/1507.05101v2.pdf" ]
17,339,779
1507.05101
7f6228bb8bb19439d7ea465ea3a0308aaa19c251
A Coordinated X-ray and Optical Campaign on the Nearest Massive Eclipsing Binary, δ Ori Aa. I. Overview of the X-ray Spectrum 22 Jul 2015 M F Corcoran CRESST and X-ray Astrophysics Laboratory NASA/Goddard Space Flight Center 20771GreenbeltMDUSA Universities Space Research Association 7178 Columbia Gateway Dr. Columbia21046MDUSA J S Nichols Harvard-Smithsonian Center for Astrophysics 60 Gar-den Street, MS 3402138CambridgeMAUSA H Pablo Département de physique and Centre de Recherche en Astrophysique du Québec (CRAQ) Université de Montréal Succ. Centre-Ville C.P. 6128, H3C 3J7MontréalQuébecCanada T Shenar Institut für Physik und Astronomie Universität Pots-dam Karl-Liebknecht-Str. 24/25, Ger-many 6 European Space Agency, XMM-Newton Science Op-erations CentreD-14476Potsdam European Space Astronomy Centre Apartado 78E-28691Villanueva de la CañadaSpain A M T Pollock W L Waldron A F J Moffat Département de physique and Centre de Recherche en Astrophysique du Québec (CRAQ) Université de Montréal Succ. Centre-Ville C.P. 6128, H3C 3J7MontréalQuébecCanada N D Richardson Département de physique and Centre de Recherche en Astrophysique du Québec (CRAQ) Université de Montréal Succ. Centre-Ville C.P. 6128, H3C 3J7MontréalQuébecCanada C M P Russell K Hamaguchi CRESST and X-ray Astrophysics Laboratory NASA/Goddard Space Flight Center 20771GreenbeltMDUSA D P A Coordinated X-ray and Optical Campaign on the Nearest Massive Eclipsing Binary, δ Ori Aa. I. Overview of the X-ray Spectrum M. Leutenegger 172122 Jul 20151Subject headings: stars: individual (δ Ori A) -binaries: close -binaries: eclipsing -stars: early-type -stars: mass-loss -X-rays: stars We present an overview of four deep phase-constrained Chandra HETGS X-ray observations of δ Ori A. Delta Ori A is actually a triple system which includes the nearest massive eclipsing spectroscopic binary, δ Ori Aa, the only such object that can be observed with little phasesmearing with the Chandra gratings. Since the fainter star, δ Ori Aa2, has a much lower X-ray luminosity than the brighter primary (δ Ori Aa1), δ Ori Aa provides a unique system with which to test the spatial distribution of the X-ray emitting gas around δ Ori Aa1 via occultation by the photosphere of, and wind cavity around, the X-ray dark secondary. Here we discuss the X-ray spectrum and X-ray line profiles for the combined observation, having an exposure time of nearly 500 ks and covering nearly the entire binary orbit. The companion papers discuss the X-ray variability seen in the Chandra spectra, present new space-based photometry and ground-based radial velocities obtained simultaneous with the X-ray data to better constrain the system parameters, and model the effects of X-rays on the optical and UV spectra. We find that the X-ray emission is dominated by embedded wind shock emission from star Aa1, with little contribution from the tertiary star Ab or the shocked gas produced by the collision of the wind of Aa1 against the surface of Aa2. We find a similar temperature distribution to previous X-ray spectrum analyses. We also show that the line half-widths are about 0.3 − 0.5 times the terminal velocity of the wind of star Aa1. We find a strong anti-correlation between line widths and the line excitation energy, which suggests that longer-wavelength, lower-temperature lines form farther out in the wind. 
Our analysis also indicates that the ratio of the intensities of the strong and weak lines of Fe XVII and Ne X are inconsistent with model predictions, which may be an effect of resonance scattering. Introduction Massive O-type stars, though rare, are a primary drivers of the chemical, ionization, and pressure evolution of the interstellar medium. The evolution of these stars from the main sequence to supernova depends on their mass and is significantly affected by stellar wind mass-loss. Our best estimates of mass, radius, and luminosity for O stars come from direct dynamical analyses of photometric and radial velocity variations in massive, eclipsing binaries. However, because massive stars are rare and massive binaries which have been studied in detail rarer still (of the 2386 systems listed in the Ninth Catalog of Spectroscopic Binaries, only 82 of them have O-type components), direct dynamical determinations of stellar param-eters are only known for a few systems. Current uncertainties regarding the amount and distribution of mass lost through stellar winds are even larger, since it is difficult to determine stellar wind parameters in a direct, modelindependent way. Radiatively driven stellar winds have mass-loss rates ofṀ ∼ 10 −5 − 10 −7 M yr −1 (for a review, see Kudritzki & Puls 2000). However, observationally determined mass-loss rates have been estimated, in many, if not most cases, using an idealized smooth, spherically symmetric wind. Stellar winds are probably not spherical; variations of photospheric temperature with latitude are inevitable because of stellar rotation (and tidal deformation of stars in binaries), and these temperature variations will produce latitudinally dependent wind densities and velocities (Owocki et al. 1996). Stellar winds are not smooth either; the radiative driving force is inherently unstable to small velocity perturbations, and wind instabilities are expected to grow into dense structures (clumps) distributed through the wind. In addition, clumps can also be produced by subsurface convective zones in massive stars caused by opacity peaks associated with the ionization state of helium and iron (Cantiello et al. 2009). Wind clumps play an important role in determining the overall mass-loss rate, since they carry most of the mass but occupy little volume. An outstanding question is to determine the number and mass/spatial distribution of embedded wind clumps. Collisions between clumps, or between clumps and ambient wind material at high differential velocities can produce pockets of hot shocked gas embedded in the wind. Given wind speeds of up to thousands of kilometers per second, these embedded wind shocks should generate observable X-ray emission (as originally proposed by Lucy & White 1980). There have been efforts to determine the fraction of the wind that is clumped, and the radial distribution of the embedded wind shocks, through analysis of the X-ray radiation they produce. High spectral resolution X-ray grating spectrometry provides a unique tool to determine the properties of the X-ray emitting hot shocked gas produced by embedded wind clumps. In particular, the forbidden-to-intercombination (f /i) line ratios of strong He-like transitions, and analysis of profiles of H-like ions and other strong lines from high resolution spectra (mostly from the Chandra and XMM grating spectrometers) indicate that significant X-ray emission exists within 1 to 2 radii of the stellar photosphere (Waldron & Cassinelli 2001;Leutenegger et al. 
2006;Waldron & Cassinelli 2007). X-ray lines of strong Lyα transitions (mainly O VIII, Ne X, Mg XII, Si XIV, and S XVI) show profiles ranging from broad and asymmetric to narrow and symmetric, apparently dependent on stellar spectral type (Walborn et al. 2009). Observed line profile shapes are an important probe of the radius of the maximum X-ray emissivity, modified by absorption from the overlying, cooler, clumped wind. Clumping-corrected mass-loss rates derived from the analysis of resolved X-ray emission lines (Oskinova et al. 2006) are generally in good agreement with predictions of line-driven wind theory, while mass-loss rates derived from analyses of resolved X-ray emission lines are lower (by a factor of a few) if clumping is not taken into account (Cohen et al. 2014). Reducing mass-loss rates by such a large factor would significantly influence our understanding of the ultimate evolution of massive stars. However, while important wind properties, such as the onset radius of clumping, the fraction of the wind that is clumped, and the radial distribution of clumps through the wind, have been indirectly inferred from detailed X-ray line analysis (Oskinova et al. 2006;Owocki & Cohen 2006;Hervé et al. 2013), to date, there have been no attempts to determine these properties directly. In this paper, we try to directly constrain the location of the X-ray emitting gas in the wind of a massive eclipsing binary, δ Ori Aa, via occultation by the companion star of the hot gas embedded in the primary's wind. Delta Ori (Mintaka, HD 36486, 34 Ori) is a visual triple system composed of components A, B, and C. Delta Ori A itself is composed of a massive, short period close eclipsing system δ Ori Aa, and a more distant component, δ Ori Ab, which orbits δ Ori Aa with a period of 346 years (Tokovinin et al. 2014). The inner binary, δ Ori Aa, is the nearest massive eclipsing system in the sky. It consists of a massive O9.5 II primary (star Aa1) + a fainter secondary (star Aa2, B2V-B0.5 III), in a high-inclination (i > 67 • ), short period (P = 5 d .7324), low eccentricity (e ≈ 0.1) orbit (Hartmann 1904;Stebbins 1915;Koch & Hrivnak 1981;Harvin et al. 2002;Mayer et al. 2010). Because it is nearby, bright, with a high orbital inclination, δ Ori Aa is an important system since it can serve as a fundamental calibrator of the massradius-luminosity relation in the upper HR diagram. It is disconcerting, though, that published stellar masses for the primary star δ Ori Aa1 are different by about a factor of two (Harvin et al. 2002;Mayer et al. 2010) 1 . Delta Ori Aa is also a bright X-ray source (Long & White 1980;Snow et al. 1981;Cassinelli & Swank 1983) and is the only eclipsing short-period O-type binary system that is bright enough to be observable with the Chandra gratings with little phase smearing, offering the chance to study of variations of the X-ray emission line profiles as a function of the orbital phase. Since the luminosity of the secondary, δ Ori Aa2, is less than 10% that of the primary, and since X-ray luminosity scales with stellar bolometric luminosity (Pallavicini et al. 1981;Chlebowski et al. 1989;Berghoefer et al. 1997) for stars in this mass range, it should also be less than 10% as bright in X-rays as the primary. Thus the X-ray emission from the system is dominated by the hot gas in the wind of the primary star. 
Therefore, occultation of different X-ray-emitting regions in the wind of δ Ori Aa1 by the photosphere and/or wind of the X-ray faint secondary, δ Ori Aa2, presents the opportunity to directly study the radial distribution of the hot shocked gas in the primary's wind, by measuring occultation effects in X-ray line emission as a function of ionization potential and orbital phase. Since X-ray lines of different ionization potentials are believed to form at different radial distances above the primary's surface, differential variations in the observed set of Xray lines as a function of orbital phase allow us to probe the hot gas distribution within the primary wind's acceleration zone, where most of the X-ray emission is believed to originate. He-like ions in the X-ray spectrum provide a complementary measure of the radial distribution of the hot gas, since these lines are sensitive to wind density and the dilute ambient UV field. This makes δ Ori Aa a unique system with which to constrain 1 Some progress has been recently made by Harmanec et al. (2013) and by Richardson et al. (2015) in disentangling lines of δ Ori Aa2 from δ Ori Aa1 and δ Ori Ab in the composite spectrum directly the spatial distribution of X-ray emitting clumps embedded in the wind of an important O star. The main challenge, however, is the relatively small size of δ Ori Aa2 compared to the size of the X-ray emitting region, since the hot gas is expected to be distributed in a large volume throughout the stellar wind. This paper provides an overview of the Xray grating spectra obtained during a 479 ksec Chandra campaign on δ Ori Aa+Ab in 2012. The purpose of this project was to obtain high signal-to-noise observations with the Chandra High Energy Transmission Grating Spectrometer (HETGS; Canizares et al. 2005) of δ Ori Aa over almost an entire binary orbit, including key orbital phases, with coordinated ground-based radial velocity monitoring at Hα and He I 6678 (primarily obtained by a group of amateur astronomers), and high precision, simultaneous photometry from space by the Canadian Space Agency's Microvariability and Oscillations of Stars telescope (MOST, Walker et al. 2003). This paper provides an overview of the combined HETGS spectrum from our four observations, and is organized as follows. In Section 4 we present a summary of the four observations and discuss the acquisition and reduction of the data sets. Section 5 presents an analysis of the zeroth-order image of the system to constrain the X-ray contribution of δ Ori Ab to the observed X-ray emission. Section 6 presents the temperature distribution and overall properties of the strong emission lines in the combined spectrum of the four observations. Section 7 discusses the possible influence of the collision of the wind from the primary with the weak wind or photosphere of the secondary, and the influence of any such collision on the wind's thermal and density structure. We present conclusions in Section 8. A series of companion papers presents the results of the variability analysis of the X-ray continuum and line emission (Nichols et al. 2015, in press, Paper II), the ground-based radial velocity and MOST space-based photometric monitoring and analysis (Pablo et al. 2015, in press, Paper III), and a complete non-LTE analysis of the spectral energy distribution of δ Ori Aa+b from optical through X-rays (Shenar et al. 2015, in press, Paper IV). Stellar And System Parameters The stellar parameters given by Harvin et al. (2002) and Mayer et al. 
(2010) differ significantly, and this difference has important consequences for our understanding of the evolutionary state of the system, and the influence of mass-loss and/or nonconservative mass transfer. Harvin et al. (2002) derived masses of M Aa1 = 11.2M and M Aa2 = 5.6M for the primary and secondary stars, making the primary significantly overluminous for its mass (or undermassive for its spectral type). The radial velocity and photometric analysis of Mayer et al. (2010) were consistent with a substantially higher mass for the primary, M Aa1 = 25M , after a correction for perceived contamination of the radial velocity curve by lines from δ Ori Ab. Whether the O9.5 II primary has a normal mass and radius for its spectral type is important for understanding the history of mass exchange/massloss from δ Ori Aa, and how this history is related to the current state of the radiatively driven wind from the primary. An important goal of our campaign is to derive definitive stellar and system parameters for δ Ori Aa. To this end, we obtained high-precision photometry of the star with the MOST satellite, along with coordinated ground-based optical spectra to allow us to obtain contemporaneous lightand radial-velocity curve solutions, and to disentangle the contributions from Aa2 and/or Ab from the stellar spectrum. We also performed an analysis of the optical and archival IUE UV spectra using the non-LTE Potsdam Wolf-Rayet (PoWR) code (Gräfener et al. 2002;Hamann & Gräfener 2003). The light curve and radial velocity curve analysis is presented in Pablo et al. (2015), while the non-LTE spectral analysis is presented in Shenar et al. (2015). Table 1 summarizes these results. In this table, the values and errors on the parameters derived from the MOST photometry and radial velocities are given for the low-mass solution provided in Pablo et al. (2015). Note that we find better agreement between the derived stellar parameters (luminosities, masses, radii, and temperatures) and the spectral type of δ Ori Aa1 if we use the σ-Orionis cluster distance (d = 380 pc, Caballero & Solano 2008) for δ Ori A, rather than the smaller Hipparcos distance. Therefore, we adopt D = 380 pc as the distance to δ Ori A (for a full discussion of the distance to δ Ori A, see Shenar et al. 2015). The spectral type of δ Ori Aa2 is not well constrained; Harvin et al. (2002) assign it a spectral type of B0.5 III, while Mayer et al. (2010) do not assign a spectral type due to the difficulty in identifying lines from the star. Shenar et al. (2015) assign an early-B dwarf spectral type to δ Ori Aa2 (≈ B1V). Previous X-ray Observations X-ray emission from δ Ori was first tentatively identified via sounding rocket observations (Fisher & Meyerott 1964). X-ray imaging spectrometry of δ Ori A at low or modest resolution was obtained by the EINSTEIN (Long & White 1980), ROSAT (Haberl & White 1993), and ASCA (Corcoran et al. 1994) X-ray observatories. Its X-ray luminosity is typically L x ∼ 10 31−32 ergs s −1 , with L x /L bol ≈ 10 −7 in accord with the canonical relation for massive stars (Pallavicini et al. 1981;Chlebowski et al. 1989;Berghoefer et al. 1997). The X-ray spectrum of δ Ori A was observed at high resolution by X-ray grating spectrometers on Chandra in two previous observations at restricted orbital phases. An analysis of a fifty kilosecond Chandra HETGS spectrum from 2000 January 13 by Miller et al. 
(2002) revealed strong line emission from O, Ne, Mg, and Fe, along with weaker emission from higher-ionization lines like Si XIII and S XV, and unusually narrow line half-widths of ≈ 400 km s −1 . Using a simple analysis taking into account dilution of the photospheric UV field and a 1/r 2 falloff in wind density, Miller et al. (2002) derived formation regions for the dominant He-like ions Mg XI, Ne IX, and O VII extending just above the stellar photosphere to 3-10 times the photospheric radius. An analysis of a 100 ks Chandra Low Energy Transmission Grating Spectrometer (LETGS; Brinkman et al. 1987) + High Resolution Camera observation from 2007 November 09 by Raassen & Pollock (2013) also showed that the Mg XI, Ne IX, and O VII emission regions extend from 2-10 stellar radius, and showed that the longer wavelength ions like N VI and C V form at substantially greater distances from the star (50-75 times the stellar radius), and that the spectrum could be modeled by a three-temperature plasma in collisional ionization equilibrium with temperatures of 0.1, 0.2, and 0.6 keV. New Chandra Observations A listing of the Chandra observations of δ Ori Aa+Ab obtained as part of this campaign is given in Table 2. These observations were obtained with the Chandra HETGS+ACIS-S spectrometric array. The HETGS consists of 2 sets of gratings: the Medium Resolution Grating (MEG) covering the range 2.5-26Å and and the High Resolution Grating (HEG) covering the range 1.2-15Å; the HEG and MEG have resolving powers of λ/∆λ ≈ 1000 at long wavelengths, falling to ∼ 100 near 1.5Å (Canizares et al. 2005). Four observations covering most of the orbit were obtained within a 9-day timespan to reduce any influence of orbit-to-orbit X-ray variations, for a combined exposure time of 479 ks. Table 2 lists the start and stop HJD, phases, and exposure durations for the four individual observations. Figure 1 shows the time intervals of each observation superposed on the simultaneous MOST optical light curve of δ Ori A . The Chandra observations provide both MEG and HEG dispersed first order spectra as well as the zeroth order image. Due to spacecraft power considerations as well as background count rate issues, it was necessary to use only five ACIS CCD chips instead of six; thus, chip S5 was not used. This means that wavelengths longer than about 19Å in the MEG plus-side dispersed spectrum and about 9.5Å in the HEG plus-side dispersed spectrum are not available. Therefore, the strong O VII line at 21Å was only observed in the MEG-1 order. The buildup of contaminants on the ACIS-S optical blocking filters with time further degraded the long wavelength sensitivity for all first-order spectra. Each of the four observations experienced a large variation in focal plane temperature during the observation. While a temperature-dependent calibration is applied to each observation in standard data processing, the calibration is based on a single temperature measurement taken at the end of the observation. In particular, the focal plane temperature for portions of each observation exceeded the temperature at which the temperature-dependent effects of charge transfer inefficiency (CTI) are calibrated (Grant et al. 2006). This could cause residual errors in the correction of pulse heights for those portions of the observations in the hightemperature regime. Table 1: Stellar, Wind and System parameters for δ Ori Aa1+Aa2 from Analysis of the Optical, UV and X-ray spectra (Shenar et al. 
2015) and the Solution to the MOST Light Curve and Ground-Based Radial Velocities . Each ObsID was processed using the standard processing pipeline used in production of the Chandra Transmission Grating Data Archive and Catalog (TGCAT; Huenemoerder et al. 2011). Briefly, event filtering, event transformation, spectral extraction, and response generation are done with standard Chandra Interactive Analysis of Observations software tools (Fruscione et al. 2006) as described in detail by Huenemoerder et al. (2011). This pipeline produces standard X-ray events, spectra, responses, effective areas, aspect histograms, and light curves. We used version 4.5.5 of the Chandra Calibration Database (CALDB), along with CIAO version 4.5 & 4.6 in the analysis presented here. In order to examine variability, the data were also divided into ∼ 10 ks segments, and spectra, response files, effective areas and light curves were generated for each segment. Analysis of the time-sliced data is presented in Nichols et al. (2015). Method Parameter POWR Analysis a light curve & RV solution b T eff [kK] (Aa1) 29.5 ± 0.5 30 (adopted) T eff [kK] (Aa2) 25.6 ± 3 24.1 +0.4 −0.7 R[R ] (Aa1) 16.5 ± 1 15.1 R[R ] (Aa2) 6.5 +2 −1.5 5.0 M [M ] (Aa1) 24 +10 −8 23.8 M [M ] (Aa2) 8.4 e 8.5 L [log L ] (Aa1) 5.28 ± 0.05 5.20 L [log L ] (Aa2) 4.2 ± 0.2 3.85 v ∞ [km s −1 ] (Aa1) 2000 ± 100 v ∞ [km s −1 ] (Aa2) 1200 e logṀ [M /yr] (Aa1) −6.4 ± 0.15 logṀ [M /yr] (Aa2) ≤ −6.8 E B−V (ISM) 0.065 ± 0.002 A V (ISM) 0.201 ± 0.006 log N H (ISM) 20.65 ± 0.05 P [d] 5.732436 d E 0 (primary min, HJD) 2456277.790 ± 0.024 T 0 (periastron, HJD) 2456295.674 ± 0.062 a[R ] 43.1 ± 1.7 i [deg.] 76.5 ± 0.2 ω [deg.] 141.3 ± 0.2 ω [deg. yr −1 ] 1.45 ± 0.04 e 0.1133 ± 0.0003 γ [km s −1 ] 15.5 ± 0.7 Sp. Type (Aa1) O9.5II a,c,d Sp. Type (Aa2) B1V a D [ Analysis of the X-ray Image The δ Ori Aa1,2 inner binary is orbited by a more distant tertiary component (δ Ori Ab) at a current projected separation of 0 .3 with an orbital period of ≈ 346 years (Tokovinin et al. 2014). This separation is just below the spatial resolution of Chandra, and thus Chandra imaging observations allow us to spatially examine the X-ray contribution from the Ab component. Figure 2 shows unbinned zeroth-order images from our four HETGS+ACIS observations, along with the expected location of Ab and the Aa pair at the times of the Chandra observations in 2012. To constrain the X-ray contribution of δ Ori Ab, we generated zeroth-order images for the four individual pointings listed in Table 2, using the Energy-Dependent Subpixel Event Repositioning (EDSER 23 ) method to generate images with a pixel size of 0 .125. We generated images in 0.3-1 and 1-3 keV bands, but found no significant differences in any of the four observations when we compared the soft and hard band images. For each image, we then applied the CIAO tool SRCEX-TENT to calculate the size and associated uncertainty of the photon-count source image or using the Mexican Hat Optimization algorithm 24 . The results of the SRCEXTENT analysis are given in Table 3. The derived major and minor axes of each image are equal and consistent with the Chandra point spread function, ∼ 0.3 . The peak of the image is consistent with the location of the Aa component, and is about a factor of two farther than the Ab component. We conclude that the peak positions of the zeroth-order images in-23 http://cxc.harvard.edu/ciao4.4/why/acissubpix.html 24 http://cxc.harvard.edu/ciao/ahelp/srcextent.html Pablo et al. (2015). 
In the images, the orbital angular momentum vector lies close to the plane of the paper and points to the top of the page. dicate that Aa is the primary X-ray source, with little or no contribution from Ab. Our analysis also suggests that the ObsID 14568 image may be slightly elongated, which may indicate a possible issue with the instrumental pointing or aspect reconstruction for this observation. Figure 3 shows the co-added spectrum from the four observations, with a total exposure of 479 ks. This represents the second longest exposure yet obtained on a massive star at wavelengths 8Å and a resolving power of λ/∆λ > 400. The strongest lines are O VIII, Fe XVII, Ne IX & Ne X, Mg XI & Mg XII, and Si XIII. Combined Spectrum Temperature Distribution We modeled the combined spectrum with a combination of absorbed collisional ionization equilibrium models using the Interactive Spectral Interpretation System (ISIS; Houck & Denicola 2000). The model we applied includes two low-temperature components seen through a common absorption component, plus a third hotter component with its own absorption component to account for any contribution from a hot colliding wind region embedded within the wind of the binary (see Section 7 below). In ISIS terminology, the mode we used was "(xaped(1) + xaped(2)) * TBabs(3) + xaped(4) * TBabs(5)", where "xaped" represents emission from an optically thin plasma in collisional ionization equilibrium based on the ATOMDB atomic database version 2.0.2 (Smith & Brickhouse 2000;Foster et al. 2012), and "TBabs" represents interstellar absorption (Wilms et al. 2000a). Solar abundances were assumed for both the emission and absorp- tion components 25 . This model is an approximation to the actual temperature distribution and absorption, but is the simplest one we found that adequately describes the observed grating spectrum. We allowed for velocity broadening of the emission lines, with turbulent velocity broadening constrained to be less than roughly twice the maximum wind terminal velocity, 3000 km s −1 . We allowed the line centroid velocities of the three emission components to vary, but found that overall the line centroids are unshifted in the combined spectrum. Figure 4 compares the best-fit model to the data, while the model components are given in Table 4. In this table, we also convert the derived turbulent velocities V turb to equivalent line half-widths at half maximum, using O VIII, Ne X and Mg XII for the low-, medium-, and high-temperature components, respectively. The derived temperature distribution is similar to that found by Miller et al. (2002) in their study of the 2000 January HETGS spectrum, and by Raassen & Pollock (2013) in their analysis of an LETGS spectrum from 2007 November. In general, aside from the overall weakness of the forbidden lines compared to the model spectrum (which assumes a low-density plasma with no UV photoexcitation), the overall distribution of emission line strengths, and the continuum, are described reasonably well by the model. We note, in reality, that this three-temperature model is a simplified representation of the actual emission measure distribution with temperature. This multitemperature model mainly provides us with an adequate approximation of the local (pseudo-) continuum in order to improve line fitting and modeling. 
Emission Lines The observed X-ray emission lines in our δ Ori A spectrum provide important diagnostic information about the phase-averaged state of the hot gas within the wind of the system, and, as we show below, this is dominated by the shocked gas embedded within the wind of δ Ori Aa1, with little contribution (if any) from gas heated by the shock produced by the collision of the wind from δ Ori Aa1 with the wind or photosphere of 25 Shenar et al. (2015) show that N and Si are slightly subsolar, but these differences are not significant for our analysis. δ Ori Aa2. The analysis of the set of emission lines depends on choice of line profile, continuum level, and accounting for line blends. Gaussian Modeling To better account for blends and uncertainties in the continuum level, we performed a Gaussian fit to the strong lines, allowing flux, line width, and centroid velocity to vary. These fits, shown in Figure 5, were done using the three-temperature fit given in Section 6.1 above to define the continuum and amount of line blending. We set the abundance of the element to be measured to zero, with the abundances of other elements set to solar and other parameters (temperature, absorptions) fixed at the values given in Section 4. This procedure is useful to account for line blends, in particular, for the Ne X line at 12.132Å, which is blended with an Fe XVII line at 12.124Å. We assumed simple Gaussian line profiles for the line to be fit, and fit for both the Lyα1 and Lyα2 lines, with line widths and velocities fixed for both components, and the intensity ratio of the Lyα2 to the Lyα1 line set to the emissivity ratio at the temperature of peak emissivity. We used the Cash statistic and ISIS to perform the fits, simultaneously fitting the HEG and MEG ±1 order spectrum from all four observations simultaneously. Table 5 shows the result of fits of the H-like Lyα lines, plus the strong Fe XVII line at 15.014Å. In general, the Gaussian fits are poor (the reduced Cash statistic > 1.5) except for the weak Si XIV line, though the asymmetries in the bright lines are not very strong. All of the line centroids are near zero velocity, though the Ne X line is blue-shifted at about the 2-σ level. Table 4. The model spectra, which assume low density and do not include effects of UV photoexcitation, generally overestimate the strength of the forbidden lines and underestimate the strengths of the intercombination lines, especially at longer wavelengths, most notably at OVII. The lines are plotted in the velocity range of −3000 km s −1 to +3000 km s −1 . The best-fit Gaussian profile, and the continuum derived from the model parameters given in Table 4 is shown in red. Note that while most of the Lyα lines are adequately described by a symmetric Gaussian, the Fe XVII and Ne X lines are not as well fit by simple Gaussian profiles as the other lines. This may be due to the effects of non-uniform X-ray line opacity, as discussed in Section 6.2.2. We also measured the forbidden (z), intercombination (x + y), and resonance components (w) above continuum for each of the helium-like ions (O VII, Mg X, Ne IX, and Si XIII) by Gaussian fitting. As before, we used the three-temperature fit given in Section 6.1 above to define the local continuum near the line region. 
Although the individual intercombination components (x + y) are unresolved in the HETGS spectra for all of the He-like ions, we included a Gaussian line for the x and y lines, but restricted the centroid velocity and line widths to be the same for both the x and y components. Because the forbidden, intercombination and resonance lines can have different spatial distributions throughout the wind, we allowed the widths, centroids, and line fluxes of these lines to vary individually. The forbidden component of the O VII line is weak, and, in addition, this line was only observed in the MEG-1 spectrum arm because ACIS-S chip S5 was turned off due to spacecraft power constraints. To increase signal to noise for the O VII forbidden line, and for the weak Si XIII and S XV triplets, we included data from the 2001 HETG and 2008 LETG observations when fitting. Figure 6 shows the fits to the He-like lines, and Table 6 shows the results of this three-Gaussian component fitting, while Table 7 shows the R = z/(x + y) and G = (x + y + z)/w ratios. Table 4, is shown in red. This anti-correlation shows that the more highly excited lines form at lower velocities, and thus closer to the stellar surface of the primary, indicating that the higher-temperature X-ray emission emerges from deeper regions in the wind than the cooler emission. In Figure 7, the O VII line width seems lower compared to the trend defined by the more highly excited ions. Excluding the O VII line, a linear fit to the remaining He-like lines yields a linear correlation coefficient of −0.87, indicating a stronger anti-correlation, and also results in a steeper linear slope. This linear fit predicts that the O VII line should have a half-width of 918 eV, a factor of 1.2 larger than observed. We caution that, unlike the other lines, the O VII line was only observed in one grating order since ACIS-S chip 5 was switched off during these observations. As a crude approximation, if we assume that the X-ray emitting material resides in a thin spherical shell at radius r around δ Ori Aa1, then the line profile will extend from −V (r) to +V (r) 1 − (R Aa1 /r) 2 , where R Aa1 is the radius of δ Ori Aa1, and V (r) = V ∞,Aa1 (1 − R Aa1 /r) β , the standard velocity law for radiatively driven winds. The inverse correlation of the line widths with excitation energy suggests that the hotter X-ray emitting gas is formed over a smaller volume in the wind acceleration zone closer to the star, where wind radial velocity differentials are larger and where higher temperature shocks can be generated; cooler ions can be maintained farther out in the wind where the acceleration (and thus the velocity differential) is smaller. A similar conclusion was reached by Hervé et al. (2013) in their analysis of ζ Puppis. Effects of X-ray Line Opacity The possibility that strong resonance line photons might be scattered out of the line of sight has significant implications on our physical understanding of the X-ray emission from hot stars, especially in the interpretation of mass-loss rates derived from X-ray line profiles and abundances derived from X-ray line ratios. Resonance scattering may be important for lines with high oscillator strengths and could, in principle, change the line shape or intensity ratios, though recent analysis by Bernitt et al. (2012) suggested that our poor knowledge of the underlying atomic physics may play the dominant role in accounting for discrepancies in line intensities. Miller et al. 
(2002) focussed on the Fe XVII lines at 15.014Å and 15.261Å, which have oscillator strengths of 2.49 and 0.64, respectively. Resonance scattering might significantly affect the 15.014Å emission line, which is one of the strongest lines in the δ Ori A X-ray spectrum, while scattering should be unimportant for the weak 15.261Å line. Miller et al. (2002) found that the observed ratio of these two lines, as derived from their Chandra grating spectrum, was I 15.01 /I 15.26 = 2.4 ± 1.3, nominally (though not significantly) below the optically thin limit I 15.01 /I 15.26 = 3.5 derived from the Smith & Brickhouse (2000) version of the Astrophysical Plasma Emission Code (APEC). We re-examined this issue for these two Fe XVII lines using our deeper spectrum and a slightly different technique. We isolated the Fe XVII line region in the combined spectrum and fit this restricted region with an APEC-derived model, with abundances fixed at solar, including line broadening. We first fit the Fe XVII line at 15.261Å, ignoring the region around the stronger 15.014Å line. We then included the 15.014Å line region and compared the predicted strength of the model 15.014Å line to the observed line. This technique, in which we use a full thermal model to fit the spectra rather than a simple comparison of line intensities, has the benefit that line blends in the region will be more properly taken into account. We found that the model based on the best fit to the 15.261Å line greatly overpredicted the strength of the 15.014Å line, and can be ruled out at high confidence (χ 2 ν = 3.57, restricted to the 14.90-15.14Å region; excluding this region, χ 2 ν = 0.72). This may be an indication of the effect of resonance scattering on the 15.014Å Fe XVII line. Since it appears that the 15.014Å line is a bit narrower than the 15.261Å line, we also re-did the fit, allowing the width of the 15.014Å line to differ from that of the 15.261Å line. We then re-fit only the 15.014Å line, allowing the line broadening to vary and also allowing the normalization to vary. Figure 8 shows the resulting fit. The best-fit HWHMs for the 15.014Å and 15.261Å lines are 1275 +48 −268 km s −1 and 1496 +109 −113 km s −1 , respectively, while the model normalizations are 0.0024 +0.0001 −0.001 and 0.0030 +0.001 −0.001 for the 15.014Å and 15.261Å lines, respectively. This analysis also shows the 15.014Å line is significantly weaker than expected compared to the 15.261Å line. This again may indicate that resonance scattering plays a role in determining the line profile shape and line strength, at least for the Fe XVII line, though uncertainties in the atomic models and in our definition of the temperature distribution for δ Ori A may play a significant role in altering the intensity ratios for these lines. To further investigate the importance of resonance scattering, we also considered the Ne X lines at 10.239Å and at 12.132Å, which have oscillator strengths of 0.052 and 0.28, respectively. These lines complement the Fe XVII analysis since for Ne X the stronger line appears at longer wavelength; this means that any effects of differential absorption that might affect the Fe XVII line analysis would have the opposite effect on the Ne X lines. We again fit the Ne X 10.239Å line with a single temperature APEC model, but fixed the temperature to the temperature of maximum emissivity of the Ne X lines, i.e. T = 6.3 × 10 6 K. We then compared the model that best fits the Ne X 10.239Å line to the Ne X 12.132Å line. 
Note that the Ne X 12.132Å line is blended with the Fe XXI line at 12.285Å (which has a temperature of maximum emissivity of 12.6 × 10 6 K, about twice that of the Ne X line), so we restricted the Ne X 12.132Å fitting region to the interval 12.0-12.22Å. We again find that the model, which provides a good fit to the weaker line (χ 2 ν = 0.79), overpredicts the strength of the stronger line (χ 2 ν = 8.63), again a possible indication that resonance scattering is important in determining the flux of the strong line. The Influence of Colliding Winds on the Embedded X-ray Emission Colliding winds can have important observable effects in our analysis of the X-ray emission from δ Ori Aa in two ways. The collision of the primary wind with the surface or wind of the secondary could produce hot shocked gas which might contaminate the X-ray emission from the embedded wind shocks in the primary's unperturbed wind. In addition, the colliding wind "bow shock" around the weaker-wind secondary produces a low-density cavity in the primary wind, and this cavity, dominated by the weak wind of δ Ori Aa2, should show little emission from embedded wind shocks. Along the line between the stars, the stellar winds will collide at the point at which their ram pressures ρv 2 ⊥ are equal (e.g. Stevens et al. 1992). Using the stellar, wind, and orbital parameters in Table 1, Figure 9 shows the ram pressures for Aa1 (solid) and Aa2 (dashed: apastron, dotted: periastron) assuming that the wind from each star follows the standard β velocity law, V (r) = v ∞ (1 − R/r) β , where V (r) is the wind radial velocity at a distance r from the star, R is the stellar radius, and we assume that β = 0.8 or 1.0. The ram pressure of Aa1's wind is greater than that of Aa2 throughout the orbit, so the wind from Aa1 should directly impact Aa2's surface, in this simple analysis. A more thorough treatment includes the effects of Aa2's radiation on the wind of Aa1 (and vice versa). These effects include "radiative inhibition" (Stevens & Pollock 1994) in which Aa1's wind acceleration along the line between the stars is reduced by Aa2's radiative force acting in opposition to the wind flow, and "sudden radiative braking" (Owocki & Gayley 1995;Gayley et al. 1997), where Aa1's strong wind, which would otherwise impact the surface of Aa2, is suddenly decelerated by Aa2's radiation just above the surface of Aa2. To estimate the magnitude of these effects, we solve the 1D equation of motion along the line of centers, accounting for both star's radiative forces via the standard Castor, Abbott, and Klein (CAK) line forces (Castor et al. 1975) including the finite disk correction factor (Friend & Abbott 1986;Pauldrach et al. 1986) and gravitational acceleration. We determine the CAK parametersQ and α (Gayley 1995) to yield the desired mass-loss rates and and temperature (right) structure in the orbital plane from the SPH simulation of δ Ori Aa1 (larger star) and δ Ori Aa2 (smaller star). The arrow shows the orientation of the line of sight. The system is pictured at phase φ = 0.87. The collision of the wind from δ Ori Aa1 against δ Ori Aa2 produces a low-density cavity in the wind of δ Ori Aa1, where the emission from embedded wind shocks is reduced. The collision also produces a layer of hot shocked gas at the boundary of the cavity which produces < 10% of the emission from the wind shocks embedded in the unperturbed wind from δ Ori Aa1. 
In the temperature plot on the right, the hot gas from embedded wind shocks in the winds from δ Ori Aa1 and δ Ori Aa2 is ignored, to emphasize the hot gas along the wind collision boundary. terminal speeds for each star by using the standard reduction in mass-loss rate from the finite disk correction factor, i.e.,Ṁ fd =Ṁ CAK /(1 + α) (1+α) . We numerically integrate the equation of motion to distances far from the star to yield the terminal velocity. Then we repeat the process including the radiation and gravity of both stars to determine the speed of each wind along the line between the stars. Figure 9 shows the equation-of-motion solution for the primary wind. The initial velocity corresponds to a β = 0.8 law, but radiative inhibition causes the wind (solid) to accelerate less compared to the unmodified β-law (dashed). In addition, the primary wind velocity does begin to decrease from radiative braking. However, Star Aa2's surface is located at the end of each line, so that the primary wind does not completely stop before it impacts the secondary surface. This indicates that the wind from star Aa1 should still impact the surface of Aa2, even when the influence of the radiation field of star Aa2 is taken into account. Furthermore, due to the strong radiation of Aa1, the wind of Aa2 does not accelerate off the surface of the star toward Aa1, further suggesting that Aa1's wind will directly impact Aa2's surface. We used a 3D smoothed particle hydrodynamics (SPH) code developed by Benz (1990) and Bate et al. (1995) to model the effects of the wind-wind collision on the extended system wind. Okazaki et al. (2008) was the first to apply this code to a colliding-wind system, and Russell (2013) and Madura et al. (2013) describe the current capabilities of the code, which we briefly state here. The stars are represented as two point masses, and throughout their orbit they inject SPH particles into the simulation volume to represent their stellar winds. The SPH particles are accelerated away from their respective stars according to a β=1 law (absent from any influence from the companion's radiation) by invoking a radiative force with a radially varying opacity κ(r), i.e. g rad = κ(r)F/c, where F is the stellar flux. We take effects of the occultation of one star's radiation by the other star into account. Radiative inhibition is included in the code (within the context of the radially varying opacity method), but radiative braking is not since it requires the full CAK solution for the wind driving, which is not yet included in the SPH code. Radiative cooling is implemented via the Exact Integration Scheme (Townsend 2009), and the abundances of both winds are assumed to be solar (Asplund et al. 2009). The importance of radiative cooling of the shocked material is determined by the parameter χ = d 12 v 4 8 /Ṁ −7 (Stevens et al. 1992), where d 12 is the distance to the shock in 10 12 cm, v 8 is the preshock velocity in 10 8 cm s −1 , andṀ −7 is the mass-loss rate in 10 −7 M yr −1 . χ > 1 indicates adiabatic expansion is more important, while χ < 1 indicates that the shocked gas will cool radiatively. For the β=1 law, χ ranges from 0.5 χ 1.3 between periastron to apastron, so the shocked gas should cool through a combination of adiabatic expansion and radiation. Figure 10 shows the density and temperature structure of the interacting winds in the orbital plane using the parameters in Table 1. 
The primary wind impacts the secondary star as expected from the analytical treatment above, where it shocks with newly injected secondary SPH particles. If this interaction leads to SPH particles, either belonging to Aa1 or Aa2, going within the boundary of the secondary star, these particles are accreted, i.e. removed from the simulation. The temperature plot of figure 10 shows that this leads to hot, shocked gas around Aa2, but this must be deemed approximate since the code does not force the Aa1 particles to accrete at the sound speed, which would increase the shock temperature, nor does it include any reflection of Aa1's radiation off of the surface of Aa2, which would decrease the shock temperature. The half-opening angle is ∼ 30 • , so ∼ 8% of the solid angle of Aa1's wind is evacuated by Aa2 and its wind. To determine the X-ray flux from the windwind/wind-star collision, we solve the formal solution to radiative transfer along a grid of rays through the SPH simulation volume, for which we use the SPH visualization program Splash (Price 2007) as our basis. The emissivity is from the APEC model (Smith et al. 2001) obtained from XSPEC (Arnaud 1996), the circumstellar material absorbs according to the windtabs model (Leutenegger et al. 2010), and the interstellar absorption is from TBabs (Wilms et al. 2000b). The radiative transfer calculation is performed at 170 energies logarithmically spaced from 0.2 to 10 keV (100 per dex), and generates surface brightness maps for each energy. These are then summed to determine the model spectrum, and finally folded through X-ray telescope response functions to directly compare with observations. The overall contamination level of wind-wind/wind-star collision X-rays is < 10% of the Chandra zeroth-order ACIS-S observation, so the influence of emission from shocked gas along the wind-wind boundary is not very significant , though contamination may be larger in some regions of the spectrum, depending on the emissionmeasure temperature distribution of the collidingwind X-rays compared to that of the X-rays arising from embedded wind shocks. We caution, however, that the model X-ray flux is dependent on the boundary condition imposed at the surface of Aa2, and so imposing a condition where the incoming wind from star Aa1 shocks more strongly (weakly) will increase (decrease) the amount of Xray emission from the wind-star collision. Conclusions Delta Ori Aa is an X-ray bright, nearby, eclipsing binary and so offers the potential to directly probe the X-ray emitting gas distribution in the primary star's wind as the secondary star revolves through the primary's wind. Our Chandra program was designed to obtain high signal-to-noise and high spectral resolution spectrometry of this system throughout an entire orbit. In this paper, we have sought to characterize the overall spectrum at its highest signal-to-noise ratio by combining all of the Chandra spectra and examining temperature distributions and line parameters. Our main results are presented below. 1. Our analysis of the Chandra image shows that the emission is mostly dominated by δ Ori Aa, with little detectable emission from δ Ori Ab. 2. The temperature distribution of the X-ray emitting gas can be characterized by three dominant temperatures, which agrees fairly well with the temperature distributions derived by the earlier analysis of Miller et al. (2002) and Raassen & Pollock (2013). 3. 
The strong lines are generally symmetric, and Gaussian profiles provide a reasonable representation of the profile shape, though in most cases, and especially for the Ne X and Fe XVII there are significant deviations from Gaussian symmetry. 4. The line widths determined by Gaussian modeling shows that half-widths are typically 0.3−0.5×V ∞ , where V ∞ is the terminal velocity of the wind of δ Ori Aa1. These values are generally larger than the line widths measured by Miller et al. (2002), though it is unclear whether this represents a real change in the line profile or if there is a calibration issue in the analysis of the earlier data set, which was obtained at an anomalously high focal plane temperature. 5. We find a strong anti-correlation between the widths of the H-like and He-like transitions and the excitation energy. This indicates that the lower-energy transitions occur in a region with larger velocities. Assuming a standard wind acceleration law, this correlation probably indicates that the lowerenergy lines emerge from further out in the wind. 6. Analysis of strong and weak transitions of Fe XVII and Ne X indicates that resonance scattering may be important in determining the flux and/or shape of the stronger line. This agrees with the analysis of the Fe XVII line by Miller et al. (2002) but at higher significance. We caution that some of these differences in the observed to predicted line ratios may be influenced by an inaccurate temperature distribution and/or uncertainties in the atomic physics. It is also interesting to note that these two lines also have the most non-Gaussian profiles, as shown in Figure 5, perhaps indicative that some line photons have been scattered out of the line of sight. The spectrum combined from the four individual Chandra-HETGS observations represents a very high signal-to-noise view of the emission from δ Ori Aa. However, these observations were obtained at a variety of orbital phases, so that the combined spectrum is a phase-averaged view of the overall X-ray emission from δ Ori Aa. In a companion paper we look for the effects of phase-and time-dependent changes in the continuum and line spectrum. Astrophysics Division of the Smithsonian Astrophysical Observatory. This research made use of the Chandra Transmission Grating Catalog and archive (http://tgcat.mit.edu). The SPH simulations presented in this paper made use of the resources provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. a Shenar et al. (2015); b from the low-mass model solution ofPablo et al. (2015);c Maíz Apellániz et al. (2013); d Mayer et al. (2010); e Adopted assuming a spectral type of B1V. Fig. 1 . 1-Timings of the Chandra observations along with the MOST light curve. The images above the plot show the orientations of δ Ori Aa1 and Aa2 near the midpoint of the observation according to the photometric and spectroscopic analysis of Fig. 3 . 3-The combined MEG+HEG spectrum of δ Ori A, from 3.5Å to 26Å. Fig. 4 . 4-The combined MEG+HEG spectrum of δ Ori A (in black) with the 3-component fit (shown in red) given in Fig. 6 . 6-Top to bottom, left to right: O VII; Ne IX; Mg XI; Si XIII; S XV. The best fit, using a model of 4 Gaussian lines (w, x, y, & z components) and the continuum derived from the model parameters given in Fig. 7 . 7-Half-widths of the H-like Lyα lines (km s −1 ) and the He-like resonance lines versus excitation energy (eV) of the upper level of the transition. 
The full and dashed lines represent the best linear fit to the HWHMs from the H-like lines, and the He-like lines (excluding the O VII width), respectively. Figure 7 7shows the dependence of the half width at half maximum of the Gaussian fit versus the excitation energy of the upper level of the transition. The linear correlation coefficient for the H-like half-widths is −0.89, indicating a strong anti-correlation between line half-width and excitation energy. For the He-like lines, the linear correlation coefficient is −0.81, also indicating a strong anti-correlation of line half-widths and excitation energy. Thus the line widths are anticorrelated with the upper energy level, in that the line width decreases with excitation energy. Fig. 9 . 9-Left: Ram pressure of Aa1 (solid) and Aa2 at apastron (dashed) and periastron (dotted). The black lines show a β=1 law, while the gray lines show a β=0.8 law. The gray vertical lines represent the location of Aa2's surface for these two phases. Right: 1D solution to the equation of motion of the primary wind along the line between the stars (solid) at three different separations -apastron (top), semi-major axis (middle), and periastron (bottom). For comparison, the dashed curve shows a β=0.8 law, and the dashed line shows terminal velocity. Fig. 10 . 10-Density (left) Table 2 : 2New Chandra Observations of δ Ori Aa+AbObsID Start Start End End Midpoint Midpoint ∆T Exposure Roll HJD Phase HJD Phase HJD Phase Days s deg. 14567 2456281.21 396.604 2456282.58 396.843 2456281.90 396.724 1.37 114982 345.2 14569 2456283.76 397.049 2456285.18 397.297 2456284.47 397.173 1.42 119274 343.2 14570 2456286.06 397.450 2456287.52 397.705 2456286.79 397.578 1.46 122483 83.0 14568 2456288.67 397.905 2456290.12 398.159 2456289.39 398.032 1.45 121988 332.7 Table 3 : 3SRCEXTENT Analysis ResultsBand Major Axis Minor Axis PA Peak distance Aa Peak distance Ab ObsID keV arcsec arcsec deg. arcsec arcsec 14567 0.3-1 0.34 0.33 83.3 0.19 0.40 1-3 0.32 0.28 83.8 0.19 0.42 14569 0.3-1 0.32 0.32 32.1 0.23 0.44 1-3 0.29 0.28 27.6 0.25 0.47 14570 0.3-1 0.32 0.32 136.9 0.09 0.35 1-3 0.26 0.22 48.3 0.08 0.34 14568 0.3-1 0.51 0.32 35.9 0.24 0.41 1-3 0.48 0.25 31.2 0.24 0.42 Table 4 : 4Best-Fit to the Combined HETGS spectrum. The adopted model is (APED 1 +APED 2 )*N H,1 +APED 3 *N H,2Component Parameter Value T 1 (MK) 1.25 1 EM 1 (10 55 cm −3 ) 4.46 V turb,1 (km s −1 ) 1313 HWHM (km s −1 ) 1094 T 2 (MK) 3.33 2 EM 2 (10 55 cm −3 ) 0.87 V turb,2 (km s −1 ) 1143 HWHM (km s −1 ) 953 Absorption 1 N H,1 (10 22 cm −2 ) 0.14 T 3 (MK) 9.11 3 EM 3 (10 55 cm −3 ) 0.26 V turb,3 (km s −1 ) 685 HWHM (km s −1 ) 574 Absorption 2 N H,2 (10 22 cm −2 ) 0.24 f x (ergs cm −2 s −1 ) (observed, 1.7 − 25Å) 8.2 × 10 −12 L x (ergs s −1 ) (observed, 1.7 − 25Å) 1.4 × 10 32 log L x /L bol −6.73 EM-weighted Average Temperature (MK) 1.94 Table 5 : 5Gaussian Fits to the H-like lines, plus Fe XVIIλ Flux V HWHM A 10 −5 ph. s −1 cm −2 km s −1 km s −1 O VIII 18.967 219 +9 −10 −9 +37 −33 918 +38 −29 Fe XVII 15.014 76 +4 −3 −24 +42 −35 971 +53 −27 Ne X 12.132 10 +1 −1 −102 +50 −42 726 +48 −58 Mg XII 8.419 1 +0 −0 −12 +33 −55 547 +58 −61 Si XIV 6.180 0.35 +0.05 −0.05 −49 +45 −134 544 +116 −124 Fig. 5.-Top to bottom, left to right: O VIII; Fe XVII; Ne X; Mg XII; and Si XIV.18.8 18.85 18.9 18.95 19 19.05 19.1 19.15 0 2 0 4 0 6 0 Best Fit Model to OVIII Wavelength (Ang.) Counts/bin 14.9 14.95 15 15.05 15.1 15.15 0 50 100 Best Fit Model to FeXVII Wavelength (Ang.) 
Counts/bin 12.05 12.1 12.15 12.2 12.25 50 100 150 Best Fit Model to NeX Wavelength (Ang.) Counts/bin 8.34 8.36 8.38 8.4 8.42 8.44 8.46 8.48 8.5 20 40 60 80 100 Best Fit Model to MgXII Wavelength (Ang.) Counts/bin 6.12 6.14 6.16 6.18 6.2 6.22 6.24 10 20 30 40 Best Fit Model to SiXIV Wavelength (Ang.) Counts/bin Table 7 : 7R and G ratiosion R = z/(x + y) G = (x + y + z)/w O VII 0.04 ± 0.01 0.94 ± 0.26 Ne IX 0.27 ± 0.10 1.44 ± 0.65 Mg XI 0.96 ± 0.36 0.95 ± 0.37 Si XIII 1.77 ± 0.18 0.90 ± 0.12 S XV 3.88 ± 2.86 0.72 ± 0.74 Table 6 : 6Gaussian fits to the He-like linesCentroid Velocity (km s −1 ) HWHM (km s −1 ) ion w x + y z w x + y z OVII 166 ± 19 −194 ± 18 −810 ± 384 761 ± 14 826 ± 40 160 ± 270 Ne IX −146 ± 166 −410 ± 231 441 ± 466 849 ± 138 1057 ± 222 1289 ± 49 Mg XI 8 ± 74 31 ± 109 −63 ± 270 782 ± 97 584 ± 146 1302 ± 386 Si XIII 42 ± 64 88 ± 191 −60 ± 21 488 ± 69 704 ± 361 506 ± 79 S XV 99 ± 357 1168 ± 1203 −27 ± 633 540 ± 206 966 ± 1256 69 ± 1254 1200 1000 800 600 400 HWHM (km s -1 ) 2500 2000 1500 1000 500 Excitation Energy (eV) H-like lines He-like lines Fe XVII Linear fit to the H-line widths Linear fit to the He-like line widths (excluding OVII) This shows that the model tgat fits the 15.26Å line overpredicts the strength of the 15.01Å line. The vertical lines from the continuum to the X-axis at 14.9Å and 15.14Å show the adopted wavelength range of the 15.014Å FeXVII line. Right: APEC model fit to the Ne X 10.24Å compared to the NeX 12.134Å line. The model (shown by the thick histogram) that fits the weaker 10.24Å line overpredicts the strength of the stronger 12.134Å line.14.6 14.8 15 15.2 15.4 15.6 0 50 100 150 200 Fit to the Fe XVII 15.26 line, and comparison to the 15.01 line Wavelength (Ang.) Counts/bin 10 10.5 11 11.5 12 12.5 0 50 100 150 200 Fit to the NeX 10.24 line, and comparison to the 12.13 line Wavelength (Ang.) Counts/bin Fig. 8.-Left: APEC-based modeling of the Fe XVII 15.014Å vs 15.261Å lines. We first fit the 15.261Å line by itself. The thick histogram compares that model to the observed spectrum in the 14.5-15.6Å range. 20 25 30 35 40 0.0 0.2 0.4 0.6 0.8 1.0 r R Ρv 2 rel. units 1.0 1.2 1.4 1.6 1.8 2.0 2.2 2.4 0 500 1000 1500 r R Aa1 v km s We thank the MOST team for the award of observing time for δ Ori A. We also thank our anonymous referee, whose comments significantly improved this paper. M.F.C. would like to thank John Houck and Michael Nowak for many helpful discussions concerning data analysis with ISIS. Support for this work was provided by the National Aeronautics and Space Administration through Chandra Award Number GO3-14015A and GO3-14015E issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics Space Administration under contract NAS8-03060. M.F.C., J.S.N., W.L.W., C.M.P.R., and K.H. gratefully acknowledge this support. M.F.C. acknowledges support from NASA under cooperative agreement number NNG06EO90A. N.R.E. is grateful for support from the Chandra X-ray Center NASA Contract NAS8-03060. C.M.P.R. is supported by an appointment to the NASA Postdoctoral Program at the Goddard Space Flight Center, administered by Oak Ridge Associated Universities through a contract with NASA. T.S. is grateful for financial support from the Leibniz Graduate School for Quantitative Spectroscopy in Astrophysics, a joint project of the Leibniz Institute for Astrophysics Potsdam (AIP) and the institute of Physics and Astronomy of the University of Potsdam. Y.N. 
acknowledges support from the Fonds National de la Recherche Scientifique (Belgium), the Communauté Française de Belgique, the PRODEX XMM and INTEGRAL contracts, and the 'Action de Recherche Concertée' (CFWB-Académie Wallonie Europe). N.D.R. gratefully acknowledges his CRAQ (Centre de Recherche en Astrophysique du Québec) fellowship. A.F.J.M. is grateful for financial support from NSERC (Canada) and FRQNT (Quebec). J.L.H. acknowledges support from NASA award NNX13AF40G and NSF award AST-0807477. This research has made use of NASA's Astrophysics Data System. This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy K A Arnaud, Astronomical Society of the Pacific Conference Series. G. H. Jacoby & J. Barnes10117Astronomical Data Analysis Software and Systems VArnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astro- nomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17 . M Asplund, N Grevesse, A J Sauval, P Scott, ARA&A. 47481Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481 . M R Bate, I A Bonnell, N M Price, MNRAS. 277362Bate, M. R., Bonnell, I. A., & Price, N. M. 1995, MNRAS, 277, 362 W Benz, Numerical Modelling of Nonlinear Stellar Pulsations Problems and Prospects. J. R. Buchler269Benz, W. 1990, in Numerical Modelling of Nonlin- ear Stellar Pulsations Problems and Prospects, ed. J. R. Buchler, 269 . T W Berghoefer, J H M M Schmitt, R Danner, J P Cassinelli, A&A. 322167Berghoefer, T. W., Schmitt, J. H. M. M., Danner, R., & Cassinelli, J. P. 1997, A&A, 322, 167 . S Bernitt, G V Brown, J K Rudolph, Nature. 492225Bernitt, S., Brown, G. V., Rudolph, J. K., et al. 2012, Nature, 492, 225 . A C Brinkman, J J Van Rooijen, J A M Bleeker, Astrophysical Letters and Communications. 2673Brinkman, A. C., van Rooijen, J. J., Bleeker, J. A. M., et al. 1987, Astrophysical Letters and Communications, 26, 73 . J A Caballero, E Solano, A&A. 485931Caballero, J. A., & Solano, E. 2008, A&A, 485, 931 . C R Canizares, J E Davis, D Dewey, PASP. 1171144Canizares, C. R., Davis, J. E., Dewey, D., et al. 2005, PASP, 117, 1144 . M Cantiello, N Langer, I Brott, A&A. 499279Cantiello, M., Langer, N., Brott, I., et al. 2009, A&A, 499, 279 . J P Cassinelli, J H Swank, ApJ. 271681Cassinelli, J. P., & Swank, J. H. 1983, ApJ, 271, 681 . J I Castor, D C Abbott, R I Klein, ApJ. 195157Castor, J. I., Abbott, D. C., & Klein, R. I. 1975, ApJ, 195, 157 . T Chlebowski, F R HarndenJr, S Sciortino, ApJ. 341427Chlebowski, T., Harnden, Jr., F. R., & Sciortino, S. 1989, ApJ, 341, 427 . D H Cohen, E E Wollman, M A Leutenegger, arXiv:1401.7995ArXiv e-printsCohen, D. H., Wollman, E. E., Leuteneg- ger, M. A., et al. 2014, ArXiv e-prints, arXiv:1401.7995 . M F Corcoran, W L Waldron, J J Macfarlane, ApJ. 43695Corcoran, M. F., Waldron, W. L., Macfarlane, J. J., et al. 1994, ApJ, 436, L95 . M F Corcoran, J S Nichols, H Pablo, arXiv:1507.05101ArXiv e-printsCorcoran, M. F., Nichols, J. S., Pablo, H., et al. 2015, ArXiv e-prints, arXiv:1507.05101 . P C Fisher, A J Meyerott, ApJ. 139123Fisher, P. C., & Meyerott, A. J. 1964, ApJ, 139, 123 . A R Foster, L Ji, R K Smith, N S Brickhouse, ApJ. 756128Foster, A. R., Ji, L., Smith, R. K., & Brickhouse, N. S. 2012, ApJ, 756, 128 . D B Friend, D C Abbott, ApJ. 311701Friend, D. B., & Abbott, D. C. 
1986, ApJ, 311, 701 A Fruscione, J C Mcdowell, G E Allen, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. 62701Society of Photo-Optical Instrumentation Engineers (SPIE) Conference SeriesFruscione, A., McDowell, J. C., Allen, G. E., et al. 2006, in Society of Photo-Optical Instrumenta- tion Engineers (SPIE) Conference Series, Vol. 6270, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 1 . K G Gayley, ApJ. 454410Gayley, K. G. 1995, ApJ, 454, 410 . K G Gayley, S P Owocki, S R Cranmer, ApJ. 475786Gayley, K. G., Owocki, S. P., & Cranmer, S. R. 1997, ApJ, 475, 786 . G Gräfener, L Koesterke, W.-R Hamann, A&A. 387244Gräfener, G., Koesterke, L., & Hamann, W.-R. 2002, A&A, 387, 244 C E Grant, M W Bautz, S E Kissel, B Lamarr, G Y Prigozhin, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. 62761Society of Photo-Optical Instrumentation Engineers (SPIE) Conference SeriesGrant, C. E., Bautz, M. W., Kissel, S. E., LaMarr, B., & Prigozhin, G. Y. 2006, in So- ciety of Photo-Optical Instrumentation Engi- neers (SPIE) Conference Series, Vol. 6276, So- ciety of Photo-Optical Instrumentation Engi- neers (SPIE) Conference Series, 1 . F Haberl, N E White, A&A. 280519Haberl, F., & White, N. E. 1993, A&A, 280, 519 . W.-R Hamann, G Gräfener, A&A. 410993Hamann, W.-R., & Gräfener, G. 2003, A&A, 410, 993 P Harmanec, P Mayer, M &amp;šlechta, Massive Stars: From Alpha to Omega. 70Harmanec, P., Mayer, P., &Šlechta, M. 2013, in Massive Stars: From Alpha to Omega, 70 . J Hartmann, ApJ. 19268Hartmann, J. 1904, ApJ, 19, 268 . J A Harvin, D R Gies, W G BagnuoloJr, L R Penny, M L Thaller, ApJ. 5651216Harvin, J. A., Gies, D. R., Bagnuolo, Jr., W. G., Penny, L. R., & Thaller, M. L. 2002, ApJ, 565, 1216 . A Hervé, G Rauw, Y Nazé, A&A. 55183Hervé, A., Rauw, G., & Nazé, Y. 2013, A&A, 551, A83 J C Houck, L A Denicola, Astronomical Society of the Pacific Conference Series. N. Manset, C. Veillet, & D. Crabtree216591Astronomical Data Analysis Software and Systems IXHouck, J. C., & Denicola, L. A. 2000, in Astro- nomical Society of the Pacific Conference Se- ries, Vol. 216, Astronomical Data Analysis Soft- ware and Systems IX, ed. N. Manset, C. Veillet, & D. Crabtree, 591 . D P Huenemoerder, A Mitschang, D Dewey, AJ. 141129Huenemoerder, D. P., Mitschang, A., Dewey, D., et al. 2011, AJ, 141, 129 . R H Koch, B J Hrivnak, ApJ. 248249Koch, R. H., & Hrivnak, B. J. 1981, ApJ, 248, 249 . R Kudritzki, J Puls, ARA&A. 38613Kudritzki, R., & Puls, J. 2000, ARA&A, 38, 613 . M A Leutenegger, D H Cohen, J Zsargó, ApJ. 7191767Leutenegger, M. A., Cohen, D. H., Zsargó, J., et al. 2010, ApJ, 719, 1767 . M A Leutenegger, F B S Paerels, S M Kahn, D H Cohen, ApJ. 6501096Leutenegger, M. A., Paerels, F. B. S., Kahn, S. M., & Cohen, D. H. 2006, ApJ, 650, 1096 . K S Long, R L White, ApJ. 23965Long, K. S., & White, R. L. 1980, ApJ, 239, L65 . L B Lucy, R L White, ApJ. 300Lucy, L. B., & White, R. L. 1980, ApJ, 241, 300 . T I Madura, T R Gull, A T Okazaki, MNRAS. 4363820Madura, T. I., Gull, T. R., Okazaki, A. T., et al. 2013, MNRAS, 436, 3820 J Maíz Apellániz, A Sota, N I Morrell, Massive Stars: From alpha to Omega. Maíz Apellániz, J., Sota, A., Morrell, N. I., et al. 2013, in Massive Stars: From alpha to Omega, 198 . P Mayer, P Harmanec, M Wolf, H Božić, M Slechta, A&A. 52089Mayer, P., Harmanec, P., Wolf, M., Božić, H., & Slechta, M. 2010, A&A, 520, A89+ . N A Miller, J P Cassinelli, W L Waldron, J J Macfarlane, D H Cohen, ApJ. 577951Miller, N. A., Cassinelli, J. P., Waldron, W. 
L., MacFarlane, J. J., & Cohen, D. H. 2002, ApJ, 577, 951 . J S Nichols, D P Huenemoerder, M F Corcoran, arXiv:1507.04972ArXiv e-printsNichols, J. S., Huenemoerder, D. P., Corco- ran, M. F., et al. 2015, ArXiv e-prints, arXiv:1507.04972 . A T Okazaki, S P Owocki, C M P Russell, M F Corcoran, MNRAS. 38839Okazaki, A. T., Owocki, S. P., Russell, C. M. P., & Corcoran, M. F. 2008, MNRAS, 388, L39 . L M Oskinova, A Feldmeier, W.-R Hamann, MNRAS. 372313Oskinova, L. M., Feldmeier, A., & Hamann, W.-R. 2006, MNRAS, 372, 313 . S P Owocki, D H Cohen, ApJ. 648565Owocki, S. P., & Cohen, D. H. 2006, ApJ, 648, 565 . S P Owocki, S R Cranmer, K G Gayley, ApJ. 472115Owocki, S. P., Cranmer, S. R., & Gayley, K. G. 1996, ApJ, 472, L115+ . S P Owocki, K G Gayley, ApJ. 454145Owocki, S. P., & Gayley, K. G. 1995, ApJ, 454, L145 . H Pablo, N D Richardson, A F J Moffat, arXiv:1504.08002ArXiv e-printsPablo, H., Richardson, N. D., Moffat, A. F. J., et al. 2015, ArXiv e-prints, arXiv:1504.08002 . R Pallavicini, L Golub, R Rosner, ApJ. 248279Pallavicini, R., Golub, L., Rosner, R., et al. 1981, ApJ, 248, 279 . A Pauldrach, J Puls, R P Kudritzki, A&A. 16486Pauldrach, A., Puls, J., & Kudritzki, R. P. 1986, A&A, 164, 86 . D J Price, PASA. 24159Price, D. J. 2007, PASA, 24, 159 . A J J Raassen, A M T Pollock, A&A. 55055Raassen, A. J. J., & Pollock, A. M. T. 2013, A&A, 550, A55 . N D Richardson, A F J Moffat, T R Gull, arXiv:1506.05530ArXiv e-printsRichardson, N. D., Moffat, A. F. J., Gull, T. R., et al. 2015, ArXiv e-prints, arXiv:1506.05530 . C M P Russell, University of DelawarePhD thesisRussell, C. M. P. 2013, PhD thesis, University of Delaware . T Shenar, L Oskinova, W.-R Hamann, arXiv:1503.03476ArXiv e-printsShenar, T., Oskinova, L., Hamann, W.-R., et al. 2015, ArXiv e-prints, arXiv:1503.03476 R K Smith, N S Brickhouse, Revista Mexicana de Astronomia y Astrofisica Conference Series. S. J. Arthur, N. S. Brickhouse, & J. Franco9Smith, R. K., & Brickhouse, N. S. 2000, in Revista Mexicana de Astronomia y Astrofisica Confer- ence Series, Vol. 9, Revista Mexicana de As- tronomia y Astrofisica Conference Series, ed. S. J. Arthur, N. S. Brickhouse, & J. Franco, 134-136 . R K Smith, N S Brickhouse, D A Liedahl, J C Raymond, ApJ. 55691Smith, R. K., Brickhouse, N. S., Liedahl, D. A., & Raymond, J. C. 2001, ApJ, 556, L91 . Jr Snow, T P Cash, W Grady, C A , ApJ. 24419Snow, Jr., T. P., Cash, W., & Grady, C. A. 1981, ApJ, 244, L19 . J Stebbins, ApJ. 42133Stebbins, J. 1915, ApJ, 42, 133 . I R Stevens, J M Blondin, A M T Pollock, ApJ. 386265Stevens, I. R., Blondin, J. M., & Pollock, A. M. T. 1992, ApJ, 386, 265 . I R Stevens, A M Pollock, MNRAS. 269226Stevens, I. R., & Pollock, A. M. T. 1994, MNRAS, 269, 226 . A Tokovinin, B D Mason, W I Hartkopf, AJ. 147123Tokovinin, A., Mason, B. D., & Hartkopf, W. I. 2014, AJ, 147, 123 . R H D Townsend, ApJS. 181391Townsend, R. H. D. 2009, ApJS, 181, 391 . N R Walborn, J S Nichols, W L Waldron, ApJ. 703633Walborn, N. R., Nichols, J. S., & Waldron, W. L. 2009, ApJ, 703, 633 . W L Waldron, J P Cassinelli, ApJ. 54845Waldron, W. L., & Cassinelli, J. P. 2001, ApJ, 548, L45 . ApJ. 668456-. 2007, ApJ, 668, 456 . G Walker, J Matthews, R Kuschnig, PASP. 1151023Walker, G., Matthews, J., Kuschnig, R., et al. 2003, PASP, 115, 1023 . J Wilms, A Allen, R Mccray, ApJ. 542914Wilms, J., Allen, A., & McCray, R. 2000a, ApJ, 542, 914 . ApJ. 542914-. 2000b, ApJ, 542, 914
[]
[ "Unexpected Scaling in Path Copying Trees", "Unexpected Scaling in Path Copying Trees", "Unexpected Scaling in Path Copying Trees", "Unexpected Scaling in Path Copying Trees", "Unexpected Scaling in Path Copying Trees", "Unexpected Scaling in Path Copying Trees" ]
[ "Ilya Kokorin [email protected] \nITMO University\nRussia\n", "Alexander Fedorov [email protected] \nIST\nAustria Austria\n", "Trevor Brown [email protected] \nUniversity of Waterloo\nCanada Canada\n", "Vitaly Aksenov [email protected] \nITMO University\nRussia Russia\n", "Ilya Kokorin [email protected] \nITMO University\nRussia\n", "Alexander Fedorov [email protected] \nIST\nAustria Austria\n", "Trevor Brown [email protected] \nUniversity of Waterloo\nCanada Canada\n", "Vitaly Aksenov [email protected] \nITMO University\nRussia Russia\n", "Ilya Kokorin [email protected] \nITMO University\nRussia\n", "Alexander Fedorov [email protected] \nIST\nAustria Austria\n", "Trevor Brown [email protected] \nUniversity of Waterloo\nCanada Canada\n", "Vitaly Aksenov [email protected] \nITMO University\nRussia Russia\n" ]
[ "ITMO University\nRussia", "IST\nAustria Austria", "University of Waterloo\nCanada Canada", "ITMO University\nRussia Russia", "ITMO University\nRussia", "IST\nAustria Austria", "University of Waterloo\nCanada Canada", "ITMO University\nRussia Russia", "ITMO University\nRussia", "IST\nAustria Austria", "University of Waterloo\nCanada Canada", "ITMO University\nRussia Russia" ]
[]
Although a wide variety of handcrafted concurrent data structures have been proposed, there is considerable interest in universal approaches (henceforth called Universal Constructions or UCs) for building concurrent data structures. These approaches (semi-)automatically convert a sequential data structure into a concurrent one. The simplest approach uses locks that protect a sequential data structure and allow only one process to access it at a time. The resulting data structures use locks, and hence are blocking. Most work on UCs instead focuses on obtaining non-blocking progress guarantees such as obstruction-freedom, lock-freedom, or wait-freedom. Many non-blocking UCs have appeared. Key examples include the seminal wait-free UC by Herlihy, a NUMA-aware UC by Yi et al., and an efficient UC for large objects by Fatourou et al.We borrow ideas from persistent data structures and multiversion concurrency control (MVCC), most notably path copying, and use them to implement concurrent versions of sequential persistent data structures. Despite our expectation that our data structures would not scale under write-heavy workloads, they scale in practice. We confirm this scaling analytically in our model with private per-process caches.
10.1145/3572848.3577512
[ "https://export.arxiv.org/pdf/2212.00521v2.pdf" ]
254,125,320
2212.00521
3d9d8c0c3e05b8a570f3a1bf3bd4a75a364ea521
Unexpected Scaling in Path Copying Trees Ilya Kokorin [email protected] ITMO University Russia Alexander Fedorov [email protected] IST Austria Austria Trevor Brown [email protected] University of Waterloo Canada Canada Vitaly Aksenov [email protected] ITMO University Russia Russia Unexpected Scaling in Path Copying Trees Although a wide variety of handcrafted concurrent data structures have been proposed, there is considerable interest in universal approaches (henceforth called Universal Constructions or UCs) for building concurrent data structures. These approaches (semi-)automatically convert a sequential data structure into a concurrent one. The simplest approach uses locks that protect a sequential data structure and allow only one process to access it at a time. The resulting data structures use locks, and hence are blocking. Most work on UCs instead focuses on obtaining non-blocking progress guarantees such as obstruction-freedom, lock-freedom, or wait-freedom. Many non-blocking UCs have appeared. Key examples include the seminal wait-free UC by Herlihy, a NUMA-aware UC by Yi et al., and an efficient UC for large objects by Fatourou et al.We borrow ideas from persistent data structures and multiversion concurrency control (MVCC), most notably path copying, and use them to implement concurrent versions of sequential persistent data structures. Despite our expectation that our data structures would not scale under write-heavy workloads, they scale in practice. We confirm this scaling analytically in our model with private per-process caches. Introduction Although a wide variety of handcrafted concurrent data structures have been proposed, there is considerable interest in universal approaches (henceforth called Universal Constructions or UCs) for building concurrent data structures. These approaches (semi-)automatically convert a sequential data structure into a concurrent one. The simplest approach uses locks [3,5] that protect a sequential data structure and allow only one process to access it at a time. The resulting data structures use locks, and hence are blocking. Most work on UCs instead focuses on obtaining non-blocking progress guarantees such as obstruction-freedom, lock-freedom or waitfreedom. Many non-blocking UCs have appeared. Key examples include the seminal wait-free UC [2] by Herlihy, a PL'18, January 01-03, 2018, New York, NY, USA 2018. NUMA-aware UC [9] by Yi et al., and an efficient UC for large objects [1] by Fatourou et al. In this work, we consider the simpler problem of implementing persistent (also called functional) data structures, which preserve the old version whenever the data structure is modified [6]. Usually this entails copying a part of the data structure, for example, the path from the root to a modified node in a tree [4], so that none of the existing nodes need to be changed directly. We borrow ideas from persistent data structures and multi version concurrency control (MVCC) [8], most notably path copying, and use them to implement concurrent versions of sequential persistent data structures. Data structures implemented this way can be highly efficient for searches, but we expect them to not scale in write-heavy workloads. Surprisingly, we found that a concurrent treap implemented in this way obtained up to 2.4x speedup compared to a sequential treap [7] with 4 processes in a write-heavy workload. 
We present this effect experimentally, and analyze it in a model with private per-processor caches: informally, as the number of processes grows large, speedup in our treap of size tends to Ω(log ). Straightforward Synchronization for Persistent Data Structures In the following discussion, we focus on rooted data structures, but one could imagine generalizing these ideas by adding a level of indirection in data structures with more than one entry point (e.g., one could add a dummy root node containing all entry points). We store a pointer to the current version of the persistent data structure (e.g., to the root of the current version of a persistent tree) in a Read/CAS register called Root_Ptr. Read-only operations (queries) read the current version and then execute sequentially on the obtained version. Note that no other process can modify this version, so the sequential operation is trivially atomic. Modifying operations are implemented in the following way: 1) read the current version; 2) obtain the new version by applying the sequential modification using path copying (i.e., by copying the root, and copying each visited node); 3) try to atomically replace the current version with the new one using CAS; if the CAS succeeds, return: the modifying operation has been successfully applied; otherwise, the data structure has been modified by some concurrent process: retry the execution from step (1). This approach clearly produces a lock-free linearizable data structure. We expect read-only operations to scale extremely well. Indeed, two processes may concurrently read the current version of the persistent data structure and execute read-only persistent operations in parallel. However, modification operations seemingly afford no opportunity for scaling. When multiple modifications contend, only one can finish successfully, and the others must retry. For example, consider concurrent modification operations on a set: 1) process P calls insert(2) and fetches the current pointer RP; 2) process Q calls remove(5) and fetches the current pointer RP; 3) P constructs a new version RP P with key 2; 4) Q constructs a new version RP Q without key 5; 5) P successfully executes CAS(&Set.Root_Pointer, RP, RP P ); 6) Q executes CAS from RP to RP Q but fails; thus, Q must retry its operation. Successful modifications are applied sequentially, one after another. Intuitively, this should not scale at all in a workload where all operations must perform successful modifications. As we will see in Section 4, this intuition would be incorrect. Analysis The key insight is that failed attempts to perform updates load data into processor caches that may be useful on future attempts. To better understand, consider the binary search tree modification depicted in Fig. 1. Suppose we want to insert two keys: 5 and 75. We compare how these insertions are performed sequentially and concurrently. At first, we consider the sequential execution. We insert key 5 into the tree. It should be inserted as a left child of 10. Thus, we traverse the tree from the root to the leaf 10. On the way, we fetch nodes {40, 30, 20, 10} into the processor's cache. Note this operation performs four uncached loads. Now, we insert 75. It should be inserted as the right child of 70. Our traversal loads four nodes: {40, 50, 60, 70}. Node 40 is already cached, while three other nodes must be loaded from memory. Thus, we perform three uncached loads, for a total of seven uncached loads. 
Now, we consider a concurrent execution with two processes, in which P inserts 5 and Q inserts 75. Initially, both processes read Root_Ptr to load the current version. Then, 1) P traverses from the root to 10, loading nodes {40, 30, 20, 10}, and 2) Q traverses from the root to 70, loading nodes {40, 50, 60, 70}. Each process constructs a new version of the data structure, and tries to replace the root pointer using CAS. Suppose P succeeds and Q fails. Q retries the operation, but on the new High-level analysis We use a simple model that allows us to analyze this effect. (The full proof appears in Appendix A.) In this model, the processes are synchronous, i.e., they perform one primitive operation per tick, and each process has its own cache of size . We show that for a large number of processes , the speedup is Ω(log ), where is the size of the tree. Now, we give the intuition behind the proof. To simplify it, we suppose that the tree is external and balanced, i.e., each operation passes though log nodes. We also assume that the workload consists of successful modification operations on keys chosen uniformly at random. We first calculate the cost of an operation for one process: (log −log )· +log where = ( 1− ) is the cache size and is the cost of an uncached load. This expression captures the expected behaviour under least-recently-used caching. The process should cache the first log levels of the tree, and thus, log nodes on a path are in the cache and log − log are not. To calculate the throughput in a system with processes, we suppose that is quite large (≈ Ω( ( , log ))). Thus, each operation performs several unsuccessful attempts, ending with one successful attempt, and all successful attempts (over all operations) are serialized. Since the system is synchronous, each operation attempt loads the version of the data structure which is the result of a previous successful attempt ′ . The nodes evicted since the beginning of are those created by ′ . One can show that in expectation only two nodes on the path to the key are uncached. Finally, the successful attempt of an operation incurs cost 2 · + (log − 2). Since successful attempts are serialized, the expected total speedup is (log −log ) · +log 2· +(log −2) giving Ω(log ) with = Ω(log ). Experiments We implemented a lock-free treap and ran experiments comparing it with a sequential treap in Java on a system with an 18 core Intel Xeon 5220. Each data point is an average of 15 trials. We highlight the following two workloads. (More results appear in Appendix B.) Batch inserts and batch removes Suppose we have concurrent processes in the system. Initially the set consists of 10 6 random integer keys. Processes operate on mutually disjoint sets of keys. Each process repeatedly: inserts all of its keys, one by one, then removes all of its keys. Since the key sets are disjoint, each operation successfully modifies the treap. We report the speedup for our treap over the sequential treap below. Random inserts and removes In this workload, we first insert 10 6 random integers in [−10 6 ; 10 6 ], then each process repeatedly generates a random key and tries to insert/remove it with equal probability. Some operations do not modify the data structure (e.g., inserting a key that already exists). binary search tree is external, i.e., data is contained only in leaves, while internal nodes maintain only routing information. Suppose tree contains keys and the tree is balanced, therefore the tree height is (log ). 
We suppose uniform workload: all keys from the tree are accessed uniformly at random. Suppose the cache size is = ( 1− ), therefore, approximately upper log levels of the tree are cached, while log − log lower levels of the tree are not (Fig. 2). Thus, the sequential execution will take · (log + · (log − log )) time units to finish, where is the number of operations. A.2 Concurrent execution Suppose we have concurrent processes { } =1 executing operations concurrently, while each process has its own cache of size larger than log . In our model we assume that each successful try of a modifying operation causes − 1 unsuccessful tries of modifying operations on other processes (Fig. 3). We also assume that operation completion events are distributed among processes in a round-robin pattern: first process 1 executes its successful try of the operation, then process 2 executes its successful try of the operation, and so on. Finally, manages to complete its operation, the next process to get its successful try is yet again 1 (Fig. 4). As follows from the diagram, almost each successful try of an operation is preceded by −1 unsuccessful retries (except for − 1 first successful operation, which are preceded by the lower number of unsuccessful retries). Let us estimate, how long the first retry takes to execute. We must load log nodes, none of which might be cached. Thus, we spend · log time units on the first retry. Let us estimate now how much time we spend on subsequent retries. We begin with estimating, how many nodes on the path to the requested leaf have been modified (Fig. 5). Consider the successful modifying operation , that led to a latest failure of our CAS and made us retry our operation the last time. Remember, that arguments of operations are chosen uniformly at random, therefore: • There is 1 2 probability that modified some leaf from Root->Right subtree, thus, the number of modified nodes on our path is 1; • Similarly, there is 1 4 probability that the number of modified nodes on our path is 2; . . . • Similarly, there is 1 2 probability that the number of modified nodes on our path is . Thus, we can calculate the expected number of modified nodes on our path log =1 2 ≤ ∞ =1 2 = 2. Thus, the expected number of modified nodes on our path is not greater than 2. Modified nodes were created by another process, thus they do not exist in our process cache. Therefore, they should be loaded out-of-cache, while all the remaining nodes reside in the local cache and can be loaded directly from it. Therefore, we spend 2· time on average to load all the necessary nodes. In addition, we spend log − 2 time on average to load all the necessary nodes from the the local cache. Therefore, we spend 2 · + log − 2 time to fetch all the nodes required for a last operation retry. An operation execution consists of the first retry, executed in · log and − 1 subsequent retries executed in ( − 1) · (2 · + log − 2). Thus, a single operation is executed in · log + ( − 1) · (2 · + log − 2). Therefore, we execute operations in · ·log + ·( −1) ·(2· +log −2) time, since we execute these operations in parallel on processes. To measure the speedup we simply divide the sequential execution time by parallel execution time: ·(log + ·(log −log )) · ·log + ·( −1) ·(2· +log −2) = · log + ·(log −log ) ·log +( −1) ·(2· +log −2) . This gives us Ω(log ) speedup when = Ω( ( , log )) and = Ω(log ). 
B Experiments on other processors We did the same experiments on Intel Xeon Platinum 8160 with 24 cores and AMD EPYC 7662 with 64 cores. Unfortunately, one can see that the results are not so impressive when the number of processes is large enough. We suggest that the bottleneck for our benchmarks occurs in Java memory allocator. Figure 1 . 1The new version (green) of the tree shares its nodes with the old version (white) version (Fig. 1). Note that the new version shares most nodes with the old one.Q inserts 75 into the new version. Again, the key should be inserted as the right child of 70. Q loads four nodes {40, 50, 60, 70} from the new version of the tree. Crucially, nodes {50, 60, 70} are already cached by Q. This retry only incurs one cache miss! Thus, there are only five serialized loads in the concurrent execution, compared to seven in the sequential execution. Figure 2 . 2Upper levels of the tree are cached, while lower levels reside in RAM Each operation first loads log nodes from the cache, spending 1 time unit per each cache fetch. After that, it loads log − log nodes from the RAM, spending time units per RAM fetch. Figure 3 . 3Each successful try of an operation causes unsuccessful tries of − 1 operations Figure 4 . 4Nearly each successful modifying operation consists of retries: − 1 unsuccessful and one successful Figure 5 . 5The number of modified nodes on the path to the requested node Workload Seq Treap UC 1p UC 4p UC 10p UC 17pBatch 451 940 0.89x 1.23x 1.47x 1.47x Random 419 736 1.48x 2.38x 3.07x 3.19x A Mathematical model A.1 Sequential execution Let us estimate how much time is spent on executing op- erations sequentially on a binary search tree. Suppose our Table 2 . 2Workload Seq Treap UC 1p UC 6p UC 12p UC 23p Table 1. Results for Intel Xeon Platinum 8160. Workload Seq Treap UC 1p UC 8p UC 16p UC 32p UC 63p Results for AMD EPYC 7662.Batch 638 600 0.93x 1.31x 1.37x 1.08x Random 487 161 1.24x 3.23x 3.55x 2.8x Batch 459 580 0.96x 1.7x 1.91x 1.55x 1.02x Random 396 898 1.36x 3.63x 2.41x 2.81x 2.3x PL'18, January 01-03, 2018, New York, NY, USA Ilya Kokorin, Alexander Fedorov, Trevor Brown, and Vitaly Aksenov An efficient universal construction for large objects. Panagiota Fatourou, D Nikolaos, Eleni Kallimanis, Kanellou, arXivPanagiota Fatourou, Nikolaos D Kallimanis, and Eleni Kanellou. 2020. An efficient universal construction for large objects. arXiv (2020). Wait-free synchronization. Maurice Herlihy, ACM Transactions on Programming Languages and Systems (TOPLAS). 13Maurice Herlihy. 1991. Wait-free synchronization. ACM Transactions on Programming Languages and Systems (TOPLAS) 13, 1 (1991), 124-149. 2020. The art of multiprocessor programming. Maurice Herlihy, Nir Shavit, Victor Luchangco, Michael Spear, NewnesMaurice Herlihy, Nir Shavit, Victor Luchangco, and Michael Spear. 2020. The art of multiprocessor programming. Newnes. Persistent data structures. Haim Kaplan, Handbook of Data Structures and Applications. Chapman and Hall/CRCHaim Kaplan. 2018. Persistent data structures. In Handbook of Data Structures and Applications. Chapman and Hall/CRC, 511-527. A fast mutual exclusion algorithm. Leslie Lamport, ACM Transactions on Computer Systems (TOCS). 5Leslie Lamport. 1987. A fast mutual exclusion algorithm. ACM Trans- actions on Computer Systems (TOCS) 5, 1 (1987), 1-11. Purely functional data structures. Chris Okasaki, Cambridge University PressChris Okasaki. 1999. Purely functional data structures. Cambridge University Press. Randomized search trees. 
Raimund Seidel, Cecilia R Aragon, Algorithmica. 16Raimund Seidel and Cecilia R Aragon. 1996. Randomized search trees. Algorithmica 16, 4 (1996), 464-497. On supporting efficient snapshot isolation for hybrid workloads with multi-versioned indexes. Y Sun, Blelloch, A Lim, Pavlo, VLDB. 13Y Sun, G Blelloch, W Lim, and A Pavlo. 2019. On supporting efficient snapshot isolation for hybrid workloads with multi-versioned indexes. VLDB 13, 2 (2019). A Universal Construction to implement Concurrent Data Structure for NUMA-muticore. Z Yi, K Yao, Chen, 50th ICPP. Z Yi, Y Yao, and K Chen. 2021. A Universal Construction to implement Concurrent Data Structure for NUMA-muticore. In 50th ICPP. 1-11.
[]
[ "Enhanced diffusivity and skewness of a diffusing tracer in the presence of an oscillating wall", "Enhanced diffusivity and skewness of a diffusing tracer in the presence of an oscillating wall" ]
[ "Lingyun Ding \nDepartment of Mathematics\nUniversity of North Carolina\nChapel Hill27599NCUnited States\n", "Robert Hunt [email protected] \nDepartment of Mathematics\nUniversity of North Carolina\nChapel Hill27599NCUnited States\n", "Richard M Mclaughlin \nDepartment of Mathematics\nUniversity of North Carolina\nChapel Hill27599NCUnited States\n", "Hunter Woodie [email protected] \nDepartment of Mathematics\nUniversity of North Carolina\nChapel Hill27599NCUnited States\n" ]
[ "Department of Mathematics\nUniversity of North Carolina\nChapel Hill27599NCUnited States", "Department of Mathematics\nUniversity of North Carolina\nChapel Hill27599NCUnited States", "Department of Mathematics\nUniversity of North Carolina\nChapel Hill27599NCUnited States", "Department of Mathematics\nUniversity of North Carolina\nChapel Hill27599NCUnited States" ]
[]
We develop a theory of enhanced diffusivity and skewness of the longitudinal distribution of a diffusing tracer advected by a periodic time-varying shear flow in a straight channel. Although applicable to any type of solute and fluid flow, we restrict the examples of our theory to the tracer advected by flows which are induced by a periodically oscillating wall in a Newtonian fluid between two infinite parallel plates as well as flow in an infinitely long duct. These wall motions produce the well-known Stokes layer shear solutions which are exact solutions of the Navier-Stokes equations. With these, we first calculate the second Aris moment for all time and its long-time limiting effective diffusivity as a function of the geometrical parameters, frequency, viscosity, and diffusivity. Using a new formalism based upon the Helmholtz operator we establish a new single series formula for the variance valid for all time. We show that the viscous dominated limit results in a linear shear layer for which the effective diffusivity is bounded with upper bound κ(1 + A 2 /(2L 2 )), where κ is the tracer diffusivity, A is the amplitude of oscillation, and L is the gap thickness. Alternatively, for finite viscosities, we show that the enhanced diffusion is unbounded, diverging in the high-frequency limit. Non-dimensionalization and physical arguments are given to explain these striking differences. Asymptotics for the high-frequency behavior as well as the low viscosity limit are computed. We present a study of the effective diffusivity surface as a function of the non-dimensional parameters which shows how a maximum can exists for various parameter sweeps. Physical experiments are performed in water using particle tracking velocimetry (PTV) to quantitatively measure the fluid flow. Using fluorescein dye as the passive tracer, we document that the theory is quantitatively accurate. Specifically, image analysis suggests that the distribution variance be measured using the full width at half maximum statistic which is robust to noise. Further, we show that the scalar skewness is zero for linear shear flows at all times, whereas for the nonlinear Stokes layer, exact analysis shows that the skewness sign can be controlled through the phase of the oscillating wall. Further, for single frequency wall modes, we establish that the long-time skewness decays at the faster rate of t −3/2 as compared with steady shear scalar skewness which decays at rate t −1/2 . These results are confirmed using Monte-Carlo simulations.
10.1007/s40687-021-00257-4
[ "https://export.arxiv.org/pdf/2008.10717v2.pdf" ]
221,292,954
2008.10717
d26ed812487f8cad32d4e92947ada994f0704bb0
Enhanced diffusivity and skewness of a diffusing tracer in the presence of an oscillating wall Lingyun Ding Department of Mathematics University of North Carolina Chapel Hill27599NCUnited States Robert Hunt [email protected] Department of Mathematics University of North Carolina Chapel Hill27599NCUnited States Richard M Mclaughlin Department of Mathematics University of North Carolina Chapel Hill27599NCUnited States Hunter Woodie [email protected] Department of Mathematics University of North Carolina Chapel Hill27599NCUnited States Enhanced diffusivity and skewness of a diffusing tracer in the presence of an oscillating wall Preprint submitted to Research in the Mathematical Sciences March 14, 2021 arXiv:2008.10717v2 [physics.flu-dyn] 18 Apr 2023 analysis, Channel flow 2010 MSC: 82C70, 82C80, 34E13, 76R50(Robert Hunt), (Hunter Woodie)Passive scalarEffective diffusionSkewnessTaylor dispersionMultiscale * Corresponding author We develop a theory of enhanced diffusivity and skewness of the longitudinal distribution of a diffusing tracer advected by a periodic time-varying shear flow in a straight channel. Although applicable to any type of solute and fluid flow, we restrict the examples of our theory to the tracer advected by flows which are induced by a periodically oscillating wall in a Newtonian fluid between two infinite parallel plates as well as flow in an infinitely long duct. These wall motions produce the well-known Stokes layer shear solutions which are exact solutions of the Navier-Stokes equations. With these, we first calculate the second Aris moment for all time and its long-time limiting effective diffusivity as a function of the geometrical parameters, frequency, viscosity, and diffusivity. Using a new formalism based upon the Helmholtz operator we establish a new single series formula for the variance valid for all time. We show that the viscous dominated limit results in a linear shear layer for which the effective diffusivity is bounded with upper bound κ(1 + A 2 /(2L 2 )), where κ is the tracer diffusivity, A is the amplitude of oscillation, and L is the gap thickness. Alternatively, for finite viscosities, we show that the enhanced diffusion is unbounded, diverging in the high-frequency limit. Non-dimensionalization and physical arguments are given to explain these striking differences. Asymptotics for the high-frequency behavior as well as the low viscosity limit are computed. We present a study of the effective diffusivity surface as a function of the non-dimensional parameters which shows how a maximum can exists for various parameter sweeps. Physical experiments are performed in water using particle tracking velocimetry (PTV) to quantitatively measure the fluid flow. Using fluorescein dye as the passive tracer, we document that the theory is quantitatively accurate. Specifically, image analysis suggests that the distribution variance be measured using the full width at half maximum statistic which is robust to noise. Further, we show that the scalar skewness is zero for linear shear flows at all times, whereas for the nonlinear Stokes layer, exact analysis shows that the skewness sign can be controlled through the phase of the oscillating wall. Further, for single frequency wall modes, we establish that the long-time skewness decays at the faster rate of t −3/2 as compared with steady shear scalar skewness which decays at rate t −1/2 . These results are confirmed using Monte-Carlo simulations. 
Introduction An extremely important class of problems concerns how fluid motion can increase solute mixing. Since G. I. Taylor [50] first introduced the calculation showing that a pressure driven flow in a pipe leads to a greatly enhanced effective diffusivity, the literature on this topic has exploded in many directions spanning many disciplines. The mathematics of this problem is particularly important and just one of the many areas of Modern Applied Mathematics which Andy Majda pioneered, starting with work on developing a rigorous formulation characterizing how a scale separated flow with general streamline topology can give rise to an effective diffusivity [8,40], extending to non-scale separated flows showing anomalous results [9,10,11], and eventually yielding models of scalar intermittency [39,42] which produced explicit models for the full probability density function (PDF) of a passive scalar advected by a random, white in time linear shear layer [22,21,54,25,54,24]. Shortly following G. I. Taylor, Aris [6] presented an alternative approach for shear layers yielding a hierarchy for the spatial moments of the scalar field. More recent results about the steady shear flow have explored how geometry can be used to control these moments to seek different effective diffusivities [49,1], and even how geometry can be used to control how solute in pressure-driven flow can be delivered with either a sharp front or with a gradual build-up through a detailed study of the scalar skewness [2,3,4]. In many practical applications, flows are unsteady and therefore typically generate different properties than their steady counterparts. The first investigation of the Taylor dispersion in time-dependent flow dates back to Aris [7], who presented the study of a solute in pulsating flow through a circular tube. After that, a number of studies reported on cases involving a non-transient, single-frequency pulsating flow [18,26,57,45,33]. Most of those studies focused on pressure-driven flow; fewer studies have addressed wall driven flows. Numerical studies of the enhanced mixing induced by a single frequency Couette-Poiseuille flow are reported in [12,47], and recently a multiscale analysis for a single frequency Couette-flow yielded formulae for the enhanced diffusivity [14,13]. Recently, Vedel and Bruus [55,56] explored the case of a time-dependent, multifrequency flow and developed formulas of effective diffusivity. Our study develops the general theory for the enhanced diffusion and skewness for the case of an arbitrary, periodic time varying shear flow and then focuses upon the physically realizable flow induced by the oscillatory motion of a wall adjacent to a Newtonian fluid theoretically, computationally, and experimentally. First, we non-dimensionalize the problem and identify the non-dimensional parameters. Next, we derive the solutions to the Navier-Stokes equations resulting from such a wall motion, known as Stokes' second problem [28]. We see that in the high viscosity limit, this flow results in a time-varying linear shear layer. In turn, we compute the effective diffusivity produced by this flow, implementing a new formulation based on the Helmholtz operator which yields a new single series formula for the scalar variance in contrast to the double series formula in the literature, e.g. [55]. 
We establish an upper bound for the case of a time-varying linear shear, showing the maximum possible diffusion is set by the amplitude 2 of wall motion and the gap thickness of the parallel-plate channel and is independent of the frequency of motion. Alternatively, we demonstrate that, for finite viscosities, the effective diffusivity is unbounded in increasing frequency of the wall motion. These results are validated with experiments performed using a wall driven by a programmable linear motor. Particle tracking velocimetry shows that the experimental fluid motion is accurately predicted by the Stokes layer solutions. Image analysis with different camera exposure times suggests that dye distribution variances can be accurately measured using the full width at half maximum statistic. Experiments with fluorescein dye are carried out and compare favorably with the effective diffusion theory. We additionally study how the more nonlinear Stokes layer solutions can yield greater effective diffusivities than the linear counterpart. Moreover, we document that the nonlinear case (with finite but nonzero viscosity) generates a much larger vertical concentration gradient, which leads to enhanced vertical tracer concentration on transient timescales. Next, we prove that for the case of the time-varying linear shear layer, the scalar spatial skewness is zero for all time, while Monte-Carlo simulations for wall-driven flows show that at finite viscosities the skewness can be non-zero. Short-time asymptotics akin to prior work [3] are computed for the skewness and compared directly to the Monte-Carlo simulations. Finally, we present a complete mathematical analysis of the skewness showing how its sign can be completely controlled by the phase of the wall motion and further demonstrates that for single-frequency wall motions, the skewness decays to zero as t −3/2 for large time, faster than the familiar steady flow counterpart, which decays as t −1/2 . Theoretical calculations Governing equation and nondimensionalization 2.1.1. Stokes Layer We consider a layer of incompressible viscous fluid between two infinite parallel walls with gap thickness L. As sketched in the figure 1, the front wall is stationary, while the back wall is moving periodically parallel to itself with the velocity ξ(t) and the base frequency ω. The flow u(y, t) induced by the back moving wall satisfies the Navier-Stokes equations: ∂ t u = ν∂ 2 y u, u(y, 0) = 0, u(0, t) = 0, u(L, t) = ξ(t),(1) where ν is the fluid kinematic viscosity and the parallel-plate channel domain is R × Ω and x ∈ R, Ω = {y|y ∈ [0, L]}. When ξ(t) = Aω cos ωt, the long time solution of equation (1) is available in the chapter 4 of the book [28] or equation 17 in the article [44]. This model was extended by Ferry and others to visco-elastic fluids [30,44]. We derive the exact solution (with the transient term) and its high viscosity asymptotic expansion in the appendix 7.2.1 for completeness. In three dimensional space, we are interested in the duct R × Ω, Ω = {(y, z) |y ∈ [0, L], z ∈ [0, H]}. For the closed duct, the solid boundary imposes the no-slip boundary condition u| z=0,H = 0. For the open duct, we have the no-stress boundary condition at the free surface ∂u ∂z z=H = 0. In both of these domains, for the parameters we used in our experiments, the analysis in appendix 7.2.2 shows the Stokes layer solution in a parallel-plate channel is a good approximation for the region away from the boundary in the z-direction. 
Hence, we neglect the boundary in the z-direction in the following calculation. Advection-diffusion equation The passive scalar is governed by the advection-diffusion equation with a general timevarying shear flow u(y, z, t) and no-flux boundary conditions which takes the form ∂ t T + u(y, z, t)∂ x T = κ∆T, T (x, y, z, 0) = T I (x, y, z), ∂ n T | R×∂Ω = 0,(2) where κ is the diffusivity, T I (x, y, z) is the initial data and n is the outward normal vector of the boundary R × ∂Ω. Nondimensionalization With the change of variables Lx = x, Ly = y, Lz = z, L 2 κ t = t, κ L 2 ω 0 = ω, U = Aω, U u (y , z , t ) = u(y, z, t), U ξ (t ) = ξ(t), LΩ = Ω, T I (x , y , z )L −3 R×Ω T I (x, y, z)dxdΩ = T I (x, y, z), T (x , y , z , t )L −3 R×Ω T I (x, y, z)dxdΩ = T (x, y, z, t),(3) after dropping the primes, we obtain the nondimensionalized flow equation ∂ t u = Sc∂ 2 y u, u(y, 0) = 0, u(0, t) = 0, u(1, t) = ξ(t),(4) where Sc = ν/κ is the Schmidt number. The dimensionless frequency ω 0 also can be written as ω 0 = Wo 2 Sc, where Wo = L ω/ν is the Womersley number. When ξ(t) = cos ω 0 t, the 4 long time solution of equation (4) is u(y, t) = k=±1 exp (itkω 0 ) sinh e i π 4 √ kWoy 2 sinh e i π 4 √ kWo .(5) At a fixed time, the Womersley number uniquely determines the spatial shape of the Stokes layer solution. The advection-diffusion equation (2) becomes ∂ t T + Peu(y, z, t)∂ x T = ∆T, T (x, y, z, 0) = T I (x, y, z), ∂ n T | R×∂Ω = 0,(6) where Pe = U L/κ = AωL/κ is the Péclet number, the domain is R × Ω and x ∈ R, Ω = {y|y ∈ [0, 1]} for the two dimensional problem, R×Ω and x ∈ R, Ω = {(y, z)|y ∈ [0, 1], z ∈ R} for the three dimensional problem. Aris moment hierarchy The nth Aris moment is defined by T n (y, z, t) = ∞ −∞ x n T (x, y, z, t)dx. With the assumption T (±∞, y, z, t) = 0, the Aris moments satisfy the recursive relationship called the Aris equation, (∂ t − ∆)T n = n(n − 1)T n−2 + nPeu(y, z, t)T n−1 , T n (y, z, 0) = ∞ −∞ x n T I (x, y, z)dx, ∂ n T | R×∂Ω = 0,(7) where T n = 0 if n ≤ −1. The full moments of T are then obtained though the cross-sectional average of the momentsT n = 1 |Ω| Ω T n dydz, where Ω = {(y, z)|y ∈ [0, 1], z ∈ R} is the cross section and |Ω| is the area of Ω. In this following context, we use the overline to denote the cross sectional average. Applying the divergence theorem and boundary conditions gives dT n dt =n(n − 1)T n−2 + nPeu(y, z, t)T n−1 , T n (0) = 1 |Ω| Ω ∞ −∞ x n T I (x, y, z)dxdydz.(8) The multiscale analysis in appendix 7.1 suggests that, assuming a scale separation in the initial data, the solution of equation (2) can be approximated by a diffusion equation with an effective diffusion coefficient. Inspired by this observation, we study the longitudinal effective diffusivity through the cross sectional averageT . The effective longitudinal diffusivity is defined as κ eff = lim t→∞ Var(T ) 2t ,(9) where Var(T ) =T 2 −T 2 1 is the variance of the cross sectional averageT . In this paper, we use κ eff to denote the dimensional effective diffusivity computed by the dimensional Aris moment and use theκ eff = κ eff /κ to denote the non-dimensional effective diffusivity. 5 We are also interested in the symmetry properties ofT . 
Skewness is the lowest order integral measure of the asymmetry of a real-valued probability distribution, which is defined as S(T ) =T 3 − 3T 2T1 + 2T 3 1 T 2 −T 2 1 3 2 .(10) For a unimodal distribution, negative skewness commonly indicates that the distribution has the property median>mean while positive skewness indicates that the median<mean, see [38,4] for sufficient conditions which guarantee this correlation. The information of shape provided by the skewness could improve the design of microfluidic flow injection analysis [4,53] and chromatographic separation [17]. Enhanced diffusivity and skewness induced by a general periodic time-varying flow In this section, we derive the formulae for the enhanced diffusivity and skewness induced by a general periodic time varying flow u(y, t) which has the Fourier series representation u(y, t) = ∞ k=−∞ u k e ikω 0 t ,(11) where u k = ω 0 2π 2π/ω 0 0 u(y, t)e −ikω 0 t dt. Several observations and assumptions can simplify our calculation. Firstly, we take T (x, y, z, 0) = δ(x) as the initial data. Hence T 0 (y, z, 0) = 1 and T n (y, z, 0) = 0 for n ≥ 1 by the definition (7). Since the initial function and flow studied here are independent of z, the three dimensional advection-diffusion equation (6) reduces to an equation in two spatial dimensions. Secondly, to shorten the expression, we denote φ 0 = 1, λ 0 = 0 and φ n = √ 2 cos nπy, λ n = n 2 π 2 , n ≥ 1 as the eigenfunctions and eigenvalues of the Laplace operator in the cross section of the parallel-plate channel. Those eigenfunctions form an orthogonal basis on the cross section Ω with respect to the inner product f, g = To compute the effective longitudinal diffusivity, we need to compute the Aris moments T 0 , T 1 ,T 2 in turn. When n = 0, equation (7) becomes ∂ t T 0 − ∂ 2 y T 0 = 0, T 0 (y, 0) = 1, ∂ y T 0 | y=0,1 = 0.(12) The solution is T 0 = 1. When n = 1, equation (7) is ∂ t T 1 − ∂ 2 y T 1 = Peũ(y, t)T 0 , T 1 (y, 0) = 0, ∂ y T 1 | y=0,1 = 0.(13) Then T 1 has the series representation which takes the form T 1 = Pe ∞ k 1 =−∞ ∞ n=1 u k 1 , φ n φ n e ik 1 ω 0 t − e −tλn λ n + ik 1 ω 0 = Pe ∞ k 1 =−∞ Q (1) k 1 e ik 1 ω 0 t − ∞ n=1 u k 1 , φ n φ n e −tλn λ n + ik 1 ω 0 ,(14) where Q (1) k 1 = (−∆+ik 1 ω 0 ) −1 (u k −ū k ) and the inverse Helmholtz operator, b(y) = (−∆ + λ) −1 a(y), solves − ∂ 2 y b(y) + λb(y) = a(y), ∂ y b| y=0,1 = 0.(15) We note that b(y) has the integral representation b(y) = 1 √ λ   cosh √ λy 1 0 a(s) cosh √ λ(1 − s) ds sinh √ λ − y 0 a(s) sinh √ λ(y − s) ds , λ = −λ n . b(y) = − y 0 y 1 0 a(y 2 )dy 2 dy 1 + 1 0 y 0 y 1 0 a(y 2 )dy 2 dy 1 dy, λ = 0.(16) When λ = −λ n = −n 2 π 2 , a(y) should satisfy the solvability condition a, φ n = 0. In this case, the boundary value problem has infinite solutions. We choose the particular solution which satisfies b, γ n = 0, b(y) = (−∆ − λ n ) −1 a = lim λ→−λn (−∆ + λ) −1 a − (−∆ + λ) −1 a, φ n .(17) When the cross section Ω has more general geometry, b = (−∆+λ) −1 a becomes the solution of the Helmholtz equation (−∆ + λ)b = a on Ω with no-flux boundary conditions. When n = 2, equation (8) is: dT 2 dt = 2T 0 + 2Peũ(y, t)T 1 ,T 2 (0) = 0.(18) By solving this equation, we havē T 2 =2t + 2Pe 2 k 1 ,k 2 ∈Z ∞ n=1 u k 1 , φ n u k 2 , φ n −1 + e i(k 1 +k 2 )ω 0 t (k 1 + k 2 ) ω 0 (iλ n − k 1 ω 0 ) − 1 − e −λnt+ik 2 ω 0 t (λ n + ik 1 ω 0 ) (λ n − ik 2 ω 0 ) ,(19) where the summand is understood as an entire function whose value is determined by its power series. For example, f (z) = e z −1 z = 1 + z 2 + O(z 2 ), so f (0) = 1. 
The effective longitudinal diffusivity defined in (9) is theñ κ eff =1 + Pe 2 ∞ k=−∞ ∞ n=1 u k , φ n u −k , φ n λ n + ikω 0 = 1 + Pe 2 ∞ k=−∞ Q (1) k , u −k .(20) The double series representation for effective diffusivity is identical to equation (3.24) in [55], while the single series representation presented here is new. For the steady flowũ(y, t) = u 0 (y), the last expression in equation (20) becomes equation (1.30) in [51]. Moreover, by the divergence theorem, we havẽ κ eff =1 + Pe 2 −ũ∆ −1ũ = 1 + Pe 2 ∇∆ −1ũ · ∇∆ −1ũ = 1 + Pe 2 ũ H −1 ,(21) where u H −1 = (∇∆ −1 u) · (∇∆ −1 u) is the H −1 norm of u. Interestingly, the H −1 norm is widely used for measuring mixing efficiency in the field of chaotic advection [52,36,37,5]. It also appears in the effective diffusivity here which is a measurement of mixing efficiency in this shear dispersion problem. Comparing equation (55) and (13), we can see that the solution θ of the cell problem is the first Aris moment T 1 . The formula of effective diffusivity (57) is equivalent to equation (9). Hence, we conclude that the Aris moment approach and the multiscale analysis approach yield the same effective diffusivity for the time-varying shear flow. Of course, we note that the limiting procedure here, with → 0, may be different than the Aris moment approach where the limit is t → ∞. To compute the skewness of the cross sectional averageT , we need to compute the Aris moments T 2 ,T 3 in turn. When n = 2, equation (7) is ∂ t T 2 − ∂ 2 y T 2 = 2T 0 + 2Peũ(y, t)T 1 , T 2 (y, 0) = 0, ∂ y T 2 | y=0,1 = 0.(22) Hereũ(y, t)T 1 has the series representatioñ u(y, t)T 1 =Pe k 1 ,k 2 ∈Z ∞ n 1 =1,n 2 =0 u k 1 , φ n 1 u k 2 φ n 1 , φ n 2 φ n 2 e i(k 1 +k 2 )ω 0 t − e ik 2 ω 0 t−λn 1 t λ n 1 + ik 1 ω 0 .(23) Hence, T 2 has the series representation T 2 =2t + 2Pe 2 k 1 ,k 2 ∈Z ∞ n 1 =1,n 2 =0 u k 1 , φ n 1 u k 2 φ n 1 , φ n 2 φ n 2 λ n 1 + ik 1 ω 0 × −e −λn 2 t + e i(k 1 +k 2 )ω 0 t λ n 2 + i (k 1 + k 2 ) ω 0 − e −λn 2 t − e −λn 1 t+ik 2 ω 0 t −ik 2 ω 0 + λ n 1 − λ n 2 .(24) When n = 3, equation (8) is dT 3 dt = 6T 1 + 3Peũ(y, t)T 2 ,T 3 (0) = 0.(25) T 1 = 0 follows from the choice of the frame of reference. Hence, we obtain T 3 =6Pe 3 k 1 ,k 2 ,k 3 ∈Z ∞ n 1 ,n 2 =1 u k 1 , φ n 1 u k 2 φ n 1 , φ n 2 u k 3 , φ n 2 λ n 1 + ik 1 ω 0 × 1 −ik 2 ω 0 +λn 1 −λn 2 1−e −t(λn 1 −i(k 2 +k 3 )ω 0 ) λn 1 −i(k 2 +k 3 )ω 0 − 1−e −tλn 2 +ik 3 tω 0 λn 2 −ik 3 ω 0 − 1 λn 2 +i(k 1 +k 2 )ω 0 1−e −tλn 2 +ik 3 tω 0 λn 2 −ik 3 ω 0 − −1+e i(k 1 +k 2 +k 3 )tω 0 i(k 1 +k 2 +k 3 )ω 0 .(26) 8 With the definition of skewness (10) andT 1 = 0, we have S(T ) = 3Pe 3 √ 2κ 3 eff t k 1 ,k 2 ∈Z ∞ n 1 ,n 2 =1 u k 1 ,φn 1 u k 2 φn 1 ,φn 2 u −k 1 −k 2 ,φn 2 (λn 1 +ik 1 ω 0)( λn 2 +i(k 1 +k 2 )ω 0) + O(t − 3 2 ) = 3Pe 3 k 1 ,k 2 ∈Z Q (2,1) k 1 ,k 2 ,u −k 1 −k 2 √ 2t 1+Pe 2 ∞ k=−∞ Q (1) k ,u −k 3 2 + O(t − 3 2 ).(27) where Q (2,1) k 1 ,k 2 = (i(k 1 + k 2 )ω 0 − ∆) −1 Q (1) k 1ũ k 2 − Q (1) k 1ũ k 2 . For steady flow, equation (27) reduces to equation (24) in the supplementary materials of article [2]. For a single frequency flow, k i ∈ {−1, 1} and δ k 1 +k 2 ,−k 3 = 0 for all combinations of k 1 , k 2 , k 3 . 
Hence, the leading order in equation (27) vanishes, which leads to the long time asymptotic expansion of skewness S(T ) = 3Pe 3 √ 2κ 3 eff t 3 k 1 ,k 2 ,k 3 ∈Z ∞ n 1 ,n 2 =1 u k 1 , φ n 1 u k 2 φ n 1 , φ n 2 u k 3 , φ n 2 λ n 1 + ik 1 ω 0 × 1 −ik 2 ω 0 +λn 1 −λn 2 1 λn 1 −i(k 2 +k 3 )ω 0 − 1 λn 2 −ik 3 ω 0 − 1 λn 2 +i(k 1 +k 2 )ω 0 1 λn 2 −ik 3 ω 0 − −1+e i(k 1 +k 2 +k 3 )tω 0 i(k 1 +k 2 +k 3 )ω 0 + O(e −λ 1 t ).(28) This expression implies that single frequency flows or multiple frequency flows with suitable frequency separation could relax more quickly to a symmetricT than other flows, e.g., steady Poiseuille flow. Enhanced diffusivity induced by an oscillating wall With the formula we derived in the previous section, we present a detailed analysis of the enhanced diffusivity induced by the Stokes layer solution and its dependence on the parameters. With the formulae for the Stokes layer solution (5) and second Aris moment (19), we haveT 2 =2t + 2Pe 2 k 1 ,k 2 =±1 ∞ n=1 −1+e i(k 1 +k 2 )ω 0 t (k 1 +k 2 )ω 0 (iλn−k 1 ω 0 ) − 1−e −λnt+ik 2 ω 0 t (λn+ik 1 ω 0 )(λn−ik 2 ω 0 ) × 2 j=1 e i π 4 √ k j Wo((−1) n cosh(e i π 4 √ k j Wo)−1) √ 2sinh(e i π 4 √ k j Wo)(π 2 n 2 +ik j Wo 2 ) .(29) With equation (20), the effective longitudinal diffusivity induced by the Stokes layer solution is theñ κ eff =1 + Pe 2 Wo 2 2 √ 2 cosh( √ 2Wo) − cos( √ 2Wo) − sin( √ 2Wo) + sinh( √ 2Wo) Wo Wo 4 − ω 2 0 + 1 √ ω 0 ω 2 0 − Wo 4 cos √ 2ω 0 − cosh √ 2ω 0 × 4 √ 2e π 4 i cos Wo √ 2 cosh Wo √ 2 sin e π 4 i √ ω 0 − sinh e π 4 i √ ω 0 −(cos( √ 2Wo) + cosh( √ 2Wo) + 2) sin √ 2ω 0 − sinh √ 2ω 0 .(30) The three non-dimensional parameters ω 0 , Wo, Sc are connected by the relation ω 0 = Wo 2 Sc. To study limiting cases, we need to assume two of them are independent and eliminate the remaining parameter from equation (30). We first study the low and high limit of Womersley number with a given ω 0 , i.e. Sc becomes a function of Wo. The expansion (66) shows that the Stokes layer solution converges to the linear shear flow u(y, t) = y cos (ω 0 t) as Wo → 0. In the low Womersley number limit, the effective diffusivity (30) becomes κ eff = 1 + Pe 2 2ω 2 0   1 − √ 2 √ ω 0 sin √ ω 0 √ 2 + sinh √ ω 0 √ 2 cos √ ω 0 √ 2 + cosh √ ω 0 √ 2   + O(Wo 4 ),(31) which is the same as formula (60) obtained by the homogenization approach. We also can compute the asymptotic expansion in the high Womersley number limit Wo → ∞ which yieldsκ eff =1 + Pe 2 Wo 2 2 √ 2 sinh √ 2ω 0 − sin √ 2ω 0 √ ω 0 ω 2 0 − Wo 4 cos √ 2ω 0 − cosh √ 2ω 0 − 1 Wo 5 − Woω 2 0 + O e − √ 2 2 Wo .(32) Either low viscosity or large gap thickness yields a large Womersley number. In the low viscosity limit, since no fluid motion is generated for a parallel wall moving in an ideal fluid, the boosted diffusivity vanishes. The numerical simulation results in figure 12 show that the mixing is confined in a thinner boundary layer for a smaller viscosity. Next, we study the limiting cases involving the non-dimensional frequency ω 0 with fixed Womersley number. In other words, we change ω 0 while keeping the spatial shape of the Stokes layer unchanged. 
As ω 0 → 0, we havẽ We have the following asymptotic expansion as ω 0 → ∞: κ eff =1 + Pe 2 2Wo 2 cosh √ 2Wo − cos √ 2Wo − sin √ 2Wo + sinh √ 2Wo √ 2Wo + cos √ 2Wo + cosh √ 2Wo + 2 cos Wo √ 2 cosh Wo √ 2 + 2 3   + O(ω 2 0 ).(33)κ eff =1 + Pe 2 Wo 2 2 √ 2 cosh( √ 2Wo) − cos( √ 2Wo) sin( √ 2Wo) + sinh( √ 2Wo) Woω 2 0 − cos( √ 2Wo) + cosh( √ 2Wo) + 2 ω −5/2 0 + O ω −7/2 0 .(34) One may be interested inκ eff as ω 0 → ∞ or ω 0 → 0 for a given Schmidt number Sc. In this case, the Stokes shear wave becomes a steady flow u(t, y) = y + O(ω 0 ) as ω 0 → 0. Equation (30) becomes the classical result of Taylor dispersion for a steady moving wall κ eff =1 + Pe 2 1 240 + ω 2 0 7 − 155Sc 2 3628800Sc 2 + O ω 5/2 0 .(35) When ω 0 → ∞, we havẽ κ eff =1 + Pe 2 Sc 2 √ 2 √ Sc + 1 (Sc + 1)ω 3/2 0 + O e − min(1, 1 √ Sc ) √ 2ω 0 .(36) These asymptotic expansions imply the potential existence of a maximum effective diffusivity as Schmidt number or Womersley number is varied when ω 0 is given. We denote the Schmidt number and Womersley number for reaching the maximum ofκ eff as f Sc (ω 0 ) and f Wo (ω 0 ) respectively. When ω 0 is large, equation (36) leads to f Sc (ω 0 ) ∼ 1 3 3 53 + 6 √ 78 + 3 53 − 6 √ 78 + 2 ≈ 2.3146, ω 0 → ∞.(37) When ω 0 is small, we numerically calculate the maximum using (33) and find The results of other cases can be obtained by the relation ω 0 = Wo 2 Sc. Figure 2 shows how the enhanced diffusivity varies for different dimensionless parameters. The black curves represent the functions f Wo (ω 0 ), f Sc (ω 0 ) and the red dashed curve represents their asymptotic results. f Wo (ω 0 ) ∼2.49426, ω 0 → 0.(38) To further explore the maximal properties, we plot in figure 3 the normalized enhanced diffusivity as a function of the fluid kinematic viscosity with experimental parameters. As the viscosity increases, the effective diffusivity first reaches its maximum value then decreases to a plateau. The difference between the peak and the plateau is smaller for smaller frequencies. Due to this phenomenon, it is hard to distinguish the maximum and plateau value ofκ eff at small frequencies in figure 2. All of these results are obtained with a fixed Pe, which occurs as the amplitude A → 0 as ω → ∞. Hence, in those cases, the effective diffusivity vanishes for large frequency. Things are different in dimensional variables. Figure 3 suggests that higher dimensional frequency may yield higher effective diffusivity for a fixed amplitude A. Based on this observation, we are next interested in studying the effective diffusivity at large frequencies while holding all other physical parameters constant. For linear shear flow, the dimensional effective diffusivity κ eff is bounded by a constant set solely by the gap thickness L and the amplitude of wall motion A, κ eff ≤ κ 1 + Pe 2 2ω 2 0 = κ 1 + A 2 2L 2 ,(39) which follows from equation (31). Alternatively, at finite viscosities, the Stokes wave solution induces an effective diffusivity which is unbounded in the high frequency limit ω → ∞ and has the following asymptotic expansion: κ eff = κ 1 + A 2 ν √ ω 2 √ 2L ( √ κ + √ ν) (κ + ν) + O e − min( 1 √ κ , 1 √ ν )L √ 2ω .(40) The log-log plot (4) shows the exponential convergence of κ eff to its high frequency asymptotic expansion (40). One may be interested in whether such growth of the variance as a function of high frequency is visible at large but finite times. This is a question which involves commuting limits and joint asymptotic expansion. 
Careful examination of the formula in equation (29) shows the high frequency expansion at fixed time produces a linearly growing term in time whose slope exactly matches that in equation (30) as well as the correction which is bounded in both frequency and time. Hence, the time and high-frequency limits will commute in this case. There could be cases of incommensurate limits amongst the non-dimensional parameters. The fluid viscosity and tracer diffusivity are both functions of temperature. For instance, they may satisfy the Stokes-Einstein relationship (page 320 of the book [27]) κ(θ) = kθ 6πη(θ)r , where k = 1.3807×10 −23 J ·K −1 is the Boltzmann constant, r is the hydrodynamic radius of the tracer, η is the dynamic viscosity, and θ is the absolute temperature with the unit Kelvin K. Of course, this relationship is correct for a small spherical particle experiencing Brownian motion: the solute is a molecule, and not a sphere. Still, measuring the diffusivity at one temperature can be nonetheless used to calculate an effective hydrodynamic radius. Hence, equation (37) and (38) could provide good guidance for finding the temperature for the maximum ofκ eff (θ). Since κ eff (θ) =κ eff (θ)κ(θ), we should also notice that the temperature for reaching the maximum of κ eff (θ) andκ eff (θ) could be different. We consider the case of the fluorescein diffusion in water. As a function of the temperature, the diffusivity of fluorescein takes the form κ(θ) = With these formulas, we plot the Schmidt number as a function of temperature for θ ∈ tures, the minimum Schmidt number over this range of temperatures is 172.2862, and thus, no interior maximum is observed. In fact, over this range of temperatures,κ eff (θ) increases monotonically as seen in panel (b) of figure 5. A tracer-fluid system with a Schmidt number smaller than 2.3146 could exhibit an interior effective diffusivity maximum as a function of temperature. Skewness In this section, we utilize the formulae derived in section 2.3 to study the skewness ofT for left-right symmetric initial data. At infinite viscosity, the Stokes layer solution (5) becomes a periodic time-varying linear shear flow yξ(t). It is fairly straightforward to show that the passive scalar skewness is generally zero for initial data δ(x) by the analysis of parity. Observe that the linear shear admits an odd cosine expansion in y and produces an odd T 1 cosine expansion in y. In turn, we see that T 2 is even from inspection, since the driver in the equation for T 2 is the product of two functions u and T 1 which are odd about the centerline of the channel y = 1/2. Lastly, the driver for the T 3 equation contains T 1 (odd) and the product ofũ (odd) and T 2 (even). When computing the net third moment by cross-sectional averaging,T 1 = 0 as well asũT 2 = 0. Hence, the skewness is zero for a linear shear. Alternatively, it is easy to check that (y − 1 2 )φ n 1 , φ n 2 (y − 1 2 ), φ n 2 = 0 for any pair of (n 1 , n 2 ). Then, we also see the skewness is zero for all time from equation (26). At finite viscosities, the skewness ofT has more interesting behavior. With the formula for the Stokes layer solution (5), we have u k 1 , φ n 1 = e i π 4 √ k 1 Wo((−1) n 1 cosh(e i π 4 √ k 1 Wo)−1) √ 2sinh(e i π 4 √ k 1 Wo)(π 2 n 2 1 +ik 1 Wo 2 ) , u k 3 , φ n 2 = e i π 4 √ k 3 Wo((−1) n 2 cosh(e i π 4 √ k 3 Wo)−1) √ 2sinh(e i π 4 √ k 3 Wo)(π 2 n 2 2 +ik 3 Wo 2 ) , u k 2 φ n 1 , φ n 2 = Therefore the formula of S(T ) is available by applying formula (26) and (10). 
Figure 6 shows the coefficient of t − 3 2 in the long time asymptotic expansion of S(T ) with the wall velocity cos(2πt + s). As predicted by equation (28), the sign of the skewness changes periodically. The skewness sign stays positive longer than negative when the phase shift s of the wall motion is zero. However, it stays strictly positive when s = π/2 and strictly negative when s = −π/2. This observation suggests that we can control the symmetry properties ofT by simply shifting the phase. In addition, figure 6 shows that the skewness is not zero at the end of each period when the wall goes back to the initial position. The numerical simulation results in figure 12 also show that the distribution of tracer is asymmetric about the centerline of the initial data x = 0. This phenomenon implies that, even with periodic flow in time, the symmetry of the tracer's distribution may break in the presence of diffusion. Also note that, upon close inspection of the linear shear case documented in figure 10 and 11 , one can see broken symmetry near the top and bottom of the graphs, though when cross-sectionally averaged, this effect cancels. We also are interested in the short time behavior of the skewness. Article [3] presented a method for computing the short-time asymptotics of the Aris moment in an arbitrary crosssectional domain. They found there is a plateau of skewness ofT at short time which only depends on the geometry of the cross section. They denoted this quantity as the geometric skewness. The geometric skewness is independent of the Péclet number. Hence, it can be computed by neglecting the molecular diffusion. For given initial data T I (x, y), the solution can be obtained by method of characteristics as T (x, y, t) = T I (x − t 0 u(y, s)ds, y), then T n = ∞ −∞ x n 1 0 T I (x − t 0 u(y, s)ds, y)dydx. For general initial data, this leads to a lengthy analytical formula for the geometric skewness, which is too long to list here. We will study its behavior in section 5 and compare with computational simulations, which will also show that the skewness depends significantly on the phase shift at short times. Computational approaches In this section, we describe two computational approaches for solving the advectiondiffusion equation: the Monte-Carlo method and the Fourier spectral method. The Monte-Carlo method is advantageous to problems involving complex geometry and is ideally suited to parallel computing. Moreover, its convergence rate only depends on the number of samples which makes it particularly useful for higher-dimensional integrals. Based on those features, the Monte-Carlo methods are more suitable for computing the Aris moments on larger time scales. However, it is expensive to store the positions of millions of particles at every observation time instant. The spectral method is more efficient and flexible to compute the distribution of the tracer for different parameters on a shorter time scale, which can remedy the weakness of Monte-Carlo method. First, we introduce the setup of the Monte-Carlo method. The Monte-Carlo simulations are used to compare with the laboratory experiments described in the following section. To get a global approximation of the solution of the advection-diffusion equation, we adopt the forward Monte-Carlo method which is based on the Fokker-Planck equation. We determine the initial position of 10 7 particles according to the intensity distribution of the experimental photographs on a uniform grid. 
We assume that the tracer is uniformly distributed on the cross section of the channel. Each particle's trajectory satisfies the stochastic differential equation (SDE), dX t = u(Y t , t)dt + √ 2κdW 1 , dY t = √ 2κdW 2 , dZ t = √ 2κdW 3 .(43) where u(y, t) is a shear flow, κ is the molecular diffusivity and dW i are independent white noises. We solve the SDE by the Euler scheme with a time increment ∆t = 0.05 s which resolves the frequencies studied experimentally, X t i+1 = X t i + u(Y t i , t i )∆t + √ 2κ∆tn i,1 , Y t i+1 = Y t i + √ 2κ∆tn i,2 , Z t i+1 = Z t i + √ 2κ∆tn i,3 .(44) Here, n i,j are independent and identically distributed standard normal random variables which are produced by the Mersenne Twister uniform random number generator and Marsaglia polar method [41]. We impose the billiard-like reflection rules on the boundary plane z = 0 cm, z = 16 cm, y = 0 cm, y = L. We note that the tank height is chosen to be 16 cm to match the experimental height. At a given time t, the histogram of the N = 10 7 particle positions is an approximation of the solution T (x, y, z, t). The cross-sectional average of nth Aris moment can be approximated by the formulā T n (t i ) = 1 N N j=1 X n t i ,j ,(45) where X t i ,j is the x-coordinate of jth particle at time t i . The simulations are performed on UNC's Longleaf computing cluster by using 200 cores. The computation takes approximately 8 h to perform 3 × 10 5 time steps needed to resolve the flow and reach the diffusion timescale L 2 /κ. Additionally, we utilize the Fourier spectral method to solve the two-dimensional advectiondiffusion equation (2) Experimental methods Experimental setup Experiments were performed in a 50 × 25 × 30 cm glass tank. To reduce effects of thermal convection, the fluid was density stratified using the two bucket method [29,46] with sodium chloride as the stratifying agent. The density of the background fluid linearly decreases with height, with total variation approximately 0.1 g/cc over 20 cm. One wall, made of 0.75 in thick glass, is fixed to both sides of the tank, while a second 0.25 in thick aluminum wall is connected from above to a linear stage driven by an Oriental motor model ARM66MC with driver model ARD-A, which translates the wall in the horizontal direction parallel to the fixed wall. The motor is controlled by custom software written in MATLAB for the ATMEL ATMEGA2560 microcontroller and implemented using an Arduino MEGA 2560. To prepare the tracer, fluorescein powder is mixed with saline solution of density 1.05 g/cc to a concentration of 0.9 g/L. About 50 µL of fluorescein solution is injected between the walls near the center of the interrogation region and allowed to freely diffuse for several hours to make the dye uniformly in the cross-section. The tank and motor frame are draped in black fabric to block ambient light, and a blacklight is placed on top of the tank to illuminate the tracer. The illuminated fluorescein dye is photographed from the side using a Nikon D750 which is synchronized with the oscillating wall period using the Arduino. A first-surface mirror tilted back 45 degrees from vertical is placed below the tank to allow for easily viewing the dye from below. To capture particle tracking velocimetry (PTV) images, saline solution of density 1.05 g/cc is mixed with 50 micron diameter hollow glass microspheres and injected into the interrogation region. 
A laser sheet with normal in the vertical direction illuminates the fluid which is viewed from below using 30 fps video captured on a Nikon D750 equipped with a Nikon AF-S micro Nikkor 105 mm lens. PTV processing is performed in MATLAB using PTVlab [20]. Figure 1 shows a schematic of the experimental setup from three different views. Image Analysis To process the dye images, a Gaussian filter is applied, and then the intensity is integrated along the vertical direction. Then the full width at half maximum (FWHM) is measured as a function of time, first for the case of no wall movement to measure the bare diffusivity of sodium fluorescein in the saline solution, then after turning on the wall to measure the effective diffusivity. In a distribution, the FWHM statistic is the difference between the two values of the independent variable at which the dependent variable is equal to half of its maximum value. The motivation for using the FWHM statistic in lieu of moment based measurements is summarized in figure 7 and table 1. Photographs with different exposure times of the same dye distribution are taken after the dye has been diffusing for several hours. This provides a sequence of images with different signal to noise ratios of the same dye concentration field. Small noise in the far field gives a large contribution to the moments as we see a large variation of the variance computed by the moment integral method in the second row of table 1. To obtain a measurement of variance that is more robust to noise, we can take advantage of the explicit formula of the tracer's distribution. The multiscale analysis in appendix 7.1 shows thatT can be approximated as a normal distribution at long times, T =T 0 √ 4πtκ eff exp − (x −T 1 ) 2 4tκ eff .(46) Hence the relationship between FWHM and the effective diffusivity κ eff is FWHM = 2 √ 2 ln 2 T 0 2tκ eff ≈ 2.355 T 0 2tκ eff .(47) The first row of table 1 shows the FWHM is more robust to noise, particularly when the signal-noise ratio is small. For these reasons, we adopt the FWHM for measuring the effective diffusivity. Experimental and theoretical results Here, we present a comparison of experimental results with the theory developed above as well as Monte-Carlo and pseudo-spectral simulations for the evolving passive scalar field. figure 7. Here, the label 'BNS' indicates the background noise was subtracted. The label 'FWHM' indicates the variance was calculated by the full width at half maximum method which is given in equation (47), and '2nd Moment' indicates the variance was calculated by the second Aris full moment which is given in equation (9). this figure correspond to trial 3 (panel a) and 7 (panel b) from table 2. A few comments regarding our experimental data. First, since the width of the initial blobs is larger in panel b, the observed spreading is less than that in panel a even though the effective diffusivities are similar. Second, in the absence of a flow, the cloud would have spread at a much slower rate than those observed in this figure. Table 2 shows the detailed comparison between the experimental campaign and theoretical prediction of the effective diffusivity. First, we remark that the bare molecular diffusivity shows some variation. This is primarily due to the unexpected dependence of fluorescein's diffusivity upon the concentration of NaCl which has been observed in other work by Gupta et al. [32]. In future work, we will explore this subtle effect. 
Consequently, for the present case, we always measure the diffusivity first in our experiments. We can gain some insight into the transient effects giving rise to the long time limiting effective diffusion by studying the short time behavior using the spectral method with different diffusivities. Shown in figure 10 are images of the scalar distributions, each case output at 5 different times taken on quarter cycles of the wall oscillation. The top cases correspond to a pure time-varying linear shear with a single frequency sine wall motion, while the bottom panels correspond to cases with a nonlinear Stokes layer, with parameters ν = 0.001 St, ω = 0.2π rad/s, L = 0.2 cm, A = 1 cm. The left panels have zero diffusivity, while the right panels have κ = 10 −5 cm 2 /s. Observe in the case of the Stokes layer, the scalar is stretched into an extremely thin filament in the upper part of the channel which diffuses rapidly in the non-zero diffusivity case. Compared to the linear shear, this case diffuses faster locally in the upper channel. The case with linear shear is more uniformly mixed across the channel. In the nonlinear Stokes layer case, the upper channel mixes very quickly. This in turn increases the vertical concentration gradient, which gives rise to increased transient vertical diffusive tracer mixing. To demonstrate this, we plot the integral of the absolute value of the vertical concentration gradient in the right panel of figure 11, for the cases examined in the left panel of zero, finite, and infinite viscosity showing that Table 2: Comparison of the experimental and theoretical effective diffusivity: A(cm) is the amplitude of the wall motion, ω (rad· s −1 ) is the frequency of the wall motion, L (cm) is the gap thickness, ρ (g/cc) is the local density, κ (cm 2 /s) is the molecular diffusivity measured from the pure diffusion stage in the experiment, κ eff,e (cm 2 /s) is the effective diffusivity computed by the FWHM approach from the experimental data, the viscosity is ν = 0.0113 St, and κ eff,t (cm 2 /s) is the theoretical value based on the experimental parameters. The last column is the relative error between experimental and theoretical effective diffusivity, the finite viscosity Stokes layer has a significantly larger concentration gradient. This effect is perhaps more pronounced than in the more familiar steady pressure driven flow as a full cycle returns the Lagrangian map to its initial configuration. κ eff,t −κ eff,e κ eff,t . The left panel of figure 11 shows the mixing result at the end of one period of wall motion for different flows. Flows create more dispersion in the longitudinal direction than the bare molecular diffusion. However, the physical mechanisms between a linear shear flow and a nonlinear Stokes layer flow give rise to very different enhanced diffusivities: for the linear shear case, κ eff = 0.00013 cm 2 /s, 13.14 times the bare molecular diffusivity. This value is nearly the upper bound (39) for a linear shear described above, which in this case is 13.5 times the molecular diffusivity. On the other hand, in the nonlinear Stokes layer case, κ eff = 0.00041 cm 2 /s, which is 40.96 times the bare molecular diffusivity. To further explore the effects of the diffusivity and viscosity upon the mixing using the spectal method, we present figure 12. This figure shows a sweep of viscosities (decreasing from left to right) and diffusivities (decreasing from top to bottom) which depicts the nature of the boundary layer for the passive scalar. 
All of the mixing as the diffusivity and viscosity are decreased occurs in a small boundary layer adjacent to the moving wall. We next examine the skewness behavior for a nonlinear Stokes layer with parameters ν = 0.01 St, ω = 0.2π rad/s, L = 0.2 cm, A = 1 cm, and κ = 5 × 10 −6 cm 2 /s and document how its sign can be controlled the initial phase of sinusoidal wall motion. The initial function is a symmetric function T I (x, y) = √ 2πσ −1 exp − x 2 2σ 2 and σ = 1/40. Shown in figure 13 is the evolution of the total skewness, computed using Monte-Carlo simulations, as the phase of the wall motion is changed. Clearly the skewness shows rapid oscillation on these timescales, and the phase clearly can be used to adjust the sign of the skewness. Lastly, in figure 14 we show the short time comparison of the Geometric skewness derived in the absence of diffusion with that computed with diffusion via Monte-Carlo simulations. Conclusions In this paper, we develop a theory of enhanced diffusivity and skewness of the longitudinal distribution of a diffusing tracer advected by a periodic, time-varying shear flow in a straight channel. Based upon this, we present a detailed study of the tracer advected by the flows which are induced by a periodically oscillating wall in a Newtonian fluid between two infinite parallel plates as well as in an infinitely long duct. Using a new formalism built upon the Helmholtz operator, we derive new single series formulas for the variance, effectively re-summing the double sum formulae presented in literature, e.g., Vedel et al. [55]. In the study of the effective diffusion, we find the optimal Schmidt numberf Sc (ω 0 ) or Womersley number f Wo (ω 0 ) for mixing when the dimensionless frequency ω 0 is given. The asymptotic analysis of the effective diffusivity shows that f Sc (ω 0 ) ≈ 2.3146 for a large ω 0 and f Sc (ω 0 ) ≈ ω 0 /6.2213 for a small ω 0 . Via the relation ω 0 = Wo 2 Sc, we have f Wo (ω 0 ) ≈ ω 0 /2.3146 for a large ω 0 and f Wo (ω 0 ) ≈ 2.49426 for a small ω 0 . For fluoresceinwater mixtures, we document that no interior maximum of effective diffusivity is observed because this mixture's Schmidt numbers are too large (in this case the Schmidt depends monotonically upon the temperature. Other solute-fluid mixtures may possess enhanced diffusivities with internal maxima as a function of temperature. Further, a new mixing mechanism is identified distinguishing linear shear from the nonlinear Stokes layer. A bound for the enhanced diffusion for the linear case is derived and shown to solely depend on the aspect ratio and molecular diffusivity, whereas for the nonlinear Stokes layer occurring at finite viscosity, the enhanced diffusion is unbounded in increasing frequency. In the study of the skewness, we show that the single-frequency flow can create a more symmetric distribution of the tracer than the steady flow, with skewness decay rate t −3/2 compared to t −1/2 for the steady case. As an extreme example, we prove the periodic time varying linear shear flow case has zero skewness for all time. Besides that, we document how the phase of the wall motion can be used to control the sign of the skewness. Experiments compare favorably with the theory and numerical simulations. PTV flow measurements show that the experiments are well predicted by the Stokes layer solutions. Image analysis of photographs taken at exposure times suggests that the full width at half maximum statistic is a good measure of the scalar variance and is robust to noise. 
Advection-diffusion experiments with a robotically controlled moving wall show that the theory for effective diffusivity predicts the observed experimental spreading on diffusion timescales. Future directions we intend to explore include utilizing the lubrication theory and center manifold theory [43,15,16] to assess the role of non-planar wall motions and their ability to further increase the effective diffusivity, along with pushing the wall motion into the stochastic regime to further understand how random wall motion creates intermittency in a passive scalar [23]. Appendix Multiscale analysis Following the prior work [25], assuming a scale separation in the initial data, we utilize multiscale analysis below to derive the effective diffusion equation induced by the periodic time-varying shear flow. We consider the following advection-diffusion equation in the parallel-plate channel with impermeable boundaries ∂ t T + u(y, t)∂ x T = κ∆T, T (x, y, 0) = T I x a , ∂ y T | y=0,L = 0. We assume u(y, t) y,t = 0, where the angle bracket denotes the average of u(y, t) over the region y ×t ∈ [0, L]×R + . Unlike the non-dimensionalization presented in section 2.1.3, here, we need two different characteristic lengths in the x and y direction. Hence, we introduce the following change of variables, ax = x, Ly = y, = L a , L 2 κ 2 t = t, κ L 2 ω = ω, T T = T, U = Aω Pe = LU κ , U u y , t 2 = u(y, t).(49) We can drop the primes without confusion and obtain the non-dimensionalized equation, ∂ t T + Pe u y, t 2 ∂ x T = ∂ 2 x T + 1 2 ∂ 2 y T, T (x, y, 0) = T I (x), ∂ y T | y=0,1 = 0.(50) We seek the asymptotic approximation to T (x, y, t) in the limit → 0 that has the following multiscale expansion, T (x, y, t) = T 0 (x, ξ, y, t, τ ) + T 1 (x, ξ, y, t, τ ) + 2 T 2 (x, ξ, y, t, τ ) + O( 3 ),(51) with two different scales in the x direction: x (slow), ξ = x/ (fast), and in the t direction: t (slow), τ = t/ 2 (fast). Consequently, the differential operators along the x and t directions will be replaced ∂ x → ∂ x + 1 ∂ ξ , ∂ 2 x → ∂ 2 x + 2 ∂ x ∂ ξ + 1 2 ∂ 2 ξ , ∂ t → ∂ t + 1 2 ∂ τ .(52) We would have a hierarchy of equations, as one would see in a classical homogenization problem, such that the following equation holds for arbitrarily small . For O( −2 ), we have: LT 0 = 0, T 0 (x, ξ, y, t, τ )| t=0,τ =0 = T I (x),(53) where LT = ∂ τ + Peu(y, τ )∂ ξ − ∂ 2 ξ − ∂ 2 y T . Since the initial condition is a function of the variable x only, we have T 0 (x, ξ, y, t, τ ) = T 0 (x, t). For O( −1 ), we have LT 1 = −Peu(y, τ )∂ x T 0 + 2∂ x ∂ ξ T 0 , T 1 (x, ξ, y, 0, 0) = 0.(54) The last term on the right hand side is zero. The solvability condition is guaranteed by −Peu(y, τ )∂ x T 0 y,τ = −Pe∂ x T 0 u(y, τ ) y,τ = 0. Due to the linearity of the equation, the general form of the solution is T 1 = ∂ x T 0 (x, t)θ(ξ, y, τ ) + C(x, t). Therefore, we have Lθ = −Peu, θ(ξ, y, 0) = 0, ∂ y θ| y=0,1 = 0.(55) Since the initial condition and the driver are independent of ξ, we have θ(ξ, y, τ ) = θ(y, τ ). For O( 0 ), we have LT 2 = −∂ t T 0 − Peu(y, τ )∂ x T 1 + ∂ 2 x T 0 + 2∂ x ∂ ξ T 1 , T 2 (x, ξ, y, 0, 0) = 0.(56) Since θ is independent of ξ, the last term on the right hand side is zero. The solvability condition yields the effective diffusion equation ∂ t T 0 = κ eff ∂ 2 x T 0 , κ eff = 1 − Pe u(y, τ )θ y,τ .(57) Comparing equation (55) and (13), we can see that the solution θ of the cell problem is the first Aris moment. The formula of effective diffusivity (57) is equivalent to equation (9). 
Hence, we conclude that the Aris moment approach and the multiscale analysis approach yield the same effective diffusivity for the time-varying shear flow. Of course, we note that the limiting procedure here, with → 0, may be different than the Aris moment approach where the limit is t → ∞. Let's use the periodic time-varying linear shear flow u(y, t) = y sin ω 0 t as an example. In this case, the cell problem (55) becomes ∂ τ θ − ∂ 2 y θ = −Pey sin ω 0 τ, θ(y, 0) = 0, ∂ y θ| y=0,1 = 0. The solution θ(y, τ ) has the series representation θ = Pe(cos(τ ω 0 ) − 1) 2ω 0 + 4Pe π 2 n∈odd ω 0 e −π 2 n 2 τ + π 2 n 2 sin(τ ω 0 ) − ω 0 cos(τ ω 0 ) n 2 (π 4 n 4 + ω 2 0 ) cos nπy. (59) Based on the formula (57), the effective diffusivity κ eff is κ eff = 1 + 4Pe 2 π 2 n∈odd 1 n 2 (π 4 n 4 + ω 2 0 ) = 1 + Pe 2 ω 2 0   1 2 − sin √ ω 0 √ 2 + sinh √ ω 0 √ 2 √ 2ω 0 cos √ ω 0 √ 2 + cosh √ ω 0 √ 2   ,(60) which is the same as formula (31) obtained by the Aris moment approach. where Wo = L ω/ν. Since the exponential decay term will not affect the leading order of the Aris moment at long times, we neglect them in the calculation of enhanced diffusivity. Next, we consider the asymptotic expansion of the solution in the high viscosity limit. As ν → ∞, we have the following expansion Particularly, for a periodic function ξ(ωt), as Wo → 0, we have u(y, t) = ξ(ωt)y L + ξ (ωt)Wo 6 y 3 L 3 − y L + ξ (ωt)Wo 2 360 3y 5 L 5 − 10y 3 L 3 + 7y L + O Wo 3 .(66) For the PTV experiment presented in figure 8, the Womersley number is Wo = 0.16 2π/100 0.0113 ≈ 0.3773. The low Womersley number expansion in equation (66) would be a good approximation for the flow in this experiment. The Stokes wave in infinite duct In the experiment, the fluid domain is a three-dimensional space. It is natural to ask, can the Stokes layer solution derived in parallel-plate channel approximate the Stokes layer derived in a closed duct or open duct well? We will answer this question in this section. In an infinitely long rectangular closed duct y × z ∈ [0, L] × [0, H], the flow induced by one moving wall satisfies the equation ∂ t u = ν ∂ 2 y u + ∂ 2 z u , u(y, z, 0) = 0, u(0, z, t) = 0, u(L, z, t) = ξ(t), u(y, 0, t) = u(y, H, t) = 0. (67)   + O e − π 2 ν ( H 2 +L 2 ) H 2 L 2 .(74) For the open duct, the no-stress boundary condition at the free surface leads to the flow equation ∂ t u = ν ∂ 2 y u + ∂ 2 z u , u(y, z, 0) = 0, u(0, z, t) = 0, u(L, z, t) = ξ(t), u(y, 0, t) = 0, ∂ z u(y, z, t)| z=H = 0. With the basis sin π(n+ 1 2 )z H , n ≥ 0, the similar calculation yields u = 4Aω π ∞ n=0     e itω sin π(n+ 1 2 )z H sinh y √ νπ 2 (2n+1) 2 −4iH 2 ω 2H √ ν (2n + 1) sinh L √ νπ 2 (2n+1) 2 −4iH 2 ω 2H √ ν     + O exp − π 2 ν 4H 2 . (76) Figure 15 shows equation (63), (76) and (74) are only significantly different at the boundary z = 0, H and are indistinguishable at interior of the domain. When the tracer is concentrated at the middle of the domain, for the experimental parameters, equation (63) is a good approximation of (76) and (74). Lists of abbreviations Figure 1 : 1Schematic showing the setup for the experiment and theory. . Thirdly, the centered cross sectional average, e.g., variance and skewness, is invariant under the Galilean transformationx = x − problem in a frame of reference moving with the spatial mean speedū. Then the advection-diffusion equation (6) has the same form but a new shear flowũ = u −ū with u k = u k −ū k . Hence,T 1 = 0 for all time which simplifies the calculation of variance and skewness ofT . 
Figure 2 : 2Enhanced diffusivityκ eff − 1 for Péclet number Pe = 1, (left panel) varying the dimensionless frequency ω 0 and the Womersley number Wo or (right panel) varying the dimensionless frequency ω 0 and the Schmidt number Sc. The black curves indicate the location of the enhanced diffusivity maximum in the non-dimensional parameter space(s) for a given non-dimensional frequency. The red dashed curves are the asymptotic approximation of these functions for large or small ω 0 . Figure 3 : 3The dimensionless enhanced diffusivity κ eff − 1 versus the viscosity with parameters L = 0.2 cm, A = 1 cm, κ = 3.3 * 10 −6 cm 2 /s, ω = 2π/100 rad/s (red solid curve, Pe = 3808), ω = 2π/10 s −1 (blue dashed curve, Pe = 38080) Figure 4 : 4Comparison of dimensionless enhanced diffusivity κ eff − 1 computed by the full expression of κ eff in equation (30) (solid red) with the one computed by the high frequency asymptotic expansion of κ eff given in equation (40) (dashed blue), for the Stokes layer solution with parameters L = 0.2 cm, A = 1 cm, κ = 3.3 * 10 −6 cm 2 /s, ν = 0.01 St (Sc = 3030.3). − 0 . 000798704(θ − 273) 2 − 0.0000461705(θ − 273) 3 +1.0556302 × 10 −7 (θ − 273) 4 − 2.8054253 × 10 −10 (θ − 273) 5 g/cm 3 . [273, 373] K in the panel (a) of figure 5. To observe an interior maximum in the effective diffusivity, the Schmidt number must be smaller than 2.3146. For fluorescein-water mix-) κ eff (θ),κ eff (θ) Figure 5 : 5Panel (a) Schmidt number of fluorescein-water system varies with the temperature θ ∈ [273, 373]K. Panel (b)κ eff (θ) (left y axis, red color), κ eff (θ) (right y axis, blue color) with parameters A = 1 cm, L = 1/5 cm, ω = 2π/10 rad/s. n 2 1 −n 2 2 22Wo(k 2 Wo 2 −iπ 2 (n 2 1 +n 2 2 )) 1+(−1) 1+n 1 +n 2 cosh e iπ 4 √ k 2 Wo (−2π 2 k 2( n 2 1 +n 2 2 )Wo 2 −ik 2 2 Wo 4 +iπ 4 () 2 ) sinh e iπ 4 √ k 2 Wo. Figure 6 : 6The coefficient of t −3/2 in the long-time asymptotic expansion of the skewness ofT (i.e. equation(28)) with the parameters Pe = 2, Wo = 1, ω 0 = 2π and the velocity of the wall cos(ω 0 t + s). The red solid curve, blue dash curve and black dash-dot curve correspond to the phase shift s = 0, s = π/2, s = −π/2, respectively. with Stokes layer solution(5). All computations of solution and Aris moments are performed on the domain [−H, H] × [0, L]. When H is large enough, we can assume there is a periodic boundary condition in the x-direction. Since there are non-penetration conditions in the y-direction, we perform the even extension in the ydirection to obtain the periodic condition on the extended domain. Thus, we solve the advection-diffusion equation with periodic boundary conditions on the rectangular domain [−H, H]×[0, 2L]. It can be solved by the standard Fourier spectral method with the explicit fourth order Runge-Kutta method as the time-marching scheme. In the dealiasing process at each time step, we apply the all-or-nothing filter with the two-thirds rule to the spectrum; that is, we set the upper one-third of the resolved spectrum to zero (see chapter 11 of the book[19] for details). We solve equation with the parameters H = 16 cm, L = 0.2 cm, and time increment ∆t = 0.005 s over 2000 time steps. The grid resolution is 2048 × 257 before the even extension and 2048 × 512 after the extension. First, in figure 8 we show an experimental and theoretical comparison of the Stokes layer (63) for two different cases corresponding to two different amplitude wall motions. 
The left panels show the shear velocity time series at 8 different locations uniformly distributed across the channel for a case with A = 1 cm, ω = 2π × 0.01rad/s, ν = 0.0113 St, and L = 0.16 cm, while the right panels change the amplitude to A = 2 cm. For the PTV experiment presented in figure 8, the Womersley number is Wo = 0.16 2π/100 0.0113 ≈ 0.3773. The low Womersley number expansion in equation (66) would be a good approximation for the flow in this experiment. Next, in figure 9 we show the experimental and Monte-Carlo simulations for the dye distribution viewed from the side at times t = 0 s, t = 7, 200 s, and t = 14, 400 s, with parameters listed in the figure caption. We also plot the averaged concentrations,T for the experiment and the simulation in the left columns of each panel. The parameters for Figure 7 : 7Study of the image noise: For a fixed experimental observation of dye concentration with no flow, we take photos at different shutter speeds and process the resultingT , which effectively adjusts the signal to noise ratio while keeping the signal fixed. Panel (a) We apply a 2-D Gaussian filter by the Matlab built-in function imgaussfilt with parameter sigma=25. Here each curve is rescaled to have maximum one. Panel (b) We apply the 2-D Gaussian filter with the same parameter and subtract the background noise from the images (subset of exposure times and associated curve colors indicated in top legend). Here each curve is normalized to be a PDF. Figure 8 : 8Comparison of particle tracking velocimetry (PTV) data (black curves) with the Stokes layer analytical solution (color curves) given in equation(5). Each curve which is plotted by black curves corresponds with a time series of the shear velocity over a duration of one period taken at different distances between the fixed wall and the moving wall (located at L = 0.16 cm). Left panel has wall oscillation amplitude A = 1 cm, right panel has A = 2 cm, other parameters: ω = 2π/100 rad/s, ν = 0.0113 St, and L = 0.16 cm. Figure 9 : 9Experimental and Monte-Carlo simulation comparison: First column of panels shows the longitudinal distribution of the tracerT , where the red solid line and blue dash line represent the experiment data and Monte-Carlo simulation, respectively. The second column of panels shows the experimental photographs of the tracer distributions viewed from the side at times t = 0, 2, 4 h. The third column of the panels shows the corresponding Monte-Carlo simulations of the second column, where we also apply a 2-D Gaussian filter by the MATLAB built-in function imgaussfilt with parameter sigma=1. The parameters are A = 1 cm, ν = 0.0113 St, panel (a) L = 0.5 cm, ω = 2π/200 rad/s, κ = 8.7 × 10 −6 cm 2 /s Figure 10 :Figure 11 : 1011Spectral method comparison between mixing by linear shear versus nonlinear Stokes layer with a single-frequency sinusoidal wall motion. Upper panels correspond to linear shear, while the lower panels correspond to the nonlinear Stokes layer, with parameters ν = 0.001 St, ω = 0.2π rad/s, L = 0.2 cm, A = 1 cm. The left panels are computed with κ = 0 cm 2 /s, while the right panels utilize κ = 10 −5 cm 2 /s. Output times are taken at quarter periods, i.e., t=0 s, 2.5 s, 5 s, 7.5 s, 10s. |∂ y T (x, y, 10)| dx Spectral method comparison between mixing by different flows after one-period of motion t = 10 s. 
Panel (a) The top panel has no flow, the middle has a linear shear, and the bottom panel has a nonlinear Stokes layer, with parameters ν = 0.001 St, ω = 0.2π rad/s, L = 0.2 cm, A = 1 cm and κ = 10 −5 cm 2 /s. Panel (b)Integral of the absolute value of the concentration gradient ∞ −∞ |∂ y T (x, y, 10)| dx, the red solid curve, blue dash curve, black dash dot curve correspond to no flow, linear shear flow, Stokes layer flow, respectively. Figure 12 :Figure 13 :Figure 14 : 121314Spectral method comparison between mixing by Stokes layer flows after one-period of motion t = 10 s for ω = 0.2π rad/s, L = 0.2 cm, A = 1 cm, different diffusivities and viscosities. The viscosity decreases from left to right (ν =0.01 St, 0.001 St, 0.0001 St) and the diffusivity decreases from the top to bottom (κ = 5 × 10 −5 cm 2 /s, 10 −5 cm 2 /s, 2 × 10 −6 cm 2 /s). Note that the mixing is confined in a thinner boundary layer for a smaller viscosity. Skewness arising from wall velocities Aω cos(ωt + s) started at different phase s, (a)s = 0, (b)s = π/4, (c)s = π/2, (d)s = 3π/4, for the nonlinear Stokes layer with parameters ν = 0.01 St, ω = 0.2π rad/s, L = 0.2 cm, A = 1 cm, and κ = 5 × 10 −6 cm 2 /s. Comparison of short-time skewness (solid red) with analytically predicted short time asymptotic Geometric skewness (dashed blue) arising from wall motion with the velocity Aω cos(ωt + s) started at phase s = π/2, for the Stokes layer solution with parameters ν = 0.01 St, ω = 0.2π rad/s, L = 0.2 cm, A = 1 cm, and κ = 5 × 10 −6 cm 2 /s . The Stokes wave in parallel-plate channel In this section, we derive the exact solution (with the transient term) of equation (1) and its high viscosity asymptotic expansion for completeness. The solution obtained by Laplace transform takes the form s) is the Laplace transform of the wall velocity ξ(t). Consider a harmonic wall motion ξ(t) = Aω cos ωt, the integrand in equation (61) becomes e stû (y, s) = e st Asω s 2 ofû(y, s) are s = ±iω, s = − π 2 νn 2 L 2for n ∈ Z + . By the residue theorem, we have u(y, t) = Res(e stû , iω) + Res(e stû , −iω) Figure 15 : 15Comparison of flows with different boundary conditions. Panel (a): The difference between the solution (63) in parallel-plate channel and 105 terms of the solution (74) in the closed duct. Panel (b): The difference between the solution in parallel-plate channel (63) and 105 terms of the solution (76) in the open duct. The parameters are ν = 0.01St, ω = 2π/100s −1 , L = 0.2 cm, A = 1 cm, t = 1 s, y × z ∈ [0cm, 1/5cm] × [0cm, 16cm] . Table 1 : 1The variance computed by various methods for the data presented in See table 3.Full Form Abbreviation Background noise subtraction BNS Full width at half maximum FWHM Partial differential equation PDE Probability density function PDF Particle tracking velocimetry PTV Stochastic differential equation SDE Table 3 : 3Lists of abbreviations. AcknowledgementsWe thank Howard A. Stone and two anonymous referees, whose comments improved the quality of the manuscript. We acknowledge funding received from the following NSF DMS-1910824 and ONR Grant No. ONR N00014-18-1-2490.Applying the Laplace transform yields sû = ν ∂ 2 yû + ∂ 2 zû ,û(0, z, s) = 0,û(L, z, s) =ξ(s).For the harmonic wall motion ξ(t) = Aω cos ωt, we haveξ(s) = Aωs s 2 +ω 2 . 
According to the no-slip boundary condition at z = 0, H, the solution takes the formSubstituting (69) into (68) leads to the equation for f n (y, s)The boundary condition f n (0, s) = 0 leads to the solutionThe coefficients c n can be determined by the boundary conditionû(L, z, s) = Aωs s 2 +ω 2 and the orthogonality of sin nzπ H ,29 Henceû(y, z, s) isûThe poles ofû(y, z, s) are s = ±iω and s = − π 2 νn 2 (H 2 +L 2 ), n ≥ 1. By the inverse Laplace transform and residue theorem, we have the solution of equation(67 Hydrodynamic dispersion in shallow microchannels: the effect of cross-sectional shape. A Ajdari, N Bontoux, H A Stone, Analytical Chemistry. 782Ajdari, A., Bontoux, N., Stone, H.A.: Hydrodynamic dispersion in shallow microchan- nels: the effect of cross-sectional shape. Analytical Chemistry 78(2), 387-392 (2006) How boundaries shape chemical delivery in microfluidics. M Aminian, F Bernardi, R Camassa, D M Harris, R M Mclaughlin, Science. 3546317Aminian, M., Bernardi, F., Camassa, R., Harris, D.M., McLaughlin, R.M.: How bound- aries shape chemical delivery in microfluidics. Science 354(6317), 1252-1256 (2016) Squaring the circle: Geometric skewness and symmetry breaking for passive scalar transport in ducts and pipes. M Aminian, F Bernardi, R Camassa, R M Mclaughlin, Physical review letters. 11515154503Aminian, M., Bernardi, F., Camassa, R., McLaughlin, R.M.: Squaring the circle: Geometric skewness and symmetry breaking for passive scalar transport in ducts and pipes. Physical review letters 115(15), 154503 (2015) Mass distribution and skewness for passive scalar transport in pipes with polygonal and smooth cross sections. M Aminian, R Camassa, R M Mclaughlin, Studies in Applied Mathematics. 1413Aminian, M., Camassa, R., McLaughlin, R.M.: Mass distribution and skewness for passive scalar transport in pipes with polygonal and smooth cross sections. Studies in Applied Mathematics 141(3), 399-417 (2018) . H Aref, J R Blake, M Budišić, S S Cardoso, J H Cartwright, H J Clercx, K El Omari, U Feudel, R Golestanian, E Gouillart, Frontiers of chaotic advection. Reviews of Modern Physics. 89225007Aref, H., Blake, J.R., Budišić, M., Cardoso, S.S., Cartwright, J.H., Clercx, H.J., El Omari, K., Feudel, U., Golestanian, R., Gouillart, E., et al.: Frontiers of chaotic advection. Reviews of Modern Physics 89(2), 025007 (2017) On the dispersion of a solute in a fluid flowing through a tube. R Aris, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences. 235Aris, R.: On the dispersion of a solute in a fluid flowing through a tube. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 235(1200), 67-77 (1956) On the dispersion of a solute in pulsating flow through a tube. R Aris, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences. 259Aris, R.: On the dispersion of a solute in pulsating flow through a tube. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 259(1298), 370-376 (1960) Stieltjes integral representation and effective diffusivity bounds for turbulent transport. M Avellaneda, A J Majda, Physical review letters. 627753Avellaneda, M., Majda, A.J.: Stieltjes integral representation and effective diffusivity bounds for turbulent transport. Physical review letters 62(7), 753 (1989) Mathematical models with exact renormalization for turbulent transport. M Avellaneda, A J Majda, Communications in mathematical physics. 
1312Avellaneda, M., Majda, A.J.: Mathematical models with exact renormalization for turbulent transport. Communications in mathematical physics 131(2), 381-429 (1990) Renormalization theory for eddy diffusivity in turbulent transport. M Avellaneda, A J Majda, Physical review letters. 68203028Avellaneda, M., Majda, A.J.: Renormalization theory for eddy diffusivity in turbulent transport. Physical review letters 68(20), 3028 (1992) Superdiffusion in nearly stratified flows. M Avellaneda, A J Majda, Journal of statistical physics. 693-4Avellaneda, M., Majda, A.J.: Superdiffusion in nearly stratified flows. Journal of statistical physics 69(3-4), 689-729 (1992) On contaminant dispersion in unsteady generalised couette flow. S Bandyopadhyay, B Mazumder, International journal of engineering science. 3711Bandyopadhyay, S., Mazumder, B.: On contaminant dispersion in unsteady generalised couette flow. International journal of engineering science 37(11), 1407-1423 (1999) On transport coefficients in an oscillatory couette flow with nonlinear chemical decay reactions. S Barik, D Dalal, Acta Mechanica. 2287Barik, S., Dalal, D.: On transport coefficients in an oscillatory couette flow with non- linear chemical decay reactions. Acta Mechanica 228(7), 2391-2412 (2017) Multi-scale analysis for concentration distribution in an oscillatory couette flow. S Barik, D Dalal, Proceedings of the Royal Society A. 47520180483Barik, S., Dalal, D.: Multi-scale analysis for concentration distribution in an oscillatory couette flow. Proceedings of the Royal Society A 475(2221), 20180483 (2019) Analysis of enhanced diffusion in taylor dispersion via a model problem. M Beck, O Chaudhary, C E Wayne, Hamiltonian partial differential equations and applications. SpringerBeck, M., Chaudhary, O., Wayne, C.E.: Analysis of enhanced diffusion in taylor dis- persion via a model problem. In: Hamiltonian partial differential equations and appli- cations, pp. 31-71. Springer (2015) M Beck, O Chaudhary, C E Wayne, Rigorous justification of taylor dispersion via center manifolds and hypocoercivity. Archive for Rational Mechanics and Analysis. 235Beck, M., Chaudhary, O., Wayne, C.E.: Rigorous justification of taylor dispersion via center manifolds and hypocoercivity. Archive for Rational Mechanics and Analysis 235(2), 1105-1149 (2020) Onchip hydrodynamic chromatography separation and detection of nanoparticles and biomolecules. M T Blom, E Chmela, R E Oosterbroek, R Tijssen, Van Den, A Berg, Analytical chemistry. 7524Blom, M.T., Chmela, E., Oosterbroek, R.E., Tijssen, R., Van Den Berg, A.: On- chip hydrodynamic chromatography separation and detection of nanoparticles and biomolecules. Analytical chemistry 75(24), 6761-6768 (2003) Horizontal mixing in the sea due to a shearing current. K Bowden, Journal of Fluid Mechanics. 211Bowden, K.: Horizontal mixing in the sea due to a shearing current. Journal of Fluid Mechanics 21(1), 83-95 (1965) Chebyshev and Fourier spectral methods. J P Boyd, Courier Corporation. Boyd, J.P.: Chebyshev and Fourier spectral methods. Courier Corporation (2001) Integrating cross-correlation and relaxation algorithms for particle tracking velocimetry. W Brevis, Y Niño, G Jirka, Experiments in Fluids. 501Brevis, W., Niño, Y., Jirka, G.: Integrating cross-correlation and relaxation algorithms for particle tracking velocimetry. Experiments in Fluids 50(1), 135-147 (2011) The problem of moments and the majda model for scalar intermittency. J C Bronski, R M Mclaughlin, Physics Letters A. 
2654Bronski, J.C., McLaughlin, R.M.: The problem of moments and the majda model for scalar intermittency. Physics Letters A 265(4), 257-263 (2000) Rigorous estimates of the tails of the probability distribution function for the random linear shear model. J C Bronski, R M Mclaughlin, Journal of Statistical Physics. 983-4Bronski, J.C., McLaughlin, R.M.: Rigorous estimates of the tails of the probability distribution function for the random linear shear model. Journal of Statistical Physics 98(3-4), 897-915 (2000) On the symmetry properties of a random passive scalar with and without boundaries, and their connection between hot and cold states. R Camassa, Z Kilic, R M Mclaughlin, Physica D: Nonlinear Phenomena. 400132124Camassa, R., Kilic, Z., McLaughlin, R.M.: On the symmetry properties of a random passive scalar with and without boundaries, and their connection between hot and cold states. Physica D: Nonlinear Phenomena 400, 132124 (2019) Evolution of the probability measure for the majda model: New invariant measures and breathing pdfs. R Camassa, Z Lin, R M Mclaughlin, Journal of Statistical Physics. 1302Camassa, R., Lin, Z., McLaughlin, R.M.: Evolution of the probability measure for the majda model: New invariant measures and breathing pdfs. Journal of Statistical Physics 130(2), 343-371 (2008) The exact evolution of the scalar variance in pipe and channel flow. R Camassa, Z Lin, R M Mclaughlin, Communications in Mathematical Sciences. 82Camassa, R., Lin, Z., McLaughlin, R.M.: The exact evolution of the scalar variance in pipe and channel flow. Communications in Mathematical Sciences 8(2), 601-626 (2010) On the longitudinal dispersion of passive contaminant in oscillatory flows in tubes. P Chatwin, Journal of Fluid Mechanics. 713Chatwin, P.: On the longitudinal dispersion of passive contaminant in oscillatory flows in tubes. Journal of Fluid Mechanics 71(3), 513-527 (1975) K Dill, S Bromberg, Molecular driving forces: statistical thermodynamics in biology, chemistry, physics, and nanoscience. Garland Science. Dill, K., Bromberg, S.: Molecular driving forces: statistical thermodynamics in biology, chemistry, physics, and nanoscience. Garland Science (2012) The Navier-Stokes equations: a classification of flows and exact solutions. P G Drazin, N Riley, Cambridge University Press334Drazin, P.G., Riley, N.: The Navier-Stokes equations: a classification of flows and exact solutions. 334. Cambridge University Press (2006) Density stratified environments: the double-tank method. M Economidou, G Hunt, Experiments in fluids. 463Economidou, M., Hunt, G.: Density stratified environments: the double-tank method. Experiments in fluids 46(3), 453-466 (2009) Behavior of concentrated polymer solutions under periodic stresses. J D Ferry, W Sawyer, J Ashworth, Journal of Polymer Science. 26Ferry, J.D., Sawyer, W., Ashworth, J.: Behavior of concentrated polymer solutions under periodic stresses. Journal of Polymer Science 2(6), 593-611 (1947) Temperature dependence of viscosity. R Fogel&apos;son, E Likhachev, Technical Physics. 468Fogel'Son, R., Likhachev, E.: Temperature dependence of viscosity. Technical Physics 46(8), 1056-1059 (2001) Diffusion of multiple electrolytes cannot be treated independently: model predictions with experimental validation. A Gupta, S Shim, L Issah, C Mckenzie, H A Stone, Soft Matter. 1548Gupta, A., Shim, S., Issah, L., McKenzie, C., Stone, H.A.: Diffusion of multiple elec- trolytes cannot be treated independently: model predictions with experimental valida- tion. 
Soft Matter 15(48), 9965-9973 (2019) Contaminant dispersion in some time-dependent laminar flows. C Jimenez, P Sullivan, Journal of Fluid Mechanics. 142Jimenez, C., Sullivan, P.: Contaminant dispersion in some time-dependent laminar flows. Journal of Fluid Mechanics 142, 57-77 (1984) Its-90 density of water formulation for volumetric standards calibration. F E Jones, G L Harris, Journal of research of the National Institute of Standards and Technology. 973335Jones, F.E., Harris, G.L.: Its-90 density of water formulation for volumetric standards calibration. Journal of research of the National Institute of Standards and Technology 97(3), 335 (1992) Density, thermal expansivity, and compressibility of liquid water from 0. deg. to 150. deg.. correlations and tables for atmospheric pressure and saturation reviewed and expressed on 1968 temperature scale. G S Kell, Journal of Chemical and Engineering Data. 201Kell, G.S.: Density, thermal expansivity, and compressibility of liquid water from 0. deg. to 150. deg.. correlations and tables for atmospheric pressure and saturation re- viewed and expressed on 1968 temperature scale. Journal of Chemical and Engineering Data 20(1), 97-105 (1975) Optimal stirring strategies for passive scalar mixing. Z Lin, J L Thiffeault, C R Doering, Journal of Fluid Mechanics. 675Lin, Z., Thiffeault, J.L., Doering, C.R.: Optimal stirring strategies for passive scalar mixing. Journal of Fluid Mechanics 675, 465-476 (2011) Optimal mixing and optimal stirring for fixed energy, fixed power, or fixed palenstrophy flows. E Lunasin, Z Lin, A Novikov, A Mazzucato, C R Doering, Journal of mathematical physics. 5311115611Lunasin, E., Lin, Z., Novikov, A., Mazzucato, A., Doering, C.R.: Optimal mixing and optimal stirring for fixed energy, fixed power, or fixed palenstrophy flows. Journal of mathematical physics 53(11), 115611 (2012) The mean, median, mode inequality and skewness for a class of densities. H Macgillivray, Australian Journal of Statistics. 232MacGillivray, H.: The mean, median, mode inequality and skewness for a class of densities. Australian Journal of Statistics 23(2), 247-250 (1981) The random uniform shear layer: an explicit example of turbulent diffusion with broad tail probability distributions. A J Majda, Physics of Fluids A: Fluid Dynamics. 58Majda, A.J.: The random uniform shear layer: an explicit example of turbulent dif- fusion with broad tail probability distributions. Physics of Fluids A: Fluid Dynamics 5(8), 1963-1970 (1993) The effect of mean flows on enhanced diffusivity in transport by incompressible periodic velocity fields. A J Majda, R M Mclaughlin, Studies in applied mathematics. 893Majda, A.J., McLaughlin, R.M.: The effect of mean flows on enhanced diffusivity in transport by incompressible periodic velocity fields. Studies in applied mathematics 89(3), 245-279 (1993) A convenient method for generating normal variables. G Marsaglia, T A Bray, SIAM review. 63Marsaglia, G., Bray, T.A.: A convenient method for generating normal variables. SIAM review 6(3), 260-264 (1964) An explicit example with non-gaussian probability distribution for nontrivial scalar mean and fluctuation. R M Mclaughlin, A J Majda, Physics of Fluids. 82McLaughlin, R.M., Majda, A.J.: An explicit example with non-gaussian probability distribution for nontrivial scalar mean and fluctuation. Physics of Fluids 8(2), 536-547 (1996) A centre manifold description of contaminant dispersion in channels with varying flow properties. 
G Mercer, A Roberts, SIAM Journal on Applied Mathematics. 506Mercer, G., Roberts, A.: A centre manifold description of contaminant dispersion in channels with varying flow properties. SIAM Journal on Applied Mathematics 50(6), 1547-1565 (1990) Extensions of the ferry shear wave model for active linear and nonlinear microrheology. S M Mitran, M G Forest, L Yao, B Lindley, D B Hill, Journal of non-Newtonian fluid mechanics. 1542-3Mitran, S.M., Forest, M.G., Yao, L., Lindley, B., Hill, D.B.: Extensions of the ferry shear wave model for active linear and nonlinear microrheology. Journal of non- Newtonian fluid mechanics 154(2-3), 120-135 (2008) Dispersion of contaminant in oscillatory flows. A Mukherjee, B Mazumder, Acta mechanica. 741-4Mukherjee, A., Mazumder, B.: Dispersion of contaminant in oscillatory flows. Acta mechanica 74(1-4), 107-122 (1988) Density gradients. G Oster, Scientific American. 2132Oster, G.: Density gradients. Scientific American 213(2), 70-79 (1965) Dispersion in unsteady couette-poiseuille flows. S Paul, B Mazumder, International journal of engineering science. 4612Paul, S., Mazumder, B.: Dispersion in unsteady couette-poiseuille flows. International journal of engineering science 46(12), 1203-1217 (2008) Absolute diffusion coefficients: Compilation of reference data for fcs calibration. B Rhodamine, Rhodamine, B.: Absolute diffusion coefficients: Compilation of reference data for fcs calibration Engineering flows in small devices: microfluidics toward a lab-on-a-chip. H A Stone, A D Stroock, A Ajdari, Annu. Rev. Fluid Mech. 36Stone, H.A., Stroock, A.D., Ajdari, A.: Engineering flows in small devices: microflu- idics toward a lab-on-a-chip. Annu. Rev. Fluid Mech. 36, 381-411 (2004) Dispersion of soluble matter in solvent flowing slowly through a tube. G I Taylor, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences. 219Taylor, G.I.: Dispersion of soluble matter in solvent flowing slowly through a tube. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 219(1137), 186-203 (1953) Random walks, random flows, and enhanced diffusivity in advectiondiffusion equations. M Taylor, Discrete & Continuous Dynamical Systems-B. 1741261Taylor, M.: Random walks, random flows, and enhanced diffusivity in advection- diffusion equations. Discrete & Continuous Dynamical Systems-B 17(4), 1261 (2012) Using multiscale norms to quantify mixing and transport. J L Thiffeault, Nonlinearity. 2521Thiffeault, J.L.: Using multiscale norms to quantify mixing and transport. Nonlinearity 25(2), R1 (2012) Recent advances in flow injection analysis. M Trojanowicz, K Kołacińska, Analyst. 1417Trojanowicz, M., Kołacińska, K.: Recent advances in flow injection analysis. Analyst 141(7), 2085-2139 (2016) Non-gaussian invariant measures for the majda model of decaying turbulent transport. E Vanden Eijnden, Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences. 549Vanden Eijnden, E.: Non-gaussian invariant measures for the majda model of decaying turbulent transport. Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences 54(9), 1146-1167 (2001) Transient taylor-aris dispersion for time-dependent flows in straight channels. S Vedel, H Bruus, Journal of fluid mechanics. 691Vedel, S., Bruus, H.: Transient taylor-aris dispersion for time-dependent flows in straight channels. 
Journal of fluid mechanics 691, 95-122 (2012) Time-dependent taylor-aris dispersion of an initial point concentration. S Vedel, E Hovad, H Bruus, Journal of fluid mechanics. 752Vedel, S., Hovad, E., Bruus, H.: Time-dependent taylor-aris dispersion of an initial point concentration. Journal of fluid mechanics 752, 107-122 (2014) Diffusion in oscillatory pipe flow. E Watson, Journal of Fluid Mechanics. 133Watson, E.: Diffusion in oscillatory pipe flow. Journal of Fluid Mechanics 133, 233-244 (1983)
[]
[ "Roundoff error problem in L2-type methods for time-fractional problems", "Roundoff error problem in L2-type methods for time-fractional problems" ]
[ "Chaoyu Quan ", "Shijie Wang ", "Xu Wu " ]
[]
[]
Roundoff error problems have occurred frequently in interpolation methods of time-fractional equations, which can lead to undesirable results such as the failure of optimal convergence. These problems are essentially caused by catastrophic cancellations. Currently, a feasible way to avoid these cancellations is using the Gauss-Kronrod quadrature to approximate the integral formulas of coefficients rather than computing the explicit formulas directly for example in the L2-type methods. This nevertheless increases computational cost and arises additional integration errors. In this work, a new framework to handle catastrophic cancellations is proposed, in particular, in the computation of the coefficients for standard and fast L2-type methods on general nonuniform meshes. We propose a concept of δ-cancellation and then some threshold conditions ensuring that δ-cancellations will not happen. If the threshold conditions are not satisfied, a Taylor-expansion technique is proposed to avoid δ-cancellation. Numerical experiments show that our proposed method performs as accurate as the Gauss-Kronrod quadrature method and meanwhile much more efficient. This enables us to complete long time simulations with hundreds of thousands of time steps in short time.
10.48550/arxiv.2212.08344
[ "https://export.arxiv.org/pdf/2212.08344v1.pdf" ]
254,823,433
2212.08344
2daf6bf0e634e400a7a9b91e5d27ce6b45f5496a
Roundoff error problem in L2-type methods for time-fractional problems Chaoyu Quan Shijie Wang Xu Wu Roundoff error problem in L2-type methods for time-fractional problems roundoff error problemcatastrophic cancellationtime-fractional problemL2-type methodssum-of- exponentials approximation Roundoff error problems have occurred frequently in interpolation methods of time-fractional equations, which can lead to undesirable results such as the failure of optimal convergence. These problems are essentially caused by catastrophic cancellations. Currently, a feasible way to avoid these cancellations is using the Gauss-Kronrod quadrature to approximate the integral formulas of coefficients rather than computing the explicit formulas directly for example in the L2-type methods. This nevertheless increases computational cost and arises additional integration errors. In this work, a new framework to handle catastrophic cancellations is proposed, in particular, in the computation of the coefficients for standard and fast L2-type methods on general nonuniform meshes. We propose a concept of δ-cancellation and then some threshold conditions ensuring that δ-cancellations will not happen. If the threshold conditions are not satisfied, a Taylor-expansion technique is proposed to avoid δ-cancellation. Numerical experiments show that our proposed method performs as accurate as the Gauss-Kronrod quadrature method and meanwhile much more efficient. This enables us to complete long time simulations with hundreds of thousands of time steps in short time. Introduction The Caputo derivative [2] has been widely-used to model phenomena which takes account of interactions within the past and problems with nonlocal properties. It is defined as ∂ α t u := 1 Γ(1 − α) t 0 u (s) (t − s) α ds. One issue is how to compute the Caputo derivative numerically, which is important in numerical simulations of timefractional problems. Some approximations have been developed and analyzed, including the piecewise polynomial interpolation methods (such as L1, L2-1 σ and L2 schemes) on uniform meshes [12,27,1,4,19,21] and nonuniform meshes [26,9,3,11,10,14,15,16], discontinuous Galerkin methods [20], and convolution quadrature correction methods [6,7]. The solutions to time-fractional problems typically admit weak singularities, which leads to deterioration of convergence in the case of uniform meshes for interpolation methods and then inspires researchers to use nonuniform meshes. Another important issue is the CPU time and storage problem due to the nonlocality of the fractional derivatives. To reduce the computation cost, fast algorithms are proposed, for example, using the sum-of-exponentials (SOE) approximation to the Caputo derivative [13,18,8,29]. Despite of many works on the convergence analysis, the aforementioned interpolation methods could encounter roundoff error problems in practice where the accuracy is destroyed as well as the convergence rate. However, only a few literatures have discussed about this issue. For example, in [16,3], the authors use the Gauss-Kronrod quadrature to compute the coefficients in the second-order L2-1 σ method for time-fractional diffusion problem. Even earlier, in [5], Jiang et al. encounter the cancellation error in the fast L1 method and use the Taylor expansion of exponentials with a small number of terms to overcome it. Despite of some attempts, a systematic analysis of the roundoff error problem in the interpolation methods remains a gap. 
In this work, we focus on solving the roundoff error problems in standard and fast L2 methods on general nonuniform time meshes. We consider particularly the L2-type method proposed in [19] but on general nonuniform meshes. When calculating the explicit expressions of L2 coefficients, catastrophic cancellations could happen, the phenomenon that subtracting good approximations to two nearby numbers may yield a very bad approximation to the difference of the original numbers. This can lead to completely wrong simulations especially for a large number of time steps. To quantify the phenomena of catastrophic cancellation, we first propose a concept of δ−cancellation, where δ denotes the tolerance of relative error of the approximated difference (due to rounding) and the exact difference. Then we reformulate the standard L2 coefficients into a combination of I 1 , I 2 in (2.7) and the fast L2 coefficients into a combination of J 1 , J 2 in (2.13) to make the analysis of δ-cancellation simpler. For any fixed α ∈ (0, 1) and δ > 12δ 0 , we define the following thresholds θ s,1 = 2δ 0 (1 − α)δ , θ s,2 = 6δ 0 (1 − α)δ 1 2 , θ f,1 = 4δ 0 δ , θ f,2 = 12δ 0 δ 1 2 , where δ 0 is the machine error (δ 0 ≈ 2.22 × 10 −16 with 64 bits in double precision). We show that for any j and k, if the following threshold condition τ j /(t k − t j−1 ) ≥ θ s,i holds where τ j is the jth time step and t k is the kth time node, then I i will not meet δ-cancellation (i = 1, 2). Similarly, for any and k, if the following threshold condition θ τ k−1 ≥ θ f,i holds where θ is the th quadrature node in the SOE approximation, then J i will not meet δ-cancellation (i = 1, 2). In other words, if the above threshold conditions are satisfied, one can compute the L2 coefficients directly from the explicit expressions (see Theorem 3.2). If the threshold conditions are not satisfied, we propose to use Taylor's expansion to approximate I i and J i to avoid δ-cancellation, called the Taylor-expansion technique in this work. Usually, only several terms are needed to ensure the relative error of Taylor approximation within machine error. Numerical experiments show that this method using the threshold conditions plus the Taylor-expansion technique, called TCTE method, can perform as accurate as the Gauss-Kronrod quadrature, but much more efficient. This enables us to implement long time simulations with hundreds of thousands of time steps. This work is organized as follows. In Section 2, we describe the roundoff error problems in the standard and fast L2 methods. In Section 3, we propose the concept of δ−cancellation and provide the threshold conditions when δ−cancellation will not happen, and then proposed the Taylor-expansion technique to avoid δ−cancellation if the threshold conditions are not satisfied. In Section 4, some numerical experiments are given to verify the efficiency and accuracy of the proposed TCTE method. Some conclusions are provided in the final section. Catastrophic cancellations in standard and fast Lformulas In this part, we describe the catastrophic cancellations encountered in the L2-type methods, which can lead to bad results when computing the explicit formulas of L2 coefficients. Note that such problems can also happen in L1 and L2-1 σ methods. Reformulation of standard L2 formula Given a general nonuniform time mesh t 0 < t 1 < . . . 
< t k , the Caputo fractional derivative at time t k is approximated by the L2 discrete fractional operator (see [22] for details) L α 1 u = u 1 − u 0 Γ(2 − α)τ α 1 , L α k u = 1 Γ(1 − α)   k−1 j=1 a (k) j u j−1 + b (k) j u j + c (k) j u j+1 + a (k) k u k−2 + b (k) k u k−1 + c (k) k u k   , k ≥ 2, (2.1) where b (k) j = −a (k) j − c (k) j for 1 ≤ j ≤ k, and for 1 ≤ j ≤ k − 1, a (k) j = tj tj−1 2s − t j − t j+1 τ j (τ j + τ j+1 ) 1 (t k − s) α ds, c (k) j = tj tj−1 2s − t j−1 − t j τ j+1 (τ j + τ j+1 ) 1 (t k − s) α ds,(2.2) and a (k) k = t k t k−1 2s − t k−1 − t k τ k−1 (τ k−1 + τ k ) 1 (t k − s) α ds, c (k) k = t k t k−1 2s − t k−2 − t k−1 τ k (τ k−1 + τ k ) 1 (t k − s) α ds. (2.3) Note that we have the following relationship: c (k) j = τ j τ j+1 a (k) j +c (k) j withc (k) j := tj tj−1 1 τ j+1 (t k − s) α ds. (2.4) Then we can reformulate (2.1) for k ≥ 2 as L α k u = 1 Γ(1 − α)   k−1 j=1 a (k) j (τ j /τ j+1 δ j+1 u − δ j u) +c (k) j δ j+1 u − a (k) k δ k−1 u + c (k) k δ k u   ,(2.5) where δ j u := u j − u j−1 . Note that the reformulation (2.5) will help to obtain simpler threshold conditions in later analysis. We can figure out the explicit expression of a (k) j ,c (k) j , in (2.2) and (2.4), for 1 ≤ j ≤ k − 1: a (k) j = − (2 − α)τ j+1 I 1 + 2I 2 (2 − α)(1 − α)τ j (τ j + τ j+1 ) ,c (k) j = I 1 (1 − α)τ j+1 (2.6) with I 1 = (t k − t j−1 ) 1−α − (t k − t j ) 1−α , I 2 = (2 − α)τ j (t k − t j−1 ) 1−α + (t k − t j ) 2−α − (t k − t j−1 ) 2−α , (2.7) and a (k) k = ατ 2 k (2 − α)(1 − α)τ k−1 (τ k−1 + τ k )τ α k , c (k) k = 1 (1 − α)τ α k + ατ k (2 − α)(1 − α)(τ k−1 + τ k )τ α k . It is not difficult to verify that I 1 > 0 and I 2 > 0. Reformulation of fast L2 formula We consider the fast L2 formula obtained by applying the sum-of-exponentials approximation [5,17] to the historical part of the standard L2 formula. Specifically speaking, the fast L2 formula reads F α 1 u = u 1 − u 0 Γ(2 − α)τ α 1 , F α k u = 1 Γ(1 − α)   Nq =1 H (t k ) − a (k) k δ k−1 u + c (k) k δ k u   , k ≥ 2, where a (k) k , c (k) k are given in (2.3), and H (t k ) = e −θ τ k H (t k−1 ) + a k, k−1 u k−2 + b k, k−1 u k−1 + c k, k−1 u k , (2.8) where θ and are positive quadrature nodes and weights in the SOE approximation, N q is the number of quadrature nodes, H (t 1 ) = 0, b k, k−1 = −a k, k−1 − c k, k−1 and a k, k−1 = t k−1 t k−2 2s − t k−1 − t k τ k−1 (τ k−1 + τ k ) e −θ (t k −s) ds, c k, k−1 = t k−1 t k−2 2s − t k−2 − t k−1 τ k (τ k−1 + τ k ) e −θ (t k −s) ds. (2.9) Note that c k, k−1 = τ k−1 τ k a k, k−1 +c k, k−1 withc k, k−1 := t k−1 t k−2 e −θ (t k −s) τ k ds. (2.10) We can reformulate (2.8) as H (t k ) = e −θ τ k H (t k−1 ) + a k, k−1 (τ k−1 /τ k δ k u − δ k−1 u) +c k, k−1 δ k u. (2.11) We can figure out the explicit expression of a k, k−1 ,c k, k−1 in (2.9) and (2.10): a k, k−1 = − e −θ τ k (θ τ k J 1 + 2J 2 ) τ k−1 (τ k−1 + τ k )(θ ) 2 ,c k, k−1 = e −θ τ k J 1 θ τ k (2.12) with J 1 = 1 − e −θ τ k−1 , J 2 = 1 − θ τ k−1 e −θ τ k−1 − e −θ τ k−1 . (2.13) It is not difficult to verify that J 1 > 0 and J 2 > 0. Catastrophic cancellations in I 1 , I 2 , J 1 and J 2 We introduce a bit about the catastrophic cancellation for readers. Consider the subtraction of two numbers x and y. Assume that the approximations (probably caused by rounding in floating point arithmetic) of x and y arẽ x = x(1 + δ x ),ỹ = y(1 + δ y ), where |δ x | = |x −x|/|x| and |δ y | = |y −ỹ|/|y| are relative errors. 
Then the relative error of the approximate differencex −ỹ from the true difference x − y is inversely proportional to the true difference: x −ỹ = x(1 + δ x ) − y(1 + δ y ) = (x − y) 1 + xδ x − yδ y x − y . Thus, the relative error ofx −ỹ and x − y is (x −ỹ) − (x − y) x − y = xδ x − yδ y x − y , (2.14) which can be arbitrarily large if the true inputs x and y are close. Catastrophic cancellation happens because subtraction is ill-conditioned at nearby inputs. One can find that I 1 and I 2 in (2.6) can come across the catastrophic cancellation when τ j t k − t j−1 , while J 1 and J 2 in (2.12) can come across it when θ τ k−1 1. To avoid the catastrophic cancellation, the adaptive Gauss-Kronrod quadrature can be used to approximate the integral formula of standard and fast L2-1 σ coefficients (similar to the L2 coefficients), rather than compute the explicit formula directly [16,3,23,24]. The integration quadrature is to compute all coefficients which will certainly increase the computation cost. However, this is unnecessary in our opinion because the catastrophic cancellation does not usually happen. Next, we provide a theory on when the catastrophic cancellation might happen. If the catastrophic cancellation does not happen, there is no doubt that computing the explicit expressions (2.6) and (2.12) directly is the most efficient way. If the catastrophic cancellation might happen, we propose to use Taylor-expansion approximation to do the calculation. Threshold conditions and Taylor-expansion approximation In this part, we shall propose a new method to deal with the roundoff error problems in standard and fast L2 methods. In fact, I 1 and I 2 of (2.7) can be rewritten as I 1 = (t k − t j−1 ) 1−α 1 − (1 − θ) 1−α , I 2 = (t k − t j−1 ) 2−α (2 − α)θ + (1 − θ) 2−α − 1 , (3.1) where 0 < θ = τ j t k − t j−1 < 1, 1 ≤ j ≤ k − 1. (3.2) In (3.1), it is clear that the catastrophic cancellations happen only when θ is very small. Using the Taylor's expansions of (1 − θ) 1−α and (1 − θ) 2−α , we have I 1 = −(t k − t j−1 ) 1−α ∞ m=1 1 − α m (−θ) m , I 2 = (t k − t j−1 ) 2−α ∞ m=2 2 − α m (−θ) m . (3.3) It is clear that 1−α m (−θ) m is always negative for any m ≥ 1, and 2−α m (−θ) m is always positive for any m ≥ 2. As a consequence, I 1 > 0, I 2 > 0, and the catastrophic cancellation will not happen in the Taylor expansion (3.3). This indicates that I 1 and I 2 can be approximated by the following truncated Taylor's expansions (to avoid the catastrophic cancellation): I 1 = −(t k − t j−1 ) 1−α M1 m=1 1 − α m (−θ) m ,Î 2 = (t k − t j−1 ) 2−α M2+1 m=2 2 − α m (−θ) m ,(3.4) where M 1 ≥ 1 and M 2 ≥ 1 are the truncation numbers respectively for I 1 and I 2 . Similarly, applying the Taylor's expansions of e θ τ k−1 to J 1 and J 2 in (2.12), we have J 1 = e −θ τ k−1 ∞ m=1 (θ τ k−1 ) m m! , J 2 = e −θ τ k−1 ∞ m=2 (θ τ k−1 ) m m! . (3.5) Then we have J 1 > 0, J 2 > 0, and the catastrophic cancellation will not happen in the Taylor expansions (3.5). Therefore, J 1 and J 2 can be approximated by the following truncated Taylor's expansions (to avoid the catastrophic cancellation when θ τ k−1 is very small):Ĵ 1 = e −θ τ k−1 N1 m=1 (θ τ k−1 ) m m! ,Ĵ 2 = e −θ τ k−1 N2+1 m=2 (θ τ k−1 ) m m! ,(3.6) where N 1 ≥ 1 and N 2 ≥ 1 are the truncation numbers respectively for J 1 and J 2 . We have shown that the truncated Taylor expansions (3.4) and (3.6) can be used to avoid the catastrophic cancellation for the computation of I i and J i , i = 1, 2. 
However, in most cases, the catastrophic cancellation won't happen and direct computation can already provide accurate results. To quantify the catastrophic cancellation phenomenon, we first introduce a concept of δ-cancellation. (x −ỹ) − (x − y) x − y ≥ δ, then we call the subtractionx −ỹ a δ-cancellation. According to (2.14), it is clear that if |xδ x | + |yδ y | |x − y| < δ, the δ-cancellation will never happen. Next we state and prove the following theorem on the threshold conditions when δ-cancellation won't happen. Theorem 3.2. Given the fractional order α ∈ (0, 1), the machine relative error δ 0 , and the parameter δ > 12δ 0 in δcancellation, let θ s,1 = 2δ 0 (1 − α)δ , θ s,2 = 6δ 0 (1 − α)δ 1 2 , θ f,1 = 4δ 0 δ , θ f,2 = 12δ 0 δ 1 2 . If the threshold condition τ j /(t k − t j−1 ) ≥ θ s,i (3.7) holds, then the subtraction in I i in (2.7) is not δ-cancellation for i = 1, 2. Similarly, if the threshold condition θ τ k−1 ≥ θ f,i (3.8) holds, then the subtraction in J i in (2.13) is not δ-cancellation for i = 1, 2. Proof. I 1 case: Let x 1 = 1 and y 1 = (1 − θ) 1−α be the two terms in I 1 where θ = τ j /(t k − t j−1 ) < 1. Since the relative errors δ x1 and δ y1 of float-point numbersx 1 andỹ 1 stored in machine are less than δ 0 , we have |x 1 δ x1 | + |y 1 δ y1 | |x 1 − y 1 | ≤ |x 1 | + |y 1 | |x 1 − y 1 | δ 0 < δ 0 1 + (1 − θ) 1−α (1 − α)θ < 2δ 0 (1 − α)θ s,1 = δ, when θ ≥ θ s,1 . I 2 case: Let x 2 = (2 − α)θ + (1 − θ) 2−α and y 2 = 1 be the two terms in I 2 . Since the relative errors δ x2 and δ y2 of float-point numbersx 2 andỹ 2 stored in machine are less than δ 0 , we have |x 2 δ x2 | + |y 1 δ y2 | |x 2 − y 2 | ≤ |x 2 | + |y 2 | |x 2 − y 2 | δ 0 ≤ 2 (2 − α)θ + (1 − θ) 2−α + 1 (1 − α)θ 2 δ 0 < 6δ 0 (1 − α)θ 2 s,2 = δ, when θ ≥ θ s,2 . Here we use the fact that (2 − α)θ + (1 − θ) 2−α increases w.r.t. θ ∈ (0, 1). J 1 case: Let x 3 = 1 and y 3 = e −θ τ k−1 be the two terms in J 1 . Since the relative errors δ x3 and δ y3 of float-point numbers x 3 andỹ 3 stored in machine are less than δ 0 , we have |x 3 δ x3 | + |y 3 δ y3 | |x 3 − y 3 | ≤ |x 3 | + |y 3 | |x 3 − y 3 | δ 0 < 2δ 0 1 − e −θ f,1 < 4δ 0 θ f,1 = δ, when θ τ k−1 ≥ θ f,1 . Here we use the facts that θ f,1 < 1 and 1 − e −θ > θ/2 for θ ∈ (0, 1). J 2 case: Let x 4 = 1 and y 4 = θ τ k−1 e −θ τ k−1 + e −θ τ k−1 be the two terms in J 2 . Since the relative errors δ x4 and δ y4 of float-point numbersx 4 andỹ 4 stored in machine are less than δ 0 , we have |x 4 δ x4 | + |y 4 δ y4 | |x 4 − y 4 | ≤ |x 4 | + |y 4 | |x 4 − y 4 | δ 0 < 2δ 0 1 − e −θ f,2 − θ f,2 e −θ f,2 < 12δ 0 θ 2 f,2 = δ, when θ τ k−1 ≥ θ f,2 . Here we use the facts that θ f,2 < 1 and 1 − e −θ − θe −θ > θ 2 /6 for θ ∈ (0, 1). If the threshold condition (3.7) or (3.8) is not satisfied, the truncated Taylor expansion formulas, i.e.Î i andĴ i , can approximate I i and J i properly. Theorem 3.3. If τ j /(t k − t j−1 ) ≤ θ s,i and θ τ k−1 ≤ θ f,i for i = 1, 2, the relative error δI i of I i in (2.7) andÎ i in (3.4), and the relative error δJ i of J i in (2.13) andĴ i in (3.6) satisfy |δI i | < θ Mi , |δJ i | < (θ τ k−1 ) Ni , i = 1, 2, where θ = τ j /(t k − t j−1 ). Proof. From (3.3)-(3.6), we have |I1 −Î1| = − (t k − tj−1) 1−α ∞ m=M 1 +1 1 − α m (−θ) m = θ M 1 − (t k − tj−1) 1−α ∞ m=1 1 − α m + M1 (−θ) m < θ M 1 − (t k − tj−1) 1−α ∞ m=1 1 − α m (−θ) m = θ M 1 I1, |I2 −Î2| = (t k − tj−1) 2−α ∞ m=M 2 +2 2 − α m (−θ) m = θ M 2 (t k − tj−1) 2−α ∞ m=2 2 − α m + M2 (−θ) m < θ M 2 (t k − tj−1) 2−α ∞ m=2 2 − α m (−θ) m = θ M 2 I2, |J1 −Ĵ1| = e −θ τ k−1 ∞ m=N 1 +1 (θ τ k−1 ) m m! 
= (θ τ k−1 ) N 1 e −θ τ k−1 ∞ m=1 (θ τ k−1 ) m (m + N1)! < (θ τ k−1 ) N 1 e −θ τ k−1 ∞ m=1 (θ τ k−1 ) m m! = (θ τ k−1 ) N 1 J1, |J2 −Ĵ2| = e −θ τ k−1 ∞ m=N 2 +2 (θ τ k−1 ) m m! = (θ τ k−1 ) N 2 e −θ τ k−1 ∞ m=2 (θ τ k−1 ) m (m + N2)! < (θ τ k−1 ) N 2 e −θ τ k−1 ∞ m=2 (θ τ k−1 ) m m! = (θ τ k−1 ) N 2 J2. Then the relative errors satisfy |δI i | < θ Mi , |δJ i | < (θ τ k−1 ) Ni , i = 1, 2. We can select M i = log θ δ 0 , N i = log θ τ k−1 δ 0 , to ensure θ Mi ≤ δ 0 and (θ τ k−1 ) Ni ≤ δ 0 . Numerical experiments In this section, we give some numerical tests to illustrate the accuracy and efficiency of our TCTE method. All experiments are implemented on a computer with 3.60GHz, Intel-Core i9-9900K in Matlab with 64 bits in double precision, where the machine error is δ 0 = 2 −52 ≈ 2.22e- 16. In the following content, we set by default θ s,1 = θ f,1 = 10 −4 , θ s,2 = θ f,2 = 10 −2 in Algorithm 1-2. Based on Theorem 3.2-3.3, the relative errors of I 1 , I 2 , J 1 , J 2 in the computation of L2 coefficients using the TCTE method satisfy |δI 1 | < 4.44e-12 1 − α , |δI 2 | < 1.332e-11 1 − α , |δJ 1 | < 8.88e-12, |δJ 2 | < 2.664e-11. We focus on the following linear subdiffusion equation: ∂ α t u(t, x) = ∆u(t, x) + f (t, x), (t, x) ∈ (0, T ] × Ω, u(t, x) = 0, (t, x) ∈ (0, T ] × ∂Ω, u(0, x) = u 0 (x), x ∈ Ω (4.1) by using the standard and fast L2 implicit schemes. The graded mesh [26] with grading parameter r ≥ 1 is used t j = j N r T, τ j = t j − t j−1 = j N r − j − 1 N r T, where N is the number of time steps. Input the parameters α, θ s,1 , θ s,2 , τ j , τ j+1 , t j and t k . Compute θ = τ j t k − t j + τ j . If θ ≤ θ s,1 M 1 = log θ δ 0 , I 1 = −(t k − t j + τ j ) 1−α M1 m=1 1 − α m (−θ) m , else I 1 = (t k − t j + τ j ) 1−α 1 − (1 − θ) 1−α . end If θ ≤ θ s,2 M 2 = log θ δ 0 , I 2 = (t k − t j + τ j ) 2−α M2+1 m=2 2 − α m (−θ) m , else I 2 = (t k − t j + τ j ) 2−α (2 − α)θ + (1 − θ) 2−α − 1 . end Compute and output a (k) j = − (2 − α)τ j+1 I 1 + 2I 2 (2 − α)(1 − α)τ j (τ j + τ j+1 ) ,c (k) j = I 1 (1 − α)τ j+1 . Algorithm 2 Compute a k, k−1 ,c k, k−1 in equation (2.12) for fast L2 method. Input the parameters θ f,1 , θ f,2 , θ , τ k−1 and τ k . 1 and f (t, x, y) = (Γ(1 + α) + 2π 2 t α ) sin(πx) sin(πy), whose exact solution is u(t, x, y) = t α sin(πx) sin(πy). If θ τ k−1 ≤ θ f,1 N 1 = log θ τ k−1 δ 0 , J 1 = e −θ τ k−1 N1 m=1 (θ τ k−1 ) m m! , else J 1 = 1 − e −θ τ k−1 . end If θ τ k−1 ≤ θ f,2 N 2 = log θ τ k−1 δ 0 , J 2 = e −θ τ k−1 N2+1 m=2 (θ τ k−1 ) m m! , else J 2 = 1 − θ τ k−1 e −θ τ k−1 − e −θ τ k−1 . end Compute and output a k, k−1 = − e −θ τ k (θ τ k J 1 + 2J 2 ) τ k−1 (τ k−1 + τ k )(θ ) 2 ,c k, k−1 = e −θ τ k J 1 θ τ k . In this example, the spectral collocation method [28,25] is applied in space with 20 2 Chebyshev-Gauss-Lobatto points. We set the final time of SOE T soe = T , the SOE tolerance ε = 1e-12, and the cut-off time ∆t = τ 2 ; see the SOE approximation [5,17]. Failure of direct computation. We first show that the roundoff problem can result in completely wrong results. We test the standard L2 scheme and the fast L2 scheme for different numbers of time steps, using directly the explicit formulas of a (k) j ,c (k) j in (2.6) and a k, k−1 ,c k, k−1 in (2.12). This is done by setting the thresholds θ s,1 = θ s,2 = θ f,1 = θ f,2 = 0 in Algorithm 1-2. In Table 1, it is observed that the maximum error, i.e. 
err max := max 1≤k≤N u(t k ) − u k with · the L 2 -norm in Ω, can go up to 6.0231e+7 for standard L2 scheme and to 4.6860 for fast L2 scheme, implying that the direct computation is unreliable. Feasibility of Taylor-expansion-based method. Now, we use Algorithm 1-2 with the aforementioned default settings, to compute the coefficients a We show the differences of pointwise L 2 -errors, err GK t k − err TCTE t k , between using the standard L2-Gauss and standard L2-TCTE methods (the left-hand side of Figure 3) and the fast L2-Gauss and fast L2-TCTE methods (the right-hand side of Figure 3). It is observed that the differences are almost the machine error, which verifies the accuracy of our method. However, we shall mention that the Gauss-Kronrod quadrature method could be much more expensive than our TCTE method, which will be discussed in the next subsection. High efficiency of TCTE method = Γ(1 + α)(x 2 − 1)(y 2 − 1) − 2t α (x 2 + y 2 − 2) , whose exact solution is u(t, x, y) = t α (x 2 − 1)(y 2 − 1). In this example, we set the final time of SOE T soe = T , the SOE tolerance ε = 1e-12, the cut-off time ∆t = τ 2 in SOE approximations; see [5,17]. Since the exact solution is polynomial of degree 2 in x and y direction, we use the spectral collocation method in space with 5 2 Chebyshev-Gauss-Lobatto points so that the error caused by space discretization is negligible. Maximum error and L 2 -error at final time T = 10 and CPU time (in seconds) for the standard L2-Gauss, standard L2-TCTE, fast L2-Gauss and fast L2-TCTE methods are given in Table 2, where α = 0.6 and r = (3 − α)/α. We observe that the CPU time of the fast L2 scheme increases approximately linearly w.r.t. N , while the CPU time of the standard L2 scheme increases approximately quadratically. Optimal convergence rate 3 − α is achieved, which is consistent with the convergence result [10,Theorem 5.2]. For a fixed N , the errors of the standard L2-Gauss and L2-TCTE methods are almost the same in sense that their differences are approximately the machine error. Influenced by the chosen SOE tolerance error ε = 1e-12, the differences of L 2 -errors between the standard and fast methods are about ε. As one can see, when N = 32000, the standard L2-TCTE method about 91 times faster than the standard L2-Gauss method, while the fast L2-TCTE method is about 308 times faster than the fast L2-Gauss method in this example. Long time simulation For simplicity, we still consider Example 4.2 but replacing the final time T = 10 with T = 1000. We set the final time of SOE T soe = T , the SOE tolerance ε = 1e-14, the SOE cut-off time ∆t = τ 2 , and the grading parameter r = (3 − α)/α. Still, 5 2 Chebyshev-Gauss-Lobatto points are used in the spectral collocation method for spatial discretization. The fast L2-TCTE method is adopted for this long time simulation. In Table 3, the maximum L 2 -errors, the final-time L 2 -errors, the convergence rates, and the CPU times are reported. We can observe that the convergence rates are optimal and what's more important, the CUP time is not large even for very large number of time steps (N is larger than one hundred of thousands). To the best of knowledge, there are no existing simulations in literatures with more than 10 6 time steps. The high efficiency of our TCTE method helps to achieve this. 
Conclusion In this work, to handle the roundoff error problems in L2-type method, we reformulate the standard and fast L2 coefficients, provide the threshold conditions on δ-cancellation, and propose the Taylor expansions to avoid δ-cancellation when the threshold conditions are not satisfied. Numerical experiments show the high accuracy and efficiency of our proposed TCTE method. In particular, the fast L2-TCTE method can complete long time simulations with hundreds of thousands of time steps in one minute. Definition 3.1 (δ-cancellation). Given any two numbers x and y, and their approximationsx andỹ, if the relative error According to Theorem 3.3,Î i has the same floating-point number with I i in machine, whileĴ i has another same floating-point number with J i . Combining Theorem 3.2-3.3, we conclude that if the threshold conditions (3.7) and (3.8) are satisfied, δ-cancellation won't happen in I i and J i ; if (3.7)and(3.8) are not satisfied, the Taylor expansionsÎ i andĴ i can be applied to compute I i and J i (usually with only several terms). We call this method the threshold conditions plus Taylor expansions (TCTE) method in this work. At the end of this part, we illustrate our TCTE methods in Algorithms 1for 1 ≤ j ≤ k − 1, and fast L2 coefficients a k, k−1 ,c k, k−1 , in (2.6) and (2.12). Remark 3.4. For the standard and fast L2-1 σ methods, the correspond quantities I i and J i are computed by replacing t k with t k − α/2τ k and e −θ τ k with e −θ (1−α/2)τ k , in Algorithms 1-2. Example 4. 1 . 1Consider the subdiffusion equation (4.1) in Ω = [−1, 1] 2 with T = . The maximum L 2 -errors for different α are showed in Figure-1 for standard and fast L2-TCTE schemes. It can be verified that the convergence rates of maximum errors are min{rα, 3 − α}, which is consistent with the global convergence result in [10, Theorem 5.2]. We show the L 2 -errors at final time T = 1, i.e. err T := u(T ) − u N for standard L2-TCTE scheme with different α and r in Figure 2, using Algorithm 1-2. The convergence rate of final-time errors is observed to be r when r < 3 − α and 3 − α when r > 3 − α, which is consistent with the pointwise-in-time convergence result in [10, Theorem 5.2]. Denote the L 2 -error at time t k by err t k := u(t k ) − u k . Example 4. 2 . 2Consider the subdiffusion equation (4.1) in Ω = [−1, 1] 2 with T = 10 and f (t, x, y) Figure 1 : 1(Example 4.1) Maximum L 2 -errors err max . Top: Standard L2-TCTE method on the graded meshes with different r and N for α = 0.4, α = 0.6, α = 0.8 from left to right. Bottom: Fast L2-TCTE method on the graded meshes with different r and N for α = 0.5, α = 0.7, α = 0.9 from left to right. Figure 2 : 2(Example 4.1) Final-time L 2 -errors err T for the standard L2-TCTE method on the graded meshes with different r and N for α = 0.4, α = 0.6, α = 0.8 from left to right. Figure 3 : 3(Example 4.1) Pointwise L 2 -error differences between the Gauss-Kronrod quadrature method and the TCTE method w.r.t. t, where T = 1 and N = 3200. Left: Standard L2 method with α = 0.6 and r = (3 − α)/0.95. Right: Fast L2 method with α = 0.7 and r = (3 − α)/α. Table 1 1: (Example 4.1) Maximum L 2 -errors err max of standard L2 scheme with α = 0.4 (top) and fast L2 scheme with α = 0.5 (bottom) for different N , where the coefficients are computed directly from explicit formulas and the grading parameter is r = 2/α. 
N = 200 N = 400 N = 800 N = 1600 N = 3200 2.0980e-1 7.9774e+1 2.4502e+4 5.4546e+6 6.0231e+7 2.02887e-4 1.2529e-1 1.3868 5.7361e-1 4.6860 4.1 Accuracy of TCTE method Table 2 : 2(Example 4.2) Maximum errors, final-time errors, and CPU times (in seconds) computed by standard L2-Gauss, standard L2-TCTE, fast L2-Gauss and fast L2-TCTE methods with α = 0.6, r = (3−α)/α and T = 10 for different numbers of time steps.N = 2000 N = 4000 N = 8000 N = 16000 N = 32000 standard L2-Gauss err max 3.8628e-7 7.3187e-8 1.3866e-8 2.6271e-9 4.9776e-10 - 2.4000 2.4000 2.4000 2.4000 err T 3.1934e-9 6.0409e-10 1.1434e-10 2.1654e-11 4.0991e-12 - 2.4023 2.4013 2.4007 2.4013 CPU time 9.5653e+2 3.8583e+3 1.5218e+4 6.0095e+4 2.4060e+5 standard L2-TCTE err max 3.8628e-7 7.3187e-8 1.3866e-8 2.6271e-9 4.9776e-10 - 2.4000 2.4000 2.4000 2.4000 err T 3.1934e-9 6.0409e-10 1.1434e-10 2.1654e-11 4.0998e-12 - 2.4023 2.4013 2.4007 2.4010 CPU time 8.5312 35.3594 147.7656 623.7969 2639.0823 fast L2-Gauss err max 3.8628e-7 7.3187e-8 1.3866e-8 2.6271e-9 4.9776e-10 - 2.4000 2.4000 2.4000 2.4000 err T 3.1935e-9 6.0417e-10 1.1442e-10 2.1731e-11 4.1806e-12 - 2.4021 2.4005 2.3966 2.3779 CPU time 134.0000 307.0156 634.1875 1322.6875 2810.9687 fast L2-TCTE err max 3.8628e-7 7.3187e-8 1.3866e-8 2.6271e-9 4.9776e-10 - 2.4000 2.4000 2.4000 2.4000 err T 3.1935e-9 6.0417e-10 1.1442e-10 2.1730e-11 4.1803e-12 - 2.4021 2.4005 2.3967 2.3780 CPU time 0.4375 0.9062 1.9688 4.2500 9.1250 Table 3 : 3(Example 4.2) Maximum errors, final-time errors, and CPU times (in seconds) computed by fast L2-TCTE method with r = (3 − α)/α and T = 1000 for different numbers of time steps.N = 8000 N = 16000 N = 32000 N = 64000 N = 128000 α = 0.4 err max 7.9773e-8 1.3157e-8 2.1702e-9 3.5795e-10 5.9040e-11 - 2.6000 2.6000 2.6000 2.6000 err T 4.3198e-11 7.0758e-12 1.1718e-12 1.9549e-13 4.2172e-14 - 2.6100 2.5941 2.5836 2.2127 CPU time 4.3906 6.8438 13.5156 27.9844 58.1406 α = 0.6 err max 2.1976e-7 4.1638e-8 7.8889e-9 1.4946e-9 2.8318e-10 - 2.4000 2.4000 2.4000 2.4000 err T 1.1247e-10 2.1314e-11 4.0756e-12 7.6765e-13 1.3773e-13 - 2.3998 2.3867 2.4085 2.4785 CPU time 2.2031 4.8281 10.1094 21.5000 46.1406 α = 0.8 err max 1.3302e-6 2.9038e-7 6.3250e-8 1.3768e-8 2.9966e-9 - 2.1956 2.1988 2.1997 2.1999 err T 2.2127e-10 4.8295e-11 1.0636e-11 2.4319e-12 5.7001e-13 - 2.1959 2.1829 2.1288 2.0930 CPU time 2.0000 4.0312 8.9219 18.7812 40.8125 A new difference scheme for the time fractional diffusion equation. A Anatoly, Alikhanov, Journal of Computational Physics. 280Anatoly A Alikhanov. A new difference scheme for the time fractional diffusion equation. Journal of Computational Physics, 280:424-438, 2015. Linear models of dissipation whose Q is almost frequency independent-II. Michele Caputo, Geophysical Journal International. 135Michele Caputo. Linear models of dissipation whose Q is almost frequency independent-II. Geophysical Journal International, 13(5):529-539, 1967. Error analysis of a second-order method on fitted meshes for a time-fractional diffusion problem. Hu Chen, Martin Stynes, Journal of Scientific Computing. 791Hu Chen and Martin Stynes. Error analysis of a second-order method on fitted meshes for a time-fractional diffusion problem. Journal of Scientific Computing, 79(1):624-647, 2019. A new fractional numerical differentiation formula to approximate the Caputo fractional derivative and its applications. Zhi-Zhong Guang-Hua Gao, Hong-Wei Sun, Zhang, Journal of Computational Physics. 259Guang-hua Gao, Zhi-zhong Sun, and Hong-wei Zhang. 
A new fractional numerical differentiation formula to approximate the Caputo fractional derivative and its applications. Journal of Computational Physics, 259:33-50, 2014. Fast evaluation of the Caputo fractional derivative and its applications to fractional diffusion equations. Shidong Jiang, Jiwei Zhang, Qian Zhang, Zhimin Zhang, Communications in Computational Physics. 213Shidong Jiang, Jiwei Zhang, Qian Zhang, and Zhimin Zhang. Fast evaluation of the Caputo fractional derivative and its applications to fractional diffusion equations. Communications in Computational Physics, 21(3):650-678, 2017. Correction of high-order BDF convolution quadrature for fractional evolution equations. Buyang Bangti Jin, Zhi Li, Zhou, SIAM Journal on Scientific Computing. 396Bangti Jin, Buyang Li, and Zhi Zhou. Correction of high-order BDF convolution quadrature for fractional evolution equations. SIAM Journal on Scientific Computing, 39(6):A3129-A3152, 2017. Subdiffusion with time-dependent coefficients: improved regularity and secondorder time stepping. Buyang Bangti Jin, Zhi Li, Zhou, Numerische Mathematik. 1454Bangti Jin, Buyang Li, and Zhi Zhou. Subdiffusion with time-dependent coefficients: improved regularity and second- order time stepping. Numerische Mathematik, 145(4):883-913, 2020. A fast direct method for block triangular Toeplitz-like with tri-diagonal block systems from time-fractional partial differential equations. Rihuan Ke, K Michael, Hai-Wei Ng, Sun, Journal of Computational Physics. 303Rihuan Ke, Michael K Ng, and Hai-Wei Sun. A fast direct method for block triangular Toeplitz-like with tri-diagonal block systems from time-fractional partial differential equations. Journal of Computational Physics, 303:203-211, 2015. Error analysis of the L1 method on graded and uniform meshes for a fractional-derivative problem in two and three dimensions. Natalia Kopteva, Mathematics of Computation. 88319Natalia Kopteva. Error analysis of the L1 method on graded and uniform meshes for a fractional-derivative problem in two and three dimensions. Mathematics of Computation, 88(319):2135-2155, 2019. Error analysis of an L2-type method on graded meshes for a fractional-order parabolic problem. Natalia Kopteva, Mathematics of Computation. 90327Natalia Kopteva. Error analysis of an L2-type method on graded meshes for a fractional-order parabolic problem. Mathematics of Computation, 90(327):19-40, 2021. Error analysis for a fractional-derivative parabolic problem on quasi-graded meshes using barrier functions. Natalia Kopteva, Xiangyun Meng, SIAM Journal on Numerical Analysis. 582Natalia Kopteva and Xiangyun Meng. Error analysis for a fractional-derivative parabolic problem on quasi-graded meshes using barrier functions. SIAM Journal on Numerical Analysis, 58(2):1217-1238, 2020. The accuracy and stability of an implicit solution method for the fractional diffusion equation. Tam Langlands, Bruce, Henry, Journal of Computational Physics. 2052TAM Langlands and Bruce I Henry. The accuracy and stability of an implicit solution method for the fractional diffusion equation. Journal of Computational Physics, 205(2):719-736, 2005. A second-order fast compact scheme with unequal time-steps for subdiffusion problems. Xin Li, Hong-Lin Liao, Luming Zhang, Numerical Algorithms. 863Xin Li, Hong-lin Liao, and Luming Zhang. A second-order fast compact scheme with unequal time-steps for subdiffusion problems. Numerical Algorithms, 86(3):1011-1039, 2021. 
Sharp error estimate of the nonuniform L1 formula for linear reactionsubdiffusion equations. Dongfang Hong-Lin Liao, Jiwei Li, Zhang, SIAM Journal on Numerical Analysis. 562Hong-lin Liao, Dongfang Li, and Jiwei Zhang. Sharp error estimate of the nonuniform L1 formula for linear reaction- subdiffusion equations. SIAM Journal on Numerical Analysis, 56(2):1112-1133, 2018. A Discrete Grönwall Inequality with Applications to Numerical Schemes for Subdiffusion Problems. Hong-Lin Liao, William Mclean, Jiwei Zhang, SIAM Journal on Numerical Analysis. 571Hong-lin Liao, William McLean, and Jiwei Zhang. A Discrete Grönwall Inequality with Applications to Numerical Schemes for Subdiffusion Problems. SIAM Journal on Numerical Analysis, 57(1):218-237, 2019. A second-order scheme with nonuniform time steps for a linear reaction-subdiffusion problem. Hong-Lin Liao, William Mclean, Jiwei Zhang, Communications in Computational Physics. 302Hong-Lin Liao, William McLean, and Jiwei Zhang. A second-order scheme with nonuniform time steps for a linear reaction-subdiffusion problem. Communications in Computational Physics, 30(2):567-601, 2021. Unconditional convergence of a fast two-level linearized algorithm for semilinear subdiffusion equations. Yonggui Hong-Lin Liao, Jiwei Yan, Zhang, Journal of Scientific Computing. 801Hong-lin Liao, Yonggui Yan, and Jiwei Zhang. Unconditional convergence of a fast two-level linearized algorithm for semilinear subdiffusion equations. Journal of Scientific Computing, 80(1):1-25, 2019. Unconditionally optimal H 1 -error estimate of a fast nonuniform L2-1σ scheme for nonlinear subdiffusion equations. Nan Liu, Yanping Chen, Jiwei Zhang, Yanmin Zhao, Numerical Algorithms. Nan Liu, Yanping Chen, Jiwei Zhang, and Yanmin Zhao. Unconditionally optimal H 1 -error estimate of a fast nonuniform L2-1σ scheme for nonlinear subdiffusion equations. Numerical Algorithms, pages 1-23, 2022. Error analysis of a high order method for time-fractional diffusion equations. Chunwan Lv, Chuanju Xu, SIAM Journal on Scientific Computing. 385Chunwan Lv and Chuanju Xu. Error analysis of a high order method for time-fractional diffusion equations. SIAM Journal on Scientific Computing, 38(5):A2699-A2724, 2016. A discontinuous Petrov-Galerkin method for timefractional diffusion equations. Kassem Mustapha, Basheer Abdallah, Khaled M Furati, SIAM Journal on Numerical Analysis. 525Kassem Mustapha, Basheer Abdallah, and Khaled M Furati. A discontinuous Petrov-Galerkin method for time- fractional diffusion equations. SIAM Journal on Numerical Analysis, 52(5):2512-2529, 2014. Energy stable L2 schemes for time-fractional phase-field equations. Chaoyu Quan, Boyi Wang, Journal of Computational Physics. 458111085Chaoyu Quan and Boyi Wang. Energy stable L2 schemes for time-fractional phase-field equations. Journal of Compu- tational Physics, 458:111085, 2022. H 1 -stability of an L2-type method on general nonuniform meshes for subdiffusion equation. Chaoyu Quan, Xu Wu, arXiv:2205.06060arXiv preprintChaoyu Quan and Xu Wu. H 1 -stability of an L2-type method on general nonuniform meshes for subdiffusion equation. arXiv preprint arXiv:2205.06060, 2022. On stability and convergence of L2-1 σ method on general nonuniform meshes for subdiffusion equation. Chaoyu Quan, Xu Wu, arXiv:2208.01384arXiv preprintChaoyu Quan and Xu Wu. On stability and convergence of L2-1 σ method on general nonuniform meshes for subdiffusion equation. arXiv preprint arXiv:2208.01384, 2022. 
Long time H 1 -stability of fast L2-1 σ method on general nonuniform meshes for subdiffusion equations. Chaoyu Quan, Xu Wu, Jiang Yang, arXiv:2212.00453arXiv preprintChaoyu Quan, Xu Wu, and Jiang Yang. Long time H 1 -stability of fast L2-1 σ method on general nonuniform meshes for subdiffusion equations. arXiv preprint arXiv:2212.00453, 2022. Jie Shen, Tao Tang, Li-Lian Wang, Spectral methods: algorithms, analysis and applications. Springer Science & Business Media41Jie Shen, Tao Tang, and Li-Lian Wang. Spectral methods: algorithms, analysis and applications, volume 41. Springer Science & Business Media, 2011. Error analysis of a finite difference method on graded meshes for a time-fractional diffusion equation. Martin Stynes, Eugene O&apos;riordan, José Luis Gracia, SIAM Journal on Numerical Analysis. 552Martin Stynes, Eugene O'Riordan, and José Luis Gracia. Error analysis of a finite difference method on graded meshes for a time-fractional diffusion equation. SIAM Journal on Numerical Analysis, 55(2):1057-1079, 2017. A fully discrete difference scheme for a diffusion-wave system. Xiaonan Zhi-Zhong Sun, Wu, Applied Numerical Mathematics. 562Zhi-zhong Sun and Xiaonan Wu. A fully discrete difference scheme for a diffusion-wave system. Applied Numerical Mathematics, 56(2):193-209, 2006. Lloyd N Trefethen, Spectral methods in MATLAB. SIAM. Lloyd N Trefethen. Spectral methods in MATLAB. SIAM, 2000. A fast high order method for the time-fractional diffusion equation. Hongyi Zhu, Chuanju Xu, SIAM Journal on Numerical Analysis. 576Hongyi Zhu and Chuanju Xu. A fast high order method for the time-fractional diffusion equation. SIAM Journal on Numerical Analysis, 57(6):2829-2849, 2019.
[]
[ "State Preparation on Quantum Computers via Quantum Steering", "State Preparation on Quantum Computers via Quantum Steering" ]
[ "Daniel Volya \nUniversity of Florida\nGainesvilleFloridaUSA\n", "Prabhat Mishra \nUniversity of Florida\nGainesvilleFloridaUSA\n" ]
[ "University of Florida\nGainesvilleFloridaUSA", "University of Florida\nGainesvilleFloridaUSA" ]
[]
One of the major components for realizing quantum computers is the ability to initialize the computer to a known fiducial state, also known as state preparation. While there are promising state initialization approaches based on passive as well as active reset, they either introduce unacceptable overhead for large quantum systems or are unable to prepare an arbitrary quantum state. We demonstrate a state preparation method via the novel measurement-induced steering protocol on digital quantum computers. Arbitrary quantum states are prepared by applying quantum circuits that exploit the back-action caused by measuring part of an entangled state. By delegating ancilla qubits and systems qubits, the initial states are prepared by repeatedly performing the following steps: (1) executing a designated system-ancilla entangling circuit, (2) measuring the ancilla qubits, and (3) re-initializing ancilla qubits to known states through active reset. While the ancilla qubits are measured and reinitialized to known states, the system qubits are steered from arbitrary initial states to desired final states. We show results of the method by preparing arbitrary qubit states and arbitrary qutrit (three-level) states on contemporary, cloud-accessible, quantum computers. We also demonstrate that the state convergence can be accelerated by utilizing the readouts of the ancilla qubits to guide the protocol in a non-blind manner.
null
[ "https://export.arxiv.org/pdf/2302.13518v2.pdf" ]
257,219,268
2302.13518
f00cf0028423b9846fe64a33e9678ab8bd240c47
State Preparation on Quantum Computers via Quantum Steering Daniel Volya University of Florida GainesvilleFloridaUSA Prabhat Mishra University of Florida GainesvilleFloridaUSA State Preparation on Quantum Computers via Quantum Steering (Dated: March 28, 2023) One of the major components for realizing quantum computers is the ability to initialize the computer to a known fiducial state, also known as state preparation. While there are promising state initialization approaches based on passive as well as active reset, they either introduce unacceptable overhead for large quantum systems or are unable to prepare an arbitrary quantum state. We demonstrate a state preparation method via the novel measurement-induced steering protocol on digital quantum computers. Arbitrary quantum states are prepared by applying quantum circuits that exploit the back-action caused by measuring part of an entangled state. By delegating ancilla qubits and systems qubits, the initial states are prepared by repeatedly performing the following steps: (1) executing a designated system-ancilla entangling circuit, (2) measuring the ancilla qubits, and (3) re-initializing ancilla qubits to known states through active reset. While the ancilla qubits are measured and reinitialized to known states, the system qubits are steered from arbitrary initial states to desired final states. We show results of the method by preparing arbitrary qubit states and arbitrary qutrit (three-level) states on contemporary, cloud-accessible, quantum computers. We also demonstrate that the state convergence can be accelerated by utilizing the readouts of the ancilla qubits to guide the protocol in a non-blind manner. I. INTRODUCTION One of the primary requirements in quantum computing is the ability to prepare an arbitrary quantum state [1,2]. Traditionally, this requirement is fulfilled by: (1) initializing the quantum computer to a known fiducial state (|0 ⊗n ) of n-qubits, and (2) applying a series of discrete quantum gates to the known state to obtain a desired final state (|ψ ⊕ = U |0 ⊗n ) [3]. Initialization of a fiducial state is commonly achieved by waiting for the system to thermalize to the ground state (passive reset) -with the waiting time roughly correlated to T 1 coherence times [4,5]. Although the waiting time for qubits to thermalize is feasible for today's contemporary quantum computers, as technology improves and the coherence times of large collection of qubits increases, the waiting time will dominate in comparison to the program duration. Simply letting qubits equilibrate with their environment is not an option. To avoid passively waiting for a qubit reset, recent efforts investigate active reset such as through projective measurements [6,7]. In reality, a desired state may not be an eigenstate of a measurement operator and thus leads to probabilistic outcomes. Therefore, when the state of the qubits is collapsed via measurement, singlequbit rotations are applied to correct the state based on the readout outcomes [7]. However, such an approach faces two main challenges: first, measurement itself can be a long and error-prone operation depending on the underlying technology [8,9]; and secondly, the correction to the post-measurement state introduces significant overhead as measurement-outcomes need to be classically processed for each qubit. 
Alternative strategies for initializing fiducial states have been proposed and realized by algorithmically transferring entropy from some qubits to others, or outside the system to an environment -resulting in a cooling effect [10]. In the reversible case, unitary quantum gates are applied to cool some qubits while heating up others [11]. In the irreversible case, heat is transferred to the environment via quantum operations (i.e., with measurement). These are referred as reversible algorithmic cooling [12] and heat-bath algorithmic cooling [13,14], respectively. Both methods utilize the properties of entangled states to cool qubits to simple pure quantum states. However, for real world open quantum systems undergoing non-Markovian dynamics [15,16], a successful state reset implies not only purification, but also erasure of initial correlations between qubits and the environment [17][18][19]. Furthermore, passive reset, active reset, and strategies based on algorithmic cooling are not applicable for scenarios when we need to initialize to an arbitrary (nonfiducial) state. To further prepare an arbitrary state from the initial fiducial state requires careful calibration of quantum gates, as well as extreme fine-tuning on large quantum computers to guarantee an appropriate fidelity. In this paper, we explore an alternative approach that reduces the number of qubits that undergo active resets, lowers the classical processing involved during quantum computation for correction, and can prepare arbitrary states |ψ ⊕ without having to first prepare a known initial state. Historically, an important yet perplexing feature of quantum mechanics is the apparent non-local correlations (or entanglement) between distant particles. Schrödinger famously introduced the term quantum steering in concern of the ability to remotely steer a particle's state through measurements on another particle that is entangled with it [20,21]. The notion of quantum steering has been extended to include continuous variable systems [22], where one party can steer another by performing Gaussian operations on their shared state. This was later generalized to include arbitrary states and measurements [23], and then to the case of more than two parties [24]. Experimental demonstration of this ef- FIG. 1. The measurement-induced steering protocol conceptually consists of (a) passively steering a system (qubit or qutrit) to an arbitrary state via coupling to an ancilla qubit that is exposed to an environment for measurement and simple state reset (i.e., to |0 ). A specifically chosen unitary operator U (J), parameterized by an arbitrary coupling strength J, acts upon the system-ancilla. By repeatedly applying the unitary and measuring the ancilla, a back-action is induced on the system whereby the average of all readout outcomes steer the system to a desired state. Instead of averaging the measurement readouts (b) non-blind passive steering processes the readouts on a classical computer to accelerate the convergence of the system state. We experimentally realize the protocol on IBM's superconducting quantum computers, such as ibm perth with the device connectivity graph shown in (c). To select our system qubit (qutrit), we choose the transmon with the best measurement discrimination between the computational states. The ancilla qubit is then selected as nearest neighbor given by the device connectivity. 
The Bloch sphere (d) shows the results of passive steering a system qubit on ibm perth to prepare an equal superposition state (shown as yellow dots) where the initial states are arbitrary (shown as black dots). fect was performed with two photons [25,26] and later extended to continuous variable systems [27][28][29][30]. Recently, quantum steering has been creatively exploited in developing a protocol for preparing arbitrary quantum states irrespective of their initial (mixed) state [31]. The protocol consists of a repetition of simple steps: 1. a fixed unitary operation U couples ancilla qubits and system qubits; 2. a measurement is conducted on the ancilla qubits, decoupling it from the system qubits; 3. the ancilla qubits are reinitialized to known simple states. As the ancilla qubits are measured, the back-action on the system qubits steers them from arbitrary (unknown) mixed states to a desired final state. The protocol has been theoretically analyzed in preparing a two-qubit system to arbitrary (mixed or pure) states [32], noting the strength of the entangling operator U for preparing classical, discorded, or entangled target states. Subsequently, the protocol's rate of convergence was studied in preparing an arbitrary qubit state where it is remarked that significant speedup can be achieved with slight compromise to the fidelity of the final target state [33]. Extensions to the protocol have been proposed where instead of ignoring the results of measurement, the ancilla readouts are utilized to perform online decisions in navigating the Hilbert space [34]. By utilizing the readouts, the protocol's convergence may be improved and the entangling operation U may be changed via a feedback mechanism without collapsing the system state. The variations of the protocol can be summarized as follows: (a) blind passive steering where readouts are ignored and U remains fixed, (b) non-blind passive steering where readouts are utilized and U remains fixed, (c) blind active steering where readouts are ignored and U is changed with each iteration, and (d) non-blind active steering where readouts are utilized and U is changed with each iteration. In this paper, we experimentally implement the (non-)blind passive steering protocol on contemporary cloudaccessible quantum computers by delegating ancilla and system qubits (qutrits) that undergo N -repetitions of an operation U implemented as a digital quantum circuit. After repeating the protocol steps, we show that the state of the system approaches a desired state. In summary, we make the following major contributions. • We realize measurement-induced steering [31] for arbitrary state preparation on physical quantum computers. • We develop quantum circuits to implement the steering protocol with primary focus on a qubitqubit coupled system (an ancilla qubit to steer a qubit) and a qubit-qutrit coupled system (an ancilla qubit to steer a qutrit.) • We also investigate the non-blind approach, where instead of disregarding the measurement results, we take advantage of the measurement readouts to accelerate the convergence. • Furthermore, we show that the quantum steering operator can be divided into local and non-local operations using Cartan decomposition [35,36]. The non-local operations convey the strength of the entanglement necessary to perform quantum steering. Furthermore, this decomposition can be viewed as a graphical representation for a qubit-qubit coupled system, providing visualization for non-local operations. 
Figure 1 conceptually summarizes the quantum steering protocol and shows an overview of mapping the protocol to a cloud-accessible quantum computer. II. DIGITAL IMPLEMENTATION OF MIQS The goal of the measurement-induced quantum steering (MIQS) protocol is to prepare a desired target state Solve for the nullspace: |ψS ⊥ = null(S) 5 Prepare U: 6 Find operators that connect to orthogonal spaces 7 for k = 1 to dim(|ψS ⊥ ) do 8 O k D = |ψD ⊥ ψD| 9 Ω k S = |ψS ψS| ⊥ k 10 H = k O k D |ψD ψD| ⊗ Ω k S + h.c.. 11 Solve for U = exp(−iJHδt) 12 Done |ψ ⊕ , irrespective of the initial state. This is achieved by exploiting the back-action caused by measuring part of an entangled system, steering our system to the target state. In this section, we first provide a review of the formal specification of the MIQS protocol. Next, we describe the circuit implementation of the MIQS protocol, focusing on steering a qubit and a qutrit, providing quantum circuits that satisfy the steering conditions. Finally, we explore the properties of the generated circuits. A. Formulation of MIQS Protocol Suppose we have a system of ancilla qubits initialized to the state |ψ A (density matrix ρ A ) and system qubits in an arbitrary state ρ S . The general MIQS protocol involves the following steps: 1. Couple the ancilla qubits and system qubits with a composite unitary operator U . The state of the ancilla-system after the n-th application of the unitary evolution is ρ n+1 A−S = U ρ A ⊗ ρ n S U † . 2. The ancilla qubits are then decoupled from the system, giving the density state of the system as: ρ n+1 S = Tr A ρ n+1 A−S = Tr A U ρ A ⊗ ρ n S U †(1) 3. The ancilla qubits are reinitialized to their initial states and the steps are repeated. The goal is to steer the system state to a desired target state |ψ ⊕ (ρ ⊕ ). The dynamics of U should be chosen such that the following steering inequality is satisfied: ψ S⊕ | ρ n+1 S |ψ ⊕ ≥ ψ ⊕ | ρ n S |ψ ⊕ .(2) In other words, with each repetition of the steps in the MIQS protocol, the state of our system should get closer to our desired pure target state |ψ ⊕ . The general theory under which Equation 2 will be satisfied is derived in [31]. In brevity, if the quantum dynamics is given as the time evolution U = exp(−iHδt) of a Hamiltonian H, then for H to satisfy Equation 2 it has the following form n = 1 n = 2 n = N |0⟩ |0⟩ . . . |0⟩ . . . A: |0⟩ U U U |0⟩ S: ρ S |ψ S⊕ ⟩ (a) Starting from an unknown initial state ρ S , the system qubit S is steered via a repeated application of an ancilla-system entanglement operation U A−S , followed by measurements and active resets of the ancilla qubit A. After N applications, the system qubit arrives to a target state |ψ S⊕ . U = σ x σ z σ y σ y H H H y H y R z (J) H y R z (−J) H y (b) A Pauli-H = n O (n) A |ψ A ψ A | ⊗ Ω (n) S + h.c.,(3) where n labels the ancilla qubits. The Hamiltonian consists of direct product of operators O (n) A that rotate the ancillas from their initial state to an orthogonal subspace and operators Ω (n) S that rotate the system to an orthogonal subspace. Algorithm 1 summarizes the steps to find an operator U that satisfies the steering protocol. Lines 1-4 compute an orthogonal subspace of our target state, |ψ ⊕ ⊥ . Lines 5-10 produces the operators O Rather than physically engineering and realizing a system with the satisfactory Hamiltonian (Equation 3), we instead simulate the Hamiltonian on a quantum computer through the application of discrete unitary operators [37,38]. 
In the circuit description of quantum computing, a series of discrete unitary operators (gates) transforms the state of a quantum register (a collection of qubits.) Typically, the quantum gates operate on one or two qubits -but with a universal gate set and an appropriate circuit, any arbitrary unitary operator can be defined [39]. In the next two sections, we investigate the quantum circuits U (Line 11 in Algorithm 1) that steer qubits as well as qutrits. B. Implementation of Qubit-Qubit MIQS Protocol In this section, we derive the unitary operator U that steers a qubit to a desired state. An arbitrary target state of a qubit (excluding global phase) has the form |ψ ⊕ = cos(θ/2) |0 + e iφ sin(θ/2) |1 , (4) with 0 ≤ θ ≤ π and 0 ≤ φ < 2π. A Hamiltonian that satisfies Equation 2 is H A−S = J 2 (− cos(φ) cos(θ)σ x A σ x S − cos(φ)σ y A σ y S + sin(φ)σ y A σ x S + sin(θ)σ x A σ z S − sin(φ) cos(θ)σ x A σ y S ) (5) where J is an arbitrary coupling constant, and σ {x,y,z} u are the standard Pauli matrices acting on the individual subsystem u. Assuming the standard computational basis, the matrix corresponds to H = J 2    0 0 α −β * − 0 0 −β + −α α −β * + 0 0 −β − −α 0 0    (6) with α = sin θ and β ± = e iθ (cos θ±1). A quantum circuit that reproduces the unitary operator U = exp(−iH)(7) will essentially swap the ancilla-qubit space with the system-qubit space. In Section II D we provide the optimal quantum circuits that implements the operator with single qubit rotations and CNOT gates. However, for the remainder of this section we provide an illustrative example with a simple circuit construction. Example: A systematic method to construct the quantum circuit is to consider each Pauli string in the Hamiltonian H. As an example, consider the case when φ = 0, then Equation 5 simplifies tô H A−S = J 2 (− cos(θ) σ x A σ x S H XX + sin(θ) σ x A σ z S H XZ − σ y A σ y S H Y Y ). (8) Therefore, the unitary evolution operator is given as U A−S = exp −iĤ A−S = U XX+XZ • U Y Y ;(9) with two commuting terms U XX+XZ = exp(iαH XX − iβH XZ ),(10)U Y Y = exp i J 2 H Y Y ,(11) where α = J cos(θ) 2 and β = J sin(θ) 2 . The circuit decomposition is done in two main steps. First, the noncommuting terms in Equation 10 are decomposed using an approximation. Next, all the Pauli Hamiltonians, H XX , H XZ , and H Y Y , are decomposed to their circuit representations. A nice simplification occurs when either sin(θ) = 0 or cos(θ) = 0, leaving either U XX or U XZ terms in combination with U Y Y . This specifically occurs when the target state |ψ ⊕ = |+ = 1 √ 2 (|0 + |1 ). With θ = π/2, the Hamiltonian in Equation 8 simplifies tô H = J 2 (σ x A σ z S − σ y A σ y S ) .(12) Since the Pauli operators H XZ and H Y Y commute, we can express the evolution operator as U A,S = exp −i J 2 σ x A σ z S • exp i J 2 σ y A σ y S(13) and obtain the quantum circuit as shown in Figure 2b. The |+ state is particularly interesting due to its prevalence in quantum algorithms, primarily in preparing entangled Bell states by applying a subsequent CNOT operation. Appendix B provides an analytical analysis of steering to the |+ state. C. Implementation of Qubit-Qutrit MIQS Protocol In the previous section, we show a derivation of the quantum circuit to steer a qubit to a desired state. In this section, we derive a quantum circuit to prepare an arbitrary qutrit state. Control of qutrits is typically harder to do via conventional means compared to qubits, therefore, there is additional benefit to using the MIQS protocol. 
An arbitrary qutrit state (excluding global phase) can be written in terms of four parameters as |ψ ⊕ = sin(ξ/2) cos(θ/2) |0 + e iφ01 sin(ξ/2) sin(θ/2) |1 + e iφ02 cos(ξ/2) |2 ,(14) where 0 ≤ θ, ξ ≤ π quantify the magnitude of the components of |ψ ⊕ while 0 ≤ φ 01 , φ 02 ≤ 2π describe the phases of |0 relative to |1 and |2 , respectively. A Hamiltonian that steers the qutrit will have the following form (15) where σ + is the raising operator and |ψ ⊕ ⊥ i are orthogonal states to our desired state. We note that we may rewrite the Hamiltonian in terms of σ x and σ y Paulimatrices and λ j Gell-Mann matrices, with some coupling α i,j between them. Similar to the previous section, we may take the strings consisting of Pauli and Gell-Mann terms and map them to simple building blocks for our quantum circuits. H = σ + ⊗ |ψ ⊕ ψ ⊕ | ⊥ 1 + σ + ⊗ |ψ ⊕ ψ ⊕ | ⊥ 2 + h.c. For our experimental realization of a qutrit state, we will focus on one particular state: an equal superposition as defined by |ψ ⊕ = 1 √ 3 (|0 + |1 + |2 ) .(16) We may express the orthogonal subspace as being spanned by two vectors |ψ ⊕ ⊥ 1 = 1 √ 3 (|0 + ν |1 + ν * |2 ) ,(17)|ψ ⊕ ⊥ 2 = 1 √ 3 (|0 + ν * |1 + ν |2 )(18) where ν = exp(i2π/3). Thus, a Hamiltonian that will steer the overall qutrit state to the desired target |ψ ⊕ has the following matrix form H A−S = 1 3        0 3×3 2 2 2 −1 −1 −1 −1 −1 −1 2 −1 −1 2 −1 −1 2 −1 −1 0 3×3       (19) again showing that overall operation moves both subsystems to their orthogonal subspace. D. Geometrical Considerations We have derived the quantum circuits that steer qubit and qutrit states to their respective desired states. The quantum circuits specifically entangle the ancilla and systems states such that they satisfy target state convergence given by Equation 2. This section presents the quantum circuits from a geometrical point of view, offering insight to the kinds of entanglement necessary. The machinery for providing our insight is based on the Cartan decomposition of the su(d 1 d 2 ) Lie algebra, where d 1 = 2 and d 2 = 2, 3 for the qubit or qutrit case, respectively [35,36]. Definition II.1 A Cartan decomposition of a Lie al- gebra g is defined as an orthogonal split g = k ⊕ m satis- fying [k, k] ⊂ k, [m, m] ⊂ k, [k, m] = m.(20) A Cartan subalgebra denoted by a refers to a maximal Abelian algebra within m. Picking basis elements one by one, and finding a Cartan decomposition directly through Definition II.1 is difficult in practice. Instead, partitioning the Lie algebra into k and m is done by an involution: a Lie algebra homomorphism θ : g → g, such that θ(θ(g)) = g for any g ∈ g and preserves all commutators. The involution is then used to split the Lie algebra by defining subspaces via θ(k) = k and θ(m) = −m. Cartan's classification revealed that there are only three types of decomposition for su(n). However, we utilize the decomposition given by the corresponding involution θ(g) = −g T for all g ∈ g (referred in literature as an AI type decomposition). The result of the Cartan decomposition is the ability to write any unitary operator U as U = K 1 AK 2(21) where K 1 and K 2 are elements of e ik and A ∈ e ia are elements defined by the Cartan subalgebra. It is well-known that an arbitrary operator acting on two-qubits U ∈ U (4) can be decomposed as product of a gate U ∈ SU (4) and a global phase shift e iθ . Since the global phase does not impact the underlying quantum mechanics, we focus specifically on the SU (4). 
We are particularly interested in the operations that are nonlocal, giving insight to the necessary entanglement. Such operations are then given as elements in SU (4)\SU (2) ⊗ SU (2). The Cartan decomposition of su(4), any twoqubit operation can be written as U = k 1 Ak 2(22) where k 1 , k 2 ∈ SU (2) ⊗ SU (2) and the non-local part A = exp(i/2(c 1 σ x σ x + c 2 σ y σ y + c 3 σ z σ z ) ). This representation allows separation of steering operator into local (K 1 , K 2 ) and nonlocal (A) parts. The coefficients c k ∈ [0, π] are the non-local coordinates, and contain a geometrical structure [40]. The coefficients for any possible ancilla-qubit steering operator U (J) is given by c = [J, J, 0](23)U A,S = U 3 3 U 1 3 U 4 3 U 2 3 K 3 Z X 1 2 † X J X 1 2 Z K 1 K 4 Z X 1 2 † Z J X 1 2 Z K 2 A FIG. 4. The optimized decomposition of the qubit-qubit steering operator. Ki gates are single-qubit rotations produced by the Cartan decomposition and are parameterized by θ and φ of a desired state. The non-local operator A is decomposed using two CNOT gates and local qubit rotations along X and Z axis. The circuit is further simplified by combining possible local rotations into a single qubit rotation U3 -a native arbitrary rotation gate on IBM Quantum computers. The X (J/2) and Z (J/2) gates are defined as e i π 4 J Rx(πJ/2) and e i π 4 J Rz(πJ/2) respectively. X (1/2) gate is then defined as Rx(π/2). Figure 3 displays these parameters for any ancillaqubit steering operator U on the Weyl chamber -which is the symmetry-reduced version of a cube. The point L corresponds to the gate CNOT and all gates that are locally equivalent, including the CPHASE gate. As shown, CNOT and CPHASE gates are not locally equivalent to the steering operator U . Thus, despite being characterized as perfect entanglers, the CNOT and CPHASE gates do not satisfy the steering conditions and in fact are unital operators on the qubit. Therefore, capability of the steering operator to create entanglement between qubit and ancilla is a necessary but not sufficient condition to steer the qubit. Digital quantum computers, fortunately, allow for implementation of arbitrary unitary operations that satisfy the non-local criteria. Figure 4 is the optimal circuit given by the Cartan decomposition for the ancilla-qubit steering operator which we execute on digital quantum computers. E. Rapid Reset via Measurement Readouts In our current description of the protocol, the results of measuring the ancilla qubits are discarded -i.e. blind passive steering. Effectively, by averaging all possibilities of readout outcomes, the state of our system converges to a desired state. This is advantageous as, in general, classical processing of data is not required avoiding additional overhead. However, by utilizing the readout results of the ancilla qubits we can accelerate convergence of our system state. Contemporary quantum computers have the infrastructure to process readout results during the execution of a quantum circuit. Hence, we take advantage of this capability to demonstrate preparation of a desired state by utilizing readout results via the nonblind passive steering protocol. As a simple demonstration, note in Section II B that the steering operator swaps the detector and system spaces. Therefore, if the ancilla qubit has swapped to its orthogonal state (a readout of "1"), that means the system qubit has successfully swapped to the desired state. 
In general, the measurement of an ancilla qubit with a readout of "1" is given by the projection operator Π 1 = |1 A 1| A ⊗ I S .(24) The ancilla-system state after applying the steering operator U and measuring the ancilla state in "1" is ρ n+1 A−S = Π 1 U ρ n A−S U † Π 1 p 1(25) where p 1 = Tr U ρ n A−S U † Π 1 is the probability of measuring a "1". For further analysis and extensions of this idea, we refer to Reference [34]. III. EXPERIMENTS In this section, we describe the different steps followed to physically prepare states via measurement-induced quantum steering (MIQS) protocol with the superconducting transmon qubits and qutrits. A. Experimental Setup The experiments were performed using different IBM Quantum computers (accessed through IBM Cloud [41]): ibm lima, ibm belem, and ibm perth. The hardware commands are coded using Qiskit, utilizing the recent additions of mid-circuit measurements and active reset operations. Furthermore, we took advantage of Qiskit Pulse [42] -a pulse-level programming model -which allowed us to define, calibrate, and execute quantum circuits outside conventional definitions. The low-level access to the underlying quantum hardware enables processing quantum information on qutrits (three-level system), extending the concept of quantum computation on two-level systems. For most operations, we used gates calibrated by the IBM team. For each transmon, the local oscillator (LO) frequency is given by IBM's calibrated |0 → |1 frequency, which was kept fixed for the experiments. Transitions between the |1 and |2 states are achieved by using amplitudemodulated microwave pulses via sinusoidal side-band at a frequency f 12 − f 01 . This results in an effective shift of frequency for the pulses from f 01 to f 12 [43]. Appendix C shows the results of the calibration. Figure 5 represents the energy levels of the superconducting transmons architecture. The MIQS circuits are designed using a combination of: default single-qubit gates, which operate in the {|0 , |1 } subspace (01); default entangling CNOT gate; and custom calibrated single-qutrit gates, which operate on the {|1 , |2 } subspace (12). The single-qutrits gates are defined by utilizing the amplitude of the π 1→2 pulse -which we obtained via a Rabi experiment. We use the default implementation of the CNOT gate as defined by IBM Quantum. Extended to a qubit-qutrit system, it acts as a SU (2 × 3 = 6) gate with the truth table as shown in Table I. For the control qubit in the (01) subspace, it acts as a standard qubit CNOT gate but with an additional phase of π/2 to the |2 state of the target qutrit [44,45]. IBM Quantum allows the reuse of qubits through midcircuit measurements and conditional-reset. The reset is achieved by applying a not-gate conditioned on the measurement outcome of the qubit. During the execution of the MIQS protocol, the ancilla qubit is measured and subsequently reset. |12 TABLE I. Truth table for the default IBM CNOT gate where the control qubit acts on a target qutrit. The operation is implemented as two consecutive CNOT gates (more details can be found in Ref. [44]). System Ancilla Control Target Output |0 |0 |00 |0 |1 |01 |0 |2 |02 |1 |0 |10 |1 |1 |11 |1 |2 i For qubit readout, we used the 0 − 1 discriminator provided by IBM Quantum. However, this discriminator is unable to correctly identify excitations to the |2 state, misclassifying them as |1 . Therefore, to read out the qutrits, we developed our own custom 0 − 1 − 2 discriminator to classify in-phase and quadrature (IQ) points. 
For a desired system state |ψ ⊕ , we construct a batch of MIQS circuits where the total iterations (N ) of U A,S is incremented from 1 to a maximum of N . This enables us to estimate the state of the system as the number of U A,S iterations varies, and reduces the overhead due to cloud access to hardware. For each iteration N , we conduct quantum state tomography on the system qubit. The measurement results from the quantum computer are processed locally. The estimated state of the system qubit is taken as an unbiased average over all ancilla qubit outcomes (i.e., a projective measurement), and estimates of the mixed system state is computed using maximum likelihood, minimum effort method [46]. Once we are content with the results, we fix N = N which provides one MIQS circuit that faithfully prepares the state |ψ ⊕ . We repeat this process for different coupling parameters J, noting the relationship between J, numbers of iterations N , and the achieved state |ψ ⊕ fidelity. Before executing the MIQS protocol, we further verify the correctness of the steering operator U A,S through quantum process tomography (QPT). QPT is a procedure for experimentally reconstructing a complete description of a noisy quantum channel E. This is done by preparing a set of input states {|a i } and performing measurements on a set of operators {B j } to estimate probabilities p ij = Tr[B † j E(|a i a i |)]. If the input states and measurement operators span the input and output spaces respectively, then the set {p ij } reconstructs the channel E. For a n-qubit channel, the input space is constructed via tensor products of {|0 , |1 , |+ = 1 √ 2 (|0 + |1 ), |+i = 1 √ 2 (|0 + |1 )}, and the measurement space via tensor products of σ x , σ y , and σ z . Thus a total of 4 n 3 n experiments are conducted to estimate 4 2n probabilities. After reconstructing the channel U A−S through QPT, we extract the error channel by composing with the inverse of the ideal channel E = U • U −1 ideal . The error channel is converted to the Pauli-transfer matrix representation R, which is strictly real. In the ideal case, R = I, the identity matrix -representing no errors. The absolute difference between the noisy reconstructed R and the ideal |R − I| is shown in Figure 7. The average gate fidelity of the reconstructed channels were F = 0.827, F = 0.877, and F = 0.846 for ibmq lima, ibmq belem, and ibm perth, respectively. While the average gate fidelities are comparable, we can see clear differences in matrix entries in Figure 7. Typically, two-qubit gates will have coherent errors due to imperfections in calibration from unwanted terms in the cross-resonance interaction Hamiltonian [47,48]. B. Evaluation of Qubit-Qubit Protocol We employed the MIQS protocol to prepare 1-qubit stabilizer states. The stabilizer states serve as a suitable unitary 3-design for the randomized benchmarking protocol. Stabilizer states can also be defined as the states that are produced by gates from the Clifford group (H, CN OT , and S gates) applied to |0 state. We express the system-qubit density state as ρ S (n) = 1 2 (I + s(n) · σ) (26) where s(n) is a three-component vector that depends on the current iteration n of the steering protocol, and σ is a vector of the Pauli matrices. The single qubit stabilizers, their vector coordinates s, and the necessary steering operator U A,S are summarized in Table II. Following Section II B, we develop the quantum circuits for each desired stabilizer state. 
We ran the experiment 30 times, with 1024 shots each, using quantum process tomography to estimate the density state of the system at each step n of the MIQS protocol. Figure 6 shows the average result, along with error bars, of running the circuit from Figure 2a to prepare |ψ ⊕ = |+ for n up to 30. The error bars indicate the decoherence associated with the system qubit. Namely, for increased n, we see an increase in uncertainty of the measured density state. We then compute the fidelity for all stabilizer states, and find their average. Figure 8a shows the average fidelity for all singlequbit stabilizer states. Furthermore, Figure 8b confirms that the steering inequality (Equation 2) is satisfied. The quantum computer ibmq perth, achieved the highest overall fidelity and stability. |ψ⊕ θ φ s UA,S |0 0 0 (0, 0, 1) exp −i J 2 (σ x A σ x S + σ y A σ y S ) |1 π 0 (0, 0, -1) exp −i J 2 (σ x A σ x S − σ y A σ y S ) |+ π 2 0 (1, 0, 0) exp −i J 2 (σ x A σ z S − σ y A σ y S ) |− π 2 π (-1, 0, 0) exp −i J 2 (σ x A σ z S + σ y A σ y S ) |i π 2 π 2 (0, 1, 0) exp −i J 2 (σ y A σ x S − σ x A σ z S ) |−i π 2 3π 2 (0, -1, 0) exp −i J 2 (σ x A σ z S − σ y A σ x S ) As noted in Section II D, the qubit-qubit operator U A,S can be characterized by the coupling parameter J. In theory, the parameter is associated with the strength of entanglement necessary. To experimentally analyze the role that J plays, we prepare the stabilizer states with varying coupling J. Figure 9a shows the fidelity of preparing the |+ state for varying J on ibmq perth. Although J = π/2 achieves the fastest convergence, it does not correspond to the highest fidelity. Figure 9b shows the average of steering all the stabilizer states as computed by F = 1 6 6 i=1 ψ i | ρ i |ψ i .(27) On average, the fidelity tends to decrease with smaller J values. Figure 10 takes that average number of repetitions (application of ancilla-system entanglement operation in Figure 2a) needed to obtain a fidelity F > 0.9 and compares it against the active steering approach. Note that we end the protocol once the readout of the ancilla is a 1. lead to the desired fidelity. For example, the leftmost bar shows that the passive quantum steering can reach the desired fidelity 10% of the time (e.g., out of 100 runs) if we apply the entanglement operation only once (n=1 in Figure 2a). C. Evaluation of Qubit-Qutrit Protocol Quantum control beyond the two-level system has been exploited in superconducting quantum processors since the beginning of this technology. Examples include utilizing the higher levels for qubit readout [49][50][51], faster qubit initialization [52], and spin-1 quantum simulation [53]. Steps towards ternary quantum computation with superconducting transmon devices have developed in the last 10 years [54][55][56][57][58][59][60][61]. Recently, these efforts have led to the implementation of high-fidelity single-qutrit gates [45,62]. Many physical devices, such as superconducting transmons, naturally have higher-energy states which are often ignored to realize qubits. However, controlling the higher-energy states can be tricky, requiring additional techniques to produce a desired evolution. Our goal is to prepare a qutrit in an arbitrary state utilizing an ancilla qubit. However, controlling qutrits can be a difficult task. There are various factors that need to be calibrated, such as frequency of the drive, amplitidue of the drive, leakage, etc. We believe MIQS can simplify initialization of a qutrit, by coupling it to a qubit. 
We demonstrate the protocol by the preparing an equal superposition qutrit state |ψ ⊕ = 1 √ 3 (|0 + |1 + |2 )(28) via a qubit-qutrit operator as defined by Equation 19. The protocol is repeated N times, where at each step n we perform qutrit quantum state tomography (see Appendix D). Figure 11 shows the estimated average fidelity at each step n on ibmq perth. In comparison with the qubit case, the qutrit fidelity has increased error as a result of: (1) measurement error for classifying the |2 state, (2) coherence time of the |2 state, (3) heightened complexity of perform full qutrit state tomography. IV. CONCLUSIONS AND OUTLOOKS A major challenge in quantum computing is efficiently preparing an initial (arbitrary) state. We experimentally demonstrate measurement-induced steering on contemporary superconducting quantum computer to prepare arbitrary qubit and qutrit states. By applying a simple repetition of gates and ancilla measurements, we generate arbitrary qubit states with fidelity 93 ± 1% and arbitrary qutrit states with fidelity 80 ± 9%. To achieve this, we generate optimal quantum circuits that implement the steering operator, and experimentally reconstruct the density states via quantum state tomography to obtain the fidelity. We explored the dependence of a tunable parameter that relates fidelity convergence with the number of repetitions of the protocol. Additionally, we noted that by taking advantage of readout outcomes, we may accelerate the convergence. Furthermore, for qutrit functionality, we calibrate qutrit gates using the pulse-level programming model Qiskit Pulse via cloud access to IBM Quantum devices. [34], with an exponential decaying count frequency (log scale). The mean number of repetitions is N passive mean ≈ 3.8. The active approach has a 2.5 times improvement compared to the passive approach with a mean repetition of N active mean ≈ 1.6. The cumulative distribution function (CDF) is also shown, further displaying the faster convergence of the active protocol. Traditionally, the fidelity of an initialized state and the fidelity of a quantum gate are considered independently. We demonstrate that by utilizing the programmability of a digital quantum processor, arbitrary quantum state can be prepared via a simple protocol of repeatedly executing the same small set of quantum gates. The success of the protocol -achieving high state initialization fidelity -depends primarily on the fidelity of the quantum gates and stability of qubits. Therefore, from a quantum engineers point of view, the task of state preparation may be considered a byproduct of achieving high gate fidelity. Additionally, we demonstrate state preparation of a qutrit, escaping the conventional notation of a binary quantum system. From a quantum technology point of view, the ability to access more quantum information in higher dimensions has direct advantages in quantum error-correcting codes, as well as asymptotic improvements in computation in comparison with binary computation. Traditional control of a qutrit introduces further engineering overhead, such as careful calibration of drive frequency, drive amplitude, and phases. From a device design standpoint, several compromises need to be made, including speed of readout versus the coherence of a qutrit. However, for the task of qutrit state preparation via steering, a specific subclass of qutrit gates is needed to prepare an arbitrary state which lowers the engineering overhead. 
We demonstrated the necessary calibrations and executions of qutrit gates on superconducting transmons to prepare an equal-superposition qutrit state. We believe this research paves a path to reliably prepare higher-dimensional quantum states on experimental platforms. Future work in utilizing steering for state preparation on experimental quantum devices consists of several challenges and possible directions: Entangled-state preparation: highly-entangled states are crucial for implementing error-correcting codes and performing quantum information processing. However, preparing an arbitrary entangled state via steering requires appropriately coupling to measurement-capable ancilla qubits. Contemporary superconducting quantum devices have restrictive device connectivity between qubits, which introduces additional overhead to transfer quantum information (i.e. via SWAP). Trapped ion quantum computers may be better suited for this task due to all-to-all coupling between qubits. Unfortunately, compared to superconducting qubits, measurement operations on trapped ion qubits are more disruptive due to stray light [63]. Assessing the feasibility of steering on various contemporary hardware platforms remains an open challenge. Device-specific measurement: it is rarely the case that measurements are conducted on a qubit directly. Instead, measurement typically observes what effect a system |ψ has on an environment. Generally, the system is coupled with an apparatus |θ to give an overall state |Ψ = U |θ ⊗ |ψ after an entangling operation U . Then a measurement is conducted on the apparatus which disentangles it from the system. For example, superconducting transmon qubits are measured through a readout resonator which couples with the transmon. A frequency shift of the resonator is observed depending on the state of the transmon [64]. Therefore, assuming an appropriate entanglement U , it is possible to utilize quantum steering to prepare arbitrary system quantum states by coupling and measuring an apparatus -thereby reducing the overall use of expensive qubits to act as ancillas. Parameterized quantum algorithms: many near-term quantum algorithms utilize parameterized quantum circuits to prepare quantum states such that an expectation value is minimized [65]. Unfortunately, parameterized circuits suffer from barren plateaus whereby a classical optimizer is unable to solve the high-dimensional non-convex optimization [66]. Quantum steering provides theoretical guarantee to state initialization, and may overcome pitfalls in traditional parameterized quantum circuits. Namely, active steering provides a feedback mechanism whereby the optimization may be aided by conducting local decisions rather than finding a global optimal directly. Steering quantum gates: certain systems contain a dark space that is spanned by several dark states. A closed (non-)adiabatic trajectory can be used to induce a unitary operator in the dark space [67,68]. In other words, the generalization of the Berry phase -a nonabelian holonomy -can be used to realize quantum gates [69]. An intriguing direction is to study the role that a steering protocol may play in realizing quantum gates via a holonomy. ACKNOWLEDGMENTS The authors gratefully thank the IBM Quantum team and the services offered through the IBM Quantum Researchers Program. The authors also acknowledge support from the National Science Foundation, Grant No. CCF-1908131. 
To analyze the recurrence relation given by Equation (1), it is helpful to utilize the theory of open quantum systems. Specifically, we may diagonalize the state of the ancilla qubits, ρ A = i p i |ψ i ψ i |, and evaluate the partial trace: ρ n+1 S = Tr A U i p i |ψ i A ψ i | A ⊗ ρ n S U † = k k| A U i p i |ψ i A ψ i | A ⊗ ρ n S U † |k A = k i A k,i ρ n S A † k,i .(A1) The Kraus operators A k,i = √ p i k| U |ψ i express the evolution of the system ρ S assuming that it is initially separable from the ancilla. In our work, we prepare the ancillas to known states, such as |ψ A = {|0 , |1 }. Therefore, we have a fixed |ψ i and need only Kraus operators A k . Hence, the evolution is ρ n+1 S = k A k ρ n A † k (A2) where k enumerates the possible measurement outcomes of the ancilla. Appendix C: Chip Characterization In this paper, we used the IBM Quantum Falcon Processors ibmq lima, ibmq belem, and ibm perth. Qiskit Pulse was used to perform pulse-level control, particularly to establish qutrit operations. In the steering protocol, the ancilla qubit must be reset to a known state. This is achieved on IBM Quantum Computers via a mid-circuit qubit active reset. We benchmark the probability of the measurement result |0 as shown in Figure 12. Applying several consecutive active resets improves the fidelity of preparing |0 . For the transmons that implements our qutrit, we first found the transition frequency f 12 . Figure 13a shows this sweep in frequency to find the excitation. We then performed a Rabi experiment to obtain the amplitude of the π 1→2 pulse to define rotations in the (12) sub-space. Figure 13b shows the result of this calibration. The measurement discriminator to classify qutrit states is shown in Figure 14, with an accuracy of 0.917. To improve the accuracy of the discriminator, measurement error mitigation was performed by correcting the average counts via a correction matrix. The matrix was generated by preparing 6 basis input states (|00 , |01 , |02 , |10 , |11 , |12 ) and computing the cor- In the qubit case, an arbitrary qubit density state ρ = 1 2 (I + a x σ x + a y σ y + a z σ z ), with real parameters a j , can be recovered by computing the expectation values of the Pauli matrices. For example the expectation value of the σ x Pauli matrix σ x = 1 2 (a x · 2) = a x (D2) yields the coefficient a x of our density state. We utilized the fact that Pauli matrices follow the identity Tr(σ α σ β ) = 2δ α,β . The expectation value of the Pauli matrices has a direct relation with the density matrix of the state. Therefore by computing the expectation values of an appropriate set of observables we can compute the density state. In general, given an observable M , we may diagonalize it by a unitary matrix U and a diagonal matrix with real entries Λ corresponding to the eigenvalues ψ| M |ψ = ψ| U † ΛU |ψ = ψ | Λ |ψ = i ψ |i i| Λ |i i|ψ = i λ i | i|ψ | 2 . (D3) Therefore, we let the quantum computer perform the operation U , and then measure in a standard basis |i . The expectation value of the observable is then recovered by multiplying the outcomes by the eigenvalues λ i . In our qutrit case, a general density state has the form ρ = 1 3 I 3 + n · λ(D4) where n = (n 1 , n 2 , . . . , n 8 ) are 8 real parameters and λ = (λ 1 , λ 2 , . . . λ 8 ) are 3 × 3 Gell-Mann matrices. Similar to the Pauli matrices, the Gell-Mann matrices satisfy the identity Tr(λ α λ β ) = 2δ α,β . Therefore, computing the expectation value λ i of a qutrit density state will uncover the coefficient n i . 
√ 2 ( 2based quantum circuit representation of the ancillasystem entangling operator U for the target state |0 ⊗ |+ = |0 ⊗ 1 |0 + |1 ). FIG. 2 . 2An overview of the quantum steering protocol. are used in constructing the Hamiltonian. Line 11 solves the time evolution of the Hamiltonian with some coupling parameter J. FIG. 3 . 3The Weyl Chamber representing coordinates of nonlocal two-qubit unitaries. All possible two-qubit steering operators U are represented by the blue line. The coordinates are given by the coupling parameter J, namely [J, J, 0]. Maximum entanglement is achieved when J = π/2, corresponding to the point A2 in the chamber. Individual points correspond to the maximum fidelity achieved when executing the steering protocol with a steering operator given by a choice of J. FIG. 5 . 5The schematic of superconducting computers that realizes our qubit-qubit and qubit-qutrit coupling. FIG. 6 .FIG. 7 . 67Each bar in the figure indicates what percentage of runs Steering experiment on three IBM Quantum (IBMQ) machines.|R − I| ibmq lima |R − I| ibmq belem |R − I| ibm perth Process Tomography of the steering circuit to prepare |+ on IBM Quantum machines. Both ibm lima and ibm belem are 5-qubit Falcon r4 (year 2020) processors with a quantum volume of 8 and 16, respectively. ibm perth is a 7-qubit Falcon r.511H (year 2021) processor with a quantum volume of 32. As indicated by the quantum volume benchmark, ibm perth qubits are expected to have higher stability and lifetime. state fidelity between ρ n and target state. FIG. 8 . 8Convergence of qubit fidelity throughout the execution of the steering protocol. (a) Depicts the estimated fidelity across three IBM quantum machines, with the best fidelity being achieved by ibm perth. (b) Shows that the steering inequality given by Equation 2 is satisfied. fidelity of preparing stabilizer states versus the number of repetitions N with different coupling strengths J. For certains values of J, the fidelity decreases at first before increasing. fidelity of steering to all stabilizer states with different coupling strengths J. The number of repetitions of the protocol (vertical dots) is optimally chosen for each J. Maximum fidelity of 93 ± 1% is observed for J = π/2 + π/8. FIG. 9 . 9Preparation of qubit stabilizer states with various coupling parameter J. The fidelity is given as an average of all stabilizer states. All experiments are performed on ibm perth. .10. Histogram of protocol repetitions (effort) for preparing stabilizer states with varying steering operators determined by coupling strength J. Passive steering exhibits a Poissonian process FIG. 11 . 11Average qutrit state fidelity between ρ n and the desired target state |ψ⊕ = 1 √ 3 (|0 + |1 + |2 ). The errors are primarily from inherent measurement error in discriminating the qutrit state, weaker T1 coherence time of the |1 → |2 subspace, and increased overhead in performing qutrit state tomography. We obtained a state fidelity of 80 ± 9%. . 12. Spread of active reset fidelity in initializing |0 on different IBM Quantum computers. Active reset measures the qubit, classically checks the readout, and then rotates the qubit if necessary. to calibrate the amplitude of the π 1→2 pulse on ibm perth. FIG. 13 . 13Frequency and amplitude sweeps to define qutrit operations in the |1 → |2 subspace of the superconducting transmon. FIG. 14 . 14Discriminator to classify the measurement results for a superconducting transmon qutrit. The accuracy of the discriminator is 0.917. 
responding probabilities of measuring counts in other basis states. Appendix D: Qutrit Quantum State Tomography TABLE II . IISingle qubit stabilizers parameterized by angles θ and φ the steering operator UA,S for the MIQS protocol. Appendix A: Operator-Sum Representation Measured Signal [arb. units] |1 → |2 Frequency Sweep4.50 4.52 4.54 Frequency [GHz] 2.5 2.6 2.7 2.8 2.9 Appendix B: Analysis of Steering to |+ The ancilla-system entanglement operator that drives the system to |+ is given by Equation13. The Kraus operators that govern system-qubit evolution are given by:By solving the operator-sum evolution given by Equation A2, we have the following recurrence relations for the components of s(n) In other words, the system state converges exponentially to our desired |+ +| = (I + σ x )/2 state with respect to the number of steps n and does not depend on the initial conditions. Furthermore, the fastest convergence is achieved in one step with J = π/2. The Initialization Problem in Quantum Computing. S Kak, 10.1023/A:1018877706849Foundations of Physics. 29267S. Kak, The Initialization Problem in Quantum Comput- ing, Foundations of Physics 29, 267 (1999). The Physical Implementation of Quantum Computation. D P Divincenzo, 10.1002/1521-3978(200009)48:9/11<771::AID-PROP771>3.0.CO;2-EFortschritte der Physik. 48771D. P. DiVincenzo, The Physical Implementation of Quantum Computation, Fortschritte der Physik 48, 771 (2000). Quantum computations: Algorithms and error correction. A Y Kitaev, 10.1070/RM1997v052n06ABEH002155Russ. Math. Surv. 521191A. Y. Kitaev, Quantum computations: Algorithms and error correction, Russ. Math. Surv. 52, 1191 (1997). Superconducting qubit in a waveguide cavity with a coherence time approaching 0.1 ms. C Rigetti, J M Gambetta, S Poletto, B L T Plourde, J M Chow, A D Córcoles, J A Smolin, S T Merkel, J R Rozen, G A Keefe, M B Rothwell, M B Ketchen, M Steffen, 10.1103/PhysRevB.86.100506Phys. Rev. B. 86100506C. Rigetti, J. M. Gambetta, S. Poletto, B. L. T. Plourde, J. M. Chow, A. D. Córcoles, J. A. Smolin, S. T. Merkel, J. R. Rozen, G. A. Keefe, M. B. Rothwell, M. B. Ketchen, and M. Steffen, Superconducting qubit in a waveguide cavity with a coherence time approaching 0.1 ms, Phys. Rev. B 86, 100506 (2012). High-Fidelity Preparation, Gates, Memory, and Readout of a Trapped-Ion Quantum Bit. T P Harty, D T C Allcock, C J Ballance, L Guidoni, H A Janacek, N M Linke, D N Stacey, D M Lucas, 10.1103/PhysRevLett.113.220501Phys. Rev. Lett. 113220501T. P. Harty, D. T. C. Allcock, C. J. Ballance, L. Guidoni, H. A. Janacek, N. M. Linke, D. N. Stacey, and D. M. Lucas, High-Fidelity Preparation, Gates, Memory, and Readout of a Trapped-Ion Quantum Bit, Phys. Rev. Lett. 113, 220501 (2014). Fundamental bounds on qubit reset. D Basilewitsch, J Fischer, D M Reich, D Sugny, C P Koch, 10.1103/PhysRevResearch.3.013110Phys. Rev. Res. 313110D. Basilewitsch, J. Fischer, D. M. Reich, D. Sugny, and C. P. Koch, Fundamental bounds on qubit reset, Phys. Rev. Res. 3, 013110 (2021). Egger, Minimum Quantum Run-Time Characterization and Calibration via Restless Measurements with Dynamic Repetition Rates. C Tornow, N Kanazawa, W E Shanks, D J , 10.1103/PhysRevApplied.17.064061Phys. Rev. Appl. 1764061C. Tornow, N. Kanazawa, W. E. Shanks, and D. J. Eg- ger, Minimum Quantum Run-Time Characterization and Calibration via Restless Measurements with Dynamic Repetition Rates, Phys. Rev. Appl. 17, 064061 (2022). Heralded State Preparation in a Superconducting Qubit. 
J E Johnson, C Macklin, D H Slichter, R Vijay, E B Weingarten, J Clarke, I Siddiqi, 10.1103/PhysRevLett.109.050506Phys. Rev. Lett. 10950506J. E. Johnson, C. Macklin, D. H. Slichter, R. Vijay, E. B. Weingarten, J. Clarke, and I. Siddiqi, Heralded State Preparation in a Superconducting Qubit, Phys. Rev. Lett. 109, 050506 (2012). Initialization by Measurement of a Superconducting Quantum Bit Circuit. D Ristè, J G Van Leeuwen, H.-S Ku, K W Lehnert, L Dicarlo, 10.1103/PhysRevLett.109.050507Phys. Rev. Lett. 10950507D. Ristè, J. G. van Leeuwen, H.-S. Ku, K. W. Lehn- ert, and L. DiCarlo, Initialization by Measurement of a Superconducting Quantum Bit Circuit, Phys. Rev. Lett. 109, 050507 (2012). D K Park, N A Rodriguez-Briones, G Feng, R R Darabad, J Baugh, R Laflamme, 10.48550/arXiv.1501.00952arxiv:arXiv:1501.00952Heat Bath Algorithmic Cooling with Spins: Review and Prospects. D. K. Park, N. A. Rodriguez-Briones, G. Feng, R. R. Darabad, J. Baugh, and R. Laflamme, Heat Bath Al- gorithmic Cooling with Spins: Review and Prospects (2015), arxiv:arXiv:1501.00952. Algorithmic cooling and scalable NMR quantum computers. P O Boykin, T Mor, V Roychowdhury, F Vatan, R Vrijen, 10.1073/pnas.241641898Proceedings of the National Academy of Sciences. 993388P. O. Boykin, T. Mor, V. Roychowdhury, F. Vatan, and R. Vrijen, Algorithmic cooling and scalable NMR quan- tum computers, Proceedings of the National Academy of Sciences 99, 3388 (2002). J M Fernandez, S Lloyd, T Mor, V Roychowdhury, 10.48550/arXiv.quant-ph/0401135arxiv:arXiv:quant-ph/0401135Algorithmic Cooling of Spins: A Practicable Method for Increasing Polarization. J. M. Fernandez, S. Lloyd, T. Mor, and V. Roy- chowdhury, Algorithmic Cooling of Spins: A Prac- ticable Method for Increasing Polarization (2004), arxiv:arXiv:quant-ph/0401135. Prospects and limitations of algorithmic cooling. G Brassard, Y Elias, T Mor, Y Weinstein, 10.1140/epjp/i2014-14258-0Eur. Phys. J. Plus. 129258G. Brassard, Y. Elias, T. Mor, and Y. Weinstein, Prospects and limitations of algorithmic cooling, Eur. Phys. J. Plus 129, 258 (2014). Heat-bath algorithmic cooling with correlated qubit-environment interactions. N A Rodríguez-Briones, J Li, X Peng, T Mor, Y Weinstein, R Laflamme, 10.1088/1367-2630/aa8fe0New J. Phys. 19113047N. A. Rodríguez-Briones, J. Li, X. Peng, T. Mor, Y. We- instein, and R. Laflamme, Heat-bath algorithmic cooling with correlated qubit-environment interactions, New J. Phys. 19, 113047 (2017). Colloquium: Non-Markovian dynamics in open quantum systems. H.-P Breuer, E.-M Laine, J Piilo, B Vacchini, 10.1103/RevModPhys.88.021002Rev. Mod. Phys. 8821002H.-P. Breuer, E.-M. Laine, J. Piilo, and B. Vacchini, Colloquium: Non-Markovian dynamics in open quantum systems, Rev. Mod. Phys. 88, 021002 (2016). Demonstration of non-Markovian process characterisation and control on a quantum processor. G A L White, C D Hill, F A Pollock, L C L Hollenberg, K Modi, 10.1038/s41467-020-20113-3Nat Commun. 116301G. A. L. White, C. D. Hill, F. A. Pollock, L. C. L. Hol- lenberg, and K. Modi, Demonstration of non-Markovian process characterisation and control on a quantum pro- cessor, Nat Commun 11, 6301 (2020). Fast reset and suppressing spontaneous emission of a superconducting qubit. M D Reed, B R Johnson, A A Houck, L Dicarlo, J M Chow, D I Schuster, L Frunzio, R J Schoelkopf, 10.1063/1.3435463Appl. Phys. Lett. 96203110M. D. Reed, B. R. Johnson, A. A. Houck, L. DiCarlo, J. M. Chow, D. I. Schuster, L. Frunzio, and R. J. 
Schoelkopf, Fast reset and suppressing spontaneous emis- sion of a superconducting qubit, Appl. Phys. Lett. 96, 203110 (2010). Demonstrating a Driven Reset Protocol for a Superconducting Qubit. K Geerlings, Z Leghtas, I M Pop, S Shankar, L Frunzio, R J Schoelkopf, M Mirrahimi, M H Devoret, 10.1103/PhysRevLett.110.120501Phys. Rev. Lett. 110120501K. Geerlings, Z. Leghtas, I. M. Pop, S. Shankar, L. Frun- zio, R. J. Schoelkopf, M. Mirrahimi, and M. H. Devoret, Demonstrating a Driven Reset Protocol for a Supercon- ducting Qubit, Phys. Rev. Lett. 110, 120501 (2013). Beating the limits with initial correlations. D Basilewitsch, R Schmidt, D Sugny, S Maniscalco, C P Koch, 10.1088/1367-2630/aa96f8New J. Phys. 19113042D. Basilewitsch, R. Schmidt, D. Sugny, S. Maniscalco, and C. P. Koch, Beating the limits with initial correla- tions, New J. Phys. 19, 113042 (2017). E Schrödinger, 10.1007/BF01505681Die Erfassung der Quantengesetze durch kontinuierliche Funktionen. 17486E. Schrödinger, Die Erfassung der Quantengesetze durch kontinuierliche Funktionen, Naturwissenschaften 17, 486 (1929). Discussion of Probability Relations between Separated Systems. E Schrödinger, 10.1017/S0305004100013554Mathematical Proceedings of the Cambridge Philosophical Society. 31555E. Schrödinger, Discussion of Probability Relations be- tween Separated Systems, Mathematical Proceedings of the Cambridge Philosophical Society 31, 555 (1935). Steering, Entanglement, Nonlocality, and the Einstein-Podolsky-Rosen Paradox. H M Wiseman, S J Jones, A C Doherty, 10.1103/PhysRevLett.98.140402Phys. Rev. Lett. 98140402H. M. Wiseman, S. J. Jones, and A. C. Doherty, Steering, Entanglement, Nonlocality, and the Einstein-Podolsky- Rosen Paradox, Phys. Rev. Lett. 98, 140402 (2007). Entanglement, Einstein-Podolsky-Rosen correlations, Bell nonlocality, and steering. S J Jones, H M Wiseman, A C Doherty, 10.1103/PhysRevA.76.052116Phys. Rev. A. 7652116S. J. Jones, H. M. Wiseman, and A. C. Doherty, Entan- glement, Einstein-Podolsky-Rosen correlations, Bell non- locality, and steering, Phys. Rev. A 76, 052116 (2007). Quantum steering: A review with focus on semidefinite programming. D Cavalcanti, P Skrzypczyk, 10.1088/1361-6633/80/2/024001arxiv:1604.00501Rep. Prog. Phys. 8024001quantphD. Cavalcanti and P. Skrzypczyk, Quantum steering: A review with focus on semidefinite programming, Rep. Prog. Phys. 80, 024001 (2017), arxiv:1604.00501 [quant- ph]. Experimental EPR-steering using Bell-local states. D J Saunders, S J Jones, H M Wiseman, G J Pryde, 10.1038/nphys1766Nature Phys. 6845D. J. Saunders, S. J. Jones, H. M. Wiseman, and G. J. Pryde, Experimental EPR-steering using Bell-local states, Nature Phys 6, 845 (2010). D H Smith, G Gillett, M P De Almeida, C Branciard, A Fedrizzi, T J Weinhold, A Lita, B Calkins, T Gerrits, H M Wiseman, S W Nam, A G White, 10.1038/ncomms1628Conclusive quantum steering with superconduct. 3625ing transition-edge sensorsD. H. Smith, G. Gillett, M. P. de Almeida, C. Bran- ciard, A. Fedrizzi, T. J. Weinhold, A. Lita, B. Calkins, T. Gerrits, H. M. Wiseman, S. W. Nam, and A. G. White, Conclusive quantum steering with superconduct- ing transition-edge sensors, Nat Commun 3, 625 (2012). Arbitrarily Loss-Tolerant Einstein-Podolsky-Rosen Steering Allowing a Demonstration over 1 km of Optical Fiber with No Detection Loophole. A J Bennet, D A Evans, D J Saunders, C Branciard, E G Cavalcanti, H M Wiseman, G J Pryde, 10.1103/PhysRevX.2.031003Phys. Rev. X. 231003A. J. Bennet, D. A. Evans, D. J. Saunders, C. Branciard, E. G. 
Cavalcanti, H. M. Wiseman, and G. J. Pryde, Ar- bitrarily Loss-Tolerant Einstein-Podolsky-Rosen Steering Allowing a Demonstration over 1 km of Optical Fiber with No Detection Loophole, Phys. Rev. X 2, 031003 (2012). Observation of one-way Einstein-Podolsky-Rosen steering. V Händchen, T Eberle, S Steinlechner, A Samblowski, T Franz, R F Werner, R Schnabel, 10.1038/nphoton.2012.202Nature Photon. 6596V. Händchen, T. Eberle, S. Steinlechner, A. Samblowski, T. Franz, R. F. Werner, and R. Schnabel, Observation of one-way Einstein-Podolsky-Rosen steering, Nature Pho- ton 6, 596 (2012). Realization of the Einstein-Podolsky-Rosen paradox for continuous variables. Z Y Ou, S F Pereira, H J Kimble, K C Peng, 10.1103/PhysRevLett.68.3663Phys. Rev. Lett. 683663Z. Y. Ou, S. F. Pereira, H. J. Kimble, and K. C. Peng, Realization of the Einstein-Podolsky-Rosen paradox for continuous variables, Phys. Rev. Lett. 68, 3663 (1992). Loophole-free Einstein-Podolsky-Rosen experiment via quantum steering. B Wittmann, S Ramelow, F Steinlechner, N K Langford, N Brunner, H M Wiseman, R Ursin, A Zeilinger, 10.1088/1367-2630/14/5/053030New J. Phys. 1453030B. Wittmann, S. Ramelow, F. Steinlechner, N. K. Langford, N. Brunner, H. M. Wiseman, R. Ursin, and A. Zeilinger, Loophole-free Einstein-Podolsky-Rosen ex- periment via quantum steering, New J. Phys. 14, 053030 (2012). Measurement-induced steering of quantum systems. S Roy, J T Chalker, I V Gornyi, Y Gefen, 10.1103/PhysRevResearch.2.033347Phys. Rev. Res. 233347S. Roy, J. T. Chalker, I. V. Gornyi, and Y. Gefen, Measurement-induced steering of quantum systems, Phys. Rev. Res. 2, 033347 (2020). Engineering twoqubit mixed states with weak measurements. P Kumar, K Snizhko, Y Gefen, 10.1103/PhysRevResearch.2.042014Phys. Rev. Res. 242014P. Kumar, K. Snizhko, and Y. Gefen, Engineering two- qubit mixed states with weak measurements, Phys. Rev. Res. 2, 042014 (2020). Optimized steering: Quantum state engineering and exceptional points. P Kumar, K Snizhko, Y Gefen, B Rosenow, 10.1103/PhysRevA.105.L010203Phys. Rev. A. 10510203P. Kumar, K. Snizhko, Y. Gefen, and B. Rosenow, Op- timized steering: Quantum state engineering and excep- tional points, Phys. Rev. A 105, L010203 (2022). Y Herasymenko, I Gornyi, Y Gefen, 10.48550/arXiv.2111.09306arxiv:arXiv:2111.09306Measurementdriven navigation in many-body Hilbert space: Activedecision steering (2022). Y. Herasymenko, I. Gornyi, and Y. Gefen, Measurement- driven navigation in many-body Hilbert space: Active- decision steering (2022), arxiv:arXiv:2111.09306. Decompositions of unitary evolutions and entanglement dynamics of bipartite quantum systems. D , R Romano, 10.1063/1.2245205J. Math. Phys. 4782109D. D'Alessandro and R. Romano, Decompositions of uni- tary evolutions and entanglement dynamics of bipartite quantum systems, J. Math. Phys. 47, 082109 (2006). D , Introduction to Quantum Control and Dynamics. CRC PressD. D'Alessandro, Introduction to Quantum Control and Dynamics (CRC Press, 2021). . G H Low, I L Chuang, 10.22331/q-2019-07-12-163Hamiltonian Simulation by Qubitization, Quantum. 3163G. H. Low and I. L. Chuang, Hamiltonian Simulation by Qubitization, Quantum 3, 163 (2019). Fixed Depth Hamiltonian Simulation via Cartan Decomposition. E Kökcü, T Steckmann, Y Wang, J K Freericks, E F Dumitrescu, A F Kemper, 10.1103/PhysRevLett.129.070501Phys. Rev. Lett. 12970501E. Kökcü, T. Steckmann, Y. Wang, J. K. Freericks, E. F. Dumitrescu, and A. F. Kemper, Fixed Depth Hamilto- nian Simulation via Cartan Decomposition, Phys. Rev. Lett. 
129, 070501 (2022). M A Nielsen, I Chuang, Quantum computation and quantum information. M. A. Nielsen and I. Chuang, Quantum computation and quantum information (2002). Geometric theory of nonlocal two-qubit operations. J Zhang, J Vala, S Sastry, K B Whaley, 10.1103/PhysRevA.67.042313Phys. Rev. A. 6742313J. Zhang, J. Vala, S. Sastry, and K. B. Whaley, Geometric theory of nonlocal two-qubit operations, Phys. Rev. A 67, 042313 (2003). . Ibm Quantum, IBM Quantum, https://quantum-computing.ibm.com/. Qiskit pulse: Programming quantum computers through the cloud with pulses. T Alexander, N Kanazawa, D J Egger, L Capelluto, C J Wood, A Javadi-Abhari, D C Mckay, 10.1088/2058-9565/aba404Quantum Sci. Technol. 544006T. Alexander, N. Kanazawa, D. J. Egger, L. Capel- luto, C. J. Wood, A. Javadi-Abhari, and D. C. McKay, Qiskit pulse: Programming quantum computers through the cloud with pulses, Quantum Sci. Technol. 5, 044006 (2020). A quantum engineer's guide to superconducting qubits. P Krantz, M Kjaergaard, F Yan, T P Orlando, S Gustavsson, W D Oliver, 10.1063/1.5089550Applied Physics Reviews. 621318P. Krantz, M. Kjaergaard, F. Yan, T. P. Orlando, S. Gus- tavsson, and W. D. Oliver, A quantum engineer's guide to superconducting qubits, Applied Physics Reviews 6, 021318 (2019). A Galda, M Cubeddu, N Kanazawa, P Narang, N Earnest-Noble, 10.48550/arXiv.2109.00558arxiv:arXiv:2109.00558Implementing a Ternary Decomposition of the Toffoli Gate on Fixed-FrequencyTransmon Qutrits (2021). A. Galda, M. Cubeddu, N. Kanazawa, P. Narang, and N. Earnest-Noble, Implementing a Ternary Decomposi- tion of the Toffoli Gate on Fixed-FrequencyTransmon Qutrits (2021), arxiv:arXiv:2109.00558. Implementation of a Walsh-Hadamard Gate in a Superconducting Qutrit. M A Yurtalan, J Shi, M Kononenko, A Lupascu, S Ashhab, 10.1103/PhysRevLett.125.180504Phys. Rev. Lett. 125180504M. A. Yurtalan, J. Shi, M. Kononenko, A. Lupascu, and S. Ashhab, Implementation of a Walsh-Hadamard Gate in a Superconducting Qutrit, Phys. Rev. Lett. 125, 180504 (2020). Efficient Method for Computing the Maximum-Likelihood Quantum State from Measurements with Additive Gaussian Noise. J A Smolin, J M Gambetta, G Smith, 10.1103/PhysRevLett.108.070502Phys. Rev. Lett. 10870502J. A. Smolin, J. M. Gambetta, and G. Smith, Efficient Method for Computing the Maximum-Likelihood Quan- tum State from Measurements with Additive Gaussian Noise, Phys. Rev. Lett. 108, 070502 (2012). Gambetta, Procedure for systematically tuning up cross-talk in the cross-resonance gate. S Sheldon, E Magesan, J M Chow, J , 10.1103/PhysRevA.93.060302Phys. Rev. A. 9360302S. Sheldon, E. Magesan, J. M. Chow, and J. M. Gam- betta, Procedure for systematically tuning up cross-talk in the cross-resonance gate, Phys. Rev. A 93, 060302 (2016). Special Session: Noise Characterization and Error Mitigation in Near-Term Quantum Computers. C J Wood, 10.1109/ICCD50377.2020.000162020 IEEE 38th International Conference on Computer Design (ICCD. C. J. Wood, Special Session: Noise Characterization and Error Mitigation in Near-Term Quantum Computers, in 2020 IEEE 38th International Conference on Computer Design (ICCD) (2020) pp. 13-16. Rabi Oscillations in a Large Josephson-Junction Qubit. J M Martinis, S Nam, J Aumentado, C Urbina, 10.1103/PhysRevLett.89.117901Phys. Rev. Lett. 89117901J. M. Martinis, S. Nam, J. Aumentado, and C. Urbina, Rabi Oscillations in a Large Josephson-Junction Qubit, Phys. Rev. Lett. 89, 117901 (2002). . K B Cooper, M Steffen, R Mcdermott, R W Simmonds, S Oh, D A Hite, D P Pappas, J , K. 
B. Cooper, M. Steffen, R. McDermott, R. W. Sim- monds, S. Oh, D. A. Hite, D. P. Pappas, and J. M. Observation of Quantum Oscillations between a Josephson Phase Qubit and a Microscopic Resonator Using Fast Readout. Martinis, 10.1103/PhysRevLett.93.180401Phys. Rev. Lett. 93180401Martinis, Observation of Quantum Oscillations between a Josephson Phase Qubit and a Microscopic Resonator Using Fast Readout, Phys. Rev. Lett. 93, 180401 (2004). High-Fidelity Gates in a Single Josephson Qubit. E Lucero, M Hofheinz, M Ansmann, R C Bialczak, N Katz, M Neeley, A D O&apos;connell, H Wang, A N Cleland, J M Martinis, 10.1103/PhysRevLett.100.247001Phys. Rev. Lett. 100247001E. Lucero, M. Hofheinz, M. Ansmann, R. C. Bialczak, N. Katz, M. Neeley, A. D. O'Connell, H. Wang, A. N. Cleland, and J. M. Martinis, High-Fidelity Gates in a Single Josephson Qubit, Phys. Rev. Lett. 100, 247001 (2008). S O Valenzuela, W D Oliver, D M Berns, K K Berggren, L S Levitov, T P Orlando, 10.1126/science.1134008Microwave-Induced Cooling of a Superconducting Qubit. 3141589S. O. Valenzuela, W. D. Oliver, D. M. Berns, K. K. Berggren, L. S. Levitov, and T. P. Orlando, Microwave- Induced Cooling of a Superconducting Qubit, Science 314, 1589 (2006). M Neeley, M Ansmann, R C Bialczak, M Hofheinz, E Lucero, A D O&apos;connell, D Sank, H Wang, J Wenner, A N Cleland, M R Geller, J M Martinis, 10.1126/science.1173440Emulation of a Quantum Spin with a Superconducting Phase Qudit. 325722M. Neeley, M. Ansmann, R. C. Bialczak, M. Hofheinz, E. Lucero, A. D. O'Connell, D. Sank, H. Wang, J. Wen- ner, A. N. Cleland, M. R. Geller, and J. M. Martinis, Emulation of a Quantum Spin with a Superconducting Phase Qudit, Science 325, 722 (2009). Control and Tomography of a Three Level Superconducting Artificial Atom. R Bianchetti, S Filipp, M Baur, J M Fink, C Lang, L Steffen, M Boissonneault, A Blais, A Wallraff, 10.1103/PhysRevLett.105.223601Phys. Rev. Lett. 105223601R. Bianchetti, S. Filipp, M. Baur, J. M. Fink, C. Lang, L. Steffen, M. Boissonneault, A. Blais, and A. Wallraff, Control and Tomography of a Three Level Superconduct- ing Artificial Atom, Phys. Rev. Lett. 105, 223601 (2010). . A A Abdumalikov, O Astafiev, A M Zagoskin, Yu A Pashkin, Y Nakamura, J S Tsai, 10.1103/PhysRevLett.104.193601Electromagnetically Induced Transparency on a Single Artificial Atom. 104193601Phys. Rev. Lett.A. A. Abdumalikov, O. Astafiev, A. M. Zagoskin, Yu. A. Pashkin, Y. Nakamura, and J. S. Tsai, Electromagnet- ically Induced Transparency on a Single Artificial Atom, Phys. Rev. Lett. 104, 193601 (2010). Experimental realization of non-Abelian non-adiabatic geometric gates. A A AbdumalikovJr, J M Fink, K Juliusson, M Pechal, S Berger, A Wallraff, S Filipp, 10.1038/nature12010Nature. 496482A. A. Abdumalikov Jr, J. M. Fink, K. Juliusson, M. Pechal, S. Berger, A. Wallraff, and S. Filipp, Experi- mental realization of non-Abelian non-adiabatic geomet- ric gates, Nature 496, 482 (2013). Contextuality without nonlocality in a superconducting quantum system. M Jerger, Y Reshitnyk, M Oppliger, A Potočnik, M Mondal, A Wallraff, K Goodenough, S Wehner, K Juliusson, N K Langford, A Fedorov, 10.1038/ncomms12930Nat Commun. 712930M. Jerger, Y. Reshitnyk, M. Oppliger, A. Potočnik, M. Mondal, A. Wallraff, K. Goodenough, S. Wehner, K. Juliusson, N. K. Langford, and A. Fedorov, Contextu- ality without nonlocality in a superconducting quantum system, Nat Commun 7, 12930 (2016). Topological Maxwell Metal Bands in a Superconducting Qutrit. 
X Tan, D.-W Zhang, Q Liu, G Xue, H.-F Yu, Y.-Q Zhu, H Yan, S.-L Zhu, Y Yu, 10.1103/PhysRevLett.120.130503Phys. Rev. Lett. 120130503X. Tan, D.-W. Zhang, Q. Liu, G. Xue, H.-F. Yu, Y.-Q. Zhu, H. Yan, S.-L. Zhu, and Y. Yu, Topological Maxwell Metal Bands in a Superconducting Qutrit, Phys. Rev. Lett. 120, 130503 (2018). Mixing of coherent waves in a single three-level artificial atom. T Hönigl-Decrinis, I V Antonov, R Shaikhaidarov, V N Antonov, A Yu, O V Dmitriev, Astafiev, 10.1103/PhysRevA.98.041801Phys. Rev. A. 9841801T. Hönigl-Decrinis, I. V. Antonov, R. Shaikhaidarov, V. N. Antonov, A. Yu. Dmitriev, and O. V. Astafiev, Mixing of coherent waves in a single three-level artificial atom, Phys. Rev. A 98, 041801 (2018). Simulating Spin Chains Using a Superconducting Circuit: Gauge Invariance, Superadiabatic Transport, and Broken Time-Reversal Symmetry. A Vepsäläinen, G S Paraoanu, 10.1002/qute.201900121Advanced Quantum Technologies. 31900121A. Vepsäläinen and G. S. Paraoanu, Simulating Spin Chains Using a Superconducting Circuit: Gauge In- variance, Superadiabatic Transport, and Broken Time- Reversal Symmetry, Advanced Quantum Technologies 3, 1900121 (2020). Implementation of a Toffoli gate with superconducting circuits. A Fedorov, L Steffen, M Baur, M P Da Silva, A Wallraff, 10.1038/nature10713Nature. 481170A. Fedorov, L. Steffen, M. Baur, M. P. da Silva, and A. Wallraff, Implementation of a Toffoli gate with super- conducting circuits, Nature 481, 170 (2012). Qutrit Randomized Benchmarking. A Morvan, V V Ramasesh, M S Blok, J M Kreikebaum, K O&apos;brien, L Chen, B K Mitchell, R K Naik, D I Santiago, I Siddiqi, 10.1103/PhysRevLett.126.210504Phys. Rev. Lett. 126210504A. Morvan, V. V. Ramasesh, M. S. Blok, J. M. Kreike- baum, K. O'Brien, L. Chen, B. K. Mitchell, R. K. Naik, D. I. Santiago, and I. Siddiqi, Qutrit Randomized Bench- marking, Phys. Rev. Lett. 126, 210504 (2021). Suppression of midcircuit measurement crosstalk errors with micromotion. J P Gaebler, C H Baldwin, S A Moses, J M Dreiling, C Figgatt, M Foss-Feig, D Hayes, J M Pino, 10.1103/PhysRevA.104.062440Phys. Rev. A. 10462440J. P. Gaebler, C. H. Baldwin, S. A. Moses, J. M. Dreil- ing, C. Figgatt, M. Foss-Feig, D. Hayes, and J. M. Pino, Suppression of midcircuit measurement crosstalk errors with micromotion, Phys. Rev. A 104, 062440 (2021). Rapid Driven Reset of a Qubit Readout Resonator. D T Mcclure, H Paik, L S Bishop, M Steffen, J M Chow, J M Gambetta, 10.1103/PhysRevApplied.5.011001Phys. Rev. Appl. 511001D. T. McClure, H. Paik, L. S. Bishop, M. Steffen, J. M. Chow, and J. M. Gambetta, Rapid Driven Reset of a Qubit Readout Resonator, Phys. Rev. Appl. 5, 011001 (2016). A variational eigenvalue solver on a photonic quantum processor. A Peruzzo, J Mcclean, P Shadbolt, M.-H Yung, X.-Q Zhou, P J Love, A Aspuru-Guzik, J L O&apos;brien, 10.1038/ncomms5213Nat Commun. 54213A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O'Brien, A variational eigenvalue solver on a photonic quantum processor, Nat Commun 5, 4213 (2014). An initialization strategy for addressing barren plateaus in parametrized quantum circuits. E Grant, L Wossnig, M Ostaszewski, M Benedetti, 10.22331/q-2019-12-09-2143214E. Grant, L. Wossnig, M. Ostaszewski, and M. Benedetti, An initialization strategy for addressing barren plateaus in parametrized quantum circuits, Quantum 3, 214 (2019). Appearance of Gauge Structure in Simple Dynamical Systems. F Wilczek, A Zee, 10.1103/PhysRevLett.52.2111Phys. Rev. Lett. 522111F. 
Wilczek and A. Zee, Appearance of Gauge Structure in Simple Dynamical Systems, Phys. Rev. Lett. 52, 2111 (1984). Non-Abelian Geometric Dephasing. K Snizhko, R Egger, Y Gefen, 10.1103/PhysRevLett.123.060405Phys. Rev. Lett. 12360405K. Snizhko, R. Egger, and Y. Gefen, Non-Abelian Geo- metric Dephasing, Phys. Rev. Lett. 123, 060405 (2019). Holonomic Quantum Computation. P Zanardi, M Rasetti, 10.1016/S0375-9601(99)00803-8arxiv:quant- ph/9904011Physics Letters A. 264P. Zanardi and M. Rasetti, Holonomic Quantum Com- putation, Physics Letters A 264, 94 (1999), arxiv:quant- ph/9904011.
[]
[ "Symmetry Resolved Entanglement of Excited States in Quantum Field Theory III: Bosonic and Fermionic Negativity", "Symmetry Resolved Entanglement of Excited States in Quantum Field Theory III: Bosonic and Fermionic Negativity" ]
[ "Luca Capizzi [email protected][email protected] ", "Michele Mazzoni ", "Olalla A Castro-Alvaredo ♥[email protected] \nDepartment of Mathematics, City\nUniversity of London\n10 Northampton SquareEC1V 0HBUK\n", "\nSISSA and INFN Sezione di Trieste\nvia Bonomea 26534136TriesteItaly\n" ]
[ "Department of Mathematics, City\nUniversity of London\n10 Northampton SquareEC1V 0HBUK", "SISSA and INFN Sezione di Trieste\nvia Bonomea 26534136TriesteItaly" ]
[]
In two recent works, we studied the symmetry resolved Rényi entropies of quasi-particle excited states in quantum field theory. We found that the entropies display many modelindependent features which we discussed and analytically characterised. In this paper we extend this line of investigation by providing analytical and numerical evidence that a similar universal behavior arises for the symmetry resolved negativity. In particular, we compute the ratio of charged moments of the partially transposed reduced density matrix as an expectation value of twist operators. These are "fused" versions of the more traditionally used branch point twist fields and were introduced in a previous work. The use of twist operators allows us to perform the computation in an arbitrary number of spacial dimensions. We show that, in the large-volume limit, only the commutation relations between the twist operators and local fields matter, and computations reduce to a purely combinatorial problem. We address some specific issues regarding fermionic excitations, whose treatment requires the notion of partial time-reversal transformation, and we discuss the differences and analogies with their bosonic counterpart. We find that although the operation of partial transposition requires a redefinition for fermionic theories, the ratio of the negativity moments between an excited state and the ground state is universal and identical for fermions and bosons as well as for a large variety of very different states, ranging from simple qubit states to the excited states of free quantum field theories. Our predictions are tested numerically on a 1D Fermi chain.
null
[ "https://export.arxiv.org/pdf/2302.02666v2.pdf" ]
256,615,808
2302.02666
cc1bf29933d4564c6f3cfb0da8645e4f48281b1b
Symmetry Resolved Entanglement of Excited States in Quantum Field Theory III: Bosonic and Fermionic Negativity February 20, 2023 17 Feb 2023 Luca Capizzi [email protected][email protected] Michele Mazzoni Olalla A Castro-Alvaredo ♥[email protected] Department of Mathematics, City University of London 10 Northampton SquareEC1V 0HBUK SISSA and INFN Sezione di Trieste via Bonomea 26534136TriesteItaly Symmetry Resolved Entanglement of Excited States in Quantum Field Theory III: Bosonic and Fermionic Negativity February 20, 2023 17 Feb 2023Quantum EntanglementSymmetry Resolved EntanglementExcited StatesLog- arithmic Negativity In two recent works, we studied the symmetry resolved Rényi entropies of quasi-particle excited states in quantum field theory. We found that the entropies display many modelindependent features which we discussed and analytically characterised. In this paper we extend this line of investigation by providing analytical and numerical evidence that a similar universal behavior arises for the symmetry resolved negativity. In particular, we compute the ratio of charged moments of the partially transposed reduced density matrix as an expectation value of twist operators. These are "fused" versions of the more traditionally used branch point twist fields and were introduced in a previous work. The use of twist operators allows us to perform the computation in an arbitrary number of spacial dimensions. We show that, in the large-volume limit, only the commutation relations between the twist operators and local fields matter, and computations reduce to a purely combinatorial problem. We address some specific issues regarding fermionic excitations, whose treatment requires the notion of partial time-reversal transformation, and we discuss the differences and analogies with their bosonic counterpart. We find that although the operation of partial transposition requires a redefinition for fermionic theories, the ratio of the negativity moments between an excited state and the ground state is universal and identical for fermions and bosons as well as for a large variety of very different states, ranging from simple qubit states to the excited states of free quantum field theories. Our predictions are tested numerically on a 1D Fermi chain. Introduction Over the past two decades, entanglement measures have been widely studied in the context of low-dimensional quantum field theory, starting with several seminal works [1,2,3,4,5,6,7] which focused on one measure (the entanglement entropy [8]) and on one type of theory, largely 1D conformal field theory (CFT) and its discrete counterpart, critical spin chains. From these papers sprang several important ideas and techniques which have been extensively exploited thereafter. Notable among them are the numerical and analytical observation that the entanglement entropy exhibits universal properties, i.e. properties that depend only on the theory's universality class characterised by the central charge c and, at the technical level, that conformal symmetry in 1D is itself a powerful computational tool. An important idea to emerge from [2,4] and later reinterpreted and generalised to non-critical theories in [9] is that entanglement measures can be written in terms of correlation functions of local fields of the quantum field theory (QFT) under study, or, more precisely, of a replica version thereof. 
One particular development of these ideas has been the proposal and study of new measures of entanglement, each tailored to capturing particular features of entanglement and/or of the state whose entanglement is being measured. One such new measure is the (logarithmic) negativity [10,11,12,13,14,15,16] which we now introduce. Let us consider a tripartite system consisting of subsystems A, B and C. Let us assume that the Hilbert space of system is factorised as H A∪B∪C = H A ⊗ H B ⊗ H C .(1) We may now consider a state in this Hilbert space. This could be a mixed state (such as a thermal state) or a pure state: in this paper, we will focus on the latter. We now ask: what is the entanglement of A with respect to B given the presence of C? In other words: what is the bipartite entanglement between two non-complementary regions of a quantum system? The answer will depend on the entanglement measure and on the chosen state. If the state is not factorised, that is, there is entanglement, then the answer will be non-trivial and can be measured by the (logarithmic) negativity. Let ρ A∪B be the reduced density matrix (RDM) associated with the subsystem A ∪ B, resulting from tracing out the degrees of freedom of C. If |Ψ is the pure state of the whole system, then ρ A∪B := Tr C (|Ψ Ψ|) . In order to define the logarithmic negativity we first need to introduce the partially transposed version of ρ A∪B , denoted by ρ T B A∪B , as follows. We first pick a basis {|i A }, {|j B } for H A and H B respectively. Then, given the expansion of ρ A∪B in that basis ρ A∪B = i A ,i A ,j B ,j B |i A , j B i A , j B |ρ A∪B |i A , j B i A , j B | ,(3) we require that ρ T B A∪B = i A ,i A ,j B ,j B |i A , j B i A , j B |ρ A∪B |i A , j B i A , j B | .(4) It is possible to show that ρ T B A∪B has real spectrum, but in general it is not positive semidefinite. As a matter of fact, the presence of negative eigenvalues is a signal of quantum entanglement between A and B, which can be quantified by the logarithmic negativity E ≡ log Tr(|ρ T B A∪B |) . Here the trace is understood to be over the Hilbert space associated with A∪B and |·| represents the absolute value of an operator (that is |O| ≡ √ O † O) . While its direct evaluation may be hard, it has been pointed out that the moments of ρ T B A∪B , written as Tr((ρ T B A∪B ) n ) with n ∈ N, have a direct field theoretic description [17,18] and they are easier to compute. Then, one can formally recover the logarithmic negativity as E = lim ne→1 log Tr((ρ T B A∪B ) ne ) ,(6) where the limit is over analytically continued moments for n e even. An interesting issue that is specific to the logarithmic negativity is the fact that the definitions above apply directly to spin chains or bosonic systems, but are ill-suited to treat fermionic systems. The reason for this is rather technical and can be explained in different ways. Unlike for bosons, the (standard) partial transpose of the Gaussian density matrix of a free fermion state is not Gaussian, which makes the computation of the negativity spectrum particularly difficult. There have been several proposals as to how to modify the definition of E in a way that is better adapted to deal with fermionic degrees of freedom. The first definition of partial transposition, specifically modified for fermionic states, was introduced in [19]. 
However, in ( [20], Appendix A) it was proved that, because of the anticommuting nature of the fermionic degrees of freedom, two of the standard requirements of a partial transposition operation, namely that if ρ ≡ ρ A∪B : (ρ T A ) T B = ρ T and ρ T A 1 ⊗ · · · ⊗ ρ T A n = (ρ 1 ⊗ · · · ⊗ ρ n ) T A , with T representing transposition over the total space (here T = T A∪B ), may not hold with the definition given in [19]. On the other hand, these properties are satisfied with the definition introduced in [20]: this is the time-reversal (or fermionic) negativity, which accounts for the locality properties of fermions [20,21] and which we present below. Following [21], we now choose an occupation-number basis for the Hilbert space |{n j } A , {n j } B(8) such that all the n j ∈ {0, 1}. Then, given the RDM ρ A∪B = {n j } A ,{n j } B {n j } A ,{n j } B |{n j } A , {n j } B {n j } A , {n j } B |ρ A∪B |{n j } A , {n j } B {n j } A , {n j } B | ,(9) we define the fermionic partial transposition as ρ R B A∪B = {n j } A ,{n j } B {n j } A ,{n j } B i φ({n j },{n j }) |{n j } A , {n j } B {n j } A , {n j } B |ρ A∪B |{n j } A , {n j } B {n j } A , {n j } B | ,(10) where φ({n j }, {n j }) is given by φ({n j }, {n j }) = (τ B + τ B )(mod 2) + 2(τ A + τ A )(τ B + τ B ) ,(11) and τ A/B = j∈A/B n j , τ A/B = j∈A/B n j are the numbers of occupied states in each subsystem. Thus, the novelty of the fermionic partial transposition (10), as compared to (4), is the presence of an additional phase shift which depends on the number of fermions. While in general ρ R B A∪B is no longer hermitian for fermionic systems, one can still define the (fermionic) logarithmic negativity as (5) by writing the absolute value as |ρ R B A∪B | = (ρ R B A∪B ) † ρ R B A∪B ,(12) and observing that it is a positive semi-definite matrix. Let us now add the final layer of definitions by introducing symmetry resolved entanglement measures. Such measures have become very popular in the past few years and extend the standard definitions by exploiting the presence of internal symmetries. Some of the earliest studies (see [22,23] for the CFT/QFT and quantum spin chain constructions) focused on the entanglement entropies but more recently also the logarithmic negativity has been generalised in a similar fashion [24,25,26,27]. Let us consider a theory with a global U (1) symmetry (i.e. a complex free boson/fermion). In that case, a global U (1) charge Q A∪B = Q A + Q B commutes with the state ρ A∪B [ρ A∪B , Q A + Q B ] = 0 .(13) Then, it has been shown [24] that the charge imbalance Q A − Q B 1 commutes with ρ T B A∪B [ρ T B A∪B , Q A − Q B ] = 0,(14) and it generates a U (1) symmetry for the (bosonic or fermionic) partial transpose. At this point, it is natural to consider the charged moments of the partial transpose Tr (ρ T B A∪B ) n e 2πiα(Q A −Q B ) , α ∈ [−1/2, 1/2] ,(15) as measures of the symmetry resolved entanglement negativity, generalising the standard moments (α = 0). For fermions, the definition above is changed to Tr |ρ R B A∪B | n e 2πiα(Q A −Q B ) , α ∈ [−1/2, 1/2] .(16) The computation of charged moments of the partial transpose was performed in [25] in the ground and thermal state of massless free fermions in 1+1 dimensions, where the universal UV divergences were captured by the underlying CFT. They have also been measured in an experimental set up in Ref. [28]. 
In this work, we are interested in zero-density quasi-particle states of (massive) QFT, obtained as excitations of the ground state with finite number of 1 To be precise, the correct relation is [ρ T B A∪B , QA − Q T B ] = 0, however, the charge imbalance operator is basis independent and in the occupation number basis Q T B = QB so we can drop the transposition. As earlier, T represents transposition over the full relevant space, in this case B. particles at given momenta. We aim to compute the contribution to the charged moments given by the quasi-particles, which arises in addition to the zero-point fluctuations. These states are "zero-density" in the sense that they contain a fixed and finite number of quasi-particle excitations within an infinite volume. The present work applies to the same kind of excited states considered in [29,30,31,32] for the standard entanglement measures and more recently in [33,34] for the symmetry resolved entropies. We will now briefly state the main results of this work. Let us consider a QFT in d + 1 dimensions carrying a global U (1) symmetry and two non-complementary spacial regions A and B. We take the vacuum state |0 , an excited state |k containing k identical quasi-particle excitations with unit charge, and we construct the associated reduced density matrices (RDM) over A ∪ B as ρ A∪B,0 ≡ Tr C (|0 0|), ρ A∪B ≡ Tr C (|k k|) with C ≡ A ∪ B ,(17) for the ground state |0 and excited state |k , respectively. We consider the limit when the generalised volume V A , V B , V C of each region goes to infinity while the ratios r A := V A V , r B := V B V and r := V C V = 1 − r A − r B ,(18) are finite and V := V A + V B + V C . We then define the ratio of charged moments: R n k (r A , r B , r; α) := Tr (ρ T B A∪B ) n e 2πiα(Q A −Q B ) Tr (ρ T B A∪B,0 ) n e 2πiα(Q A −Q B ) .(19) We find this ratio to be universal (UV finite), and for a single particle excitation (k = 1) given by R n 1 (r A , r B , r; α) = e 2πiα r n A + e −2πiα r n B + r + r 2 + 4r A r B 2 n + r − r 2 + 4r A r B 2 n .(20) We notice that the last in (20) is positive/negative when n is an even/odd integer and therefore two distinct analytic continuations over the even/odd integers are present. In particular, the analytic continuation from n even to n = 1 gives lim n→ 1 2 R 2n 1 (r A , r B , r; α) = e 2πiα r A + e −2πiα r B + r 2 + 4r A r B .(21) Comparing to the α = 0 result found in [31,32] we see that the phases e ±2πiα only enter some of the factors in (20) whereas others remain exactly as for the total negativity. This provides a useful hint as to how more complicated formulae for multiparticle states will generalise to the symmetry resolved measure, namely by the substitutions r A → e 2πiα n r A and r B → e − 2πiα n r B . Therefore, we can write: R n 1 (r A , r B , r; α) = R n 1 e 2πiα n r A , e − 2πiα n r B , r; 0 .(22) Indeed, the generalisation to states of many distinct quasi-particles is straightforward, and each particle contributes independently (multiplicatively) to the ratio of charged moments, similar to the structure found in [33,34] for the charged Rényi entropies. We highlight that the results (20) are identical for fermions with the definitions (12) and (16) when n is even. This is the case we will consider in the following when treating the fermionic case. For bosonic systems, states of multiple identical excitations can also be considered, whose total negativity was obtained in [31,32]. 
A similar formula can be derived for the ratio of moments, giving R n k (r A , r B , r; α) = k p=−k [ n 2 (k−p)] q=max(0,−np) A p,q r np+q A r n(k−p)−2q r q B e 2πiαp ,(23) where [·] represents the integer part and A p,q = {k 1 ,...,kn}∈σ n 0 (q) n j=1 k! (p + k j )!(k − p − k j − k j+1 )!k j !(24) are combinatorial factors, with the sum running over all the partitions {k 1 , . . . , k n } of the number q into n non-negative integer parts. We structure the paper as follows. In Section 2 we analyse in detail a simplified model consisting of a state of few qubits. In spite of the simplicity of these states, their symmetry resolved negativity moments capture the main universal features found in QFT. In Section 3 we give a field theoretical formulation of the charged moments of the partially transposed density matrix, employing the notion of twist operators. The difference between fermionic and bosonic particles is thoroughly discussed again in this context, and the evaluation of the moments is shown to reduce to a combinatorial problem which we solve exactly for single and multiple distinct excitations. We check numerically our predictions on a 1D Fermi chain in Section 4 and find good agreement. We conclude in Section 5. Qubit Computation In this section we derive the main formulae for the charged replica negativities by starting with a multi-qubit system. This "toy model" was already employed in [29,30,31,33,34] following the realisation that even if multi-qubit states are much simpler than the excited states of a QFT, they both produce the same universal contribution to entanglement entropies and negativities. Hence it is advantageous to obtain such contribution from qubit states rather than a vastly more involved field-theoretical approach. The advantage of this picture is that the ground state of a multi-qubit system is trivial from the point of view of the entanglement content, which means that what is dubbed "excess of entropy" or "excess of negativity" or, in our case, "ratio of moments" of an excited state with respect to the ground state effectively reduces to the entropy or negativity of the excited state and the moments of the excited state, respectively. The notion of charge imbalance in a qubit setup was introduced in [24] and the notion of fermionic partial transposition in the same setup was later used in [25]. The main result of this section is to show that equation (20) for a state consisting of a single excitation can be derived employing either the bosonic or the fermionic notion of partial transposition. Interestingly, even if the intermediate steps of the computation are different in the two cases, the final result is still the same . For bosonic theories, the result can be generalised to multiple identical excitations to give (23) with (24). Single Bosonic Excitation Assume that a single bosonic excitation is localised in space according to a uniform probability distribution, so that r A , r B and r can be regarded as the probabilities for the excitation to be found in regions A, B, C respectively. Then the state in H A∪B∪C representing a single excitation can be written as |1 = √ r A |100 + √ r B |010 + √ r|001(25) where the values 0 (1) represent the absence (presence) of the excitation and the coefficients can be interpreted as probabilities of finding the excitation in a particular region. 
The RDM ρ A∪B is obtained taking the trace over H C : ρ A∪B = Tr C |1 1| = r A |10 10| + r B |01 01| + √ r A r B (|01 10| + |10 01|) + r|00 00| ,(26) or in matrix form ρ A∪B =        00 01 10 11 00 r 0 0 0 01 0 r B √ r A r B 0 10 0 √ r A r B r A 0 11 0 0 0 0        .(27) This matrix has a block-diagonal structure with respect to the number operator in A ∪ B: (N A + N B )|i A , j B = (i A + j B )|i A , j B , i A , j B ∈ {0, 1}(28) as indeed we can decompose it as ρ A∪B = (r) N =0 ⊕ r B √ r A r B √ r A r B r A N =1 ⊕ (0) N =2 ,(29) where each block corresponds to an eigenspace of N ≡ N A + N B . Let us now come to the partially transposed matrix ρ T B A∪B . From the definition (4), it follows: ρ T B A∪B = r A |10 10| + r B |01 01| + √ r A r B (|00 11| + |11 00|) + r|00 00| =        00 01 10 11 00 r 0 0 √ r A r B 01 0 r B 0 0 10 0 0 r A 0 11 √ r A r B 0 0 0       (30) which is a block-diagonal matrix with respect to the imbalance operator ∆N ≡ N A − N B : (N A − N B )|i A , j B = (i A − j B )|i A , j B , i A , j B ∈ {0, 1} ,(31) as ρ T B A∪B = (r A ) ∆N =1 ⊕ r √ r A r B √ r A r B 0 ∆N =0 ⊕ (r B ) ∆N =−1 .(32) The spectrum of ρ T B A∪B contains four distinct eigenvalues, one of which is negative and produced by the block ∆N = 0. The eigenvalues are r A , r B , r + r 2 + 4r A r B 2 , r − r 2 + 4r A r B 2(33) and therefore this system has non-vanishing negativity. The block-diagonal structure of ρ T B A∪B , i.e. the property [ρ T B A∪B , N A − N B ] = 0, implies that the operator e 2πiα(N A −N B ) attaches a phase e 2πiα to the ∆N = 1 block, a phase e −2πiα to the ∆N = −1 block and acts as the identity on the uncharged block. Hence, we finally obtain the expected result: Tr (ρ T B A∪B ) n e 2πiα(N A −N B ) = R n 1 (r A , r B , r; α) ,(34) with R n 1 (r A , r B , r; α) given by (20). In addition, knowing the eigenvalues allows to compute the negativity from its original definition (5), without the need to obtain the moments first. Single fermionic excitation In this section we will show that the result (20) holds for n even also if we adopt the fermionic definition of partial transposition (10). It is important to emphasise beforehand that by speaking of fermionic excitations in this context we only refer to the prescription for the partial transposition of the RDM, not to any algebra of the operators that create and annihilate the qubit states. If we adopt the definition (10), the matrix ρ R B A∪B differs from ρ T B A∪B as obtained in the previous section only because there is now an extra phase in the off-diagonal elements: (|10 01|) R B = −i|11 00| , (|01 10|) R B = −i|00 11|(35) while the diagonal elements are not modified. It follows that ρ R B A∪B is still block-diagonal with respect to the imbalance operator: ρ R B A∪B = (r A ) ∆N =1 ⊕ r −i √ r A r B −i √ r A r B 0 ∆N =0 ⊕ (r B ) ∆N =−1 .(36) The eigenvalues of the zero charge sector can now be imaginary, depending on the values of r A and r B . However, we are eventually interested in the evaluation of the charged moment Tr |ρ R B A∪B | n e i2πα(N A −N B ) , which requires the knowledge of the eigenvalues of |ρ R B A∪B | only. 
From the definition (12), and making use of the block diagonal decomposition, it is clear that we need to find the spectrum of the matrix: r −i √ r A r B −i √ r A r B 0 r −i √ r A r B −i √ r A r B 0 † = r 2 + r A r B ir √ r A r B −ir √ r A r B r A r B ,(37) which is given by λ ± = r ± r 2 + 4r A r B 2 2 .(38) The eigenvalues λ ± are nothing but the squares of the eigenvalues of |ρ R B A∪B | in the sector with ∆N = 0, while the ones associated to ∆N = ±1 are given by r A , r B respectively. Since this spectrum was already obtained in the previous section for a bosonic particle,the result (34) is recovered here for n even. Multiple Distinct Excitations A generic state consisting of k distinct excitations is a linear combination of states of the form |i 1 A . . . i k A , i 1 B . . . i k B , i 1 C . . . i k C ≡ ⊗ k j=1 |i j A , i j B , i j C ∈ (H A∪B∪C ) ⊗k(39) and for every j = 1, . . . , k i j A , i j B , i j C ∈ {0, 1} , i j A + i j B + i j C = 1 .(40) Among these states, let us focus on those which are tensor products of the linear combination (25), that is, on states of the form |11 . . . 1 := |1 ⊗k . For k = 2 we have for instance |11 = |1 ⊗2 = r A |11, 00, 00 + r B |00, 11, 00 + r|00, 00, 11 + √ r A r B (|10, 01, 00 + |01, 10, 00 ) + √ r A r(|10, 00, 01 + |01, 00, 10 ) + √ r B r(|00, 10, 01 + |00, 01, 10 ) . For a tensor product state the density matrix is the tensor product of the single-particle density matrices, and since the trace of a tensor product is the product of the traces, the RDM is a tensor product itself: ρ (1,...,1) = (ρ (1) ) ⊗k , ρ (1,...,1) A∪B = Tr C ρ (1,...,1) = (ρ (1) A∪B ) ⊗k .(42) Now we can compute the charged replica negativities using the bosonic partial transposition (4): Tr ρ (1,...,1) ,T B A∪B n e 2πiα(N A −N B ) = Tr (ρ (1) ,T B A∪B ) ⊗k n e 2πiα(N A −N B ) =Tr ρ (1) ,T B A∪B n ⊗k e 2πiα(N A −N B ) = Tr ρ (1) ,T B A∪B n e 2πiα(N (1) A −N (1) B ) ⊗k = R n 1 (r A , r B , r; α) k ,(43) where the operators N (1) A , N(1) B act on a single two-qubit state in the obvious way N (1) A |i A , i B = i A |i A , i B , N (1) B |i A , i B = i B |i A , i B , i A , i B ∈ {0, 1} .(44) The result above is not surprising and it is a consequence of the choice of the state: if the multi-particle state is a tensor product then there is no correlation between different particles and the total negativity is simply the product of the single particle negativities. This also holds for the ratio of charged moments. As we shall see below, this is not the case when the particles are indistinguishable. Multiple Identical Excitations Consider now a k-particle state consisting of k identical excitations. Its associated qubit state can be written as: |k = {k A ,k B ,k C }∈σ 3 0 (k) c k A ,k B ,k C |k A k B k C ,(45) where σ 3 0 (k) represents the set of integer partitions of k into three non-negative parts and the coefficients c k A ,k B ,k C := k!r k A A r k C r k B B k A !k B !k C ! δ k A +k B +k C ,k ,(46) are as usual probabilities of finding k A identical particles in region A, k B identical particles in region B and the remaining particles, k C in region C, weighted with the appropriate combinatorial factors. As shown in [31,32], if all vectors |k A k B k C are normalised to one, then also k|k = 1. 
From this expression it is then possible to explicitly construct the matrix elements of the (bosonic) partially transposed density matrix as: k 1 A k 1 B |ρ T B A∪B |k 2 A k 2 B = k C ∈N 0 c k 1 A k 2 B k C c k 2 A k 1 B k C ,(47) where the sum represents taking the trace over the degrees of freedom in C and the partial transposition exchanges the indices k 1 B and k 2 B in the coefficients. The matrix elements of the nth power can then be simplified to k 1 A k 1 B | ρ T B A∪B n |k n+1 A k n+1 B = k s A ,k s B ∈N 0 ;s=2,...,n k r C ∈N 0 ;r∈In n j=1 c k j A k j+1 B k j C c k j+1 A k j B k j C (48) = k s A ,k s B ∈N 0 ;s=2,...,n k r C ∈N 0 ;r∈In n j=1 k!r k j A A r k j C r k j B B k j A !k j C !k j B ! δ k j A +k j+1 B +k j C ,k δ k j+1 A +k j B +k j C ,k , where I n := {1, . . . , n}. This formula follows from multiplying together n copies of (47) with different intermediate states (indices). While Eq. (49) was already presented in [31], its symmetry resolved version is new and can be easily written by introducing phase factors in the sum above. We have k 1 A k 1 B | ρ T B A∪B n e 2πiα(N A −N B ) |k n+1 A k n+1 B = k s A ,k s B ∈N 0 ;s=2,...,n k r C ∈N 0 ;r∈In n j=1 k!r k j A A r k j C r k j B B k j A !k j C !k j B ! e 2πiα n (k j A −k j B ) δ k j A +k j+1 B +k j C ,k δ k j+1 A +k j B +k j C ,k . (49) Starting with this result, the derivation of equations (23) and (24) is identical to that presented in [31,32]. The idea is to employ the various existing constraints on the values of k j A , k j B and k j C in order to reduce the number of terms in the sum and product (49). Several terms in the sum are vanishing due to the two delta-function constraints. Once those are implemented, only one independent set of variables, say k j A remains. This set itself is constrained by the fact that each of these numbers can never be larger than k. The implementation of all these constraints eventually leads to the result (23) with (24). Before concluding the section let us analyse the simplest case, k = 1. For a single excitation, the right-hand side of (23) is given by: A −1,n r n B e −2πiα + A 1,0 r n A e 2πiα + [n/2] q=0 A 0,q (r A r B ) q r n−2q . By looking at the definition (24) one immediately gets A −1,n = A 1,0 = 1. On the other hand, A 0,q = {k 1 ,...,kn}∈σ n 0 (q) 1, where each k j ∈ {0, 1} and whenever k j = 1 then k j+1 = 0. Counting the number of sequences (k 1 , . . . , k n ) that satisfy these constraints is a combinatorial problem identical to the one we solve in the next section. This number is n n−q n−q n . As we explain in the next section, this result exactly reproduces (20). Looking at the coefficients (24) it is clear that the computation has an underlying combinatorial interpretation. For the α = 0 case this has been established by reinterpreting the sum (23) as a partition function for a certain class of graphs [32]. A combinatorial picture will emerge again in the next section in a related context: the computation using twist operators. Twist operator approach In this section, we provide a field theoretic description of the charged moments of the partially transposed RDM, valid in principle for any QFT. To do so, we employ the replica construction, and we define a set of twist operators, which are based on the branch point twist fields of 1+1D QFT [2,9,22,35]. Branch point twist fields play a prominent role in entanglement computations in 1+1D QFT and were used also in the context of excited states in [29,30,31,32,33]. 
Branch point twist fields sit at branch points from which branch cuts extend. Twist operators were introduced in a previous work [34], and have very similar exchange relations with respect to local fields as branch point twist fields, with the difference that they act on extended regions of space rather than points. In [34] we showed that the analysis of the entanglement entropies of quasi-particle states relies on few algebraic properties, mainly their exchange relations w.r.t. local fields, while most of the theory-dependent features are hidden in the zero-point fluctuations (ground-state entanglement). Similar to the previous section we will consider here a single excitation of either bosonic or fermionic type. In the fermionic case, we discuss the algebra of twist operators and fermionic fields. We both compute the charged moments in a generic fermionic theory, and, as a byproduct, we perform a simpler derivation valid for free fermions in any dimension. Single Bosonic Excitation We consider now a bosonic QFT, described by its algebra of observables A, acting on the Hilbert space H, and |0 ∈ H is the ground-state of the theory. We then consider the replica version of this theory, consisting of n non-interacting copies of the same model. The algebra of observables is now denoted by A n , so that Z n becomes an internal (global) symmetry, which includes cyclic permutation symmetry among copies [36,37,38,39]. We also assume that the QFT we start with carries an additional global U (1) symmetry. This procedure allows us to introduce a set of twist operators, supported on extended spacial regions, which mix cyclic permutation and internal U (1) symmetries, and generalise the notion of composite twist fields of 1+1D QFT [40,41,42,43,22,35]. Let O j (x) be a generic bosonic field of the j-th replica (j = 1, . . . , n) with U (1) charge κ O . We consider a region A, and we associate to it a twist operator T α A which satisfies [34] T α A O j (x) = e 2πiκ O αδ j,n O j+1 (x)T α A x ∈ A , O j (x)T α A x / ∈ A .(50) The action of T α A is non-trivial only in the region A and it can be interpreted as a replica shift j → j + 1 followed by the insertion of a U (1) flux among the n-th and the first replica. Similarly, we define the conjugate twist operatorT α A so that T α A O j (x) = e −2πiκ O αδ j,1 O j−1 (x)T α A x ∈ A , O j (x)T α A x / ∈ A,(51) whose action is the inverse of that of T α A , so we can identifyT α A = (T α A ) † . These operators will now allow us to develop a field-theoretic formulation of the symmetry-resolved negativity and its moments. Let A and B be two disconnected regions, |Ψ a state and ρ A∪B its RDM over A ∪ B. Following [20], one can indeed interpret the moments as charged functions of a n-sheeted Riemann surface with a branch-cut over A and B, connecting the replicas in two opposite directions. A similar construction in the presence of fluxes has been proposed in [25]. In analogy with the works above, we establish the following relation between the charged moments and the twist operators: Tr (ρ T B A∪B ) n e i2πα(Q A −Q B ) ∼ n Ψ|T α AT α B |Ψ n ,(52) up to a non-universal proportionality constant, which is irrelevant for our purpose as we are interested in ratios. Note that |Ψ n represents the replicated version of the state |Ψ . Consider once more the simplest case of a one-particle excitation of fixed momentum p and charge 1, which is created by a field O acting on the ground-state as |1 ∝ O(p)|0 ,(53) up to a normalization constant. 
Here O(p) is the Fourier transform of O(x) O(p) =ˆM d d xe −ip·x O(x) ,(54) and M is the whole space, which, for simplicity, we take to be a d-dimensional torus. We note that O † (−p) = [O(p)] † which will be important for later computations. Our aim, as in the previous section, is to compute the ratio of charged moments, which in this language becomes n 1|T α AT α B |1 n n 0|T α AT α B |0 n .(55) We define the projection of O(p) over a generic region A as the restricted integral O A (p) =ˆA d d xe −ip·x O(x) ,(56) then we can write n 1|T α AT α B |1 n = n 0|(O † ) n (−p) . . . (O † ) 1 (−p)T α AT α B O 1 (p) . . . O n (p)|0 n n 0|(O † ) n (−p) . . . (O † ) 1 (−p)O 1 (p) . . . O n (p)|0 n .(57) We point out that bosonic creation operators, whether on the same or on different copies, commute with each other, therefore the order of a string of operators O j (p) is irrelevant. We now observe that O j (p) = O j A (p) + O j B (p) + O j C (p) ,(58) which, when inserted into (57), leads to a large numbers of terms both in the numerator and the denominator. The key idea is that in the infinite volume limit many of these terms are subleading and the leading contribution can be isolated and shown to be simple. We now present the details of the calculation. Employing the exchange relations between twist operators and O, we can bring all the bosonic fields O j (p) to the left of T α AT α B in (57). This gives n 0|(O † ) n (−p) . . . (O † ) 1 (−p)T α AT α B O 1 (p) . . . O n (p)|0 n = n 0|(O † ) n (−p) . . . (O † ) 1 (−p)(O 2 A (p) + O n B (p)e −2πiα + O 1 C (p)) · · · × (O 1 A (p)e 2πiα + O n−1 B (p) + O n C (p))T α AT α B |0 n .(59) So far everything is exact, and no approximation has been made. To proceed further with the evaluation of the expectation value, we focus on the large volume behavior. In [34] we argued that the leading terms come from the contractions of fields belonging to the same replica, which amount to the following formal replacement inside the correlation function: (O † ) j A (−p)O j A (p) → 0|(O † ) j A (−p)O j A (p)|0 ∼ V A∩A ,(60) with A, A ⊆ M generic spacial regions. Proportionality to the volume is valid in the large volume limit and was shown in [34]. The proportionality constant can in principle be absorbed in the normalisation of the field O and does not affect the final result. In this regime, the vacuum expectation value of the twist operators also factors out: n 0|(O † ) n (−p) . . . (O † ) 1 (−p)(O 2 A (p) + O n B (p)e −i2πα + O 1 C (p)) · · · × (O 1 A (p)e i2πα + O n−1 B + O n C (p))T α AT α B |0 n n 0|(O † ) n (−p) . . . (O † ) 1 (−p)(O 2 A (p) + O n B (p)e −2πiα + O 1 C (p)) · · · × (O 1 A (p)e 2πiα + O n−1 B (p) + O n C (p))|0 n × n 0|T α AT α B |0 n ,(61) which means that the charge moments of the ground state factor out and will subsequently be cancelled in the ratio (55). While the formula above is already a large volume approximation, when the sums are expanded and each individual term considered, many terms that are subleading for large volume still appear. To make things clear, consider the terms n 0|(O † ) n (−p) . . . (O † ) 1 (−p)O 2 A (p) . . . O 1 A (p)|0 n e 2πiα ∼ V n A e 2πiα ,(62) and n 0|(O † ) n (−p) . . . (O † ) 1 (−p)O n B (p) . . . O n−1 B (p)|0 n e −2πiα ∼ V n B e −2πiα .(63) These are the terms that generate the highest powers in the volume of regions A and B respectively. 
Among the other terms that are generated, the leading ones at large volume are those containing a string of operators O j (p) and their daggered versions all inserted at different replicas, just as in the examples above. If that is not the case, there is at least a pair of operators that can not be contracted as in (60) and the term is subleading. We now proceed with the systematic evaluation and counting of all the leading terms generated in the expansion (61). We introduce the following notation to identify each term (A 1 . . . A n ) := n 0|(O † ) n (−p) . . . (O † ) 1 (−p)O j 1 A 1 (p) . . . O jn An (p)|0 n ,(64) where A i ∈ {A, B, C} and j i ∈ {1, . . . , n}. We observe that, once the sequence of regions (A 1 . . . A n ) is identified, the sequence of replica indices (j 1 . . . j n ) is fixed unambiguously, as in fact j i =        i + 1 , if A i = A i , if A i = C i − 1 , if A i = B ,(65) hence the choice of notation above. Moreover, due to the contraction rules discussed above, only the terms for which (j 1 . . . j n ) is a permutation of the indices {1, . . . , n} are non-vanishing. As a consequence, one can show that (See Appendix A.1) the only possible non-vanishing terms fit into one of these two categories: • Or, whenever A appears in (A 1 . . . A n ), it has to be followed by B. Similarly, if B appears in (A 1 . . . A n ), then it has to be preceded by A. We focus on the second set of terms. It is convenient to split this into two additional subsets, which we call type-I and type-II • Type-I: (A 1 . . . A n ) = (B A 2 . . . A n−1 A) , • Type-II: (A 1 . . . A n ) with A 1 = B . Thus, both types of string contain a number k of pairs AB and n − 2k C's, and according to (60) each of them will be proportional to ( V A V B ) k V n−2k C . Due to the balance of A's and B's there is no phase present in these terms (no α dependence). We now just need to count how many of each type we have. Among the strings of type-I, there is always at least one pair AB and there are at most k − 1 additional pairs AB that can be present. The number of such strings is precisely (a proof is given in A.2) n − k − 1 k − 1 for k = 0, . . . [n/2].(66) Similarly, one can show that the number of type-II strings consisting of k pairs of consecutive A and B is n − k k for k = 0, . . . [n/2].(67) In summary, n 0|(O † ) n (−p) . . . (O † ) 1 (−p)(O 2 A (p) + O n B (p)e −i2πα + O 1 C (p)) · · · × (O 1 A (p)e i2πα + O n−1 B (p)e i2πα + O n C (p))|0 n ∼ V n A e 2πiα + V n B e −2πiα + n 2 k=0 n n − k n − k k (V A V B ) k (V C ) n−2k ,(68) where we used the simple identity n − k k + n − k − 1 k − 1 = n n − k n − k k .(69) The denominator in (57) can be fully contracted and yields V n (up to a non-universal normalisation constant). Therefore the ratio (55) becomes a function of the usual variables r A , r B and r and we we obtain R n 1 (r A , r B , r; α) = r n A e 2πiα + r n B e −2πiα + n 2 k=0 n n − k n − k k (r A r B ) k r n−2k ,(70) which is the main result of this section. Note that although this formula looks different from (20), they are in fact equivalent. That is r + r 2 + 4r A r B 2 n + r − r 2 + 4r A r B 2 n = n 2 k=0 n n − k n − k k (r A r B ) k r n−2k .(71) This relation was given already in [31,32] without a proof. The proof is indeed quite involved, and can be performed using properties of the generalised Lucas' polynomials. This is presented in Appendix B, where we also derive two interesting corollaries. 
The equality (71) is particularly interesting because it shows that the result is always a polynomial in integer powers of r A , r B , r for n positive, even or odd. However, its analytic continuation from n even to n = 1 does contain a square root as seen in (21). Single Fermionic Excitation Let us consider a theory for which the algebra A contains fermionic observables. In other words, we assume that A is a Z 2 -graded algebra (superalgebra) generated by bosonic/fermionic fields, which are even/odd with respect to the Z 2 fermionic parity. Here, to generalise properly the twist operator construction, one must take care of the fermionic nature of the fields. Indeed, two such fields sitting at distinct points, say Ψ(x) and Ψ(x ) will now anticommute Ψ(x)Ψ(x ) = −Ψ(x )Ψ(x).(72) Moreover, when the replica construction is performed and the replica fields are obtained, we will require that fermionic fields on distinct replicas also anticommute Ψ j (x)Ψ j (x ) = −Ψ j (x )Ψ j (x).(73) As a result, the algebra of the replica theory A n is not a conventional tensor product. As before, we assume that an additional U (1) symmetry is present in the theory. Let A be a spacial region, and we associate to it a twist operator T α A which shifts the replica indices and appends a U (1) flux. While its action for bosonic fields has been already discussed, we now consider its commutation relation with a generic fermionic field Ψ of charge κ Ψ . The natural generalisation is T α A Ψ j (x) = (−1) (n−1)δ j,n e 2πiκ Ψ αδ j,n Ψ j+1 (x)T α A x ∈ A, Ψ j (x)T α A x / ∈ A.(74) We point out that the only difference with respect to (50) is the presence of an additional flux (−1) n−1 between the n-th and the first replica, a factor that was already introduced in [44] and employed for instance in [9] in the calculation of the VEV of the Ising twist field. For fermionic theories we need to define another twist operator which implements explicitly the fermionic partial transposition, and from now on we only consider n even, denoted also by n e . It has been shown in [20] that the effect of the partial transposition on the fermions gives rise to an additional insertion of a flux (−1) among any pair of consecutive replicas, in addition to the usual replica shift. To implement this construction, we define a twist operatorT α A satisfying T α A Ψ j (x) = −(−1) (n−1)δ j,1 e −2πiκ Ψ αδ j,1 Ψ j−1 (x)T α A x ∈ A, Ψ j (x)T α A x / ∈ A.(75) We are now ready to compute the ratio of charged moments, along the same lines of the previous computation. Namely, given a fermionic field Ψ(x) with U (1) charge +1, we consider the state |1 ∼ Ψ(p)|0 ,(76) and its replicated version |1 n ∼ Ψ 1 (p) . . . Ψ n (p)|0 .(77) Given two disconnected regions A and B, we express the ratio of charged moments of the partial transpose also in this case as n 1|T α AT α B |1 n n 0|T α AT α B |0 n ,(78) with n an even integer. We expand the expectation value of the twist operators as follows n 0|(Ψ † ) n (−p) . . . (Ψ † ) 1 (−p)(Ψ 2 A (p) + Ψ n B (p)e −2πiα + Ψ 1 C (p)) · · · × (−Ψ 1 A (p)e 2πiα − Ψ n−1 B (p)e 2πiα + Ψ n C (p))T α AT α B |0 n n 0|(Ψ † ) n (−p) . . . (Ψ † ) 1 (−p)(Ψ 2 A (p) + Ψ n B (p)e −2πiα + Ψ 1 C (p)) · · · × (−Ψ 1 A (p)e 2πiα − Ψ n−1 B (p) + Ψ n C (p))|0 n × n 0|T α AT α B |0 n .(79) As in the bosonic case, many terms are generated after the sums are expanded and similar considerations as to which are leading and which are sub-leading for large volume can be applied. 
However, since the fermionic fields anticommute, we need to pay attention to the order of the fields. For example, the term (A A . . . A) can be evaluated to n 0|(Ψ † ) n (−p) . . . (Ψ † ) 1 (−p)Ψ 2 A (p) . . . Ψ 1 A (p)|0 n (−e 2πiα ) = n 0|(Ψ † ) n (−p) . . . (Ψ † ) 1 (−p)Ψ 1 A (p)Ψ 2 A (p) . . . Ψ n A (p)|0 n e 2πiα ∼ e 2πiα V n A ,(80) where Ψ 1 A (p) has been recast in the first position after crossing n − 1 (odd) fermions, thus acquiring an additional phase −1. Similarly, it is easy to show that n 0|(Ψ † ) n (−p) . . . (Ψ † ) 1 (−p)Ψ n B (p) . . . Ψ n−1 B (p)|0 n (−e −2πiα ) ∼ V n B e −2πiα .(81) So far, it should be clear that each term of the expansion is weighted with a proper phase, which arises from both the commutation relations (between twist operators and fermions) and the contractions, and this is the crucial difference with respect to the calculation presented for the boson. We can summarise the total contribution to the phase for a generic term A 1 ... An j 1 ... jn := n 0|(Ψ † ) n (−p) . . . (Ψ † ) 1 (−p)Ψ j 1 A 1 (p) . . . Ψ jn An (p)|0 n(82) as follows: • If j n = 1 and A n = A there is a −e 2πiα phase. Similarly, if j 1 = 1, and A 1 = B, there is a contribution of −e −2πiα . • In addition to the previous phase, an additional −1 is present for each B which appears in the string (A 2 . . . A n ). This is due to the fermionic partial transposition over B. • Once the contraction is performed, there is a sign coming from the order of the fields. Given (j 1 . . . j n ) = (σ(1) . . . σ(n)), with σ a generic permutation of the replica indices, one can show that the sign appearing after the contraction is sign(σ). Having suitably modified the definition of the twist operators for the fermion, the phase of each term appearing in (79) is the same as the one for the corresponding term in (61), leading to the same result for fermions and bosons. To see this, let us first analyse a term of type-I B A 2 ... A n j 2 ... 1 .(83) The phases coming from the last A and the first B cancel each other. Let A i+1 = B: then this must be preceded by A i = A. The resulting replica indices at the corresponding positions are j i+1 = i and j i = i + 1. In other words, there is an exchange of the replica indices i and i + 1, which changes the sign of the permutation σ and it contributes as −1 after the contraction, but it is compensated by the −1 due to the presence of a B. Similar considerations apply straightforwardly to the strings of type-II. Putting everything together, the same formula (20) is obtained again, which is the final result. We emphasise that this derivation relies on the assumption that n is even, something that was not necessary for a single bosonic excitation. Single Fermionic Excitation via Replica Diagonalisation We briefly note that the same result just derived for fermions can also be obtained via replica diagonalisation in the free case. The key observation is that, as show in Ref. [20], the fermionic partial transpose of a Gaussian state is still Gaussian. This allows to simplify the analysis of the replica theory, reducing it to a single-replica model in the presence of proper fluxes. 
For instance, given a Gaussian state and its RDM, and taking n to be even, one can show the factorisation [20] Tr |ρ T B A∪B | n e 2πiα(Q A −Q B ) = n−1 2 p=− n−1 2 Tr ρ A∪B e 2πi(α+p) n (Q A −Q B )+iπQ B .(84) Each term appearing inside the product is nothing but a single copy charged partition function with given fluxes along A and B, that is the key quantity we aim to evaluate in this subsection. we can also identify each term inside the product (84) as a ratio of correlators of twist operators, as done in 55. The only difference is the flux α in T α A , which needs to be shifted in accordance with the form of the trace inside the product. We then have that 1|T α A T −α+ 1 2 B |1 = 0|Ψ † (p)T α A T −α+ 1 2 B Ψ(p)|0 0|Ψ † (p)Ψ(p)|0 = 0|Ψ † (p)(Ψ A (p)e 2πiα − Ψ B (p)e −2πiα + Ψ C (p))T α A T −α+ 1 2 B |0 0|Ψ † (p)Ψ(p)|0 0|T α A T −α+ 1 2 B |0 (r A e 2πiα − r B e −2πiα + r).(86) so that n−1 2 p=− n−1 2 1|T α+p A T −α−p+ 1 2 B |1 0|T α+p A T −α−p+ 1 2 B |0 = n−1 2 p=− n−1 2 (r A e 2πi(α+p) n − r B e − 2πi(α+p) n + r) .(87) This product can be shown yet again to be equal to (20), though the proof requires some mathematical identities that we present in Appendix (C). Bosonic State with Multiple Distinct Excitations Consider now a k-particle bosonic state where all particles have the same (unitary) charge and momentum p. In order to ensure the presence of U (1) symmetry we consider the complex free boson, described by a field O which satisfies Wick's theorem in the vacuum state. We thus describe the excited state as |k ∼ O(p) k |0 .(88) Before entering the core of the computation in the replica model it is convenient, for computational reasons, to slightly modify the definition (50) of the twist operators as follows T α A O j (x) = e 2πiα n O j+1 (x)T α A x ∈ A, O j (x)T α A x / ∈ A.(89) This amounts to distributing the total flux e 2πiα among all copies, rather than inserting it between the n-th and the first replica only. While this operator is different from what we used before, one can show that the final result obtained as expectation value is not modified, as a consequence of replica and U (1) symmetry. An analogous fractionalisation will be considered forT α A . We can now evaluate n k|T α AT α B |k n = n 0| (O † ) n (−p) k . . . (O † ) 1 (−p) k T α AT α B O 1 (p) k . . . O n (p) k |0 n n 0| (O † ) n (−p) k . . . (O † ) 1 (−p) k O 1 (p) k . . . O n (p) k |0 n .(90) The denominator, required to ensure the normalisation of the state, can be computed using Wick's theorem, which gives the expectation value as a sum over the possible contractions: n 0| (O † ) n (−p) k . . . (O † ) 1 (−p) k O 1 (p) k . . . O n (p) k |0 n = [ 0| (O † )(−p) k O(p) k |0 n ∼ V nk (k!) n .(91) The numerator in (90) can be manipulated similarly to the previous sections, now using the exchange relation (89) : n 0| (O † ) n (−p) k . . . (O † ) 1 (−p) k T α AT α B O 1 (p) k . . . O n (p) k |0 n = n 0| (O † ) n (−p) k . . . (O † ) 1 (−p) k O 2 A (p)e 2πiα n + O n B (p)e − 2πiα n + O 1 C (p) k · · · × O 1 A (p)e 2πiα n + O n−1 B (p)e − 2πiα n + O n C (p) k T α AT α B |0 n n 0| (O † ) n (−p) k . . . (O † ) 1 (−p) k O 2 A (p)e 2πα n + O n B (p)e − 2πiα n + O 1 C (p) k · · · × O 1 A (p)e 2πiα n + O n−1 B (p)e − 2πiα n + O n C (p) k |0 n × n 0|T α AT α B |0 n .(92) The generic evaluation of the previous expression is combinatorially-speaking rather involved, however many crucial features are already apparent. Namely, it is clear that 3 nk terms are generated simply by expanding the product over sums. 
Each resulting term can then be evaluated via Wick's theorem and will give rise to many contractions, as there are many possible ways to contract O j A i (p) with (O † ) j (−p). Any of these contractions can be recovered via the permutation of the operators (O † ) j (−p) living in the same replica, giving a factor (k!) n which precisely cancels the normalisation of the state (the denominator in Eq. (90)). Moreover, whenever the restriction of O 1 A (p) over the region A appears, a factor V A e 2πiα n is present after the Wick contraction; similarly, a factor V B e − 2πiα n appears with every B and a factor V C for every C. Putting everything together, we can infer the general structure n 0| (O † ) n (−p) k . . . (O † ) 1 (−p) k O 2 A (p)e 2πiα/n + O n B (p)e −2πiα/n + O 1 C (p) k · · · × O 1 A (p)e 2πiα/n + O n−1 B (p)e −2πiα/n + O n C (p) k |0 n = (k!) n k A ,k B C k A ,k B V k A A V k B B (V C ) nk−k A −k B e 2πiα(k A −k B ) n ,(93) so that the expectation value is a homogeneous polynomial of degree nk in V A , V B , V C , and C k A ,k B is a natural number of combinatorial nature. Using the expression Eq. (55), we get the ratio of charged moments as R n k (r A , r B , r; α) = n 1|T α AT α B |1 n n 0|T α AT α B |0 n = k A ,k B C k A ,k B r A k A r B k B r nk−k A −k B e 2πiα(k A −k B ) n ,(94) valid, as usual, in the infinite volume limit. The closed formula for the combinatorial coefficient C k A ,k B at any k and n is difficult to obtain by this method, but has been obtained ealier for simpler qubit states. The coefficients C k A ,k B are nothing but the coefficients A p,q given in (24). An Example: k = n = 2 Let us consider the example n = 2 and k = 2 to get an idea of how the combinatorics of (93) works in this case. Define the symbol A 1 ... A nk j 1 ... j nk ,(95) which represents to the insertion of a string O j 1 A 1 (p) . . . O j nk A nk (p) inside the correlation function. Among the strings which are generated, we only keep those for which any replica index j appears exactly k times among (j 1 . . . j nk ), as all others will vanish after Wick contractions. For n = k = 2 the length of the strings above is nk = 4. For any given string, there are others that can be obtained via the following permutations of the A i indices: (A 1 A 2 A 3 A 4 ) → (A 2 A 1 A 3 A 4 ), (A 1 A 2 A 3 A 4 ) → (A 1 A 2 A 4 A 3 ), (A 1 A 2 A 3 A 4 ) → (A 3 A 4 A 1 A 2 )(96) and these contribute equally. This allows us to slightly simplify the combinatorial counting, and we only list the strings up to the transformations generated by Eq. (96), taking care of the degeneracy for each representative distinct string. These are • (A C B C): it yields 8r A r B r 2 . • (A A A A) : it yields (r A e • (C C C C): it yields r 4 . • (A C A C): it yields 4r 2 (r A e 2πiα 2 ) 2 . • (B C B C): it yields 4r 2 (r B e − 2πiα 2 ) 2 . Putting all these pieces together, we obtain R 2 2 (r A , r B , r; α) = r 4 A e 4πiα + r 4 B e −4πiα + 6r 2 A r 2 B + 4r 3 A r B e 2πiα + 4r 3 B r A e −2πiα + 8r A r B r 2 + r 4 + 4r 2 A r 2 e 2πiα + 4r 2 B r 2 e −2πiα .(97) This result is consistent with the one obtained from a direct evaluation of the right hand side of (23) for n = 2, k = 2. Numerics In this section, we present numerical results for a 1D lattice Fermi gas. In particular, we consider the ground-state at half-filling, which is a Fermi sea and has critical features. This Fermi sea is then excited through the insertion of an additional particle above the Fermi energy at large momentum. 
We aim to compute the ratio of charged moments for a state of a single excitation and show the validity of result (20) numerically. Agreement with the latter confirms the claim made earlier in this paper and in previous works, namely that while the ground-state exhibits theory-dependent, highly non-trivial behavior (in this case, captured by a free fermion CFT [25]), the contribution given by the excitation is universal. The Method Let us consider a Fermi chain of length L described by the fermionic operators {f j , f † j } j=1,...,L satisfying the standard anticommutation relations {f j , f j } = {f † j , f † j } = 0, {f j , f † j } = δ jj .(98) We choose a Gaussian state with a given number of particles and consider its correlation matrix, denoted by C(j, j ) = f † j f j , j, j = 1, . . . , L. Let us further define the L × L covariance matrix Γ = 1 − 2C .(100) Given any two disconnected spacial subsystems A and B, of length A , B respectively, the restriction of Γ over A ∪ B is a ( A + B ) × ( A + B ) matrix defined by Γ A∪B = Γ AA Γ AB Γ BA Γ BB ,(101) Following Ref. [45] one can show that, if ρ A∪B is the RDM for this system, the fermionic partial transposition ρ R B A∪B is Gaussian. Moreover, since in general ρ R B A∪B is not hermitian, it is convenient to introduce a matrix ρ × defined as ρ × = (ρ R B A∪B )(ρ R B A∪B ) † Tr(ρ 2 A∪B ) ,(102) which, by construction, has unit trace. If one interprets ρ × as an unphysical mixed state of A ∪ B, its associated covariance matrix is (See [45]) Γ × A∪B ≡ 2 1 + Γ 2 A∪B Γ AA 0 0 −Γ BB .(103) One can then express the even charged moments of the partially transposed RDM as (See also Ref. [25]) log Tr |ρ R B A∪B | n e 2πiα(Q A −Q B ) = Tr log   1 − Γ × A∪B 2 n 2 e 2πiα + 1 + Γ × A∪B 2 n 2   + n 2 Tr log 1 + Γ A∪B 2 2 + 1 − Γ A∪B 2 2 .(104) We stress that Eq. (104) makes sense also if n is not an even integer, and it naturally provides the analytic continuation over n for the even charged moments. Lattice Fermi Gas For our numerics we take the Hamiltonian of a lattice free Fermi gas on a ring of length L H = − 1 2 j f † j+1 f j + f † j f j+1 .(105) Its ground state is a Fermi sea, with Fermi momentum k F = π/2, and its correlation matrix is C 0 (j, j ) ≡ f † j f j 0 = sin k F (j − j ) L sin π(j−j ) L .(106) We then consider the excited state obtained via the insertion of a particle at momentum k = k F + π 2 − π L(107) above the Fermi sea, whose correlation matrix is C(j, j ) = C 0 (j, j ) + 1 L e −i(k F + π 2 − π L )(j−j ) .(108) While the specific choice of k is irrelevant for our purpose, it is important to require k − k F finite in the thermodynamic limit 2 We consider the following subsystems A = {1, . . . , A }, B = { A + 1, . . . , A + B },(109) and we fix the size of A ∪ B to be half the subsystem size: A + B L = 1 2 .(110) We finally evaluate numerically the difference of charged Rényi negativities E n (α) − E n,0 (α) ≡ log Tr |ρ R B A∪B | n e 2πiα(Q A −Q B ) Tr |ρ R B A∪B,0 | n e 2πiα(Q A −Q B ) ,(111) for some values of the flux α as a function of r A = A /L, and we compare it with the prediction (20) (with r B ≡ B /L = 1/2 − r A ). In Figs. 1 and 2 we show the results for a chain of length L = 400 and given values of n and α, while varying the value of r A from 0 to 1/2. We consider also non-even values of n, and we compare the numerics with the analytical continuation (over the even integers) of our predictions. 
The general agreement is good: even if there are small discrepancies for small values of A or B , corresponding to r A 0 and r A 0.5, these are expected to be finite-size effects which vanish in the large-volume limit. Figure 1: Difference of (uncharged, α = 0) Rényi negativities for the one-particle state at n = 0.5, 1, 2. Note that although we derived our predictions for n even via the replica approach, we can analytically continue them to any value of n (and they are shown with dashed lines). Conclusions and Outlook This work is Part III of a series of papers, starting with [33,34] where we have investigated the universal properties of symmetry resolved quantities in zero-density excited states. Zero-density here means that volume is taken to infinity, while the number of excitations above the ground state (which may be trivial, as for a qubit state, or highly non-trivial as in QFT) is kept fixed and finite. These papers in turn extend work on entanglement measures for zero-density excited states carried out in [29,30,31,32]. Other important contributions to this study are [47,48,49,50,51,52,53] In line with the results of [33,34] for the Rényi entropies, also here we expected and indeed found that the contribution of a finite number of excitations to the symmetry resolved (logarithmic) negativity is given by a simple formula, a polynomial on the variables r A , r B and r = 1 − r A − r B , which represent the relative sizes of two subsystems A and B and their complement, respectively. For the symmetry resolved moments of the negativity, this polynomial will also depend on a parameter related to the internal symmetry of the theory, which in this paper we take to be U (1). We called this parameter α. The formulae that we obtained generalise the results of [31,32] in a simple way and are consistent with numerical results. However, some of the methods that we have employed to obtain these results are quite new and have potential for further use. The method of twist operators, closely related to the derivation in [32] for d-dimensional free bosonic theories, was introduced in [34] and the present paper provides a very non-trivial check of its validity. From this method alone, we can claim that our formulae should be valid in any dimensionality. Compared to a computation based on branch point twist fields for free QFT, as performed for the negativity in [31], the use of twist operators, at least for free theories, captures the same universal result through a significantly simpler computation. A further application of twist operators and one of the most interesting and novel results of this paper is the fact that twist operators can be easily adapted to treat particles with Figure 2: Difference of charged Rényi negativities for the one-particle state at flux α = 0.1, 0.2, 0.3, 0.4 and n = 1, 2 evaluated numerically (dots) versus the analytical predictions (dashed lines). The left/right panels show the real/imaginary part of E n (α) − E n,0 (α). The size of the chain is L = 400, and we plot the results as functions of r A ∈ (0, 1/2). Note that although we have insisted that n must be even, once a formula with n even has been obtained, n can be analytically continued to other values, hence our choices for the numerics. both fermionic and bosonic statistics. In particular, it has been known for some time that the negativity of fermionic theories requires a slight redefinition of the operation of partial transposition [19,20]. 
Here we find that, first, this redefinition is easy to implement in the context of twist operators, and, second, that once implemented it leads to a result which is the same as for bosons (even if the intermediate steps and starting point of the computation are different). This ties in well with the idea that the universal part of the entanglement associated with these types of excitation has a semiclassical interpretation (as recently explored in [53]), thus the statistics of excitations plays no role, even if it does play an important role for nonuniversal contributions, which are non-trivial when we consider excited states of QFT. Looking ahead, there are many interesting questions to explore in relation to the role of different kinds of symmetries in the context of entanglement as well as the entanglement of excited states in different limits and for different types of particle statistics. For example, a notion of anyonic partial transposition was introduced in [54] which we could easily adapt to some of the models considered here, such as qubit states. There have also been recent studies of the symmetry resolved entanglement for non-invertible symmetries [55] which could equally be extended to excited states. In connection to twist operators, any new measures of charged entanglement should be computable by a suitable redefinition of the operators. Also related to this and previous work is the investigation of the crossover from low to high energy states in CFT, from the model-dependent predictions of [56] to the sort of universal results obtained in [29,30,31,32,33,34] and here. The results of [56] apply to low-lying excited states of CFT, whereas the universal formulae obtained for zero-density excited states apply for large momentum/energy. There must be a crossover between these two behaviours that one can understand from CFT arguments and it would be very interesting to do so. Finally, twist operators seem to be a promising approach to computing entanglement measures in limiting cases where many details of the interaction can be neglected, i.e. semiclassical limit. A possible field where these ideas could be applied are out-of-equilibrium protocols [57,58]. It would be interesting to try to characterise entanglement growth for free/interacting theories in any dimension through this approach. For some protocols (i.e. global quench), we expect that the linear growth of entanglement may be captured by a semiclassical approximation of correlation functions, similar to what we have done here. We very much hope to tackle some of these problems in future work. If the first condition holds, one can apply the previous considerations to A n = A and deduce A n−1 = A; this argument can be iterated, and one concludes that (A 1 . . . A n ) = (A A . . . A). In contrast, if the second condition holds, then A 1 A 2 ... An j 1 j 2 ... jn = A B ... An 1 2 ... jn . So far, as the choice of the first position was arbitrary (by cyclic permutation symmetry), we proved that whenever A is present, either it is followed by B or the whole string is (A A . . . A). Using the same argument, one can prove that whenever B is present, either it is preceded by A or (A 1 A 2 . . . A n ) = (B B . . . B). A.2 Combinatorial Counting of Strings In this Subsection we count the number of non-vanishing strings (A 1 . . . A n )(113) containing k pairs of consecutive A's and B's. We first focus on the type-I strings (B A 2 . . . . . . 
A n−1 A).(114) The number of strings that satisfy the constraints derived above is given by all the possible ways one can insert sequences of C's among any pair of A and B. In other words, the generic string will look like (B C C . . . C A B C C . . . . . . C A),(115) where k sequences of C's of length {x i } i=1,...,k are present, and x i ≥ 0 are integer numbers. As the length of the total string is n, the {x i } i=1,...,k satisfy the following constraint x 1 + · · · + x k = n − 2k.(116) We now make use of a remarkable mathematical result, namely that the number of non-negative integer solutions of x 1 + · · · + x k = n, that is the number of non-negative integer partitions of n into k parts is n+k−1 n [59]. As a consequence, the number of type-I strings satisfying the previous constraints is n − k − 1 k − 1 .(117) Similarly, we consider now the type-II strings, having the following structure (C C . . . C A B C C . . . . . . C). In this case, there are k + 1 sequences of consecutive C's, as the difference with respect to the previous case is that there is now also a sequence that precedes the first pair of A and B. Thus, we now have to count the number of non-negative integer solutions of x 1 + · · · + x k+1 = n − 2k,(119) which is n − k k . Summing up the contribution of both type of strings, we get precisely n − k − 1 k − 1 + n − k k (121) as the number of strings containing k pairs of consecutive A's and B's. As a last technical remark, we observe that there is at least one pair of consecutive A and B in the type-I strings, given by A n = A and A 1 = B. Then, if k = 0 there are no strings satisfying the constraints, and this is compatible with the convention The generalised Lucas polynomials (in two variables) are defined via the recurrence relation [60,61]: V n+2 (x, y) = xV n+1 (x, y) + yV n (x, y) , n ∈ N 0 (123) the first few polynomials are V 0 (x, y) = 2, V 1 (x, y) = x, V 2 (x, y) = x 2 + 2y. The proof of (71) is based on the fact that the two sides of the equation are precisely two equivalent closed formulae for the n-th Lucas polynomial, with x = r, y = r A r B . In fact, we will now prove the following two statements: 1. For all integers n ≥ 0, one has a generalised Binet formula V n (x, y) = α n + β n , α = x + x 2 + 4y 2 , β = x − x 2 + 4y 2 . This is immediate to prove, as (123) holds by inspection for n = 0, n = 1 and furthermore α 2 = xα+y, β 2 = xβ+y, which implies that α n+2 = xα n+1 +yα n and β n+2 = xβ n+1 +yβ n . This means that α n + β n satisfies the relation (123) for all n ≥ 0. 2. For all integers n ≥ 1 another explicit formula for V n (x, y) is given by V n (x, y) = n/2 k=0 n n − k n − k k x n−2k y k . We prove this by showing again that the recurrence relation is satisfied. For n = 1, n = 2 it is immediate to see that this reproduces the correct polynomials. For n ≥ 3, we can make use of the identity n n − k n − k k = n − 1 n − k − 1 n − k − 1 k + n − 2 n − k − 1 n − k − 1 k − 1(126) and we adopt the convention that n k = 0 if k > n or k < 0. It is convenient to split the two cases n = 2m and n = 2m + 1, as the floor function yields different values. Let us consider the case n = 2m, the other case being completely analogous. If n = 2m, n/2 = m, (n − 1)/2 = (n − 2)/2 = m − 1. From (125) and (126) we have V n (x, y) = m k=0 2m 2m − k 2m − k k x 2m−2k y k = x m k=0 2m − 1 2m − 1 − k 2m − 1 − k k x 2m−1−2k y k + y m k=0 2m − 2 2m − 1 − k 2m − 1 − k k − 1 x 2m−2k y k−1(127) the first sum in the right-hand side vanishes if k = m, so that this term is xV n−1 (x, y). 
The second sum on the other hand vanishes if k = 0, so we can shift the summation variable and we see that this term reproduces yV n−2 (x, y). Hence the recurrence relation is proved. Equation (71) follows from these two statements. This equation has (at least) two interesting implications. The first one comes from a direct expansion of the Binet formula using the binomial theorem: x − x 2 + 4y 2 n + x + x 2 + 4y 2 n = 1 2 n n j=0 (−1) k n j x n−j (x 2 + 4y) j/2 + n j x n−j (x 2 + 4y) j/2 As far as we know, this identity was only proved for n odd in [60]. The other interesting implication is obtained for x = y = 1. In this case, the Lucas polynomials (123) reduce to the Lucas numbers: L n = L n−1 + L n−2 , n ≥ 2 with L 0 = 2, L 1 = 1. The recurrence formula is the same defining the Fibonacci sequence, except for the different initial values. Equation (124) with x = y = 1 gives a closed formula for the Lucas numbers, and thus we have, for n ≥ 1: n/2 k=0 n n − k n − k k = 1 + √ 5 2 n + 1 − √ 5 2 n(131) the quantity on the right-hand side is φ n + (1 − φ) n , with φ the golden ratio, and it is always a positive integer. On the other hand, the quantity on the left hand side is the number of non-vanishing strings of type-I and type-II (out of a total of 3 n possible strings) obtained via the contraction methods discussed in Section 3. C Mathematical Identities Here we point out two useful identities which are employed to obtain the free fermion result in Subsection 3.2.1. The first relation is where the product is performed over p integer/semi-integer when n is odd/even. An immediate consequence of this identity is valid if n an even integer 3 . • Either (A 1 . . . A n ) = (A . . . A) or (A 1 . . . A n ) = (B . . . B), the two cases which have been already discussed in Eqs. (62), (63), the twist operators are defined by Eq. (74) with n = 1. A similar calculation as in the previous sections gives • (B B B B) : it yields (r B e − 2πiα 2 ) 4 . • (A A B B) and (A B A B): they yield 6r 2 A r 2 B . • (B A A A): it yields 4(r A e 2πiα 2 ) 3 (r B e − 2πiα 2 ). • (A B B B): it yields 4(r B e −2πiα/2 ) 3 (r A e 2πiα/2 ). B Generalised Lucas Polynomials and a proof of Eq. (71) n−2k y k ,(128) where in the last line we rearranged the sums over j and k. This quantity equals (125), which implies the non-trivial combinatorial identity: ) = x n + y n , n ∈ N (132) n (α+p) − r B e − 2πi n (α+p) + r) We mention that in Ref.[46] the case k − kF ∼ 1/L, which is a low-lying state, was considered, and its symmetry resolved entanglement was computed. It is important here that n is even, as (−rB) n = r n B . A CombinatoricsA.1 Non-Vanishing StringsHere, we count explicitly the terms in (61) that give rise to non-vanishing contractions. In the following, we will make use of the notation introduced in (64). Let us consider a string containing the symbol A at a certain position, say the first one, without loss of generality:Then, the replica index 1 should appear exactly once among j 2 , . . . , j n , as each of the replica indices 1, . . . , n has to be present if the corresponding term is non-vanishing. By inspecting (65), one realises that there are only two possible cases :• j n = 1, and then A n = A,• j 2 = 1, which implies A 2 = B. Geometric and renormalized entropy in conformal field theory. C Holzhey, F Larsen, F Wilczek, Nucl. Phys. 424C. Holzhey, F. Larsen, and F. Wilczek. "Geometric and renormalized entropy in conformal field theory". In: Nucl. Phys. B424 (1994), pp. 443-467. 
Entanglement entropy and quantum field theory. P Calabrese, J L Cardy, P002. eprint: hep-th/0405152J. Stat. Mech. 0406P. Calabrese and J. L. Cardy. "Entanglement entropy and quantum field theory". In: J. Stat. Mech. 0406 (2004), P002. eprint: hep-th/0405152. Quantum spin chain, Toeplitz determinants and Fisher-Hartwig conjecture. B.-Q Jin, V E Korepin, J. Stat. Phys. 116B.-Q. Jin and V.E. Korepin. "Quantum spin chain, Toeplitz determinants and Fisher- Hartwig conjecture". In: J. Stat. Phys. 116 (2004), pp. 79-95. Evolution of entanglement entropy in one-dimensional Systems. P Calabrese, J L Cardy, P010. eprint: cond-mat/0503393J. Stat. Mech. 0504P. Calabrese and J. L. Cardy. "Evolution of entanglement entropy in one-dimensional Systems". In: J. Stat. Mech. 0504 (2005), P010. eprint: cond-mat/0503393. Entanglement in quantum critical phenomena. G Vidal, Phys. Rev. Lett. 90227902G. Vidal et al. "Entanglement in quantum critical phenomena". In: Phys. Rev. Lett. 90 (2003), p. 227902. Ground state entanglement in quantum spin chains. J I Latorre, E Rico, G Vidal, Quant. Inf. Comput. 4J. I. Latorre, E. Rico, and G. Vidal. "Ground state entanglement in quantum spin chains". In: Quant. Inf. Comput. 4 (2004), pp. 48-92. Fine-grained entanglement loss along renormalization group flows. J I Latorre, Phys. Rev. 7134301J. I. Latorre et al. "Fine-grained entanglement loss along renormalization group flows". In: Phys. Rev. A71 (2005), p. 034301. Concentrating partial entanglement by local operations. C H Bennett, Phys. Rev. 53C. H. Bennett et al. "Concentrating partial entanglement by local operations". In: Phys. Rev. A53 (1996), pp. 2046-2052. Form factors of branch-point twist fields in quantum integrable models and entanglement entropy. J L Cardy, O A Castro-Alvaredo, B Doyon, J. Stat. Phys. 130J. L. Cardy, O. A. Castro-Alvaredo, and B. Doyon. "Form factors of branch-point twist fields in quantum integrable models and entanglement entropy". In: J. Stat. Phys. 130 (2008), pp. 129-168. Entanglement properties of the harmonic chain. K Audenaert, https:/link.aps.org/doi/10.1103/PhysRevA.66.042327Phys. Rev. 6642327K. Audenaert et al. "Entanglement properties of the harmonic chain". In: Phys. Rev. A66 (4 Oct. 2002), p. 042327. doi: 10.1103/PhysRevA.66.042327. url: https://link.aps. org/doi/10.1103/PhysRevA.66.042327. Volume of the set of separable states. Karolżyczkowski, https:/link.aps.org/doi/10.1103/PhysRevA.58.883Phys. Rev. KarolŻyczkowski et al. "Volume of the set of separable states". In: Phys. Rev. A58 (2 Aug. 1998), pp. 883-892. doi: 10.1103/PhysRevA.58.883. url: https://link.aps. org/doi/10.1103/PhysRevA.58.883. A comparison of entanglement measures. Jens Eisert, B Martin, Plenio, Journal of Modern Optics. 46Jens Eisert and Martin B Plenio. "A comparison of entanglement measures". In: Journal of Modern Optics 46.1 (1999), pp. 145-154. Computable measure of entanglement. Guifré Vidal, Reinhard F Werner, Phys. Rev. A65. 332314Guifré Vidal and Reinhard F Werner. "Computable measure of entanglement". In: Phys. Rev. A65.3 (2002), p. 032314. Logarithmic negativity: a full entanglement monotone that is not convex. B Martin, Plenio, Phys. Rev. Lett. 9590503Martin B Plenio. "Logarithmic negativity: a full entanglement monotone that is not con- vex". In: Phys. Rev. Lett. 95.9 (2005), p. 090503. Erratum: Logarithmic Negativity: A Full Entanglement Monotone That Is not Convex. B Martin, Plenio, Phys. Rev. Lett. 95119902Martin B Plenio. 
"Erratum: Logarithmic Negativity: A Full Entanglement Monotone That Is not Convex". In: Phys. Rev. Lett. 95.9 (2005), p. 119902. Entanglement in quantum information theory. Jens Eisert, quant- ph/0610253PhD ThesisJens Eisert. "Entanglement in quantum information theory". In: PhD Thesis quant- ph/0610253 (2006). Entanglement negativity in quantum field theory. Pasquale Calabrese, John Cardy, Erik Tonni, Phys. Rev. Lett. 109130502Pasquale Calabrese, John Cardy, and Erik Tonni. "Entanglement negativity in quantum field theory". In: Phys. Rev. Lett. 109.13 (2012), p. 130502. Entanglement negativity in extended systems: a field theoretical approach. Pasquale Calabrese, John Cardy, Erik Tonni, J. Stat. Mech. 2008Pasquale Calabrese, John Cardy, and Erik Tonni. "Entanglement negativity in extended systems: a field theoretical approach". In: J. Stat. Mech. 2013.02 (2013), P02008. Entanglement negativity in two-dimensional free lattice models. Viktor Eisler, Zoltán Zimborás, Phys. Rev. B93. 11115148Viktor Eisler and Zoltán Zimborás. "Entanglement negativity in two-dimensional free lat- tice models". In: Phys. Rev. B93.11 (2016), p. 115148. Partial time-reversal transformation and entanglement negativity in fermionic systems. Hassan Shapourian, Ken Shiozaki, Shinsei Ryu, Phys. Rev. 95165101Hassan Shapourian, Ken Shiozaki, and Shinsei Ryu. "Partial time-reversal transforma- tion and entanglement negativity in fermionic systems". In: Phys. Rev. B95.16 (2017), p. 165101. Entanglement negativity of fermions: Monotonicity, separability criterion, and classification of few-mode states. Hassan Shapourian, Shinsei Ryu, Phys. Rev. A99. 222310Hassan Shapourian and Shinsei Ryu. "Entanglement negativity of fermions: Monotonicity, separability criterion, and classification of few-mode states". In: Phys. Rev. A99.2 (2019), p. 022310. Symmetry-Resolved Entanglement in Many-Body Systems. Moshe Goldstein, Eran Sela, 10.1103/physrevlett.120.200602Phys. Rev. Lett. 12020Moshe Goldstein and Eran Sela. "Symmetry-Resolved Entanglement in Many-Body Sys- tems". In: Phys. Rev. Lett. 120.20 (2018). doi: 10.1103/physrevlett.120.200602. url: https://doi.org/10.1103%2Fphysrevlett.120.200602. Equipartition of the entanglement entropy. J C Xavier, F C Alcaraz, G Sierra, https:/link.aps.org/doi/10.1103/PhysRevB.98.041106Phys. Rev. B. 9841106J. C. Xavier, F. C. Alcaraz, and G. Sierra. "Equipartition of the entanglement entropy". In: Phys. Rev. B 98 (4 July 2018), p. 041106. doi: 10.1103/PhysRevB.98.041106. url: https://link.aps.org/doi/10.1103/PhysRevB.98.041106. Imbalance entanglement: Symmetry decomposition of negativity. Eyal Cornfeld, Moshe Goldstein, Eran Sela, Phys. Rev. A98. 332302Eyal Cornfeld, Moshe Goldstein, and Eran Sela. "Imbalance entanglement: Symmetry decomposition of negativity". In: Phys. Rev. A98.3 (2018), p. 032302. Symmetry decomposition of negativity of massless free fermions. Sara Murciano, Riccarda Bonsignori, Pasquale Calabrese, SciPost Physics. 10111Sara Murciano, Riccarda Bonsignori, and Pasquale Calabrese. "Symmetry decomposition of negativity of massless free fermions". In: SciPost Physics 10.5 (2021), p. 111. Charged Rényi negativity of massless free bosons. Hui-Huang Chen, Journal of High Energy Physics. 2022Hui-Huang Chen. "Charged Rényi negativity of massless free bosons". In: Journal of High Energy Physics 2022.2 (2022), pp. 1-27. Charge imbalance resolved R\'enyi negativity for free compact boson: Two disjoint interval case. 
Himanshu Gaur, A Urjit, Yajnik, arXiv:2210.06743arXiv preprintHimanshu Gaur and Urjit A Yajnik. "Charge imbalance resolved R\'enyi negativity for free compact boson: Two disjoint interval case". In: arXiv preprint arXiv:2210.06743 (2022). Symmetry-resolved entanglement detection using partial transpose moments. Antoine Neven, npj Quantum Information. 7152Antoine Neven et al. "Symmetry-resolved entanglement detection using partial transpose moments". In: npj Quantum Information 7.1 (2021), p. 152. Entanglement Content of Quasiparticle Excitations. A Olalla, Castro-Alvaredo, 10.1103/PhysRevLett.121.170602arXiv:1805.04948Phys. Rev. Lett. 121170602cond-mat.stat-mechOlalla A. Castro-Alvaredo et al. "Entanglement Content of Quasiparticle Excitations". In: Phys. Rev. Lett. 121.17 (2018), p. 170602. doi: 10.1103/PhysRevLett.121.170602. arXiv: 1805.04948 [cond-mat.stat-mech]. Entanglement content of quantum particle excitations. Part I. Free field theory. A Olalla, Castro-Alvaredo, JHEP 2018. 10Olalla A Castro-Alvaredo et al. "Entanglement content of quantum particle excitations. Part I. Free field theory". In: JHEP 2018.10 (2018), pp. 1-55. Entanglement content of quantum particle excitations. Part II. Disconnected regions and logarithmic negativity. A Olalla, Castro-Alvaredo, JHEP 2019. 11Olalla A Castro-Alvaredo et al. "Entanglement content of quantum particle excitations. Part II. Disconnected regions and logarithmic negativity". In: JHEP 2019.11 (2019), pp. 1- 47. Entanglement Content of Quantum Particle Excitations III. Graph Partition Functions. A Olalla, Castro-Alvaredo, 10.1063/1.5098892doi:10.1063/1.5098892.arXiv:1904.02615J. Math. Phys. 6082301math-phOlalla A. Castro-Alvaredo et al. "Entanglement Content of Quantum Particle Excitations III. Graph Partition Functions". In: J. Math. Phys. 60.8 (2019), p. 082301. doi: 10.1063/ 1.5098892. arXiv: 1904.02615 [math-ph]. Symmetry resolved entanglement of excited states in quantum field theory. Part I. Free theories, twist fields and qubits. Luca Capizzi, 10.1007/JHEP12(2022)127arXiv:2203.12556JHEP. 12127hep-thLuca Capizzi et al. "Symmetry resolved entanglement of excited states in quantum field theory. Part I. Free theories, twist fields and qubits". In: JHEP 12 (2022), p. 127. doi: 10.1007/JHEP12(2022)127. arXiv: 2203.12556 [hep-th]. Symmetry resolved entanglement of excited states in quantum field theory. Part II. Numerics, interacting theories and higher dimensions. Luca Capizzi, 10.1007/JHEP12(2022)128arXiv:2206.12223JHEP. 12128hep-thLuca Capizzi et al. "Symmetry resolved entanglement of excited states in quantum field theory. Part II. Numerics, interacting theories and higher dimensions". In: JHEP 12 (2022), p. 128. doi: 10.1007/JHEP12(2022)128. arXiv: 2206.12223 [hep-th]. Symmetry resolved entanglement in integrable field theories via form factor bootstrap. X Dávid, Pasquale Horváth, Calabrese, 10.1007/JHEP11(2020)131arXiv:2008.08553131hep-thDávid X. Horváth and Pasquale Calabrese. "Symmetry resolved entanglement in integrable field theories via form factor bootstrap". In: JHEP 11 (2020), p. 131. doi: 10 . 1007 / JHEP11(2020)131. arXiv: 2008.08553 [hep-th]. Analytic fields on Riemann surfaces. II. V Knizhnik, Comm. Math. Phys. 112V.G Knizhnik. "Analytic fields on Riemann surfaces. II". In: Comm. Math. Phys. 112.4 (1987), pp. 567-590. The conformal field theory of orbifolds. L Dixon, Nuclear Physics B. 282L. Dixon et al. "The conformal field theory of orbifolds". In: Nuclear Physics B 282 (1987), pp. 13-73. url: http : / / www . 
sciencedirect . com / science / article / pii / 0550321387906766. Coset construction for winding subalgebras and applications. P Bouwknegt, In: qalg/9610013 (P. Bouwknegt. "Coset construction for winding subalgebras and applications". In: q- alg/9610013 (). Systematic approach to cyclic orbifolds. L Borisov, M B Halpern, C Schweigert, Int. J. Mod. Phys. 13L. Borisov, M. B. Halpern, and C. Schweigert. "Systematic approach to cyclic orbifolds". In: Int. J. Mod. Phys. A13 (1998), pp. 125-168. Arguments towards a c-theorem from branch-point twist fields. O A Castro-Alvaredo, B Doyon, E Levi, 10.1088/1751-8113/44/49/492003arXiv:1107.4280J.Phys. 44492003hep-thO.A. Castro-Alvaredo, B. Doyon, and E. Levi. "Arguments towards a c-theorem from branch-point twist fields". In: J.Phys. A44 (2011), p. 492003. doi: 10.1088/1751-8113/ 44/49/492003. arXiv: 1107.4280 [hep-th]. Composite branch-point twist fields in the Ising model and their expectation values. Emanuele Levi, 10.1088/1751-8113/45/27/275401arXiv:1204.1192J.Phys. 45275401hep-thEmanuele Levi. "Composite branch-point twist fields in the Ising model and their expecta- tion values". In: J.Phys. A45 (2012), p. 275401. doi: 10.1088/1751-8113/45/27/275401. arXiv: 1204.1192 [hep-th]. Entanglement entropy of non-unitary conformal field theory. D Bianchini, arXiv:1405.2804J.Phys. 48hep-thD. Bianchini et al. "Entanglement entropy of non-unitary conformal field theory". In: J.Phys. A48 (2015), 04FT01. arXiv: 1405.2804 [hep-th]. D Bianchini, O A Castro-Alvaredo, 10.1016/j.nuclphysb.2016.10.016arXiv:1607.05656Branch Point Twist Field Correlators in the Massive Free Boson Theory. 913hep-thD. Bianchini and O. A. Castro-Alvaredo. "Branch Point Twist Field Correlators in the Massive Free Boson Theory". In: Nucl. Phys. B913 (2016), pp. 879-911. doi: 10.1016/j. nuclphysb.2016.10.016. arXiv: 1607.05656 [hep-th]. Entanglement and alpha entropies for a massive Dirac field in two dimensions. H Casini, C D Fosco, M Huerta, 10.1088/1742-5468/2005/07/P07007arXiv:cond-mat/0505563J. Stat. Mech. 05077007H. Casini, C. D. Fosco, and M. Huerta. "Entanglement and alpha entropies for a massive Dirac field in two dimensions". In: J. Stat. Mech. 0507 (2005), P07007. doi: 10.1088/1742- 5468/2005/07/P07007. arXiv: cond-mat/0505563. On the partial transpose of fermionic Gaussian states. Viktor Eisler, Zoltán Zimborás, New Journal of Physics. 1753048Viktor Eisler and Zoltán Zimborás. "On the partial transpose of fermionic Gaussian states". In: New Journal of Physics 17.5 (2015), p. 053048. Symmetry resolved entanglement entropy of excited states in a CFT. Luca Capizzi, Paola Ruggiero, Pasquale Calabrese, J. Stat. Mech. 73101Luca Capizzi, Paola Ruggiero, and Pasquale Calabrese. "Symmetry resolved entanglement entropy of excited states in a CFT". In: J. Stat. Mech. 2020.7 (2020), p. 073101. Excited state Rényi entropy and subsystem distance in two-dimensional non-compact bosonic theory. Part I. Single-particle states. Jiaju Zhang, M A Rajabpour, 10.1007/JHEP12(2020)160arXiv:2009.00719JHEP. 12160hep-thJiaju Zhang and M. A. Rajabpour. "Excited state Rényi entropy and subsystem distance in two-dimensional non-compact bosonic theory. Part I. Single-particle states". In: JHEP 12 (2020), p. 160. doi: 10.1007/JHEP12(2020)160. arXiv: 2009.00719 [hep-th]. Universal Rényi entanglement entropy of quasiparticle excitations. Jiaju Zhang, M A Rajabpour, 10.1209/0295-5075/ac130earXiv:2010.13973EPL 135. 660001cond-mat.stat-mechJiaju Zhang and M. A. Rajabpour. 
"Universal Rényi entanglement entropy of quasiparticle excitations". In: EPL 135.6 (2021), p. 60001. doi: 10.1209/0295-5075/ac130e. arXiv: 2010.13973 [cond-mat.stat-mech]. Corrections to universal Rényi entropy in quasiparticle excited states of quantum chains. Jiaju Zhang, M A Rajabpour, 10.1088/1742-5468/ac1f28arXiv:2010.16348J. Stat. Mech. 210993101cond-mat.stat-mechJiaju Zhang and M. A. Rajabpour. "Corrections to universal Rényi entropy in quasiparticle excited states of quantum chains". In: J. Stat. Mech. 2109 (2021), p. 093101. doi: 10.1088/ 1742-5468/ac1f28. arXiv: 2010.16348 [cond-mat.stat-mech]. Excited state Rényi entropy and subsystem distance in two-dimensional non-compact bosonic theory. Part II. Multi-particle states. Jiaju Zhang, M A Rajabpour, 10.1007/JHEP08(2021)106106hep-thJiaju Zhang and M. A. Rajabpour. "Excited state Rényi entropy and subsystem distance in two-dimensional non-compact bosonic theory. Part II. Multi-particle states". In: JHEP 08 (2021), p. 106. doi: 10.1007/JHEP08(2021)106. arXiv: 2011.11006 [hep-th]. Entanglement of magnon excitations in spin chains. Jiaju Zhang, M A Rajabpour, 10.1007/JHEP02(2022)072doi:10.1007/JHEP02(2022)072.arXiv:2109.1282672cond-mat.stat-mechJiaju Zhang and M. A. Rajabpour. "Entanglement of magnon excitations in spin chains". In: JHEP 02 (2022), p. 072. doi: 10 . 1007 / JHEP02(2022 ) 072. arXiv: 2109 . 12826 [cond-mat.stat-mech]. Subsystem distances between quasiparticle excited states. Jiaju Zhang, M A Rajabpour, 10.1007/JHEP07(2022)119119cond-mat.stat-mechJiaju Zhang and M. A. Rajabpour. "Subsystem distances between quasiparticle excited states". In: JHEP 07 (2022), p. 119. doi: 10.1007/JHEP07(2022)119. arXiv: 2202.11448 [cond-mat.stat-mech]. →0 limit of the entanglement entropy. Giuseppe Mussardo, Jacopo Viti, 10.1103/PhysRevA.105.032404arXiv:2112.06840Phys. Rev. A. 10532404quant-phGiuseppe Mussardo and Jacopo Viti. " →0 limit of the entanglement entropy". In: Phys. Rev. A 105.3 (2022), p. 032404. doi: 10.1103/PhysRevA.105.032404. arXiv: 2112.06840 [quant-ph]. Anyonic Partial Transpose I: Quantum Information Aspects. Hassan Shapourian, S K Roger, Shinsei Mong, Ryu, arXiv:2012.02222quant-phHassan Shapourian, Roger S. K. Mong, and Shinsei Ryu. "Anyonic Partial Transpose I: Quantum Information Aspects". In: (Dec. 2020). arXiv: 2012.02222 [quant-ph]. Asymptotic density of states in 2d CFTs with non-invertible symmetries. Ying-Hsuan Lin, arXiv:2208.05495hep-thYing-Hsuan Lin et al. "Asymptotic density of states in 2d CFTs with non-invertible sym- metries". In: (Aug. 2022). arXiv: 2208.05495 [hep-th]. Entanglement of low-energy excitations in Conformal Field Theory. Miguel Ibanez Francisco Castilho Alcaraz, German Berganza, Sierra, 10.1103/PhysRevLett.106.201601arXiv:1101.2881Phys. Rev. Lett. 106201601cond-mat.stat-mechFrancisco Castilho Alcaraz, Miguel Ibanez Berganza, and German Sierra. "Entanglement of low-energy excitations in Conformal Field Theory". In: Phys. Rev. Lett. 106 (2011), p. 201601. doi: 10.1103/PhysRevLett.106.201601. arXiv: 1101.2881 [cond-mat.stat-mech]. Dynamics of charge-imbalanceresolved entanglement negativity after a quench in a free-fermion model. Gilles Parez, Riccarda Bonsignori, Pasquale Calabrese, Journal of Statistical Mechanics: Theory and Experiment. 202253103Gilles Parez, Riccarda Bonsignori, and Pasquale Calabrese. "Dynamics of charge-imbalance- resolved entanglement negativity after a quench in a free-fermion model". In: Journal of Statistical Mechanics: Theory and Experiment 2022.5 (2022), p. 053103. 
Dynamics of charge imbalance resolved negativity after a global quench in free scalar field theory. Hui-Huang Chen, Journal of High Energy Physics. 2022Hui-Huang Chen. "Dynamics of charge imbalance resolved negativity after a global quench in free scalar field theory". In: Journal of High Energy Physics 2022.8 (2022), pp. 1-26. Elementary Number Theory in Nine Chapters. J J Tattersall, Cambridge University Press2nd EdJ.J. Tattersall. "Elementary Number Theory in Nine Chapters". In: Cambridge University Press (2nd Ed.) (2005). Irreducibility of Lucas and generalized Lucas polynomials. E Gerald, Bergum, Verner, HoggattJr, The Fibonacci Quarterly. 12Gerald E Bergum and Verner E Hoggatt Jr. "Irreducibility of Lucas and generalized Lucas polynomials". In: The Fibonacci Quarterly 12 (1974), pp. 95-100. Generalized Lucas polynomials and Fibonacci polynomials. Paolo Emilio Ricci, Riv. Mat. Univ. Parma. 4Paolo Emilio Ricci. "Generalized Lucas polynomials and Fibonacci polynomials". In: Riv. Mat. Univ. Parma 4 (1995), pp. 137-146.
[]
[ "MenuCraft: Interactive Menu System Design with Large Language Models", "MenuCraft: Interactive Menu System Design with Large Language Models" ]
[ "Hossein Amir [email protected] \nCenter for Information and Language Processing\nLMU Munich Sharif University of Technology Bowling Green State University\n\n", "Nafiseh Kargaran \nCenter for Information and Language Processing\nLMU Munich Sharif University of Technology Bowling Green State University\n\n", "Abbas Nikeghbal \nCenter for Information and Language Processing\nLMU Munich Sharif University of Technology Bowling Green State University\n\n", "Hinrich Heydarnoori \nCenter for Information and Language Processing\nLMU Munich Sharif University of Technology Bowling Green State University\n\n", "Schütze \nCenter for Information and Language Processing\nLMU Munich Sharif University of Technology Bowling Green State University\n\n" ]
[ "Center for Information and Language Processing\nLMU Munich Sharif University of Technology Bowling Green State University\n", "Center for Information and Language Processing\nLMU Munich Sharif University of Technology Bowling Green State University\n", "Center for Information and Language Processing\nLMU Munich Sharif University of Technology Bowling Green State University\n", "Center for Information and Language Processing\nLMU Munich Sharif University of Technology Bowling Green State University\n", "Center for Information and Language Processing\nLMU Munich Sharif University of Technology Bowling Green State University\n" ]
[]
Menu system design is a challenging task involving many design options and various human factors. For example, one crucial factor that designers need to consider is the semantic and systematic relation of menu commands. However, capturing these relations can be challenging due to limited available resources. With the advancement of neural language models, large language models can utilize their vast pre-existing knowledge in designing and refining menu systems.In this paper, we propose MenuCraft, an AIassisted designer for menu design that enables collaboration between the designer and a dialogue system to design menus. MenuCraft offers an interactive language-based menu design tool that simplifies the menu design process and enables easy customization of design options. MenuCraft supports a variety of interactions through dialog that allows performing few-shot learning.
10.48550/arxiv.2303.04496
[ "https://export.arxiv.org/pdf/2303.04496v1.pdf" ]
257,405,047
2303.04496
f7d63e42b10c538b2adb29e9e8e374587795cc5a
MenuCraft: Interactive Menu System Design with Large Language Models Hossein Amir [email protected] Center for Information and Language Processing LMU Munich Sharif University of Technology Bowling Green State University Nafiseh Kargaran Center for Information and Language Processing LMU Munich Sharif University of Technology Bowling Green State University Abbas Nikeghbal Center for Information and Language Processing LMU Munich Sharif University of Technology Bowling Green State University Hinrich Heydarnoori Center for Information and Language Processing LMU Munich Sharif University of Technology Bowling Green State University Schütze Center for Information and Language Processing LMU Munich Sharif University of Technology Bowling Green State University MenuCraft: Interactive Menu System Design with Large Language Models Menu system design is a challenging task involving many design options and various human factors. For example, one crucial factor that designers need to consider is the semantic and systematic relation of menu commands. However, capturing these relations can be challenging due to limited available resources. With the advancement of neural language models, large language models can utilize their vast pre-existing knowledge in designing and refining menu systems.In this paper, we propose MenuCraft, an AIassisted designer for menu design that enables collaboration between the designer and a dialogue system to design menus. MenuCraft offers an interactive language-based menu design tool that simplifies the menu design process and enables easy customization of design options. MenuCraft supports a variety of interactions through dialog that allows performing few-shot learning. Introduction Menus are widely used interfaces, providing users with an intuitive and efficient access to an application's functions. Although menus may seem simple at first glance, creating a well-designed menu system is complex. This is because the number of alternative designs grows exponentially as the number of commands increases. By disregarding non-textual design factors such as size, saliency, and color, we can focus solely on the textual representation of menu systems. In this regard, the position of commands in the menu and the assignment of hotkeys are the two most critical factors in designing a menu system. Menus typically follow a consistent linear pattern, where command names are displayed on the left, and keyboard shortcut cues are aligned on the right (Giannisakis et al., 2022). Since 1980, Human-Computer Interaction (HCI) researchers have been developing better tech-niques for placing commands within the menu system. Their ultimate goal is to minimize selection time (Ahlström, 2005;Bailly et al., 2016;Card et al., 1980) while maximizing the associativity among commands Dayama et al., 2021). In order to reach these goals, the setup of parameter-based models needs to possess complete information regarding the frequency of command usage and the relation among commands (e.g., pairwise semantic relevance scores (Bailly et al., 2014;Chen et al., 2015)). The designer typically provides the parameters for menu system design. This means that designers must rely on user testing and past experience by comparing commands with each other to determine the parameters. However, as the number of commands increases, this process can become increasingly challenging, time-consuming, and prone to inaccuracy. Furthermore, designers strive to maintain consistency across menus within a given ecosystem. 
This involves placing commands similarly across menus to help users quickly locate the desired command. The downside is this process of ensuring consistency can be time-consuming and distract designers from their primary goal of optimizing the menu. Prior researches have investigated using datadriven methods such as pre-trained static embeddings (Adar et al., 2014;Li et al., 2018) to capture the semantics of menu commands. However, these embeddings are typically trained on generic datasets such as Wikipedia or limited programspecific data. Consequently, these embeddings may not effectively capture the systemic or semantic relationships specific to the domain of menu systems, resulting in limited applicability. Despite the advancements in neural language models, menu design using data-driven methods has not kept pace. There are two main reasons for this: (i) the cost of providing datasets for menu design is expensive, and (ii) menu design is an interactive process that requires input, feedback, and adjustments from the designer. Parameter-based models give designers a sense of control over the output, and their predictability allows for a clearer understanding of how different inputs will impact the final design. Therefore, parameter-based methods are more preferred over data-driven methods in menu design. Recent advancements in pre-trained large language models (LLMs), such as GPT-3 (Brown et al., 2020) and PaLM (Chowdhery et al., 2022), have shown emergent abilities (Wei et al., 2022a) to adapt to a range of different tasks with just a few examples of the target task. The in-depth knowledge of these models presents an exciting opportunity to facilitate the menu design process without requiring training or specific datasets. The conversational nature of open-domain dialogue systems built on top of these models also can serve to fulfill the interactive demands of design. In this work, we present MENUCRAFT, an AI assistant that incorporates an open-domain dialogue language model to design menus. MenuCraft utilizes the effectiveness of data-driven methods while maintaining interactive design procedures . With MenuCraft, designers can easily create menus, receive suggestions from the assistant, and ask for adjustments to enhance their menu design. For an assistant to be useful in menu system design, it needs to be versatile enough to handle a variety of tasks, ranging from the simple, like adding a command to a menu tab, to complex ones, such as suggesting alternative designs, grouping commands, and applying restrictions to the menu, tabs, or individual commands. To avoid the cost of collecting data and training multiple models for different tasks, we propose using few-shot learning methods. Our tool facilitates various menu design tasks, including topic and command-based menu design, and provides recommendations for commands and hotkeys. Furthermore, our tool enables designers to easily create custom tasks, providing them with the flexibility to tailor the system to their specific needs. From an HCI viewpoint, we intend to utilize Menu-Craft to study how designers engage with language models, the types of tasks they request, and the efficacy of the models in fulfilling those requests. In summary, our contributions are the following: (1) We present a platform for collaborative menu design between a human designer and a large language model. (2) We illustrate how few-shot learning can offer a range of interactive menu design experiences, all without requiring additional model training. 
The rest of the paper is organized as follows: Section 2, introduces the related works. Section 3 and 4, describes MenuCraft and what interactions it supports. Lastly, we give our final remarks and discuss our future work in Section 5. Related Work Bridging Menus and Natural Language Menu systems can be categorized as a user interface (UI) type. In this sense, significant research in this area has been aimed at bridging graphical UIs (GUIs) with natural language. For instance, some studies have focused on predicting alt-text labels for GUI components (Li et al., 2020;Zhang et al., 2021) or generating text summaries for the entire screen (Leiva et al., 2022;Wang et al., 2021). However, these approaches may not be well-suited for menu systems due to the limitation of capturing the entire menu in a single UI screenshot. Additionally, these researches have been focused on connecting the graphical aspects of the UI to natural language, not addressing the structural characteristics of menus or their textual representations. Despite the potential benefits of bridging menu systems and natural language, there has been limited research in this area. One possible explanation for this is the lack of comprehensive textual datasets for menu systems since most data about different menu system applications are dispersed across the internet or embedded within software applications. In an attempt to gather a dataset for menu systems, Bailly and Malacria (2013) has succeeded in building a menu-logger tool to extract the hierarchies of the menu system for Mac OS X applications. provide an open dataset of 68 applications gathered with this tool to compute associativity score between the commands. Nevertheless, this data only contains limited data on menu system hierarchies for Mac OS X applications and no descriptions or information about commands. In another attempt, Adar et al. (2014) train a word2vec model to capture the domain-specific language of Photoshop application by mining a large corpus of web documents related to the application. However, the static representations derived from the trained model are limited to a specific application domain. There is a noticeable gap in research on integrating menu systems with natural language processing. Our work contributes to this gap by leveraging the few-shot learning capability of LLMs to apply their extensive knowledge to various menu design tasks. Our method does not need additional datasets or training. Interactive Application of Large Language Models Language models, especially LLMs, have the potential to be effectively used for few-shot learning. LLMs facilitate in-context few-shot learning through prompting. Rather than finetune or retrain models for each new task, a few input and output data examples from the target task can be provided as prompts to the LLM (Brown et al., 2020;Chowdhery et al., 2022;Wei et al., 2022b;Zhou et al., 2022). This advantage enables these models to not require explicit training and can support a variety of creative tasks, especially in HCI research (Morris et al., 2022), such as story writing Coenen et al., 2021;Yuan et al., 2022), modifying web designs , conversational interactions on mobile UIs (Wang et al., 2022), email writing (Goodman et al., 2022) and executing robot commands (Ahn et al., 2022). To the best of our knowledge, no one has yet attempted to use language models for menu design. Our platform proposes using language models in dialogue, which leverage pre-existing knowledge and interactive design process . 
The MenuCraft MenuCraft is an interactive assistant tool designed to assist designers in creating menu designs from scratch, offering alternative designs, command and hotkey suggestions, and more. The user-friendly web interface of MenuCraft is designed as a traditional chatbot, enabling users to engage with the tool seamlessly. Additionally, MenuCraft provides a variety of default supported interactions that can be conveniently inserted into the chat feed as templates. Users can easily modify the prompts to suit their queries, allowing for a customized experience catering to individual needs. MenuCraft's remarkable capabilities result from using open-ended dialog systems built on large language models. The dialog system we use is Chat-GPT (OpenAI, 2022), AKA GPT-3.5-turbo, a language model capable of following instructions and answering questions posed in a conversational format. ChatGPT trained using Reinforcement Learn-ing from Human Feedback (RLHF), using the same methods as InstructGPT (Ouyang et al., 2022), but with slight differences in the data collection setup. Initialize ChatGPT is designed to take as input the previous turns in a conversation and utilize them to generate a prediction for the subsequent turn. For Menu-Craft, the few-shot learning examples are formulated as a conversation between a human designer and an AI assistant. In our initial tests, we found it very convenient to construct few-shot learning contexts for the dialog model since humans are familiar with the conversational format and can quickly adapt to it. To set the initial prompt, we determined the primary goals of MenuCraft. As previously stated, the objective of optimizing the arrangement of commands is to improve selection time and enhance the associativity among the commands. Usually, frequently used commands are prioritized to reduce selection time by placing them at the top of the menu based on Fitts' Law (Ahlström, 2005;Cockburn et al., 2007). This approach minimizes the time required for users to access commonly used commands. To maximize associativity, commands with related operations are grouped and placed within the same tab and near each other . This meets the user's expectation of finding related menu commands in familiar locations due to spatial memory (Cockburn et al., 2007). For instance, it is intuitive that users expect to see commands such as Cut, Copy, and Paste to be grouped or About and Help commands to be located within the last tab. With these objectives in mind, we formulated our initial prompt as follows: MenuCraft I am MenuCraft, an AI-Assisted Menu Designer designed to help create intuitive and user-friendly menu interfaces for various types of applications. I can suggest menu design ideas, arrange commands in linear or hierarchical menus, group commands as tabs, suggest command names, and hotkeys, and more to make the app easier to use. What would you like me to help you with? Now that initialization has been completed, design requests can be submitted. Supported Interactions In this section, we describe a number of the interactions that MenuCraft can perform. Topic-Based Design Topic-based design is one of the most general tasks that a designer can ask MenuCraft to perform. MenuCraft can assist in the creation of a menu for a specific topic by suggesting relevant commands based on similar apps and user input constraints. For example, when MenuCraft is prompted with: Human Designer Create a menu for a text editor application with 3 tabs. 
There should be a Find command in the Edit tab in the app. Tabs should not have more than 6 commands each. Please answer in the following format: json Tab: list of commands MenuCraft responds with: MenuCraft: Topic-Based Design Certainly! Here's a menu design for a text editor application with 3 tabs, including the Find command in the Edit tab, and no more than 6 commands per tab: {"File": ["New", "Open", "Save", "Save As...", "Print", "Exit"], "Edit": ["Undo", "Redo", "Cut", "Copy", "Paste", "Find"], "Format": ["Font", "Bold", "Italic", "Underline", "Strikethrough", "Highlight"]} This response completely fulfilled the request and met the conditions. The designer can continue the interactions for the new requests or modifying the suggested menu. For example, designer can request the Format tab be deleted, and in its place the View tab and any commands required be added: Human Designer Remove the Format tab from the current design and replace it with a View tab with the commands that are needed. MenuCraft responds with the same menu, replacing the Format Tab with View Tab. The replaced tab (other tabs were not shown for better readability): MenuCraft: Modify Design "View": ["Zoom In", "Zoom Out", "Full Screen", "Show/Hide Ruler", "Show/Hide Gridlines, "Show/Hide Document Outline"] The View tab has been successfully added with relevant commands, such as Zoom In and Zoom Out. Command-Based Design Command-based design is the primary focus within the menu design community. In this scenario, the designer provides a set of commands and their corresponding parameters, including the frequency of usage for each command and the association score between each pair of commands. Given the parameters, the designer then requests a menu optimizer to provide an optimal or sub-optimal design. As the number of commands increases, optimization methods may take hours or even days to find an optimal solution. Conversely, while language model approaches may not yield the optimal answer, they can likely provide an appropriate one within seconds. Furthermore, unlike optimization methods, language models do not require access to the entire set of parameters but welcome any input from the designer. For example, when MenuCraft prompted with random order of 5 tabs and 29 commands of Windows Notepad application, without saying these commands belong to this application: Human Designer Design a menu for a text editor app only based on the provided commands and tabs. Each tab should have the most frequently used commands as its top, and commands with close logical operations should be grouped together as tabs. To show some of the commands in a tab are more related, you may use some subgroups for each tab. MenuCraft responds with: MenuCraft: Command-Based Desgin Here's a suggested menu design for a text editor app based on the provided commands and tabs: {File: ["New", "New Window", "Open", "Save", "Save As", "Page Setup", "Print", "Exit"], Edit: ["Undo", "Cut", "Copy", "Paste", "Delete", {"Find": ["Find...", "Find Next", "Find Previous", "Replace..."]}, "Select All", "Time/Date"], Format: ["Word Wrap", "Font..."], View: [{"Zoom": ["Zoom In", "Zoom Out", "Restore Default Zoom"]}, "Status Bar"], Help: ["View Help", "Search With Bing", "Send Feedback", {"About": ["About Application"]} ]} In this menu design, the commands are arranged in hierarchical order with frequently used commands appearing at the top of each tab. The File tab includes essential file-related actions such as creating, opening, saving, and printing files. 
The Edit tab includes common text-editing actions such as undo, copy, paste, and find/replace. The Format tab includes options for formatting text, including word wrap and font selection. The View tab includes commands for adjusting the zoom level and displaying the status bar. The Help tab includes options for getting help, searching for information, and providing feedback. I hope this menu design meets your needs and provides a user-friendly interface for your text editor app! This design has a lot in common with the original Windows Notepad design. The commands' order and grouping are the same except for a few commands. For example, Search With Bing is now located under the Help tab, which may not be a bad idea. In the following, MenuCraft explains why it chose this particular design. The designer can always ask for the reasoning behind the design or ask the MenuCraft to think step-by-step and explain why it is taking each action. Command or Tab Recommendation MenuCraft can provide valuable suggestions for related commands or tabs for a particular design. This can save designers time and effort while also improving the overall user experience of the design. For example, asking MenuCraft about related commands under a tab and why these commands were chosen: Human Designer My app menu design already includes Bookmark this Tab under the Bookmarks tab. What other commands should I include under this tab? Please anser in the following format json command name: reason MenuCraft responds with: MenuCraft: Command Recommendation Certainly, here are some additional commands that could be included under the "Bookmarks" tab with a brief reason for each: { "Bookmark All Tabs": "Enables users to save bookmarks for all currently open tabs in a single action.", "Organize Bookmarks": "Opens a bookmark manager or dialog that allows users to sort, categorize, and manage their saved bookmarks.", "Edit Bookmark": "Lets users modify the properties of an existing bookmark, such as the title, URL, or tags.", "Remove Bookmark": "Deletes a selected bookmark from the user's list of saved bookmarks.", "Import Bookmarks": "Allows users to import bookmarks from a different browser or bookmarking service.", "Export Bookmarks": "Enables users to export their bookmarks to a file or other external location.", "Bookmarks Sidebar": "Toggles the display of a sidebar or panel that shows the user's saved bookmarks.", "Recently Bookmarked": "Displays a list of recently bookmarked pages or URLs for quick access." } We found the response quite good. In many cases, these commands already exist in popular browsers. They allow users to manage and organize their bookmarks more effectively. Command or Tab Name Recommendation MenuCraft can help designers generate appropriate names for commands based on their descriptions or suggest names for tabs based on the set of commands they contain. For example, asking Menu-Craft about a menu tab name based on two known commands inside it: Human Designer There is a nameless tab on my menu design that includes commands such as "Minimize" and "Zoom". Please suggest top best 10 names for this tab. MenuCraft responds with: MenuCraft: Tab Name Recommendation Sure! 
Here are ten potential names for the tab that includes commands like "Minimize" and "Zoom": Window, Display, View, Layout, Screen, Desktop, Workspace, Monitor, Size & Position, Display Options These names are intended to convey the idea of adjusting the window or interface in various ways, such as changing the size, position, or arrangement of elements on the screen. Hopefully, one of these names fits well with the intended purpose of the tab and helps users easily locate the relevant commands. There are several good suggestions for the desired tab name in the response. For instance, one can find both Minimize and Zoom commands in the Window Tab (MenuCraft's first choice) of Chrome browser on MAC OS X. Hotkey Recommendation Hotkeys, also known as keyboard shortcuts, are a combination of keys on a computer keyboard that provide a quick way to access various menu commands. The selection of well-thought-out hotkeys can speed up access to frequently used commands. MenuCraft simplifies this process by creating hotkeys based on menu design, reducing the time and effort required. Hotkeys must be distinct from other commands in the app. For example, continuing the first design of the text editor in section 4.1, by requesting to add shortcuts: Human Designer Add shortcut for each command, two differet commands must not have the same shortcut. MenuCraft responds with: MenuCraft: Hotkey Recommendation Certainly! Here's the updated menu design for a text editor application with 3 tabs, including the Find command in the Edit tab, no more than 6 commands per tab, and keyboard shortcuts for each command: {"File": [ {"name": "New", "shortcut": "Ctrl+N"}, {"name": "Open", "shortcut": "Ctrl+O"}, {"name": "Save", "shortcut": "Ctrl+S"}, {"name": "Save As...", "shortcut": "Ctrl+Shift+S"}, {"name": "Print", "shortcut": "Ctrl+P"}, {"name": "Exit", "shortcut": "Alt+F4"} ], "Edit": [ {"name": "Undo", "shortcut": "Ctrl+Z"}, {"name": "Redo", "shortcut": "Ctrl+Y"}, {"name": "Cut", "shortcut": "Ctrl+X"}, {"name": "Copy", "shortcut": "Ctrl+C"}, {"name": "Paste", "shortcut": "Ctrl+V"}, {"name": "Find", "shortcut": "Ctrl+F"} ], "Format": [ {"name": "Font", "shortcut": "Ctrl+Shift+F"}, {"name": "Bold", "shortcut": "Ctrl+B"}, {"name": "Italic", "shortcut": "Ctrl+I"}, {"name": "Underline", "shortcut": "Ctrl+U"}, {"name": "Strikethrough", "shortcut": "Ctrl+Shift+S"}, {"name": "Highlight", "shortcut": "Ctrl+H"} ]} I've added keyboard shortcuts for each command to make it quicker and easier for users to access frequently used commands. The shortcuts are listed next to each command in the format of Ctrl+Key or Ctrl+Shift+Key. Most of the selected shortcuts are commonly used by other applications, such as Ctrl + Z for the Undo command. The rest is made up of the Ctrl key and the first letter of the command. To prevent shortcut reuse, the Ctrl + Shift combination is used. Nevertheless, an error occurred when assigning the shortcut for the third time to a command that starts with the letter S (Strikethrough). MenuCraft can be instructed about what other templates it can use to create shortcuts to prevent this from happening. However, the designer can also inform MenuCraft of its mistake through more interaction : Human Designer The same shortcut must not be shared by two commands, even in different tabs. Strikethrough and Save As... share a same shortcut. 
MenuCraft replies with a thank you for clarifying and corrects its mistake by replacing the shortcut for Strikethrough command (other tabs were not shown for better readability): MenuCraft: Hotkey Modification Thank you for the clarification Here's an updated menu design ensuring that no two commands share the same shortcut even in different tabs: ... {"name": "Strikethrough", "shortcut": "Ctrl+Shift+D"}, ... I changed the shortcut for Strikethrough to Ctrl+Shift+D to ensure that it does not share the same shortcut as Save As.... This will help users avoid confusion and make it faster to perform specific tasks with the text editor. In the revised response, it was suggested to use another shortcut (Ctrl+Shift+D) for the Strikethrough command. This suggestion is acceptable since popular applications such as Microsoft Excel also use the same shortcut. Command Elaboration MenuCraft can provide additional information about specific commands in a menu. It can explain the meaning of a command or provide examples of how it could be used in context. This feature can assist in verifying whether an intelligence understands the functionality of a command based on its name in a given context. Additionally, it can generate documentation for the designed menu. For example, continuing the first design of the text editor in section 4.1 by requesting an elaboration on designed menu commands: Human Designer Provide a short elaboration on the designed menu commands for the text editor. The format should be as follows: tab name, command: short elaboration MenuCraft responds with: MenuCraft: Command Elaboration File Tab: -New File: creates a new blank document -Open File: opens an existing file for editing -Save: saves the current document -Save As: allows the user to save the document under a different name or file type -Print: prints the current document -Exit: closes the application Edit Tab: -Undo: reverses the last action performed -Cut: removes the selected text and copies it to the clipboard -Copy: copies the selected text to the clipboard -Paste: inserts the contents of the clipboard at the current cursor position -Find: searches for a specific word or phrase in the document -Replace: allows the user to replace a specific word or phrase with another Format Tab: -Font: allows the user to change the font, size, and style of the selected text -Bold: makes the selected text bold -Italic: makes the selected text italicized -Underline: adds an underline to the selected text -Strikethrough: adds a strikethrough to the selected text -Highlight: highlights the selected text with a color Each command has been explained in a fair and reasonable manner. Conclusion and Future Work MenuCraft represents a promising new direction in menu design through language models. Our initial experiments have shown that MenuCraft can perform various menu design tasks quite well. However, MenuCraft is still in the prototype phase and needs much more to explore. Moving forward, our future work with MenuCraft will focus on conducting additional user studies to gain further insights into the needs and preferences of menu designers. This will allow us to tailor MenuCraft to meet the specific requirements of different users and contexts. Furthermore, we will explore how the data collected from MenuCraft and designer feedback can be leveraged for better prompt engineering. Overall, we are excited about the potential of Menu-Craft and look forward to continuing our work in this area. 
This new direction has the potential to revolutionize menu design and improve the user experience across a wide range of applications. Discussion. There is a discussion in the design community about rethinking the way AI supports designers (De Peuter et al., 2021). Their view is that AI should prioritize cooperation over automation and seek to enhance the creativity and problemsolving abilities of designers. The challenge is that design is an optimization problem, but integrating a human decision-maker into an optimization process is hard due to the differences in the way humans and optimizers think and work. Our view is that MenuCraft has already close to achieving this objective for two reasons: (1) As ChatGPT is trained on a large corpus of data, and human feedback is also used in the training, we expect ChatGPT to develop a good understanding of the behavior of human designers. (2) Humans are intuitively familiar with conversational formats, and ChatGPT is capable of following instructions posed in a conversational format. Limitations. Language models offer a promising new direction for menu design, but it is essential to acknowledge their limitations. Language models may lack domain-specific knowledge for menu design, and their performance is heavily influenced by the quality and diversity of the training data used. Moreover, Language models may struggle with understanding mathematical concepts, particularly when designers input specific parameters for design features. This limitation could result in poor design recommendations. Furthermore, MenuCraft has demonstrated acceptable results with ChatGPT, but extending these results to other LLMs is currently limited by the lack of experiments with multiple models. Commands provided (unordered): [ View Help , About Application , Paste , Save As , Open , ...] Tabs provided (unordered): [ Format , File , View , Help , Edit ] Please answer in the following format: json Tab: list of commands Initial Prompt I want you to act as an AI-Assisted Menu Designer, called MenuCraft. You will come up with design ideas for menu user interfaces that make apps easier to use. You may suggest menu design apps for a topic, arrange commands as linear or hierarchal menus, group the commands as tabs, suggest command names, add or suggest hot keys for the commands, and so on -but the aim is to design a menu that users find satisfying to use, meaning select good names for commands, prioritize frequently used commands for each tab of menu as the top, and put commands with close logical operations in the same tab. If you understand the your responsibilities, introduce yourself in short and asks for the user request. A Prototype DemonstrationHere is a simple example of a designer interacting with the prototype version of Menu-Craft: https://kargaranamir.github. io/MenuCraft/ Commandspace: modeling the relationships between tasks, descriptions and features. Eytan Adar, Mira Dontcheva, Gierad Laput, Proceedings of the 27th annual ACM symposium on User interface software and technology. the 27th annual ACM symposium on User interface software and technologyEytan Adar, Mira Dontcheva, and Gierad Laput. 2014. Commandspace: modeling the relationships be- tween tasks, descriptions and features. In Proceed- ings of the 27th annual ACM symposium on User interface software and technology, pages 167-176. Modeling and improving selection in cascading pull-down menus using fitts' law, the steering law and force fields. 
David Ahlström, Proceedings of the SIGCHI conference on Human factors in computing systems. the SIGCHI conference on Human factors in computing systemsDavid Ahlström. 2005. Modeling and improving selec- tion in cascading pull-down menus using fitts' law, the steering law and force fields. In Proceedings of the SIGCHI conference on Human factors in com- puting systems, pages 61-70. Do as i can, not as i say: Grounding language in robotic affordances. Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, arXiv:2204.01691arXiv preprintMichael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. 2022. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691. Gilles Bailly, Eric Lecolinet, Laurence Nigay, Visual menu techniques. ACM Computing Surveys (CSUR). 49Gilles Bailly, Eric Lecolinet, and Laurence Nigay. 2016. Visual menu techniques. ACM Computing Surveys (CSUR), 49(4):1-41. Menuinspector: Outil pour l'analyse des menus et cas d'étude. Gilles Bailly, Sylvain Malacria, Proceedings of the 25th Conference on l'Interaction Homme-Machine. the 25th Conference on l'Interaction Homme-MachineGilles Bailly and Sylvain Malacria. 2013. Menuin- spector: Outil pour l'analyse des menus et cas d'étude. In Proceedings of the 25th Conference on l'Interaction Homme-Machine, pages 103-106. Model of visual search and selection time in linear menus. Gilles Bailly, Antti Oulasvirta, P Duncan, Andrew Brumby, Howes, Proceedings of the sigchi conference on human factors in computing systems. the sigchi conference on human factors in computing systemsGilles Bailly, Antti Oulasvirta, Duncan P Brumby, and Andrew Howes. 2014. Model of visual search and selection time in linear menus. In Proceedings of the sigchi conference on human factors in computing systems, pages 3865-3874. Menuoptimizer: Interactive optimization of menu systems. Gilles Bailly, Antti Oulasvirta, Timo Kötzing, Sabrina Hoppe, Proceedings of the 26th annual ACM symposium on User interface software and technology. the 26th annual ACM symposium on User interface software and technologyGilles Bailly, Antti Oulasvirta, Timo Kötzing, and Sab- rina Hoppe. 2013. Menuoptimizer: Interactive op- timization of menu systems. In Proceedings of the 26th annual ACM symposium on User interface soft- ware and technology, pages 331-342. Language models are few-shot learners. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Advances in neural information processing systems. 33Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901. The keystroke-level model for user performance time with interactive systems. K Stuart, Card, P Thomas, Allen Moran, Newell, Communications of the ACM. 237Stuart K Card, Thomas P Moran, and Allen Newell. 1980. The keystroke-level model for user perfor- mance time with interactive systems. Communica- tions of the ACM, 23(7):396-410. The emergence of interactive behavior: A model of rational menu search. 
Xiuli Chen, Gilles Bailly, P Duncan, Antti Brumby, Andrew Oulasvirta, Howes, Proceedings of the 33rd annual ACM conference on human factors in computing systems. the 33rd annual ACM conference on human factors in computing systemsXiuli Chen, Gilles Bailly, Duncan P Brumby, Antti Oulasvirta, and Andrew Howes. 2015. The emer- gence of interactive behavior: A model of rational menu search. In Proceedings of the 33rd annual ACM conference on human factors in computing sys- tems, pages 4217-4226. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won, Charles Chung, Sebastian Sutton, Gehrmann, arXiv:2204.02311Palm: Scaling language modeling with pathways. arXiv preprintAakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Talebrush: sketching stories with generative pretrained language models. John Joon Young Chung, Wooseok Kim, Hwaran Kang Min Yoo, Eytan Lee, Minsuk Adar, Chang, Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. the 2022 CHI Conference on Human Factors in Computing SystemsJohn Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, and Minsuk Chang. 2022. Talebrush: sketching stories with generative pretrained language models. In Proceedings of the 2022 CHI Conference on Human Factors in Comput- ing Systems, pages 1-19. A predictive model of menu performance. Andy Cockburn, Carl Gutwin, Saul Greenberg, Proceedings of the SIGCHI conference on Human factors in computing systems. the SIGCHI conference on Human factors in computing systemsAndy Cockburn, Carl Gutwin, and Saul Greenberg. 2007. A predictive model of menu performance. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 627-636. Andy Coenen, Luke Davis, Daphne Ippolito, Emily Reif, Ann Yuan, arXiv:2107.07430Wordcraft: a human-ai collaborative editor for story writing. arXiv preprintAndy Coenen, Luke Davis, Daphne Ippolito, Emily Reif, and Ann Yuan. 2021. Wordcraft: a human-ai collaborative editor for story writing. arXiv preprint arXiv:2107.07430. Foraging-based optimization of menu systems. Morteza Niraj Ramesh Dayama, Antti Shiripour, Evgeny Oulasvirta, Andreas Ivanko, Karrenbauer, International Journal of Human-Computer Studies. 151102624Niraj Ramesh Dayama, Morteza Shiripour, Antti Oulasvirta, Evgeny Ivanko, and Andreas Karren- bauer. 2021. Foraging-based optimization of menu systems. International Journal of Human-Computer Studies, 151:102624. Antti Sebastiaan De Peuter, Samuel Oulasvirta, Kaski, arXiv:2107.13074Toward ai assistants that let designers design. arXiv preprintSebastiaan De Peuter, Antti Oulasvirta, and Samuel Kaski. 2021. Toward ai assistants that let designers design. arXiv preprint arXiv:2107.13074. Revisiting menu design through the lens of implicit statistical learning. Emmanouil Giannisakis, Evanthia Dimara, Annabelle Goujon, Gilles Bailly, Proceedings of the 2022 International Conference on Advanced Visual Interfaces. the 2022 International Conference on Advanced Visual InterfacesEmmanouil Giannisakis, Evanthia Dimara, Annabelle Goujon, and Gilles Bailly. 2022. Revisiting menu design through the lens of implicit statistical learn- ing. In Proceedings of the 2022 International Con- ference on Advanced Visual Interfaces, pages 1-9. 
Lampost: Design and evaluation of an ai-assisted email writing prototype for adults with dyslexia. Erin Steven M Goodman, Patrick Buehler, Andy Clary, Aaron Coenen, Tiffanie N Donsbach, Michal Horne, Robert Lahav, Rain Macdonald, Ajit Breaw Michaels, Narayanan, Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility. the 24th International ACM SIGACCESS Conference on Computers and AccessibilitySteven M Goodman, Erin Buehler, Patrick Clary, Andy Coenen, Aaron Donsbach, Tiffanie N Horne, Michal Lahav, Robert MacDonald, Rain Breaw Michaels, Ajit Narayanan, et al. 2022. Lampost: Design and evaluation of an ai-assisted email writing prototype for adults with dyslexia. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, pages 1-18. Stylette: Styling the web with natural language. Tae Soo Kim, Daeun Choi, Yoonseo Choi, Juho Kim, Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. the 2022 CHI Conference on Human Factors in Computing SystemsTae Soo Kim, DaEun Choi, Yoonseo Choi, and Juho Kim. 2022. Stylette: Styling the web with natural language. In Proceedings of the 2022 CHI Con- ference on Human Factors in Computing Systems, pages 1-17. Describing ui screenshots in natural language. Asutosh Luis A Leiva, Antti Hota, Oulasvirta, ACM Transactions on Intelligent Systems and Technology. 141Luis A Leiva, Asutosh Hota, and Antti Oulasvirta. 2022. Describing ui screenshots in natural language. ACM Transactions on Intelligent Systems and Tech- nology, 14(1):1-28. Predicting human performance in vertical menu selection using deep learning. Yang Li, Samy Bengio, Gilles Bailly, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. the 2018 CHI Conference on Human Factors in Computing SystemsYang Li, Samy Bengio, and Gilles Bailly. 2018. Pre- dicting human performance in vertical menu selec- tion using deep learning. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pages 1-7. Widget captioning: Generating natural language description for mobile user interface elements. Yang Li, Gang Li, Luheng He, Jingjie Zheng, Hong Li, Zhiwei Guan, arXiv:2010.04295arXiv preprintYang Li, Gang Li, Luheng He, Jingjie Zheng, Hong Li, and Zhiwei Guan. 2020. Widget captioning: Gener- ating natural language description for mobile user in- terface elements. arXiv preprint arXiv:2010.04295. The design space of generative models. Meredith Ringel Morris, Carrie Jun Cai, Jess Scon Holbrook, Chinmay Kulkarni, and Michael Terry. 2022Meredith Ringel Morris, Carrie Jun Cai, Jess Scon Hol- brook, Chinmay Kulkarni, and Michael Terry. 2022. The design space of generative models. . 13.02OpenAI. 2022. Introducing chatgpt. OpenAI. 2022. Introducing chatgpt, 13.02.2023v. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, L Carroll, Pamela Wainwright, Chong Mishkin, Sandhini Zhang, Katarina Agarwal, Alex Slama, Ray, arXiv:2203.02155Training language models to follow instructions with human feedback. arXiv preprintLong Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Car- roll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow in- structions with human feedback. arXiv preprint arXiv:2203.02155. Enabling conversational interaction with mobile ui using large language models. Bryan Wang, Gang Li, Yang Li, arXiv:2209.08655arXiv preprintBryan Wang, Gang Li, and Yang Li. 2022. 
Enabling conversational interaction with mobile ui using large language models. arXiv preprint arXiv:2209.08655. Automatic mobile ui summarization with multimodal learning. Bryan Wang, Gang Li, Xin Zhou, Zhourong Chen, Tovi Grossman, Yang Li, The 34th Annual ACM Symposium on User Interface Software and Technology. 2Bryan Wang, Gang Li, Xin Zhou, Zhourong Chen, Tovi Grossman, and Yang Li. 2021. Screen2words: Au- tomatic mobile ui summarization with multimodal learning. In The 34th Annual ACM Symposium on User Interface Software and Technology, pages 498- 510. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, arXiv:2206.07682Emergent abilities of large language models. arXiv preprintJason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022a. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682. Chain of thought prompting elicits reasoning in large language models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, Denny Zhou, arXiv:2201.11903arXiv preprintJason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903. Wordcraft: story writing with large language models. Ann Yuan, Andy Coenen, Emily Reif, Daphne Ippolito, 27th International Conference on Intelligent User Interfaces. Ann Yuan, Andy Coenen, Emily Reif, and Daphne Ip- polito. 2022. Wordcraft: story writing with large language models. In 27th International Conference on Intelligent User Interfaces, pages 841-852. Screen recognition: Creating accessibility metadata for mobile applications from pixels. Xiaoyi Zhang, Lilian De Greef, Amanda Swearngin, Samuel White, Kyle Murray, Lisa Yu, Qi Shan, Jeffrey Nichols, Jason Wu, Chris Fleizach, Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. the 2021 CHI Conference on Human Factors in Computing SystemsXiaoyi Zhang, Lilian de Greef, Amanda Swearngin, Samuel White, Kyle Murray, Lisa Yu, Qi Shan, Jef- frey Nichols, Jason Wu, Chris Fleizach, et al. 2021. Screen recognition: Creating accessibility metadata for mobile applications from pixels. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-15. Least-to-most prompting enables complex reasoning in large language models. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, arXiv:2205.10625Quoc Le, and Ed Chi. 2022arXiv preprintDenny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2022. Least-to-most prompting enables complex reason- ing in large language models. arXiv preprint arXiv:2205.10625.
[]
[ "Machine Learning for UAV Propeller Fault Detection based on a Hybrid Data Generation Model", "Machine Learning for UAV Propeller Fault Detection based on a Hybrid Data Generation Model" ]
[ "Wei Zhang [email protected] ", "J J Tong [email protected] ", "W Zhang ", "Yunfeng Zhang ", "J J Tong \nDepartment of Mechanical Engineering\nNational University of Singapore\n\n", "W Zhang \nDepartment of Mechanical Engineering\nNational University of Singapore\n\n", "F Liao \nTemasek Laboratories\nNational University of Singapore\n\n", "C F Li \nDepartment of Mechanical Engineering\nNational University of Singapore\n\n", "Y F Zhang \nDepartment of Mechanical Engineering\nNational University of Singapore\n\n", "\nDepartment of Mechanical Engineering\nFang Liao is with Temasek Laboratories\nNational University of Singapore\nNational University of Singapore\n\n" ]
[ "Department of Mechanical Engineering\nNational University of Singapore\n", "Department of Mechanical Engineering\nNational University of Singapore\n", "Temasek Laboratories\nNational University of Singapore\n", "Department of Mechanical Engineering\nNational University of Singapore\n", "Department of Mechanical Engineering\nNational University of Singapore\n", "Department of Mechanical Engineering\nFang Liao is with Temasek Laboratories\nNational University of Singapore\nNational University of Singapore\n" ]
[]
This paper describes the development of an onboard data-driven system that can monitor and localize the fault in a quadrotor unmanned aerial vehicle (UAV) and at the same time, evaluate the degree of damage of the fault under real scenarios with interference information and without additional denoising procedures. To achieve offline training data generation, a hybrid approach is proposed for the development of a virtual data-generative model using a combination of datadriven models as well as well-established dynamic models that describe the kinematics of the UAV. To effectively represent the drop in performance of a faulty propeller, a variation of the deep neural network, known as the Long Short-Time Memory network (LSTM) is proposed. With the revolution per minute (RPM) of the propeller as input and depending on the fault condition of the propeller, the proposed propeller model estimates the resultant torque and thrust. Then, flight datasets of the UAV under "normal" conditions as well as various "fault" scenarios are generated via simulation using the developed datagenerative model. Lastly, a fault classifier using a convolutional neural network structure (CNN) is proposed to identify as well as evaluate the degree of damage to the damaged propeller. The scope of this paper currently focuses on the identification of faulty propellers and classification of the fault level for quadrotor UAVs using their RPM as well as flight data. Doing so allows for early minor fault detection to prevent serious faults from occurring if the fault is left unrepaired. To further validate the workability of this approach outside of simulation, a real-flight test is conducted indoors. The real flight data is collected and a simulation to real (sim-real) test is conducted. Due to the imperfections in the build of our experimental UAV, a slight calibration approach to our simulation model is further proposed and the experimental results obtained show that our trained model can identify the location of propeller fault as well as the degree/type of damage. Currently, the diagnosis accuracy on the testing set is over 80%.
10.48550/arxiv.2302.01556
[ "https://export.arxiv.org/pdf/2302.01556v1.pdf" ]
256,598,280
2302.01556
72ac9f754046fcf373aee05827283349081b5fb2
Machine Learning for UAV Propeller Fault Detection based on a Hybrid Data Generation Model Wei Zhang [email protected] J J Tong [email protected] W Zhang Yunfeng Zhang J J Tong Department of Mechanical Engineering National University of Singapore W Zhang Department of Mechanical Engineering National University of Singapore F Liao Temasek Laboratories National University of Singapore C F Li Department of Mechanical Engineering National University of Singapore Y F Zhang Department of Mechanical Engineering National University of Singapore Department of Mechanical Engineering Fang Liao is with Temasek Laboratories National University of Singapore National University of Singapore Machine Learning for UAV Propeller Fault Detection based on a Hybrid Data Generation Model (Corresponding author: This paper describes the development of an onboard data-driven system that can monitor and localize the fault in a quadrotor unmanned aerial vehicle (UAV) and at the same time, evaluate the degree of damage of the fault under real scenarios with interference information and without additional denoising procedures. To achieve offline training data generation, a hybrid approach is proposed for the development of a virtual data-generative model using a combination of datadriven models as well as well-established dynamic models that describe the kinematics of the UAV. To effectively represent the drop in performance of a faulty propeller, a variation of the deep neural network, known as the Long Short-Time Memory network (LSTM) is proposed. With the revolution per minute (RPM) of the propeller as input and depending on the fault condition of the propeller, the proposed propeller model estimates the resultant torque and thrust. Then, flight datasets of the UAV under "normal" conditions as well as various "fault" scenarios are generated via simulation using the developed datagenerative model. Lastly, a fault classifier using a convolutional neural network structure (CNN) is proposed to identify as well as evaluate the degree of damage to the damaged propeller. The scope of this paper currently focuses on the identification of faulty propellers and classification of the fault level for quadrotor UAVs using their RPM as well as flight data. Doing so allows for early minor fault detection to prevent serious faults from occurring if the fault is left unrepaired. To further validate the workability of this approach outside of simulation, a real-flight test is conducted indoors. The real flight data is collected and a simulation to real (sim-real) test is conducted. Due to the imperfections in the build of our experimental UAV, a slight calibration approach to our simulation model is further proposed and the experimental results obtained show that our trained model can identify the location of propeller fault as well as the degree/type of damage. Currently, the diagnosis accuracy on the testing set is over 80%. I. INTRODUCTION In recent years, unmanned aerial vehicles (UAVs) in the form of fixed-wing and multi-rotor are gaining more attention due to their significant usability and important application in many tasks such as surveillance [1], search and rescue [2][3], agriculture applications [4][5], as well as various military and security applications [6]. There are many types of UAVs available with various specialties. This paper focuses on the quadrotor UAVs that are often favored due to their small size, lightweight, and ease to control. 
These quadrotors refer to a type of UAV that each consists of two pairs of counter-rotating rotors and propellers located at the vertex of a square frame to ensure that it is dynamically balanced. Due to this unique configuration, damage or fault to any propellers could result in instability in the flight behavior of the quadrotor. In practice, "crack" and "bent" propellers are the most common faults encountered in the extensive use of quadrotor UAVs outdoors during operations. For minor "cracks" or "bent" present in the propeller, the quadrotor may be able to selfbalance itself by compensating with higher RPM in its other propeller. This phenomenon can be often detected by a difference in stability and attitude when compared to healthy UAVs. In more serious cases, the quadrotor may not produce enough thrust to sustain its weight and eventually crash to the ground, potentially damaging the entire UAV's structure or causing injuries to its operator. Therefore, it is necessary to develop a robust onboard fault detection and identification system that can detect faults currently on the UAV during its early stage to avoid catastrophic failure. A system like this requires a better understanding of the behavior of a malfunctioning UAV which currently, requires the manual operation of a spoilt UAV. Such experiments expose the pilot to extreme danger as these malfunctioning UAVs exhibit unpredictable behavior. Thus, a data generative model that can replicate the behavior of malfunctioned UAVs is proposed. II. RELATED WORKS Research relating to UAV fault detection can generally be classified into three main categories, namely, hardware redundancy, model-based approach, and data-driven approach. In the research area of hardware redundancy, Panitsrisit et al. [7] proposed a hardware redundant system consisting of various inexact voters to detect faults in the elevator of the UAV by continuously comparing the states and functionalities with its sensors. Based on this idea, Lieret et al. [8] further designed and implemented such a system for its flight control units. The proposed approach was evaluated on actual flights of a hex rotor. Analytical model-based approaches for UAV fault detection focus on analytical models that adopt mathematical models and observable variables from the vehicle to determine the fault [9]. These methods mostly utilize a state estimation approach as well as parameter estimates. By setting a threshold value, fault in the UAV can be determined when the actual operation mode differs from its expected behavior. An early effort in this approach can be found in Chen et al. [10], who proposed an observationbased approach to determine fault in the quadrotor and verified the method using simulations. With regards to the rigid threshold value used by Chen et al., Avran et al. [11] proposed a nonlinear adaptive estimation that actively updates the threshold values to boost robustness and fault sensitivity in different scenarios. Rago et al. [12] and Zhang et al. [13] proposed a fault detection method to detect failures of sensors/actuators based on interactive multiple models. With the interactive multiple models, Zhong et al. [14] tackled the issue of multiple fault diagnosis for both actuators and the system of a quadrotor UAV. Even though model-based methods have good robustness and can diagnose unknown faults, the detection of system faults in quadrotors is not easily represented by analytical models due to their complicated structure. 
These model-based approaches are often not easy to implement in practical applications and lack scalability where the flight path of a UAV can be dramatically altered by wind or other external disturbances. In recent years, data-driven approaches are getting more attention due to their robustness and reliability in fault detection. The fault detection based on a data-driven approach extracts various features from the original data and feeds them into the neural network model to obtain the fault detection results directly. The learning-based approach based on supervised learning requires experimental data containing the behavior of faulty quadrotors for training and labeling the fault cases [15]. In the case of an unlabeled fault, the result is predicted as a probability distribution based on the trained dataset. Guo et al. [16] proposed a fault detection approach based on hybrid feature models as well as an artificial neural network. By using a Short-time Fourier transform (STFT), the audio signals of the propeller can be converted into time-frequency spectrograms for fault detection. Subsequently, more robust models based on long short-term memory (LSTM) models [17] are introduced to more accurately detect the fault in various quadrotors. Liu et al. [18], on the other hand, proposed a detection approach based on a convolutional neural network coupled with a transfer learning method. For the detection of sensor faults, Chen et al. [19] proposed a wavelet packet deposition (WPD) to extract the energy entropy as a feature to train a generic backpropagation (BP) neural network. Similarly, Xiao et al. [20] extracts the energy entropy and proposed an observer method based on the BP algorithm to detect sensor faults in real-time. Although learning-based approaches seem to be the ultimate solution in accurately detecting UAV faults, the growth in this approach is unfortunately plateaued by the absence of a reliable data generation approach to capture fault datasets. In [21], a dataset containing "fault condition" are collected by artificially destroying the propeller in a small area. By inducing such damage to the propeller, the output thrust and torque of the propeller will change, thus affecting the flight attitude of the whole UAV. However, due to safety reasons, only minor damage could be induced to the propeller so that the UAV will not suddenly crash. The dataset generated in this manner is thus very limited and does not represent scenarios where there could be more serious damage to its propellers. To rectify these limitations, a reliable and efficient workflow in data generation of fault detection datasets is needed. In this paper, a data-driven approach is proposed for the development of the system where flight signals of the UAV captured onboard are used as inputs for classification. As data-driven systems for fault detection requires a large amount of dataset consisting of various quadrotor fault scenario, one feasible solution is to simulate the output signals using a hybrid data generation model of the UAV developed from existing open-source simulation [23]. The roadmap for developing a UAV fault diagnosis system is proposed as shown in Fig. 1. As shown in Fig. 1, Step 1 involves the development of the hybrid data generation model for the adopted quadrotor. In Step 2, the data generation model is used to generate data under both normal and specified fault conditions, in which 16 categorical labels are defined to specify the position and condition of the 4 propellers, respectively. 
With the generated training samples and their labels, Step 3 describes the formulation of a learning-based model to classify and identify the location and severity of the propeller fault. The novel contributions of this paper are summarized as follows: 1. An end-to-end quadrotor's fault detection approach based on a novel hybrid data-generative model is presented. To the best of our knowledge, this paper presents the first data-generative model based on a data-driven model trained using realistic loadcell experiments to accurately capture the behavior of a faulty propeller when mounted on a quadrotor. 2. Existing simulation models tend to omit the effects of the imperfect build of the quadrotor structure such as a non-centered center of gravity (CG). From our experience in dealing with a quadrotor, non-centered CG is a common occurrence and failure to account for this will lead to more inaccurate results. Thus in this aspect, we present a logical approach to account for such imbalances, making the classifier more accurate. 3. Fault in a propeller comes in different forms which are often tedious to represent with a conventional method. To better represent the drop in performance of the faulty propeller over time, an LSTM network is proposed. 4. Many feature-based approaches shrink the input data into lower dimensional space and often omit important fault characteristics information. To prevent this limitation, a non-feature-based approach based on a two-dimensional convolutional neural network is proposed to adaptively extract features from the original flight data to detect the location and severity of the propeller fault. The rest of the paper is structured as follows. Section III describes the development of the hybrid data generation model combining a data-driven model and an analytical dynamic model to generate flight data of the quadrotor. Section IV presents the data generation process using a developed data generative model under both normal and fault conditions. In this section, the training and testing results of the proposed convolutional neural network (CNN) fault classifier are further discussed, ending with the conclusion and future work in section V. III. PROPOSED METHODS A. Overview The end-to-end fault diagnosis approach using flight data consists of two main components, namely, the data generative model as well as the fault detection model. This section introduces the design principles and architecture of the proposed data generative model. Fig. 2 shows the workflow of the mentioned approach in which the dynamic model (Top) generates the flight data, and a fault classifier (Bottom) detects the type of propeller fault as well as the location of its fault. B. Data-generative UAV model The proposed quadrotor UAV data generation model is shown in Fig. 3 and consists of three subsystems, i.e., the Control system, Propeller Model, and Dynamic Model. As the damage in the Control system and BLDC motor may happen abruptly and is unpredictable, in this paper, we assume that this system is not damaged and will only include in our future research. The Dynamic Model which is well described by the physical models is used to compute the flight data based on the input RPM of the four individual propellers. In this subsection, we will first describe the development of the Propeller Model. Subsequently, we will discuss how we incorporate the propeller model into our simulation model to accurately represent cases of a quadrotor with a damaged propeller. Figure 3. 
Components of the proposed data generative model The workflow of the proposed data generative model is shown in Fig. 2 (Top). The input to the simulation system is a set of targeted waypoints where the control system will first compute four target RPMs for each propeller in the quadrotor. Subsequently, the four RPMs are passed to the Propeller Model to generate the torque and thrust, which are then used to compute the kinematics of the quadrotor body using the theoretical Dynamic Model. The position and velocity of the quadrotor (x, y, z, roll, pitch, yaw, Vx, Vy, and Vz) at the current timestep are recorded and the position of the quadrotor is passed back through the feedback loop so that the control system can compute the target RPM for the next timestep to drive the quadrotor closer to the target point. The details of the individual subsystems are described in the following sub-sections. Autopilot Controller The controller used in the data generative model is the PX4 autopilot controller system, which is an open-source flight control software for drones and other unmanned vehicles. In our data generative model, the target waypoints are fed into the system as inputs, and the autopilot control system computes a series of target RPM and the required control signals in the form of an ESC signal to guide the UAV toward the targets. To achieve this, PX4 uses sensors to determine vehicle states that are needed for both stabilizations and to enable autonomous control. Some common examples include a gyroscope, accelerometer, magnetometer (compass), and barometer. Propeller Model based on LSTM network Conventionally, under normal conditions, the thrust (f) and torque (τ) generated by a propeller can be computed using Eqs. (2) and (3), = * 2 (2) = * 2(3) where i is the propeller number, ω is the rotational speed (RPM). is lift constant and the drag constant, both of which can be experimentally determined. However, when the propeller is bent or cracked, the linear relationship shown in Eqs. (2) and (3) may no longer hold. An effective way to account for such non-linearity is to use a learning-based approach, in this case, an LSTM network. The input to the network is RPM, and the outputs are thrust and torque, respectively. Depending on the condition of the propeller, three types of propeller models, representing the condition of the propeller ("Normal", "Bent", "Cracked"), are trained as shown in Fig. 5. As LSTM has been proven to achieve good results in time series problems and is capable of learning long-term dependencies, we have chosen LSTM as the basic regression prediction model for the Propeller Model. Recurrent neural networks (RNNs) have layers of repeating modules of a neural network shown. In standard RNNs, this repeating module will have a very simple structure, such as a single tanh layer. LSTM, on the other hand, has four neural network layers, each interacting in a very special way. The hidden node of the LSTM layer is a memory cell shown in Fig. 6. The basic cells are composed of three gates, the input gate, forget gate, and the output gate [27]. Figure 6. Repeating modules in LSTM When a d-dimensional input ∈ * 1 arrives at time t, the cell is updated, and the new information is recorded. Assuming there are k cells in one LSTM layer, and the cell states and hidden states at time t − 1 are defined as −1 ∈ * 1 and ℎ −1 ∈ * 1 , respectively. 
The new cell states and hidden states ℎ at time t is updated by the following formulas: = ( • [ℎ −1 , ] + ) (4) = ( • [ℎ −1 , ] + ) (5) ̃= ℎ( • [ℎ −1 , ] + ) (6) = ( • [ℎ −1 , ] + ) (7) = * −1 + * ̃ (8) ℎ = * tanh( )(9) where , , and are the outputs of forget gate, input gate, and output gate, respectively. ̃ is an intermediate variable used to update • ∈ ( + ) , ∈ ( + ) , ∈ ( + ) , and ∈ ( + ) represent the weight matrices. ∈ * 1 , ∈ * 1 , ∈ * 1 , and ∈ * 1 are biases. Besides, sigmoid and tanh refer to the sigmoid activation function and hyperbolic tangent activation function, respectively. The symbol * denotes the Hadamard product. After calculating from (4) to (9), the LSTM memory cells remember the information from the beginning to the moment . ℎ is the final output of these cells at time t. As the input sequence arrives, these cells' outputs predict values continually, which, in turn, can form an output sequence, or we can only use the final output as a sequence-to-one prediction. To predict the thrust and torque of a propeller given RPM as input, the stacked LSTM model [27] is used here. Based on the function of a single LSTM layer, the stacked LSTM regression model is shown in Fig. 7. There are two LSTM layers and two dense layers in this model. The input shape is determined by the dimension of the input vector, in this case, a size of 1. The first LSTM layer works in the sequential mode, and the second LSTM layer only outputs one point for each input sequence. The dense layer is a fully-connected layer with a linear kernel. To achieve single-step prediction, input variables ∈ * and target variable ∈ * 1 need to be reconstructed with a sliding window. The length of the window is L, which means the past L historical samples are used to predict the monitored parameter at the next moment. After the reconstruction, the new input samples and corresponding outputs are obtained as follows: = { 1 , 2 , … … . , − }, ∈ * = { 1 , 2 , … … . , − }, ∈ *(10) The stacked LSTM regression model needs to learn the mapping function (. ), which is defined as ̃= ( )(11) where ̃ is the prediction of . The mean square error (MSE) is used as loss training. The proposed propeller model is shown in Fig. 8. 3. Dynamic Model to describe the Kinematics of the quadrotor With the torque and thrust of the individual propeller generated by the Propeller Model, the kinematics of the quadrotor can then be computed using well-established physical models [28]. To induce motion in a quadrotor, control mechanisms by roll, pitch, and yaw is adopted. These are represented by the angle of rotation around the center of the quadrotor's body. In general, to track the altitude of the quadrotor, a twocoordinate system is usually required (see Fig. 8). The inertial coordinate system describes the coordinate system fixed to the earth and is independent of the quadrotor motion while the body frame system is attached to the quadrotor's body at its center of gravity. The angular difference between the two coordinates describes the behavior of the quadrotor attitude in space. The angle of roll, pitch, and yaw are represented with ∅, , while its angular velocity is represented as ∅,̇ ,̇ ̇. These states represent the relationship between the quadrotor and the inertia coordinate system. The next six states represent the physical relationship of the quadrotor's physical location within the earth-fixed system and are denoted as X, Y and Z. In addition, the quadrotor velocity along these axes is denoted as ̇,̇,̇ respectively. 
In essence, the movement of a quadrotor is induced by the difference in torque and thrust of each of its four propellers by forcing a change around the pitch, roll, and yaw angle. The Dynamic Model of the quadrotor consists of the rotational subsystem that represents the roll, pitch, and yaw angle, and the translational subsystem that represents the X, Y, and Z positions. The full derivation can be found in [28]. Applying the Newton-Euler equation to the quadrotor body results in the equations of motion, (12) and (13), which summarize the kinematic of the quadrotor's body. ̈= [ 0 0 − ] + + (12) ̇= [ ∅ − ∅ − ∅ − ] − [ − − − ](13) Simulation model Adjustment To apply real-flight fault diagnosis, we observed that the CNN model trained using the original simulation model performed poorly in real flight with the diagnosis accuracy dropping below 50%. The main cause is that in the original simulation model, the CG of the quadrotor is centered, while the CG of the experimental UAV is off-center. In the simulation, we follow an ideal dynamic model of the quadrotor, i.e., if given four propellers with the same rotation speed, it should be in an upright hovering position with angular velocity and acceleration equal to zero. However, in the real world, this behavior is often not followed because of the following factors: 1) the center of gravity that is off-centered; 2) the imperfect condition of the mounted propeller or motor; 3) the inaccuracy of flight sensors as well as the imperfect build of the quadrotor's structure. As shown in Fig. 9a, all motors' RPMs in the simulation are near the same at the hovering state, while all motors' RPMs ( Fig. 9b) in the real flight are not the same at the hovering state. Therefore, for our simulation to accurately replicate real flight data more accurately, we need to calibrate our simulation model to account for this phenomenon. As shown in figure 9b below, the average speed of motor 4 is higher than motor 1 at a hovering state, which means that, on average, motor 4 must rotate faster to keep the quadrotor in a stable position. One way to simulate such behavior is to assume motor 4 is weaker than motor 1 such that it is required to rotate at a faster rate to produce similar torque and thrust. From the generated loadcell data in our flight test, the unbalanced ratio (Ur) can be computed using the average RPM (̅̅̅) of each respective motor over the baseline motor 1 ( 1 ̅̅̅̅), i.e., With the computed unbalanced ratio for each respective RPM, the adjusted torque and thrust generated by each propeller are as follows, In this manner, the simulated model will exhibit similar unbalanced behavior as our real-world quadrotor. C. CNN-based fault classifier With the trained Propeller Model and the Dynamic Model defined, we can now conduct simulation flight runs to generate the output signals (see Table 1) of the UAV under normal and fault conditions. As the output signal needs to be obtainable using onboard sensors, not all output signals can be used for training of fault classifier. We can repeat this procedure to generate labeled datasets under all 16 conditions. From the list of parameters shown in Table 1, 10 variables (target RPMs, pitch, roll, yaw, pitch rate, yaw rate, and roll rate) are chosen as input for the training of the classifier. The training dataset under each label thus consists of the 10 variables shown in Fig. 12. With the training samples collected from the simulation flight runs, we then train a CNN as a classifier for fault diagnosis. 
The overall framework of our fault diagnosis model is given in Fig. 13, in which the output layer consists of 16 labels, specifying the condition of the 4 propellers. The network architecture is shown in Fig. 14. For each set of 80-second data, the timestep is set as 0.05 seconds and one training sample consists of 100 timesteps. Thus, this resulted in a training sample size of 100 x 8 matrix. The best way to ensure the trained network's generalization ability is by increasing the number of training samples. One simple way to do so is by overlapping the sampling data such that two consecutive training sample has 99 overlapped sampling data or timesteps. In this way, the number of training samples is largely increased. Furthermore, to increase the receptive field and pick up important features from the data, this input will first pass through a fixed kernel size of 3x3, and the number of convolution kernels is 32. The number of convolution kernels in each convolution kernel is doubled that of the previous module. In addition, a max-pooling layer is adopted after each convolutional layer to reduce parameters, keeping the important ones. The resulting parameters are flattened into one-dimensional vectors and input into the fully connected layer. Finally, the fault diagnosis results are obtained through the SoftMax layer. Each fault type corresponds to each output from the SoftMax layer respectively. IV. EXPERIMENTS AND RESULTS A. Data Collection and Training of Propeller Model To build a learning-based Propeller Model, we first collect the training dataset with induced artificial damage to the propeller (see Fig. 15). We start by constructing an isolated system known as the loadcell as shown in Fig. 16 for data collection. In this setup, a single propeller is controlled by an isolated control unit with a constant voltage and the inbuilt sensors to allow for various measurements. These measurements include the acceleration vector, torque, thrust, RPM, vibrations as well as efficiency of the propeller. To train our propeller model, we use the RPM, Thrust, and Torque values. The experimental dataset is generated with the ESC signals varied from 1000 to 2000, which is the operating limit of the system. For each labeled case ("Normal", "Bent", and "Crack"), the experiment is conducted for approximately 5 minutes with a sampling period of 25ms. The torque and thrust measured under the "Normal" condition are shown in Fig. 17, respectively. As the experiment is conducted by ramping the control signal between 1000 to 2000, the peak in Fig. 17 corresponds to the case where the control signal is set at its highest while the trough corresponds to the control signal at its lowest. To ensure the LSTM model does not overfit, the training set will thus consist of the first 80% of the duration of the dataset while the remaining 20% is used as a test set. Three propeller models, namely "Normal", "Bent", and "Cracked", have been trained using the respective datasets. Each model takes the RPM as input and computes the torque and thrust under its respective health condition. In the training process, the batch size is defined to be 32, and the learning rate is set at 0.01. The training ceases upon reaching convergence. The "Normal" Propeller Model Fig. 18 and Fig. 19 show the testing results of the "Normal" Propeller Model in which the blue curve represents the ground truth while the red curve represents the network predicted results. 
We can see that the network can predict the torque and thrust quite accurately for the propeller operating under the "Normal" condition with most of its error occurring at ESC = 2000, which is the operating limit of the motor. This does not cause much of an issue as the UAV is rarely required to operate at maximum power. The average error rate measured for torque is 2.29% while the average error rate measured for thrust is 0.613%. The "Bent" Propeller Model In the previous sub-section, we have shown that the normal propeller model can model the normal condition of the propeller quite accurately. Fig. 20 and Fig. 21 show the testing results of the "Bent" propeller model in which the blue curve represents the ground truth while the red curve represents the network predicted results. From Fig. 20, we can see that the torque prediction outperforms that of the normal condition achieving an average error rate of 1.37% while the average error rate for thrust is measured at 0.689%. The "Crack" Propeller model Fig. 22 and Fig. 23 show the testing results of the "Cracked" propeller model in which the blue curve represents the ground truth while the red curve represents the network predicted results. As compared to the "Normal" and "Bent" condition, the "Cracked" propeller model did not perform as well as the previous fault conditions. In this case, the model slightly overestimated the torque during timestep 600-1300. The average error percentage for torque is measured at 2.47% while the average error percentage for thrust is measured at 4.27%. B. Data Collection and Training of Fault Classifier The training datasets are generated by conducting flight runs in the simulation environment under normal scenarios as well as scenarios where one or more of its propellers are faulty. For each flight run, 5 waypoints are placed randomly in the 3D space. The overall flight duration is set at 80 seconds, where the quadrotor will approach the targeted waypoints starting from waypoints 1 to 5. If the quadrotor completes the designated route before the timer reaches 80 seconds, it will repeat its task until the end of 80 seconds. The detailed parameters of the UAV model are listed in the table below. The mass of the quadrotor used in data collection is 1.2 kg with an arm length of 16cm. The torque and thrust coefficient is 1.076*10 -5 N/rpm 2 and 1.632*10 -7 Nm/RPM 2 respectively. The sampling period is set at 0.05 seconds, resulting in 1600 data points (80/0.05). As mentioned earlier, to increase the number of training samples, we use a window size of 100 and a hop length of 1 resulting in the total training set equal to 15802 samples. Fig. 25. Figure 25. Timestep and training sample definition in the dataset To ensure that the trained classifier does not overfit, 80% of the samples are used for training and 20% are used for testing the network. The datasets are fed into the classifier with network structure illustrated in Fig. 14. The classifier obtains a training accuracy of 100% and a testing accuracy of 99.97%. The overall summary of the classification accuracy is shown in Fig.26. Fig. 26 shows the flight trajectories used for data collection: (a) same waypoints with training dataset but the different payload, (b) different waypoints but same payload. Using the waypoint shown in Fig. 27b, datasets under similar fault scenarios are generated while keeping the UAV weight constant at 1.5kg the testing accuracy obtained in this test is 85.66%. The corresponding confusion matrix is illustrated in Fig. 28. 
As the training datasets only consist of the quadrotor flying under waypoints label A, the classification model tends to overfit which explains why the testing accuracy under Dataset B is significantly lower than the validation accuracy. This issue can be resolved by introducing more waypoints in the training dataset so that the classification is accustomed to different flight paths of the quadrotor. To illustrate if the classification model can accurately detect faulty quadrotors, the confusion matrix is broken down into 2 categories, namely fault or normal. Another metric known as recall and precision is further computed. The precision is computed to be 96.5% while the recall is computed to be 99.8%. Test set 2: Increase payload by 30% of its weight, keeping target waypoints the same as the training set (see Fig. 27a) In the second test, datasets of the same 16 categories of fault scenarios are generated using an increase of 30% in its payload from 1.5kg to 1.95kg. The resulting testing accuracy falls to 80.02% (see Fig. 29). Similarly, the precision is computed to be 96.95% while the recall is computed to be much lower at 93.6% Figure 29. Confusion Matrix of testing results (payload increased by 30%) Looking at the confusion matrix in Fig. 29, it is evident that the classifier struggles to differentiate between label 1 (all propellers working) and label 16 (all propellers faulty). This can be explained as a faulty propeller tend to produce lower torque and thrust; thus, the quadrotor flight path and RPM requirement resemble the case with higher payload as the propeller is required to rotate at a high speed for both scenarios. In addition, from Fig. 30, the labels show the cases where most errors occur. As this test is done with only 80 seconds worth of data per category, labels 4 and 13 tend to be miscategorized as the network has picked up the wrong information and classified the two labels based on how similar propellers 1,2,3, and 4 were. A similar case happened to labels 3 and 14 where only propeller 3 and propeller 2 are faulty respectively tend to be classified under the label where propeller 3 and propeller 2 are working. Testing Dataset C: Real-world flight test In the previous section, we have shown the potential of the classification model in classifying the simulation dataset. In Dataset C, we evaluate the performance of the classifier on a real-world dataset. To gather this group of datasets, we conducted the experiment indoors as shown in the figure below. However, in this flight test, due to safety concerns, we only conducted the experiment with one faulty propeller mounted. Thus, only label 1-5 are collected for testing. To conclude, in this paper, we have shown that it is possible to locate a faulty propeller using the target RPMs as well as the state variables as inputs. Furthermore, as all of these inputs can be computed onboard, it is possible to locate the fault while the UAV is flying. Currently, most existing approaches are based on simulation data and do not correlate well with the real world. In this paper, we have shown the effectiveness of this approach using a combination of data-driven approaches, existing dynamic models as well as adjustment to the dynamic formation to fit any realworld UAVs. We can achieve an accuracy of up to 76.32% on real-world collected data. In our future work, we are currently working on developing methods to classify the severity of the fault as well as simulate more complex environments such as the inclusion of wind. 
Furthermore, we understand that collecting fault data from real flights is very challenging and tedious. Thus, we are also looking into utilizing a transfer learning approach through domain adaptation to minimize the need for a real environment involving faulty UAVs. Lastly, we would also like to explore the various prognosis strategy (i.e., predict/estimate the remaining useful life of the motor/propeller system). CLASSIFICATION ACCURACY Accuracy Figure 1 . 1The overall framework of the proposed 3-step UAV fault diagnosis approach. Figure 2 : 2The framework of the data generative model and fault classifier of the UAV. The text marked in red means the corresponding signal is known or can be measured during flight. Figure 5 . 5Three propeller models, each representing a specific propeller condition Figure 7 . 7Propeller Model's network structure Figure 8 . 8The inertia and body coordinate frame of the quadrotor To describe the motion of the quadrotor, there are 12 states of the quadrotor as shown below. = [ ∅, , , ∅,̇ ,̇ , , , ,̇,̇,̇ ] Figure 9a . 9aRPM vs Time curve for simulation (top) and, 9b real-flight (bottom) Figure 10 . 10The proposed 16 labeled categories and their corresponding fault type. Figure 11 . 11Data generation using data generative model simulation for label 5. Figure 12 . 12Generated dataset and their corresponding label Figure 13 . 13The framework of our fault classifier. Figure 14 . 14Fault Classifier's network architecture. Figure 15 . 15Artificially induced damage to the quadrotor's propeller. Figure 16 : 16Experimental data generation using a loadcell Figure 17 . 17Normalized torque and thrust of the experimental dataset ("Normal"). Figure 18 .Figure 19 . 1819Testing result of the propeller model under a "normal" scenario (Torque) Testing result of the propeller model under "normal" scenario (Thrust) Figure 20 .Figure 21 . 2021Testing result of the propeller model under "bent" scenario (Torque) Testing result of the propeller model under "bent" scenario (Thrust) Figure 22 .Figure 23 . 2223Testing result of the propeller model under "cracked" scenario (Thrust) Testing result of the propeller model under "cracked" scenario (Torque) Figure 26 . 26Classification Accuracy of Fault Classifier Figure 27 . 27Testing data collection of two flight runs Testing Dataset B: Different target waypoints Figure 28 . 28Confusion Matrix of testing results (different flight path) Figure 30 . 30Most errors occur at these labels (too small). Figure 30 . 30Data collection indoors. With the CNN model trained using the adjusted simulation model mentioned in section 3.1, the model attains an average accuracy of 76.32% with a precision of 78.21% and recall of 82.39%. The reason for the drop in accuracy could be due to reasons such as overfitting and the difference in the domain the datasets are in. Figure 31 . 31Classification Accuracy of Fault ClassifierV. CONCLUSIONS AND FUTURE WORK Table 1 : 1Parameters simulated by the data generative model Errors in X, Y, and Z from targeted waypointThe chosen training parameter is the quadrotor's angular acceleration as well as the respective four RPM of the motor. 
Add the control signal, the voltage, and the output signal into the dataset under "label 5".Symbol Description t Time X, Y, Z Position ̇, ̇, ̇ Velocity along X, Y, and Z directions ∅, , Roll, Pitch, Yaw ∅̇, ̇, ̇ Roll Rate, Pitch Rate, Yaw Rate ω1, ω2, ω3, ω4 RPMs of Propellers 1, 2, 3, 4 f1, f2, f3, f4 Thrusts of Propellers 1, 2, 3, 4 τ1, τ2, τ3, τ4 Torques of Propellers 1, 2, 3, 4 δX, δY, δZ Quadrotor angular acceleration (∅̇, ̇, ̇) can be directly measured by the onboard sensor. On a real flight, however, the motor velocities cannot be directly measured. Thus, based on our loadcell experiment, we derived the relationship between the control signal ESC and the Rpm of the respective propeller as follows, = −0.0062( ) 2 + 29.37( ) − 22992 In this segment, propeller faults are classified into two categories (fault/normal). Based on the location of the malfunctioning propeller of the quadrotor UAV, 16 labeled scenarios are categorized as shown in Fig. 10. For example, label 1 represents the case where all propellers are normal, label 2: propeller 1 is faulty and the other 3 are normal, and label 16: all 4 propellers are faulty. A more detailed illustration of how we incorporate our trained propeller model into our simulation model is Table 4 . 4For each set of 80-second data, the target RPM, voltage, roll, pitch, yaw, roll rate, pitch rate, and yaw rate are used as input to the network. This results in a dimension of[10, 100] for the training sample, each corresponding to one category, 1 to 16. An example of one data sample is shown in1: Parameters of UAV model II Parameters UAV model L 0.16 KF 1.076 * 10 -5 Kt 1.632 * 10 -3 Ixx 0.0123 Iyy 0.0123 Izz 0.0224 Table 4.2: Parameters of UAV model II Dataset Type Dataset Name Weight (Kg) Waypoints Label Training A 1.5 A Testing (Simulation) B 1.5 B C 1.95 A Testing (Real- world) D 2.025 Nil AccuracyC. Results and comparison study Furthermore, to test the performance of the trained classifier under different operating conditions, datasets are collected from flight runs following different targeted waypoints and payloads.100 99.7 99.5 99.6 99.7 99.8 99.9 100 Training Data Validation Data CLASSIFICATION ACCURACY A Survey of Unmanned Aerial Vehicles (UAV) for Traffic Surveillance. A Puri, Tampa, FL, USADepartment of Computer Science and Engineering, University of South FloridaPuri, A. A Survey of Unmanned Aerial Vehicles (UAV) for Traffic Surveillance; Department of Computer Science and Engineering, University of South Florida: Tampa, FL, USA, 2005; pp. 1-29. Gaussian Mixture Model and Self-Organizing Map Neural-Network-Based Coverage for Target Search in Curve-Shape Area. P Yao, Q Zhu, R Zhao, IEEE Trans. Cybern. PubMedYao, P.; Zhu, Q.; Zhao, R. Gaussian Mixture Model and Self-Organizing Map Neural-Network-Based Coverage for Target Search in Curve-Shape Area. IEEE Trans. Cybern. 2020. [CrossRef] [PubMed] Optimal UAV Route Planning for Coverage Search of Stationary Target in River. P Yao, Z Xie, P Ren, IEEE Trans. Control Syst. Technol. 27Yao, P.; Xie, Z.; Ren, P. Optimal UAV Route Planning for Coverage Search of Stationary Target in River. IEEE Trans. Control Syst. Technol. 2019, 27, 822-829. [CrossRef] Precision Landing Test and Simulation of the Agricultural UAV on Apron. Y Guo, J Guo, C Liu, H Xiong, L Chai, D He, Sensors. 20PubMedGuo, Y.; Guo, J.; Liu, C.; Xiong, H.; Chai, L.; He, D. Precision Landing Test and Simulation of the Agricultural UAV on Apron. Sensors 2020, 20, 3369. 
[CrossRef] [PubMed] UAV and Machine Learning Based Refinement of a Satellite-Driven Vegetation Index for Precision Agriculture. V Mazzia, L Comba, A Khaliq, M Chiaberge, P Gay, Sensors. PubMedMazzia, V.; Comba, L.; Khaliq, A.; Chiaberge, M.; Gay, P. UAV and Machine Learning Based Refinement of a Satellite-Driven Vegetation Index for Precision Agriculture. Sensors 2020, 20, 2530. [CrossRef] [PubMed] Security and privacy issues of UAV: A survey. Y Zhi, Z Fu, X Sun, J Yu, Mob. Netw. Appl. 25Zhi, Y.; Fu, Z.; Sun, X.; Yu, J. Security and privacy issues of UAV: A survey. Mob. Netw. Appl. 2020, 25, 95-101. [CrossRef] Sensor system for fault detection identification and accommodation of elevator of uav. P Panitsrisit, A Ruangwiset, SICE Annual Conference. P. Panitsrisit and A. Ruangwiset, "Sensor system for fault detection identification and accommodation of elevator of uav," in SICE Annual Conference 2011, Sept 2011, pp. 1035- 1040. Fault detection for autonomous multirotors using a redundant flight control architecture. M Lieret, J Fertsch, J Franke, Proceedings of the 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE). the 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE)Hong Kong, ChinaLieret, M.; Fertsch, J.; Franke, J. Fault detection for autonomous multirotors using a redundant flight control architecture. In Proceedings of the 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Hong Kong, China, 20-21 August 2020; pp. 29-34. On-board deep-learning-based unmanned aerial vehicle fault cause detection and identification. V Sadhu, S Zonouz, D Pompili, arXiv:2005.00336arXiv preprintV. Sadhu, S. Zonouz, and D. Pompili, "On-board deep-learning-based unmanned aerial vehicle fault cause detection and identification," arXiv preprint arXiv:2005.00336, 2020. Robust Backstepping Sliding-Mode Control and Observer-Based Fault Estimation for a Quadrotor UAV. F Chen, R Jiang, K Zhang, B Jiang, G Tao, 10.1109/TIE.2016.2552151IEEE Trans. Ind. Electron. 63Chen F., Jiang R., Zhang K., Jiang B., Tao G. Robust Backstepping Sliding-Mode Control and Observer-Based Fault Estimation for a Quadrotor UAV. IEEE Trans. Ind. Electron. 2016;63:5044- 5056. doi: 10.1109/TIE.2016.2552151. Quadrotor Actuator Fault Diagnosis and Accommodation Using Nonlinear Adaptive Estimators. R C Avram, X Zhang, J Muse, 10.1109/TCST.2016.2640941IEEE Trans. Control Syst. Technol. 25Avram R.C., Zhang X., Muse J. Quadrotor Actuator Fault Diagnosis and Accommodation Using Nonlinear Adaptive Estimators. IEEE Trans. Control Syst. Technol. 2017;25:2219-2226. doi: 10.1109/TCST.2016.2640941 Failure detection and identification and fault tolerant control using the imm-kf with applications to the eagle-eye UAV. C Rago, R Prasanth, R K Mehra, R Fortenbaugh, Proceedings of the 37th IEEE Conference on Decision and Control (Cat. No.98CH36171). the 37th IEEE Conference on Decision and Control (Cat. No.98CH36171)442084213C. Rago, R. Prasanth, R. K. Mehra, and R. Fortenbaugh, "Failure detection and identification and fault tolerant control using the imm-kf with applications to the eagle-eye UAV," in Proceedings of the 37th IEEE Conference on Decision and Control (Cat. No.98CH36171), vol. 4, Dec 1998, pp. 42084213 vol.4. An Online Fault Diagnosis Method For Actuators Of Quadrotor UAV With Novel Configuration Based On IMM. H Zhang, Q Gao, F Pan, Proceedings of the 2020 Chinese Automation Congress (CAC). 
the 2020 Chinese Automation Congress (CAC)Shanghai, China, 6-8Zhang, H.; Gao, Q.; Pan, F. An Online Fault Diagnosis Method For Actuators Of Quadrotor UAV With Novel Configuration Based On IMM. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6-8 November 2020; pp. 618-623. Actuator and Sensor Fault Detection and Diagnosis for Unmanned Quadrotor Helicopters. Y Zhong, Y Zhang, W Zhang, H Zhan, IFAC-PapersOnLine. 51Zhong, Y.; Zhang, Y.; Zhang, W.; Zhan, H. Actuator and Sensor Fault Detection and Diagnosis for Unmanned Quadrotor Helicopters. IFAC- PapersOnLine 2018, 51, 998-1003. On data-centric diagnosis of aircraft systems. J Stutz, IEEE Transactions on Systems, Man and Cybernetics. J. Stutz, "On data-centric diagnosis of aircraft systems," IEEE Transactions on Systems, Man and Cybernetics, 2010. A hybrid feature model and deep learning based fault diagnosis for unmanned aerial vehicle sensors. D Guo, M Zhong, H Ji, Y Liu, R Yang, 10.1016/j.neucom.2018.08.046Neurocomputing. 319Guo D., Zhong M., Ji H., Liu Y., Yang R. A hybrid feature model and deep learning based fault diagnosis for unmanned aerial vehicle sensors. Neurocomputing. 2018;319:155-163. doi: 10.1016/j.neucom.2018.08.046 An SMC-based fault tolerant control design for a class of underactuated unmanned aerial vehicles. S Mallavalli, A Fekih, Proceedings of the 2018 4th International Conference on Control, Automation and Robotics (ICCAR). the 2018 4th International Conference on Control, Automation and Robotics (ICCAR)Auckland, New ZealandMallavalli, S.; Fekih, A. An SMC-based fault tolerant control design for a class of underactuated unmanned aerial vehicles. In Proceedings of the 2018 4th International Conference on Control, Automation and Robotics (ICCAR), Auckland, New Zealand, 20-23 April 2018; pp. 152-155. An Audio-Based Fault Diagnosis Method for Quadrotors Using Convolutional Neural Network and Transfer Learning. W Liu, Z Chen, M Zheng, Proceedings of the American Control Conference. the American Control ConferenceNew Orleans, LA, USALiu, W.; Chen, Z.; Zheng, M. An Audio-Based Fault Diagnosis Method for Quadrotors Using Convolutional Neural Network and Transfer Learning. In Proceedings of the American Control Conference, New Orleans, LA, USA, 25-28 May 2021 UAV fault detection based on GA-BP neural network. Y Chen, C Zhang, Q Zhang, X Hu, Proceedings of the 32nd Youth Academic Annual Conference of Chinese Association of Automation. the 32nd Youth Academic Annual Conference of Chinese Association of AutomationHefei, ChinaChen, Y.; Zhang, C.; Zhang, Q.; Hu, X. UAV fault detection based on GA-BP neural network. In Proceedings of the 32nd Youth Academic Annual Conference of Chinese Association of Automation, Hefei, China, 19-21 May 2017. A Sensor Fault Diagnosis Algorithm for UAV Based on Neural Network. Q X Xiao, Proceedings of the 2021 International. the 2021 InternationalXiao, Q.X. A Sensor Fault Diagnosis Algorithm for UAV Based on Neural Network. In Proceedings of the 2021 International. An Intelligent Quadrotor Fault Diagnosis Method Based on Novel Deep Residual Shrinkage Network. P Yang, H Geng, C Wen, P Liu, Drones. Yang P, Geng H, Wen C, Liu P. An Intelligent Quadrotor Fault Diagnosis Method Based on Novel Deep Residual Shrinkage Network. Drones. 2021; . 10.3390/drones504013351335(4):133. https://doi.org/10.3390/drones5040133 Quadrotor simulation and control (Quad_simcon). J Bass, J. Bass. Quadrotor simulation and control (Quad_simcon). 
High-Speed Autonomous Obstacle Avoidance with Pushbroom Stereo. Andrew J Barry, Russ Tedrake, PhD ThesisAndrew J. Barry and Russ Tedrake. High-Speed Autonomous Obstacle Avoidance with Pushbroom Stereo. PhD Thesis, March 2016. Andrea Stability and control of a quadrocopter despite the complete loss of one, two, or three propellers. M W Mueller, R , IEEE International Conference on Robotics and Automation (ICRA). M. W. Mueller and R. D'Andrea Stability and control of a quadrocopter despite the complete loss of one, two, or three propellers 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014. Unmanned Aerial Aircraft Systems for transportation engineering: Current practice and future challenges. E N Barmpounakis, E I Vlahogianni, J C Golias, Int. J. Transp. Sci. Technol. 5Barmpounakis, E.N.; Vlahogianni, E.I.; Golias, J.C. Unmanned Aerial Aircraft Systems for transportation engineering: Current practice and future challenges. Int. J. Transp. Sci. Technol. 2017, 5, 111-122. [CrossRef] LSTMbased auto-encoder model for ECG arrhythmias classification. B Hou, J Yang, P Wang, R Yan, IEEE Trans. Instrum. Meas. 694B. Hou, J. Yang, P. Wang, and R. Yan, "LSTM- based auto-encoder model for ECG arrhythmias classification," IEEE Trans. Instrum. Meas., vol. 69, no. 4, pp. 1232-1240, Apr. 2020. Deep learning for time-series analysis. J. Cristian Borges Gamboa, arXiv:1701.01887J. Cristian Borges Gamboa, "Deep learning for time-series analysis," 2017, arXiv:1701.01887. [Online]. Modelling, Identification and control of a quadrotor helicopter. T Bresciani, SwedenLund UniversityMaster's thesisT. Bresciani, "Modelling, Identification and control of a quadrotor helicopter", Master's thesis, Lund University, Sweden, 2008.
[]
[ "EXPANSIVE DYNAMICS ON LOCALLY COMPACT GROUPS REVISED", "EXPANSIVE DYNAMICS ON LOCALLY COMPACT GROUPS REVISED" ]
[ "Bruce P Kitchens " ]
[]
[]
Let G be a second countable, Hausdorff topological group. If G is locally compact, totally disconnected and T is an expansive automorphism then it is shown that the dynamical system (G, T ) is topologically conjugate to the product of a symbolic full-shift on a finite number of symbols, a permutation of a countable coset space of G where every orbit is finite, and a totally wandering, countable state Markov shift. In particular, if the automorphism is transitive then G is compact and (G, T ) is topologically conjugate to a full-shift on a finite number of symbols.
null
[ "https://export.arxiv.org/pdf/2303.01596v1.pdf" ]
257,353,842
2303.01596
7ba13500f63ee5f8781224db9aa0b7770cd33a64
EXPANSIVE DYNAMICS ON LOCALLY COMPACT GROUPS REVISED 2 Mar 2023 Bruce P Kitchens EXPANSIVE DYNAMICS ON LOCALLY COMPACT GROUPS REVISED 2 Mar 2023 Let G be a second countable, Hausdorff topological group. If G is locally compact, totally disconnected and T is an expansive automorphism then it is shown that the dynamical system (G, T ) is topologically conjugate to the product of a symbolic full-shift on a finite number of symbols, a permutation of a countable coset space of G where every orbit is finite, and a totally wandering, countable state Markov shift. In particular, if the automorphism is transitive then G is compact and (G, T ) is topologically conjugate to a full-shift on a finite number of symbols. Background and Motivation In [K1] it was shown that an expansive automorphism of a compact, totally disconnected group is topologically conjugate to the product of a full-shift on a finite number of symbols and an automorphism of a finite group. The method of proof is to use the fact that a compact, totally disconnected group has arbitrarily small compact, open normal subgroups to code the system, (G, T ), to a finite state Markov shift. The Markov shift is topologically conjugate to (G, T ), the alphabet is a finite group and the Markov shift has a group structure where the operation is defined using a symbol by symbol group operation. Then using the group structure of the Markov shift and symbolic dynamics techniques the Markov shift is, through a sequence of reductions, reduced to a topologically conjugate symbolic system which is he product of a full-shift and an automorphism of a finite group. Since the time of the above mentioned result, countable state Markov shifts have been increasingly used in dynamics. This is the particularly true in the study of maps that exhibit some (but not uniform) hyperbolicity, maps of the interval and in symbolic dynamics itself. Likewise, the structure of locally compact, totally disconnected groups and their automorphisms have received increased attention. See for example [CM]. In the present paper it is shown that an expansive automorphism of a second countable, Hausdorff, locally compact, totally disconnected group is topologically conjugate to the product of a symbolic full-shift on a finite number of symbols,a permutation of a countable coset space with every orbit finite, and a totally wandering, countable state Markov shift (in some cases). The strategy used in the case where the group is compact is followed but with several modifications. The first problem is that a locally compact, totally disconnected group has arbitrarily small compact, open subgroups but they need not be normal. This means that if G is locally compact, totally disconnected and T is an expansive automorphism then it is possible to code (G, T ) to a countable state Markov shift. The Markov shift is topologically conjugate to (G, T ), the alphabet is a coset space of G but there is no easily described group structure on the countable state Markov shift. Consequently, the constructions leading to the reductions used must be done in the group G itself rather than on the symbols of the Markov shift. Another problem is that there is wandering behavior that can occur in noncompact groups that cannot occur in compact groups. This leads to the introduction of totally wandering countable state Markov shifts. The paper is organized as follows. Section 2 contains some notation from symbolic dynamics. 
Section 3 shows how to code an expansive automorphism of a second countable, Hausdorff locally compact, totally disconnected group to a countable state Markov shift where each state has a finite number of predecessors and successors. In Section 4 totally wandering systems are defined. Section 5 has a discussion of systems where the points with a compact orbit closure are dense in the space. Section 6 explains the two fundamental constructions and in Section 7 the parts are put together to prove the theorem. Symbolic Dynamics Let A denote a finite or countably infinite set with the discrete topology and A Z denote the set of all two-sided infinite sequences on A with the product topology. Let σ be the shift homeomorphism of A Z to itself defined by σ(x) i = x i+1 . A closed, shift-invariant subset, X, of A Z is a subshift. If each of the elements of A occur in some point of X, we say A is the alphabet of X. For a subshift X let W(X, n) denote the words of length n that can occur in X, that is w ∈ W(X, n) if and only if w = [x 0 , . . . x n−1 ] for some x ∈ X. Then let W(X) denote the union over all n of the W(X, n). For a word w ∈ W(X) define the follower set of w, f(w), to be the symbols a ∈ A such that wa ∈ W(X). Define the predecessor set of w, p(w), in a similar manner. The proofs that follow depend on understanding the sets f (w) and p(w). If X ⊆ A Z is a subshift the standard n-block presentation of X has alphabet W(X, n) and transitions by overlapping words. That is [x 1 , . . . , x n ] ∈ f ([x 0 , . . . , x n−1 ] if and only if [x 0 , . . . , x n ] ∈ W(X, n + 1). The shift transformation is defined as before. We will be dealing with finite or countably infinite state Markov shifts in the discussions that follow and the reader is referred to [K2] for further background. The groups All groups we consider are second countable, Hausdorff, locally compact and totally disconnected (equivalently zero-dimensional in this setting). Denote such groups by l.c.t.d. groups. Let G be a l.c.t.d. group and T an automorphism of G. The automorphism T is expansive if there exists a neighborhood of the identity, U, so that for each pair g, h ∈ G, g = h, there is an n ∈ Z so that T n (g) / ∈ T n (h)U. We say that such a set U separates points. We use van Dantzig's theorem which states that in an l.c.t.d. group there is a compact, open subgroup contained in any neighborhood of the identity. If the group is compact or meets various other conditions the subgroup can be taken to be normal. If the subgroup is normal, the proofs of the following results are simplified and are somewhat more intuitive. Observation 3.1. Let G be an l.c.t.d. group and T an expansive automorphism then (G, T ) is topologically conjugate to a countable state Markov shift (Σ G/H , σ). The The image of the coding map using the coset partition of G is the one-step Markov shift Σ G/H . This means that the coset partition of G is a Markov partition. If H is a normal subgroup of G then Σ G/H is a group with the group operation defined with a symbol by symbol operation and the arguments can be made by looking only at Σ G/H and ignoring G. Moreover, if G is compact H can be chosen to be a normal subgroup, the coset partition will be finite and Σ G/H will be a subshift of finite type with the group operation defined symbol by symbol. This is the case examined in [K2]. If H is not a normal subgroup of G then Σ G/H is still a group but the group operation is complicated and is defined by pulling back to G. 
The arguments that follow will depend on looking at G and H. In Σ G/H each symbol has the same number of followers which is the cardinality of f (H). Likewise, each symbol has the same number of predecessors which is the cardinality of p(H). This says the cardinality of f (H) is an eigenvalue for the transition matrix of Σ G/H with the column vector of all 1's an eigenvector. Likewise, the cardinality of p(G/H) is an eigenvalue for the transition matrix with the row vector of all 1's an eigenvector. When G/H is compact the cardinalities of f (H) and p(H) must agree. If the transition matrix is countably infinite the cardinalities may be different as illustrated by Example 4.3. Totally Wandering Systems Definition 4.1. Let X be an infinite topological space and T a homeomorphism from X to itself. The system (X, T ) is totally wandering if the number of points in X with a compact T -orbit closure is finite. Some examples of such systems follow. The first example is where X is countable and discrete. The next two are more interesting. Example 4.2. Let X = Z 2 and T be defined by the matrix 2 1 1 1 . Example 4.3. Consider the 3-adic numbers, Q 3 , expressed as the subgset of {0, 1, 2} Z consisting of the sequences that are eventually all zeros to the left. The group operation is defined as coordinate by coordinate addition modulo 3 with carry to the right. The automorphism is the left shift σ −1 (x) i = x i−1 . It is the inverse of the usual shift and corresponds to multiplication by 3. If we choose for H the subgroup of sequences that are zero for all coordinates to the left of the time zero entry, we produce the countable state Markov shift defined by the transition graph 0. 01. 02. 010. 011. 012. 020. 021. 022. Here0 means zeros from that coordinate to the left. Example 4.4. Consider Q 3 ⊕ Q 3 the direct sum of two copies of the 3-adic numbers. Express the entries in each copy of Q 3 as in Example 4.3. The automorphism is multiplication by 1/3 in the first copy of Q 3 and multiplication by 3 in the second copy. Take H to be the subgroup {(0.,0.)} in the previous notation. This gives the countable state Markov shift defined by the transition graph Proof. Since X is a Markov shift, if this were not true there would be other periodic points in X. Dense Compact Orbit Closures Let G be an l.c.t.d. group and T an expansive automorphism of G. By Observation 3.1 (G, T ) is topologically conjugate to a countable state Markov shift (Σ G/H , σ). We want to consider the case where the points with a compact T -orbit closure are dense in G. Observation 5.1. If the points with compact orbit closure are dense in G and the subgroup H whose coset partition is a Markov partition of G is fixed, let F be a finite collection of cosets of H. Define C F = gH∈F gH and X F = j∈Z T j (C F ) then G c = X F , where the union is over all finite subsets F of the set of cosets of H. Example 5.2. Let G be a finite group with the discrete topology. The product space, G Z , with the product topology and group multiplication defined coordinate by coordinate is a compact, totally disconnected group. The shift transformation, σ, defined by σ(x) i = x i+1 is a group automorphism. Example 5.3. Let G be the direct sum of the groups Z/3 n Z, for n ∈ N, G = n∈Z Z/3 n Z. The space is the subgroup of n∈N Z/3 n Z where all sequences x ∈ G have x i = 0 for all but finitely many i ∈ N. The group is countably infinite and we put the discrete topology on it. Let T be the automorphism defined using multiplication by 2 on each Z/3 n Z. 
Every point is periodic but there are points of arbitrarily high period. Example 5.4. Let S 3 be the permutation group of a set containing three elements. Form the full-shift ((S 3 ) Z , σ) and let (G, T ) be the system from Example 5.3. Form the direct sum system (( S 3 ) Z ⊕ G, σ × T ). Now, (S 3 ) Z ⊕ G is a nonabelian l.c.t.d. group, σ × T is an expansive automorphism and the periodic points are dense in the group. Example 5.5. Define Σ A ⊆ (Z/4Z ⊕ Z/2Z) Z by the transition graph below. This means f ((0, 0)) = f ((2, 0)) = f ((1, 1)) = f ((3, 1)) = {(0, 0), (2, 0), (1, 0), (3, 0)} f ((1, 0)) = f ((3, 0)) = f ((0, 1)) = f ((2, 1)) = {(0, 1), (2, 1), (1, 1), (3, 1)}. In this case the finite state Markov shift is a compact group where the group operation is coordinate by coordinate addition (with no carry). The subgroup p(H) ∩ f (H) = K is {(0, 0), (2, 0)}. The Markov shift (Σ A , σ) is topologically conjugate to the full-shift on four symbols but it is not algebraically conjugate to either (( Z/2Z ⊕ Z/2Z) Z , σ) or ((Z/4Z) Z , σ). (0, 0) (2, 0) (1, 0) (3, 0) (1, 1) (3, 1) (0, 1) (2, 1) Definition 5.6. Let T be an expansive automorphism of the l.c.t.d. group G and H be a compact, open subgroup whose coset partition defines a Markov partition. Define the subgroups H s loc = j≤0 T j (H) and H u loc = j≥0 T j (H). In dynamical terms these are the local stable set of the identity and the local unstable set of the identity. If g and h are in the same coset of H then gH s loc ∩ hH u loc is single point in the same coset. The point is denoted by [g, h]. In dynamics, the partition is said to have local product structure. Note that the cosets of H s loc and H u loc , for all points in G, are dynamically defined. Each coset of H s loc is an equivalence class in the corresponding coset of H defined by the equivalence relation x ∼ s y if and only if T j (x) and T j (y) are in the same element of of the partition for all j ≥ 0. Each coset of H u loc is an equivalence class in the corresponding coset of H defined by the equivalence relation x ∼ u y if and only if T j (x) and T j (y) are in the same element of the partition for all j ≤ 0. Lemma 5.7. Let T , G and H be as stated. If the points whose T -orbit closure is compact are dense in the group G then H = H u loc H s loc = H s loc H u loc . Proof. First observe that if H is normal in G the result is immediate. Recall the sets F , C F and X F from Observation 5.1. C F is a compact, open subset of G and X F is a compact, T -invariant subset of G. Use the proof of [HR] Theorem 7.6 to find a compact, open subgroup, N , of H which is normal with respect to C F . By this we mean xN x −1 = N for all x ∈ C F . By the proof of [HR] Theorem 7.6, N = x∈C F xHx −1 . Observe that T n (N ) is normal with respect to T n (X F ) = X F , for all n. Let N s loc = j≤0 T j (N ) and N u loc = j≥0 T j (N ). Consider N s loc as a compact, open (in the relative topology), normal with respect to X F , subgroup of the compact group T −1 (H s loc ) and N u loc as a compact, open, normal with respect to X F , subgroup of the compact group T (H u loc ). There is a minimal N ≥ 0 such that T N (H s loc ) ⊆ N s loc and then for this N, let N s = T −N (N s loc ). Note that N s is also j≥N T −j (N ) which is a compact subgroup of G which is normal with respect to X F . Furthermore, H s loc ⊆ N s ⊆ T −1 (H s loc ). This follows from the consideration of the coset structure of N s loc in T −1 (H s loc ). It means that N s ∩H = H s loc . The corresponding statements hold for N u . 
Let z ∈ H ∩ X F , z s = [e, z] and z u = [z, e]. Then both z s and z u are in X F with z = [z u , z s ]. Observe that z s z u ∈ z s H u loc and z s z u = z u w s for some w s ∈ N s since N s is normal with respect to X F . In fact, w s = z −1 u z s z u ∈ H and since N s ∩ H = H s loc , w s ∈ H s loc . Now z u w s is also in z u H s loc which means z s z u = [z u , z s ] = z. The same holds for z u z s . Consequently, for z ∈ H ∩ X F , z = [z u , z s ] = z u z s = z s z u . Since every point with a compact orbit closure is in some X F , such points are dense in H and since the group operation is continuous the same holds for all z ∈ H. Since H s loc H u loc ⊆ H, we have the desired result. For an arbitrary coset, gH, of H we have gH = gH s loc H u loc = gH u loc H s loc , gN s ∩ gH = gH s loc and gN u ∩ gH = gH u loc . Moreover, the local product structure for z, w ∈ gH can be expressed as [z, w] = z[w, z] −1 w = w[w, z] −1 z. We can also consider the coset partition defined by the subgroup T j (H) for any j ∈ Z. In this partition, we have T j (H s loc ), T j (H u loc ), T j (N s ), and T j (N u ) with all the properties of the corresponding subgroups in the partition defined by H. In the partition defined by T j (H) the local product structure is denoted by [·, ·] j . Choose g ∈ T (H u loc ) ∩ X F and h ∈ H s loc ∩ X F so that gh ∈ f (H) ∩ X F . Then gh = h ′ g = hg ′ for h ′ ∈ N s and g ′ ∈ T (N u ). This means hg ′ ∈ hT (N u ). Since hT (N u ) ∩ hT (H) = hT (H u loc ), hg ′ ∈ hT (H u loc ) and then hg ′ = gh = gH s loc ∩ hT (H u loc ). Using h ′ g = hg ′ ∈ hT (H u loc ) we conclude h ′ = h so that gh = hg = gH s loc ∩ hT (H u loc ). When p(H) and f (H) are subgroups of G, it follows that every predecessor set that occurs not only contains the same number of cosets of H as p(H) but is a coset of p(H). The group G is partitioned into disjoint predecessor sets that are cosets of p(H). The same is true for the follower sets. This is a great deal more structure for the partition than has previously been exhibited. Two Constructions Next we describe two constructions that will be used in the proof of Theorem 7.4. They were originally used in [K1]. Suppose f (H) and p(H) are subgroups of G. The cosets of H partition K and all H cosets in K have the same follower and the same predecessor sets. This is also true for each coset of K. We say the cardinality of K is the number of cosets of H it contains. Construction 6.2. The case when the cardinality of K is greater than one. Suppose p(H) and f (H) are subgroups of G. Consider two new systems which are Σ G/K and (K/H) Z . The space Σ G/K (when not countable) has all the properties of Σ G/H . The point is that the systems Σ G/H and Σ G/H × (K/H) Z are topologically conjugate. The conjugacy is defined on the symbol level. It is clear because there is the algebraic map G/H → G/K with "kernel" K/H. If H and K are normal subgroups of G this is just saying that G/H is an extension of K/H by G/K so every element of G/H can be written as a pair of elements with the first from G/K and the second from K/H. We have reduced the Markov shift Σ G/H to a product of a new Markov shift cross a full-shift on a finite number of symbols. In the new Markov shift the cardinalities of the predecessor and follower sets are strictly smaller than in the old one. The new cardinalities of the predecessor and follower sets is the old cardinality (which was finite) divided by the cardinality of K. 
Likewise, the cardinality of the alphabet of Σ G/K is the cardinality of the alphabet of Σ G/H divided by the cardinality of K. If the alphabet of Σ G/H was infinite the new alphabet is infinite. ✷ Construction 6.3. The case when the cardinality of K is one. Suppose p(H) and f (H) are subgroups of G. Both subgroups contain H. The important point is that when the cardinality of K is one every element of f (H) has a distinct follower set and every element of p(H) has a distinct predecessor set. To see this assume two elements of f (H) have the same follower set. Using the fact that f (H) is a subgroup we can assume that one of them is H and the other is gH. But then we have both H and gH in K. The same reasoning applies to the predecessor sets. In turn each element of any follower set, f (gH), has a unique follower set and each element of any predecessor set, p(gH), has a unique predecessor set. Consider the new Markov shifts Σ G/f (H) and Σ G/p(H) . The alphabets are the cosets of f (H) and p(H), respectively. Each is a factor of Σ G/H by a map defined on symbols. Moreover, each factor map is invertible. The map onto Σ G/f (H) is invertible by a twoblock map that looks at the present symbol and one symbol into the future. This is well-defined because each element of a fixed follower set has a distinct follower set. Likewise, the map onto Σ G/p(H) is invertible by a two-block map that looks at the present symbol and one symbol into the past. Each of the new Markov shifts (if not countable) has all of the properties of Σ G/H . The number of distinct follower sets in Σ G/f (H) is the number of distinct follower sets in Σ G/H divided by the cardinality of f (H) and the number of distinct predecessor sets in Σ G/p(H) is the number of distinct predecessor sets in Σ G/H divided by the cardinality of p(H). The cardinality of the alphabet of Σ G/f (H) is the cardinality of the alphabet of Σ G/H divided by the cardinality of f (H) and the cardinality of the alphabet of Σ G/p(H) is the cardinality of the alphabet of Σ G/H divided by the cardinality of p(H). If the alphabet of Σ G/H was infinite the new alphabets are infinite. A crucial point is that whether we use follower or predecessor sets to make the reduction the cardinalities of the new predecessor and follower sets, in both cases, are unchanged. They cardinalities neither increase nor decrease. ✷ The two constructions can be best understood by applying them to Example 5.5. First apply Construction 6.2 and then Construction 6.3 to obtain (Σ 2 , σ) × (Σ 2 , σ) which is topologically conjugate to (Σ 4 , σ). Structure Theorem Definition 7.1. Let T be an expansive automorphism of an l.c.t.d. group G. Define G c to be the closure of the points whose T -orbit closure is compact in G. Observe that G c is a closed, T -invariant, subgroup of G. Denote by T c the restriction of T to G c . We have the system (G c , T c ) where T c is an expansive automorphism of an l.c.t.d. group G c . Recall Observation 5.1 and note that it applies in this more general setting. Definition 7.2. Let T be an expansive automorphism of an l.c.t.d. group G. Define G t to be the closure of the points, g, where T n (g) ∈ H for |n| ≥ N, some N ∈ N. Observe that G t is a closed, T -invariant, subgroup of G. Denote by T t the restriction of T to G t . We have the system (G t , T t ) where T t is an expansive automorphism of an l.c.t.d. group G t . 
Observe that the T t -periodic points are dense in G t and there are points in G t whose forward and backwards T t -orbits are both dense in G t . Another interpretation of G t can be given which parallels Observation 5.1. Let F t be the finite collection of the cosets of H that contain a periodic point whose T-orbit passes through H. Define C Ft = gH∈Ft gH then G t = j∈Z T j (C Ft ). We have G t ⊆ G c ⊆ G and each is an l.c.t.d. group. Their automorphisms T t , T c and T are expansive. To prove Theorem 7.4 (2) we will make use of the following corollary of Theorem 7.4 (1). Corollary 7.3. Let T be an expansive automorphism of an l.c.t.d. group G. The subgroup G t is compact and the system (G t , T t ) is topologically conjugate to a full-shift on a finite number of symbols. Moreover, G t is open in G c and G c /G t is a countable, discrete coset space. Theorem 7.4. Let T be an expansive automorphism of an l.c.t.d. group G. The system falls into one of the two following categories. (1) G c = G in which case (G, T ) is topologically conjugate to the product of a fullshift on a finite alphabet and a permutation of a countable coset space where the subgroup coset is fixed by the permutation and every other coset has a finite orbit. (2) G c = G in which case (G, T ) is topologically conjugate to the product of a fullshift on a finite alphabet, a permutation of a countable coset space where the subgroup coset is fixed by the permutation and every other coset has a finite orbit and a totally wandering countable state Markov shift. Proof. Case 1: G c = G. Use Observation 3.1 to produce a countable state Markov shift (Σ G/H , σ) topologically conjugate to (G, T ). Consider the subgroup K defined by Definition 6.1. If the cardinality of K is greater than one apply Construction 6.2. This produces Σ G/K × (K/H) Z topologically conjugate to Σ G/H . We think of factoring out a full-shift on a finite number of symbols. We can ignore the full-shift and concentrate on Σ G/H . The important point is that in Σ G/K the cardinality of the predecessor or follower sets are strictly smaller than in Σ G/H . The new cardinalities of the predecessor or follower sets is the old cardinality (which was finite) divided by the cardinality of K. If the cardinality of K is one, use p(H) or f (H), and apply Construction 6.3 identifying the H cosets in p(H) or f (H). This leaves the cardinality of p(H) or f (H), for the new H unchanged. Continue to apply Constructions 6.2 or 6.3 as appropriate. Notice that since the cardinalities of p(H) and f (H) were originally finite, Construction 6.2 can be used only a finite number of times. The cardinalities of p(H) and f (H) will be reduced to one. To see this, suppose the cardinality of f (H) is greater than one and the cardinality of K is one. Then since the points with compact T -orbit closure are dense in G there is either a periodic point not equal to the identity element or a preperiodic point in H. Apply Construction 6.3 repeatedly until the cardinality of K is greater than one and then apply Construction 6.2 further reducing the cardinality of f (H). The cardinality of K will become greater than one after a finite number of applications of Construction 6.3 because of the periodic point or preperiodic point. More and more of the orbit is getting absorbed into H as Construction 6.3 is applied. Finally there will be transitions that go from H to a distinct gH and then from this gH back to H making the cardinality of K greater than one. 
If the cardinality of p(H) is greater than one we make the analogous constructions. Once the cardinalities of p(H) and f (H) are one we have the desired result. Case 2: G c = G. Form the coset space G/G c with the quotient topology. Let T w be the homeomorphism induced on G/G c by T . We will show (G/G c , T w ) is topologically conjugate to a totally wandering, countable state Markov shift. Let π denote the quotient map from G to G/G c . If E ⊆ G then π −1 (π(E)) = EG c . If U ⊆ G is open then π −1 (π(U)) = UG c is open in G and consequently π(U) is open in G/G c . This means π is an open map and G/G c is a second countable, Hausdorff, locally compact and totally disconnected coset space. The original group G acts on G/G c by left multiplication. Its relationship with the T w is T w (g(g ′ G c )) = T (g)T w (g ′ G c ). T w is expansive in the sense that that if gG c = g ′ G c , there is an n ∈ Z such that T n w (gG c ) / ∈ T n w (g ′ H/G c ). By construction, (G/G c , T w ) is totally wandering. It remains only to show that there is a Markov partition for (G/G c , T w ). This is done in two steps. First, consider G/G t with the induced homeomorphism T ′ t . Everything proved about (G/G c , T w ) holds for (G/G t , T ′ t ). To construct a Markov partition, as in the proof of Lemma 5.7, apply the proof of [HR] Theorem 7.6 in G to find a compact, open subgroup, N ⊆ H, that is normal with respect to G t . Using the proofs of Observation 3.1 and Lemma 5.7 we can insure that the coset partition of N is a Markov partition for (G, T ). Since N is normal with respect to G t N G t = G t N so π −1 (π(N )) is composed of a finite number of cosets of N and is a subgroup of G. It follows that the image of the coset partition of G is a Markov partition for (G/G t , T ′ t ). The final step is produce a Markov partition for (G/G c , T w ) from the Markov partition of (G/G t , T ′ t ). The map from G/G t to G/G c is defined by inclusion of cosets of G t in cosets of G c . From Corollary 7.3 we can conclude that the cosets with a compact T ′ t -orbit closure in G/G t form a countable discrete subset of G/G t . So, the map from G/G t to G/G c is a covering map. If the subgroup N is chosen so that the cosets gG t N , for g ∈ G c , are disjoint, the Markov partition of G/G t is mapped to a partition of G/G c which is Markov with respect to T w . Then the first case is applied to (G c , T c ). Corollary 7.5. Let T be an expansive automorphism of an l.c.t.d. group G. If T is transitive then G is compact and (G, T ) is topologically conjugate to a full-shift on a finite number of symbols. Remark 7.6. Notice what the proof of Theorem 7.4 really constructs. In case 2, three open subgroups of G are constructed, which are H, HG t and HG c . They are nested H ⊆ HG t ⊆ HG c . Their coset partitions describe the three "building blocks" of the automorphism. The partition defined by HG c on G/G c describes the wandering component. The partition defined by HG t on G c /G t describes the discrete permutative component. The partition defined by H on G t describes the transitive component. Remark 7.7. It is natural to try to formulate the notion of topological entropy for an expansive automorphism, T , of l.c.t.d. group, G. In view of Theorem 7.4 we need only consider the three building blocks of such an automorphism. The topological entropy of a full-shift on N symbols is log N. The topological entropy of a permutation on a countable set can safely be said to be zero. 
The problem is with the totally wandering countable state Markov shift. In general, there are many reasonable definitions for the topological entropy of a homeomorphism of a noncompact space. They all have difficulties and almost all can be shown to differ in specific cases. A discussion of some of the definitions and examples can be found in [HKR]. When the transition matrix for a countable state Markov shift is irreducible things are somewhat better but there are still several problems. This is discussed in [K2]. If we consider the Markov shift defined in Example 4.4 there are several problems. On the one hand, there is only one invariant probability measure which is the Dirac delta measure supported at the identity. It has zero entropy, so thinking in terms of the variational principle it is reasonable to declare the topological entropy of the Markov shift zero. On the other hand, the growth rate of the number of blocks occurring in the Markov shift the begin (or end) with any fixed symbol is 3 n , so thinking in terms of the (n, ǫ)-separated set definition of topological entropy it is reasonable to declare the topological entropy log 3. . 5 . 5Let X be a totally wandering countable state Markov shift. Suppose there is at most one fixed point and no other periodic points. If x is not the fixed point and x ∈ A , where A is an element of the partition, then σ n (x) / ∈ A for any n = 0. Lemma 5 . 8 . 58Let T , G and H be as stated. If the points whose T -orbit closure is compact are dense in the group G then the structures of the follower and predecessor sets of H are f (H) = T (H)H = T (H u loc )H s loc = H s loc T (H u loc ) = HT (H) p(H) = T −1 (H)H = T −1 (H u loc )H s loc = H s loc T −1 (H u loc ) = HT −1 (H). Both f (H) and p(H) are compact, open subgroups of G. Proof. The proof follows the ideas of the preceding argument. Consider f (H) = T (H)H = T (H u loc )H s loc . We have shown f (H) = T (H u loc )H s loc ⊆ H s loc T (H u loc ). The other inclusion follows from the same argument. The fact that f (H) and p(H) are compact, open subgroups of G follows immediately from the fact that f (H) = T (H)H = HT (H) and p(H) = T −1 (H)H = HT −1 (H). Definition 6 . 1 . 61Denote by K the nonempty, compact, open subset of G defined by K = p(H) ∩ f (H). When f (H) and p(H) are subgroups of G, K is also a subgroup. New Directions in Locally Compact Groups. Lecture Note Series. P-E. Caprace and N. Monod447Cambridge Univerity PressP-E. Caprace and N. Monod, eds., "New Directions in Locally Compact Groups", L.M.S. Lecture Note Series 447, Cambridge Univerity Press 2018. Metric Geometry of Locally Compact Groups. Y Cornulier, P De La Harpe, European Mathematical Society. 25E.M.S. Tracts in MathematicsY. Cornulier and P. de la Harpe, "Metric Geometry of Locally Compact Groups", E.M.S. Tracts in Mathematics 25, European Mathematical Society, Zurich, 2016. Metrics and Entropy for Non-compact Sets. M Handel, B P Kitchens, D J Rudolph, Israel J. of Math. 91M. Handel, B. P. Kitchens and D. J. Rudolph, Metrics and Entropy for Non-compact Sets, Israel J. of Math., (1995) 91, 253-271. Abstract Harmonic Analysis. E Hewitt, K Ross, Academic Press and Springer-VerlagE. Hewitt and K. Ross, "Abstract Harmonic Analysis", Academic Press and Springer-Verlag, 1963. Expansive Dynamics on Zero-dimensional groups. B P Kitchens, Ergod. Th. & Dynam. Sys. 7B. P. Kitchens, Expansive Dynamics on Zero-dimensional groups, Ergod. Th. & Dynam. Sys. (1987) 7, 249-261. 
Symbolic Dynamics; One sided, Two-sided and Countable State Markov Shifts. B P Kitchens, Springer-VerlagBerlin, HeidelbergB. P. Kitchens, "Symbolic Dynamics; One sided, Two-sided and Countable State Markov Shifts", Springer-Verlag, Berlin, Heidelberg, 1998. Expansive Dynamics on locally compact groups. B P Kitchens, Ergod. Th. & Dynam. Sys. 41Email address: [email protected]. P. Kitchens, Expansive Dynamics on locally compact groups, Ergod. Th. & Dynam. Sys. (2021) 41, 3768-3779. Email address: [email protected]
[]
[ "An anomalous 'butterfly'-shaped magnetoresistance loop in an alloy, Tb 4 LuSi 3", "An anomalous 'butterfly'-shaped magnetoresistance loop in an alloy, Tb 4 LuSi 3" ]
[ "K Mukherjee \nTata Institute of Fundamental Research\nHomi Bhabha Road400005ColabaMumbaiIndia\n", "Sitikantha D Das \nTata Institute of Fundamental Research\nHomi Bhabha Road400005ColabaMumbaiIndia\n", "Kartik K IyerNiharika Mohapatra \nTata Institute of Fundamental Research\nHomi Bhabha Road400005ColabaMumbaiIndia\n", "E V Sampathkumaran \nTata Institute of Fundamental Research\nHomi Bhabha Road400005ColabaMumbaiIndia\n" ]
[ "Tata Institute of Fundamental Research\nHomi Bhabha Road400005ColabaMumbaiIndia", "Tata Institute of Fundamental Research\nHomi Bhabha Road400005ColabaMumbaiIndia", "Tata Institute of Fundamental Research\nHomi Bhabha Road400005ColabaMumbaiIndia", "Tata Institute of Fundamental Research\nHomi Bhabha Road400005ColabaMumbaiIndia" ]
[]
Magnetic field (H) induced first-order magnetic transition and the associated electronic phaseseparation phenomena are active topics of research in magnetism. Magnetoresistance (MR) is a key property to probe these phenomena and, in literature, a butterfly-shaped MR loop has been noted while cycling the field, with the envelope curve lying below the virgin curve in MR versus H plots of such materials. Here, we report an opposite behavior of MR loop for an alloy, Tb 4 LuSi 3 , at low temperatures (<<20 K) in the magnetically ordered state. Such an anomalous curve reveals unexpected domination of higher resistive high-field phase in electrical conduction, unlike in other materials where conduction is naturally by low-resistive high-field phase that follows first-order transition.The observed features reveal an unusual electronic phase separation, namely involving high-resistive high-field phase and low-resistive virgin phase.
10.1103/physrevb.81.184434
[ "https://export.arxiv.org/pdf/1001.4942v1.pdf" ]
117,135,025
1001.4942
d89a72cdee5d82ec27f153cfb0a7d780c0b98d55
An anomalous 'butterfly'-shaped magnetoresistance loop in an alloy, Tb 4 LuSi 3 K Mukherjee Tata Institute of Fundamental Research Homi Bhabha Road400005ColabaMumbaiIndia Sitikantha D Das Tata Institute of Fundamental Research Homi Bhabha Road400005ColabaMumbaiIndia Kartik K IyerNiharika Mohapatra Tata Institute of Fundamental Research Homi Bhabha Road400005ColabaMumbaiIndia E V Sampathkumaran Tata Institute of Fundamental Research Homi Bhabha Road400005ColabaMumbaiIndia An anomalous 'butterfly'-shaped magnetoresistance loop in an alloy, Tb 4 LuSi 3 (Communicated for publication on 16 Dec 2010)1PACS numbers: 7530Kz; 7215Eb Magnetic field (H) induced first-order magnetic transition and the associated electronic phaseseparation phenomena are active topics of research in magnetism. Magnetoresistance (MR) is a key property to probe these phenomena and, in literature, a butterfly-shaped MR loop has been noted while cycling the field, with the envelope curve lying below the virgin curve in MR versus H plots of such materials. Here, we report an opposite behavior of MR loop for an alloy, Tb 4 LuSi 3 , at low temperatures (<<20 K) in the magnetically ordered state. Such an anomalous curve reveals unexpected domination of higher resistive high-field phase in electrical conduction, unlike in other materials where conduction is naturally by low-resistive high-field phase that follows first-order transition.The observed features reveal an unusual electronic phase separation, namely involving high-resistive high-field phase and low-resistive virgin phase. The phenomenon of magnetic phase co-existence following a travel through a first-order metamagnetic transition has been actively studied for more than a decade in many materials, particularly in the context of the physics of manganites [1]. Generally speaking, in all such magnetic materials known to date, at a given temperature (T), an externally applied magnetic field (H) transforms 'less electrically conductive' antiferromagnetic phase to a 'more conductive' ferromagnetic phase at the first-order transition, resulting in negative magnetoresistance (MR defined as {ρ(H)-ρ(0)}/ρ(0), where ρ is electrical resistivity). If the magnetic field is gradually reduced to zero, the variation of ρ with H can be hysteretic, in which case a lower value of ρ compared to that for the virgin state has been naturally observed with the value of ρ after returning the field to zero depending on the fractions of this 'supercooled' highfield phase and 'transformed' virgin phase contributing to electrical conductivity. The abovestated variation of ρ with H is found to be true irrespective of the nature of the magnetic interaction mediating magnetic ordering, that is, whether it is double-exchange mechanism as in manganites or Rudermann Kasuya Kittel Yosida interaction as in rare-earth intermetallics [see, for instance, Refs. 3,4]. Recently, we have reported [5,6] that the compound, Tb 5 Si 3 , crystallizing in Mn 5 Si 3 -type hexagonal structure (space group P6 3 /mcm) [7][8][9] interestingly attains a higher-resistive state beyond a critical magnetic field (H cr ) in the magnetically ordered state (<70 K), in contrast to commonly known behavior in metamagnetic systems. 
In this article, we report that, for a partial replacement of Tb by Lu (20% atomic percent), the electrical transport in zero-field, attained after traveling through H cr once, is dominated by the "supercooled" high-resistive state interestingly resulting in the virgin ρ(H) curve falling below the envelope curve in the entire field range of investigation even in the negative cycles of H. The results reveal that this system provides a unique opportunity to study super-cooling and electronic phase separation phenomena for a case in which high-field phase is less conductive. We have also studied a few other compositions in the series, Tb 5-x Lu x Si 3 (x= 2 and 3), to bring out that this transport behavior is unique to this alloy. Polycrystalline samples, Tb 5-x Lu x Si 3 (x= 1, 2 and 3), were prepared by arc melting stoichiometric amounts of high purity (>99.9%) constituent elements in an atmosphere of high purity argon. Single phase nature and homogeneity of the specimens were ascertained by x-ray diffraction (Cu K α ) (figure 1), scanning electron microscope and energy dispersive x-ray analysis. A comparison of x-ray diffraction patterns of the parent and Lu substituted alloys is made in figure 1; this reveals a gradual shift of diffraction lines with Lu substitution thereby establishing that all Lu indeed go to Tb site without precipitating any other phase within the detectable limits of this technique. The ρ measurements in the presence of magnetic fields (<120 kOe, T= 1.8-300 K) were performed by a commercial physical property measurements system (PPMS) (Quantum Design) and a conducting silver paint was used for making electrical contacts of the leads with the samples. We had to characterize the specimens by dc magnetization, M, (<120 kOe, T= 1.8-300 K) for a comparison with the transport behavior and this was done with the help of a commercial vibrating sample magnetometer (Oxford Instruments). We first look at how the magnetic anomalies vary with a gradual replacement of Tb by Lu. In figure 2, we show magnetization measured in a field of 5 kOe as a function of temperature for all compositions. The data for the parent compound from our past publications [5,6] is included for the benefit of the reader. As expected, the magnetic transition, as indicated by the peak temperature in M/H plots obtained in a field of 5 kOe, shifts to lower temperatures monotonically with increasing Lu concentration. The M(H) plots (see figure 3) undergo dramatic changes in the magnetically ordered state with Lu substitution. For instance, for x= 1.0, at 1.8 K, the field-induced transition is feeble and significantly broadened, and a continuous increase in slope (rather than an abrupt one reported for x= 0 near 58 kOe, see inset of figure 3) in M(H) plot beyond 20 kOe is noted in the increasing field direction. This feature is absent in the reverse leg of the M(H) curves. [The virgin curve lies outside the envelope curve as shown later, thereby suggesting that the field-induced transition is of a first-order character type, but broadened]. The change of slope was found to get further weakened as the temperature increases (not shown here). For higher concentrations of Lu (figure 3), the variation of M with H in the magnetically ordered state (e.g., at 1.8 K) does not reveal any spin reorientation effects. 
It is important to note that, for the x = 1.0 alloy, the value of M at the highest field measured (about 17.5 µ B /formulaunit at 120 kOe) is nearly the same as that obtained by linear extrapolation of the low-field data (that is, before the transition , < 40 kOe) of Tb 5 Si 3 . This could mean that only some portion of Tb ions, possibly decided by chemical inhomogeneity resulting from Lu substitution, undergo the magnetic transition at H cr in the alloy, Tb 4 LuSi 3 , and this explains why the transition is feeble. Let us look at the MR behavior (figure 4). For x= 1 at 1.8 K, we see a fairly prominent upturn in the range 30-50 kOe in the virgin curve followed by a decrease at higher fields as in the parent compound [5,6] (see inset of figure 4). This transition is observed despite the fact that it is weak and broadened in M/H. This means that the number of Tb ions undergoing this transition is sufficient enough to provide percolative electrical conduction. The fact that the fraction of Tb ions undergoing field-induced magnetism is diminished compared to that in the parent compound could be qualitatively inferred from a relatively reduced jump (about 60%) in MR at H cr . The transition field is reduced with respect to that in the parent compound (from ~ 58 to ~ 50 kOe). Apart from dilution effect of Tb sublattice, we believe that positive pressure also is responsible for this reduction based on our experiments under external pressure and negative chemical pressure induced by Ge substitution for Si [6,10]. For higher concentrations of Lu, the features due to field-induced transition are not apparent (figure 4), possibly because H cr is reduced to zero due to these factors. In fact, MR remains in the negative zone in the entire field range of investigation without any evidence for hysteresis. We will make more comments on the magnetic behavior of these alloys later in this article. The point of emphasis here is that, among the compositions we studied, the alloy, Tb 4 LuSi 3 , is the one of importance for the present purpose. Let us now look at the MR behavior while returning the field to zero after reaching 120 kOe for x= 1 to infer the nature of the high-field (and supercooled) phase. MR keeps increasing closely following the virgin curve till about 60 kOe and, at lower fields, the curve diverges from the virgin curve with this increasing trend persisting till the field is reduced to zero (as though there is an extrapolation of the high-field phase behavior). This situation is different from the parent compound in the sense that the increase in ρ in this case is cut off by a sharp fall before the field reaches zero (see inset of figure 4). The value of MR in zero field thus attained for the former is relatively larger (about 50%). If one has to observe the increasing tendency till zero field for the parent compound, an external pressure needs to be applied at 1.8 K [6]. It is important to note that MR at low fields increases essentially quadratically (see a broken line in figure 4) with decreasing H characteristic of paramagnets. We have earlier mooted [6,10] the idea of 'inverse metamagnetism (a process in which paramagnetic fluctuations are induced at H cr ) to explain sudden enhancement of positive MR in the parent compound at H cr . 
Such an 'inverse' process can happen in a situation in which the molecular field due to one magnetic site induces an antiferromagnetic component at the other site and an application of an external magnetic field (at a critical value) tends to destroy this coupling thereby resulting in magnetic fluctuations (and hence increased scattering). Clearly, if such 'a high-field phase' with magnetic fluctuations is 'supercooled' to zero-field, one should see quadratic field-dependence of MR as H 0, as observed experimentally. In view of the exotic MR behavior of Tb 4 LuSi 3 stabilized under ambient pressure conditions as described above, we considered it worthwhile to perform additional isothermal MR experiments for this composition traveling through negative values of H to emphasize on the key conclusion. We have noted that there is some degree of hysteresis of isothermal M curve persisting even at 120 kOe (see figure 5a for 1.8 K data ), but the size of the loop was found to get weaker gradually with increasing temperature. The location of the virgin curve outside envelope curve is distinctly visible as a typical feature of broadened field-induced first-order magnetic transitions. In figure 5b, we show MR data for both positive and negative cycles of H at 1.8, 10 and 25 K. Arrows and numericals are placed on the curves to serve as guides to the eyes. It is apparent from this figure that, at 1.8 K, while increasing the magnitude of the field in the negative H quadrant, there is a monotonic decrease of MR without any evidence for the fieldinduced transition (as though the conductivity occurs through the supercooled phase only). With the consideration of the data for further cycling of magnetic-field, a butterfly-shaped MR curve is evident with the virgin curve lying below this envelope curve. The observation of this shape of MR loop is unique in the field of magnetism. With increasing temperature, say to 10 K, in the positive quadrant, MR in the reverse leg tends to fall at a particular field (< ~25 kOe) at which the supercooled state tends to get transformed to the virgin state. In the zero-field reached thereafter, MR stays 'intermediate' between that expected for the virgin phase and the high-field phase. This implies that, after traveling through the transition field, at this temperature, the fraction of the high-field phase in zero-field gets reduced with respect to that for 1.8 K. As a further support for the gradual dominance of virgin phase following field-cycling at 10 K, there is an upturn in MR near -40 kOe (as in virgin curve), however, with a reduced magnitude compared to that for virgin state. A similar reduced jump appears again in the positive quadrant for further field cycling. Clearly, the virgin curve lies below this butterfly-shaped MR curve. The data at 10 K distinctly brings out that there is an unusual phase-coexistence involving high-fieldhigh-resistive phase and low-field-low-resistive phase after the returning the field to zero. At 25 K, there is a field-induced transition near 25 kOe and the ρ value in zero field after traveling through this field is nearly the same as that of the virgin curve. Thus, one is able to control the fraction of the virgin phase and the high-field phase by varying the temperature. Unfortunately, one can not obtain the relative fractions of these two phases from the corresponding isothermal magnetization curves in the event that the supercooled component is of a paramagnetic-like fluctuating phase as argued earlier [6,10]. 
Now to bring out the uniqueness of the MR behavior of x= 1 alloy among these compositions, we make relevant comments on the magnetic behavior of other Lu rich compositions, x= 2 and 3. As mentioned earlier, as Lu concentration increases, H cr , is presumably reduced zero. This means that, in these Lu richer alloys, these Tb ions should show the MR behavior of the high-field phase of x= 0 or 1 alloys, that is, a gradual drop in ρ with an increase H. This is indeed found to be the case (see figure 4). In fact, the MR curves for these higher compositions of Tb (at 1.8 K) look somewhat similar to that in the reverse leg of MR for x= 1 alloy in the sense that MR varies with H essentially quadratically (as shown in the bottom part of figure 4). Such an H-dependence of MR is a characteristic of paramagnetic-like fluctuations alone and not of magnetically ordered state. The MR curves were found to be symmetric with respect to zero field without any hysteresis. However, the M(H) curves are hysteretic at 1.8 K as shown in figure 3. The hysteretic M(H) with a gradual variation with H without any evidence for saturation at high-fields implies a complex antiferromagnetic component. Apart from this M(H) behavior, in figure 2, we note clear evidence for magnetic ordering in M(T) data. Thus, there appears to be a conflict in the conclusions from MR on the one hand and M on the other. The only way to reconcile these apparently conflicting inferences in these single-phase materials is to propose that, even for these compositions, there is an electronic phase separation due to chemical inhomogeneity. This means that, to start with (that is, in the virgin state of these compositions), there is a paramagnetic-like region responsible for MR behavior, coexisting with the magnetic region which does not dominate conductivity. Thus, this family of alloys is in general ideal to study the novel electronic phase separation in a metallic environment. Incidentally, the high-field magnetic phase undergoes changes with increasing x is evident from the fact that the magnetization value (per Tb) at 120 kOe varies non-monotonically with decreasing Tb concentration. Summarizing, the magnetoresistance behavior of Tb 4 LuSi 3 is exceptional in magnetism. That is, the magnetoresistance versus magnetic field loop for this compound exhibits butterflyshaped behavior with the virgin curve lying lower with respect to envelope curve. We have demonstrated that such a shape of MR curve can arise in the event that the high-field phase following field-induced first-order magnetic transition is (unexpectedly) more resistive electrically compared to virgin magnetic phase and that it dominates conductivity in subsequent field-cycling. The present study brings out an opportunity to probe an unusual electronic phase separation. Figure 1: (color online) X-ray diffraction patterns below 2θ= 40˚ for the alloys, Tb 5-x Lu x Si 3 . The lattice constants, a and c (± 0.004 Å) and unit-cell volume (V) are included. The curves are shifted along y-axis for the sake of clarity. Figure 2 : 2(color online) Magnetization divided by magnetic field as a function of temperature obtained in a field of 5 kOe for the alloys, Tb 5-x Lu x Si 3 (x= 0, 1, 2, and 3). The data points are shown for x= 3 only. Figure 3 : 3(color online) Isothermal magnetization at 1.8 K for the alloys, Tb 5-x Lu x Si 3 . The curve for Tb 5 Si 3 (Ref. 5) is shown in the inset. 
Figure 4 : 4(color online) Magnetoresistance as a function of externally applied magnetic field for the alloys, Tb 5-x Lu x Si 3 at 1.8K. Lines are drawn through the data points for Tb 4 LuSi 3 . A dotted line is drawn in the reverse field-cycle for this composition to highlight that MR varies essentially quadratically with H. Continuous lines for other compositions represent quadratic field dependence. Arrows and numericals (top figure) are drawn as a guide to the eyes. Figure 5 : 5(a) Isothermal magnetization at 1.8 K and (b) magnetoresistance at 1.8, 10 and 25 K for Tb 4 LuSi 3 . The lines through the data points and arrows and numericals are drawn as a guide to the eyes. E-mail address: [email protected]. E-mail address: [email protected] Colossal magnetoresistance, Charge Ordering and Related Properties of Manganese Oxides. See, C.N.R. Rao and B. RaveauWorld ScientificSingaporeSee, for reviews, "Colossal magnetoresistance, Charge Ordering and Related Properties of Manganese Oxides, edited by C.N.R. Rao and B. Raveau (World Scientific, Singapore, 1988); . E Dagotto, T Hotta, A Moreo, Phys. Rep. 344E. Dagotto, T. Hotta, and A. Moreo, Phys. Rep. 344 (2001). . Y Tokura, H Kuwahara, Y Moritomo, Y Tomioka, A Asamitsu, Phys. Rev. Lett. 763184Y. Tokura, H. Kuwahara, Y. Moritomo, Y. Tomioka, and A. Asamitsu, Phys. Rev. Lett. 76, 3184 (1996). . T Kimura, Y Tomioka, R Kumai, Y Okimoto, Y Tokura, Phys , T. Kimura, Y. Tomioka, R. Kumai, Y. Okimoto, Y. Tokura, Phys. . Rev. Lett. 833940Rev. Lett. 83, 3940 (1999); . Y Tomioka, A Asamitsu, Y Moritomo, H Kuwahara, Y Tokura, Phys. Rev. Lett. 745108Y. Tomioka, A. Asamitsu, Y. Moritomo, H. Kuwahara, and Y. Tokura, Phys. Rev. Lett. 74, 5108 (1995); . H Kuwahara, Y Tomioka, A Asamitsu, Y Morimoto, Y Tokura, Science. 270961H. Kuwahara, Y. Tomioka, A. Asamitsu, Y. Morimoto, and Y. Tokura, Science 270, 961 (1995). . M A Manekar, S Chaudhary, M K Chattopadhyay, K J Singh, S B Roy, P Chaddah, Phys. Rev. B. 64104416M.A. Manekar, S. Chaudhary, M.K. Chattopadhyay, K.J. Singh, S.B. Roy, and P. Chaddah, Phys. Rev. B 64, 104416 (2001); . K J Singh, S Chaudhary, M K Chattapadhyay, M A Manekar, S B Roy, P Chaddah, 6594419K.J. Singh, S. Chaudhary, M.K. Chattapadhyay, M.A. Manekar, S.B. Roy, and P. Chaddah, 65, 094419. . K Sengupta, E V Sampathkumaran, Phys. Rev. B. 7320406K. Sengupta and E.V. Sampathkumaran, Phys. Rev. B 73, 020406(R) (2006). . S Narayana Jammalamadaka, Niharika Mohapatra, D Sitikantha, E V Das, Sampathkumaran, Phys. Rev. B. 7960403S. Narayana Jammalamadaka, Niharika Mohapatra, Sitikantha D Das, and E.V. Sampathkumaran, Phys. Rev. B 79, 060403(R) (2009). . K Kartik, E V Iyer, Sampathkumaran, App. Phys. Lett. 95142504Kartik K Iyer and E.V. Sampathkumaran, App. Phys. Lett. 95, 142504 (2009). . K S V L Narasimhan, H Steinfink, E V Ganapathy, J. Appl. Phys. 4051K.S.V.L. Narasimhan, H. Steinfink, and E.V. Ganapathy, J. Appl. Phys. 40, 51 (1969). . I P Semitelou, Hel, J K Konguetsof, Yakinthos, J. Magn. Magn. Mater. 79131I.P. Semitelou, Hel. Konguetsof, and J.K. Yakinthos, J. Magn. Magn. Mater. 79, 131 (1989); . J Roger, M B Yahia, V Babizhetskyy, J Bauer, S Cordier, R Guertin, K Hiebl, X Rocquefelte, J Saillard, J F Halet, J. Sold State Chem. 1792310J. Roger, M.B. Yahia, V. Babizhetskyy, J. Bauer, S. Cordier, R. Guertin, K. Hiebl, X. Rocquefelte, J. Saillard, and J.F. Halet, J. Sold State Chem. 179, 2310 (2006). . F Canepa, S Cirafici, F Merlo, A Palenzona, J. Magn. Magn. Mater. 118182F. Canepa, S. Cirafici, F. Merlo, and A. Palenzona, J. Magn. Magn. Mater. 
118, 182 (1993). . Niharika Mohapatra, K Sitikantha D Das, Mukherjee, K Kartik, E V Iyer, Sampathkumaran, arXiv:0912.2275Phys. Rev. B. in pressNiharika Mohapatra, Sitikantha D Das, K. Mukherjee, Kartik K Iyer, and E.V. Sampathkumaran, Phys. Rev. B, 1 st Dec., 2009 issue (in press); arXiv:0912.2275
[]
[ "TAUTOLOGICAL CYCLES ON TROPICAL JACOBIANS", "TAUTOLOGICAL CYCLES ON TROPICAL JACOBIANS" ]
[ "Andreas ", "Farbod Shokrieh " ]
[]
[]
The classical Poincaré formula relates the rational homology classes of tautological cycles on a Jacobian to powers of the class of Riemann theta divisor. We prove a tropical analogue of this formula. Along the way, we prove several foundational results about real tori with integral structures (and, therefore, tropical abelian varieties). For example, we prove a tropical version of the Appell-Humbert theorem. We also study various notions of equivalences between tropical cycles and their relation to one another.
10.2140/ant.2023.17.885
[ "https://export.arxiv.org/pdf/1910.07165v1.pdf" ]
204,734,208
1910.07165
c105a4134482453a68986cf4282d6706812b6abc
TAUTOLOGICAL CYCLES ON TROPICAL JACOBIANS Andreas Farbod Shokrieh TAUTOLOGICAL CYCLES ON TROPICAL JACOBIANS The classical Poincaré formula relates the rational homology classes of tautological cycles on a Jacobian to powers of the class of Riemann theta divisor. We prove a tropical analogue of this formula. Along the way, we prove several foundational results about real tori with integral structures (and, therefore, tropical abelian varieties). For example, we prove a tropical version of the Appell-Humbert theorem. We also study various notions of equivalences between tropical cycles and their relation to one another. Poincaré formula gives a refinement of Riemann's theorem (see, e.g., [GH94, page 350], [ACGH85, Chapter 1, §5], or [BL04, §11.2]). It states that for 0 ≤ d ≤ g the classes of W d coincides with Θ g−d in rational homology (up to the multiplicative constant 1/(g − d)!). In other words, the subalgebra of tautological cycles in H * (J; Q) is generated by the class of Riemann theta divisor. There are also versions of the Poincaré formula over a general field. For example, Lieberman proves 'Weil cohomology' statement (see [Kle68,Remark 2A13]), and Mattuck proves a 'numerical equivalence' statement (see [Mat62,§2]). 1.2. Our contribution. Our main goal in this paper is to prove a tropical analogue of the Poincaré formula. Let Γ be a compact connected metric graph of genus g. Following [MZ08], one associates to Γ a g-dimensional polarized real torus Jac(Γ), called its tropical Jacobian. There is also a well-behaved theory of divisors, ranks, Abel-Jacobi maps, and Picard groups for metric graphs ([MZ08, GK08,BN07]). We denote the tropical Abel-Jacobi morphism by Φ : Γ d → Jac(Γ), which is well-defined up to a translation. Here Γ d denotes the set of all unordered d-tuples of points of Γ. The image W d = Φ(Γ d ) is a polyhedral subset of Jac(Γ) of pure dimension d. Exactly as in the classical situation W d may be identified with the effective locus W d ⊆ Pic d (Γ) via the Abel-Jacobi map. In [MZ08] one also finds the notion of Riemann theta divisor Θ on Jac(Γ), which is closely related to the theory of Voronoi polytopes of lattices. The polyhedral subsets W d and Θ of Jac(Γ) support tropical fundamental cycles [ W d ] and [Θ] (see §8). Recently, the notions of tropical homology, cohomology, and the cycle class map have been developed in [IKMZ19] and further studied in [GS19]. Theorem A (= Theorem 9.8 and Corollary 9.10). For every 0 ≤ i ≤ g, we have the equality [ W d ] = [Θ] g−d (g − d)! on Jac(Γ) modulo tropical homological equivalence. Moreover, the equality also holds modulo numerical equivalence. Our proof further provides explicit descriptions of the classes of W d and Θ g−d in tropical homology in terms of the combinatorics of the metric graph Γ (see §9.2 and §9.3). The Poincaré formula has several interesting, but immediate, consequences. Corollary B (= Corollaries 9.12, 9.13, and 9.15). We note that part (a) is a tropical version of Riemann's Theorem and has already been proven by Mikhalkin and Zharkov [MZ08] using other combinatorial techniques. The special case d = 1 of part (b) can also be found in [MZ08] in the context of the Jacobi inversion theorem, where again the proof is direct and combinatorial. This was essential in the development of break divisors in their paper. Part (c) classically follows from the geometric Riemann-Roch theorem for abelian varieties (see, e.g., [BL04,Theorem 3.6.3]). 
Building up to the proof of the Poincaré formula we also prove several foundational results about real tori with integral structures (and, therefore, about tropical abelian varieties) some of which had been used implicitly in previous work on the subject. Most notably, we prove the following tropical version of the Appell-Humbert Theorem: Theorem C (= Theorem 7.2). Every tropical line bundle on a real torus N R /Λ corresponds to a pair (E, l) of a symmetric form E on N R with E(N, Λ) ⊆ Z and a morphism l ∈ Hom(N R , R). Two such pairs (E, l) and (E , l ) define the same line bundle if and only if E = E and (l − l )(N) ⊆ Z. We also study the relationship between various notions of equivalence of tropical cycles. For example, we prove the following statement. Theorem D (= Propositions 5.8 and 5.11). Algebraic equivalence implies homological equivalence, and homological equivalence implies numerical equivalence on real tori admitting a 'spanning curve'. 1.3. Further directions. We believe our Poincaré formula is a first step in proving the following ambitious conjecture in tropical Brill-Noether theory. Let W r d ⊆ Pic d (Γ) denote the locus of divisor classes of degree d and rank at least r (see, e.g., [CDPR12,LPP12]). Conjecture. Assume ρ = g − (r + 1)(g − d + r) ≥ 0. Then there exists a canonical tropical subvariety Z r d ⊆ W r d of pure dimension ρ such that [Z r d ] = r ∏ i=0 i! (g − d + r + i)! [Θ] g−ρ . modulo tropical homological equivalence. Note that our Theorem A precisely establishes this conjecture in the case r = 0, in which case W 0 d = W d is pure-dimensional by [GST18,Theorem 8.3] (see also Theorem 8.2) and Z 0 d = W d . We also remark that a less precise version of this conjecture is posed as a question in [Pfl17, Question 6.2]. As stated above, it follows from the Poincaré formula that the subring of tautological cycles in rational homology is too simple to provide interesting invariants. A celebrated result of Ceresa [Cer83] implies that for a generic curve C, the class of W d is not proportional to the class of Θ modulo algebraic equivalence. Beauville in [Bea04] (see also [Pol05,Mar08,Moo09]) has studied results about algebraic equivalence. We believe that the tautological subring of the ring of tropical cycles modulo algebraic equivalence is an interesting object to study. For example, one might hope that this ring is generated by the classes of the W d 's for 1 ≤ d ≤ g − 1. We remark that a tropical version of Ceresa's result has already been established by Zharkov in [Zha15]. As stated in Theorem D, homological equivalence implies numerical equivalence on tropical abelian varieties. We expect this to be true in general on any tropical manifold. In analogy with Grothendieck's 'standard conjecture D' one might also hope that homological equivalence coincides with numerical equivalence, at least in the case of tropical abelian varieties. The analogous classical result has been established by Lieberman in [Lie68]. 1.4. The structure of this paper. In § §2-4 we review the main objects and tools needed to proof the Poincaré formula, including rational polyhedral spaces, tropical cycles, tropical homology, and tropical Jacobians. In § §5-7 we study tropical cycles, tropical homology, and line bundles on real tori. Our results here are of a more foundational nature, and include the Appell-Humbert Theorem. We also study various notions of equivalences of tropical cycles and prove Theorem D. Finally, in § §8-9 we prove the Poincaré formula. 
In §8 we show that the set W i has a fundamental cycle. In §9 we give explicit expression for both the cycle classes of the [ W i ] and of powers of the theta divisor. Comparing these expression will finish the proof of Theorem A. The results summarized in Corollary B will be direct consequences of the Poincaré formula. Acknowledgements. AG was supported by the ERC Starting Grant MOTZETA (project 306610) of the European Research Council (PI: Johannes Nicaise) during parts of this project. Notation. We will denote by N the natural numbers including 0. For an Abelian group A and a topological space X, we will denote by A X the constant sheaf on X associated to A. RATIONAL POLYHEDRAL SPACES The tropical spaces studied in this paper are real tori with integral structures, compact tropical curves, and their Jacobians. They all live inside the category of boundaryless rational polyhedral spaces. We quickly review their definition and refer to [MZ14, JRS18, GS19] for more details. 2.1. boundaryless rational polyhedral spaces. A rational polyhedral set in R n is a finite union of finite intersections of sets of the form {x ∈ R n | m, x ≤ a} , where m ∈ (Z n ) * , a ∈ R, and ·, · denotes the evaluation pairing. Any such set P comes with a sheaf Aff P of integral affine functions, which are precisely the continuous realvalued functions that are locally (on P) of the form x → m, x + a for some m ∈ (Z n ) * and a ∈ R. Definition 2.1. A boundaryless rational polyhedral space is a pair (X, Aff X ) consisting of a topological space X and a sheaf of continuous real-valued functions Aff X such that every point x ∈ X has an open neighborhood U such that there exists a rational polyhedral set P in some R n , an open subset V ⊆ P, and a homeomorphism f : U → V that induces an isomorphism f −1 (Aff P | V ) ∼ = Aff X | U via pulling back functions. Such an isomorphism f is called a chart for X. A boundaryless rational polyhedral space that is compact is called a closed rational polyhedral space. The sections of Aff X are called integral affine functions. Remark 2.2. In the literature (for example in [JRS18,GS19]), the notion of rational polyhedral spaces is used for spaces that are locally isomorphic to open subsets of rational polyhedral sets in R n , where R = R∪{∞}. This introduces a notion of boundary, which is essential for many applications. For our purposes it is sufficient to consider spaces without boundary. A boundaryless rational polyhedral space is precisely a rational polyhedral space without boundary. Definition 2.3. (i) A morphism of boundaryless rational polyhedral spaces is a continuous map f : X → Y such that pullbacks of functions in Aff Y are in Aff X . (ii) A morphism f : X → Y is called proper if it is a proper map of topological spaces, that is preimages of compact sets are compact. 2.2. Real tori with integral structures. Let N be a lattice, and let Λ ⊆ N R = N ⊗ Z R be a second lattice of full rank, that is such that the induced morphism Λ R → N R is an isomorphism. Clearly, N R gets a well-defined rational polyhedral structure from any isomorphism N ∼ = Z n . The real torus (with integral structure) associated to N and Λ is the quotient X = N R /Λ, with the sheaf of affine functions being the one induced by N R . More precisely, if π : N R → X denotes the quotient map, and U ⊆ X is open, then φ : U → R is in Aff X (U) if and only if φ • π ∈ Aff N R (π −1 U). Note that the integral affine structure on X is induced by N and not by Λ. 
The group law on a real torus X makes it a group object in the category of boundaryless rational polyhedral spaces. In particular, every x ∈ X defines an automorphism via translation. Definition 2.4. Let X be a real torus and let x ∈ X. Then the translation by x is the morphism t x : X → X, y → x + y . Figure 1) if every point has a neighborhood that is FIGURE 1. Two tropical curves embedded in R 2 . The one to the left is smooth, the one to the right is not. isomorphic to a neighborhood of the origin in a star-shaped set, that is a set of the form 0≤i≤n R ≥0 e i ⊆ R n+1 /R1 . Here n > 0 will denote the valency of the point, we denote by 1 the vector whose coordinates are all 1, and e i denotes the i-th standard basis vector. Using the integral structure on a compact tropical curve, one can assign lengths to its edges, thus defining a metric graph. Conversely, given a metric graph (a topological graph Γ equipped with an inner metric), one can define Aff Γ as the sheaf of harmonic functions on Γ, that is the sheaf of functions whose sum of incoming slopes is 0 at every point. In this way, one obtains a smooth tropical curve (Γ, Aff Γ ) (cf. [MZ08, Proposition 3.6]). The genus g of a tropical curve Γ is defined as its first Betti number, that is g = dim R H 1 (Γ; R). Remark 2.5. With our notion of tropical curves, the underlying topological graph is not allowed to have 1-valent vertices. This can be resolved by working in the larger category of polyhedral spaces with boundary mentioned in Remark 2.2 and allowing neighborhoods of ∞ in R as local models for the curves. In this way, tropical curves could have edges of infinite length that end in a 1-valent vertex. But as we will note in Remark 9.11, the results of this paper are easily generalized to apply to compact and connected smooth tropical curves with boundary as well. Example 2.6. For any positive real number j ∈ R >0 the sublattice Z j of R = Z R has full rank. Therefore, the quotient Γ = R/Z j, endowed with the integral affine structure induced by Z, is a 1-dimensional real torus. It is also a smooth tropical curve of genus 1. Its unique edge is both open and closed and it is homeomorphic to the 1-sphere. The length of this edge is given by j, which can be considered as the j-invariant of Γ [KMM08]. Example 2.7. Consider the topological space Γ obtained by gluing three intervals [0, a], [0, b], and [0, c] along their lower and upper bounds, respectively. Clearly, Γ is a topological graph with three edges and two vertices. We can view the three intervals as rational polyhedral spaces, so on the interior of the edges of Γ we have a notion of linearity. We can now define Aff Γ as the sheaf of all continuous functions whose restrictions to the interiors of the intervals are linear, and such that the sum of the outgoing slopes is 0 at the two vertices. With these choices, Γ is the smooth tropical curve associated to the metric graph with three parallel edges of lengths a, b and c. It is depicted in Figure 2. 2.4. Tropical manifolds. We recall that every loop-free matroid M on a ground set E(M) has an associated tropical linear space L M , which is a rational polyhedral set in R E(M) /R1. We will only consider very special linear spaces and therefore refrain from recalling their precise definition. For our purposes, it suffices to say that R n is a tropical linear space for any n, and the 1-dimensional tropical linear spaces are precisely the star-shaped sets appearing in the definition of smooth tropical curves in §2.3. Definition 2.8. 
A boundaryless rational polyhedral space X is called a boundaryless tropical manifold if it can be covered by charts X ⊇ U ∼ = − → V ⊆ L , where U is an open subset of X and V is an open subset of a tropical linear space L. Since both R n and star-shaped sets are tropical linear spaces, it follows that real tori and smooth tropical curves are boundaryless tropical manifolds. 2.5. The cotangent sheaf. Definition 2.9. Let X be a boundaryless rational polyhedral space. (i) The quotient Aff X /R X is called the cotangent sheaf and is denoted by Ω 1 X . (ii) The integral tangent space at a point x ∈ X is defined as T Z x X = Hom(Ω X,x , Z). (iii) The tangent space at a point x ∈ X is defined as T x X = (T Z x X) R ∼ = Hom(Ω X,x , R). Example 2.10. Let X = N R /Λ be a real torus. Then Aff X has no non-constant global sections because there is no globally defined non-constant integral affine function on N R that is Λ-periodic. On the other hand, the quotient Aff X /R X = Ω 1 X is isomorphic to the constant sheaf N X . By definition, a morphism of boundaryless rational polyhedral spaces f : X → Y induces a morphism f −1 Ω 1 Y → Ω 1 X . Taking stalks and dualizing induces morphisms on b c a FIGURE 2. A tropical curve of genus 2 with its local charts and edge lengths. tangent spaces d x f : T x X → T f (x) Y for all x ∈ X that map the integral tangent spaces on X to the integral tangent spaces on Y . TROPICAL CYCLES AND THEIR TROPICAL CYCLE CLASSES We briefly recall the definitions of tropical cycles, tropical (co)homology, and the tropical cycle class map connecting the two. We closely follow [AR10,FR13,Sha13] regarding tropical cycles and [IKMZ19, MZ14, JRS18, GS19] regarding tropical (co)homology and the tropical cycle class map. 3.1. Tropical cycles. For a boundaryless rational polyhedral space X, let us denote by X reg its open subset of points x ∈ X that have a neighborhood isomorphic (as boundaryless rational polyhedral spaces) to an open subset of R n for some n ∈ N. A tropical k-cycle is a function A : X → Z such that its support |A| = {x ∈ X | A(x) = 0} is either empty or a purely k-dimensional polyhedral subset of X, A is nonzero precisely on the set |A| reg , on which it is locally constant, and it satisfies the so-called balancing condition. The latter is a local condition that is well-known for X = R n , to which the general case can be reduced. As we will only need it implicitly, we refer to [AR10] for details. The sum of two tropical k-cycles on X, considered as a sum of Z-valued functions, is not a tropical k-cycle again in general. However, there exists a unique tropical k-cycle on X that agrees with the sum on the complement of an at most (k − 1)-dimensional polyhedral subset of X. This makes the set Z k (X) into an Abelian group. A tropical cycle A is said to be effective if it is everywhere nonnegative. If f : X → Y is a proper morphism of boundaryless rational polyhedral spaces, it induces a push-forward f * : Z k (X) → Z k (Y ) of tropical cycles. If A ∈ Z k (X) is a tropical cycle, then f * A will be zero outside of the subset ( f |A|) k ⊆ f |A| where the local dimension of f |A| is k. There exists a dense open subset U ⊆ ( f |A|) k such that for each y ∈ U the fiber f −1 {y} is finite and contained in |A| reg , and for each such y ∈ U we have f * A(y) = ∑ x∈ f −1 {y} | coker d x f | . Note that the finiteness of coker d x f follows from the finiteness of the fiber over y. If X is compact then one can take Y to be a point. 
Identifying the tropical 0-cycles on a point with Z, the push-forward then defines a morphism Z 0 (X) → Z. The image of a tropical 0-cycle A under this morphism is called the degree of A, and it is denoted by X A. If X and Y are boundaryless rational polyhedral spaces, and A ∈ Z k (X) and B ∈ Z l (X), then the cross product A × B : X ×Y → Z, (x, y) → A(x) · B(x) of A and B is a tropical cycle again. A rational function on a boundaryless rational polyhedral space X is a continuous function φ : X → R such that φ is piecewise affine with integral slopes in every chart. As this is a local condition, rational functions define a sheaf M X of Abelian groups. The group of tropical Cartier divisors on X is given by CDiv(X) = Γ(X, M X / Aff X ). For every φ ∈ Γ(X, M X ) we denote its image in CDiv(X) by div(φ ), and refer to it as the associated principal divisor. There exists natural bilinear map CDiv(X) × Z k (X) → Z k−1 (X), the intersection pairing of divisors and tropical cycles. Note that a boundaryless rational polyhedral space X does not automatically have natural fundamental cycle, that is there is no canonical element in Z * (X) in general. Definition 3.1. We will say that a boundaryless rational polyhedral space X has a fundamental cycle if X is pure-dimensional and the extension by 0 of the constant function with value 1 on X reg defines a tropical cycle. In that case we will denote this tropical cycle by [X], and refer to it as the fundamental cycle of X. We will say that a Cartier divisor D ∈ CDiv(X) on a tropical space X with fundamental cycle is effective, if its associated Weil divisor [D] := D · [X] is effective. If X is a tropical manifold then it has a fundamental cycle [X], which is the unity of the tropical intersection product on Z * (X). The tropical intersection product is compatible with intersections with Cartier divisors in the sense that D · A = [D] · A for every Cartier divisor D ∈ CDiv(X) and tropical cycle A ∈ Z * (X). Furthermore, the morphism CDiv(X) → Z dim(X)−1 (X), D → [D] is an isomorphism (see [Fra13,Corollary 4.9]). If X is locally isomorphic to open subsets of R n , then a Cartier divisor D ∈ CDiv(X) is effective if and only if it is locally given by concave rational functions. This follows from the fact that every tropical hypersurface of R n is realizable. Here, a rational function is concave if it is the restriction of a concave rational function on R n in sufficiently small local charts. Also note that concave functions appear rather than convex ones, because we are using the "min"-convention (see Remark 3.4). Line bundles. A tropical line bundle on a boundaryless rational polyhedral space X is an Aff X -torsor. More geometrically, it is a morphism Y → X of boundaryless rational polyhedral spaces such that locally on X there are trivializations Y ∼ = X × R, where two such trivializations are related via the translation by an integral affine function. More precisely, if two trivializations are defined over U ⊆ X, then the transition between them is of the form U × R → U × R, (u, x) → (u, x + φ (u)) for some φ ∈ Γ(U, Aff X ). The standard argument usingČech cohomology shows that the set of isomorphism classes of tropical line bundles on X is in natural bijection to H 1 (X, Aff X ). In particular, isomorphism classes of tropical line bundles form a group. A rational section of a tropical line bundle Y → X is a continuous section that is given by a rational function in all trivializations. 
Exactly as in algebraic geometry, every tropical Cartier divisor D on X defines a line bundle L (D) on X that comes with a canonical rational section. This defines a bijection between CDiv(X) and isomorphism classes of pairs (L , s) of a tropical line bundle L on X and a rational section s of L . 3.3. Homology and cohomology. Let X be a boundaryless rational polyhedral space. To define the tropical homology and cohomology groups, we need sheaves Ω p X of tropical p-forms for p > 0. On the open subset X reg it is clear that we would like Ω p X to be isomorphic to p Ω 1 X . However, this is not a suitable definition globally because in general p Ω 1 X can be nonzero even for p > dim(X) (see [GS19, Example 2.9]). One thus defines Ω p X as the image of the natural map p Ω 1 X → ι * p Ω 1 X | X reg , where ι : X reg → X is the inclusion. The singular tropical homology groups are defined similar to the integral singular homology groups, but with different coefficients. More precisely, there is a coarsest stratification of X such that the restrictions of the constructible sheaf Ω 1 X is locally constant on all the strata, and only singular simplices are allowed that respect this stratification in the sense that each of their open faces is mapped into a single stratum. The (p, q)-th chain group is then defined as C p,q (X) = σ : ∆ q →X allowable Hom(Ω p X , Z σ (∆ q ) ) , where ∆ q denotes the standard q-simplex, the sum runs over all q-simplices respecting the stratification, and Z σ (∆ q ) denotes the constant sheaf associated to Z on σ (∆ q ). With the usual boundary operators this defines chain complexes C p,• and the tropical homology groups which are defined as H p,q (X) = H q (C p,• (X)). Dualizing (over Z) the chain complexes C p,• (X) yields cochain complexes C p,• (X) whose cohomology are the tropical cohomology groups H p,q (X) = H q (C p,• (X)). There is a natural isomorphism H p,q (X) ∼ = H q (X, Ω p X ) . 3.4. The first Chern class map. The quotient map d : Aff X → Ω 1 X of sheaves on a boundaryless rational polyhedral space induces a morphism c 1 := H 1 (d) : H 1 (X, Aff X ) → H 1 (X, Ω 1 X ) ∼ = H 1,1 (X) called the first Chern class map from the group of all tropical line bundles on X to the (1, 1)-tropical cohomology group of X. Using the first Chern class map, any divisor D ∈ CDiv(X) has an associated (1, 1)-cohomology class c 1 (L (D)). 3.5. The tropical cycle class map. Exactly as in algebraic geometry, there is a tropical cycle class map that assigns a class in tropical homology to every tropical cycle. More precisely, on any closed rational polyhedral space X, there exist morphisms cyc : Z k (X) → H k,k (X) for every k ∈ N. We will only need an explicit description of the tropical cycle class map for 1-dimensional tropical cycles, that is when k = 1. If A ∈ Z 1 (X), then its support |A| is a compact (not necessarily smooth) tropical curve. For each open edge e of |A| choose a generator η e ∈ T x |A| for some x ∈ e. By taking parallel transports of η e along e we actually obtain a generator for all T y |A| with y ∈ e. Therefore, η e defines a morphism Ω 1 |A| → Z e (recall that Z e denotes the constant sheaf on e associated to Z), which can be uniquely extended to a morphism Ω 1 |A| → Z e . Precomposing with the morphism Ω 1 X → Ω 1 |A| defined by the inclusion |A| → X, one obtains a morphism η e ∈ Hom(Ω 1 X , Z e ). To complete the construction, one has to choose a homeomorphism γ e : ∆ 1 → e that parametrizes e in the direction specified by η e . 
Let us denote the element in C 1,1 (X) defined by γ e and η e by γ e ⊗ η e . Then cyc(A) is represented by the cycle ∑ e A(e) · γ e ⊗ η e ∈ C 1,1 (X) , where the sum runs over all open edges of |A| and A(e) denotes the weight of the tropical 1-cycle A on e. Example 3.2. Let Γ be the graph from Example 2.7, and denote its edges by e 1 , e 2 , and e 3 . Let v and w be the vertices of Γ and orient all edges from v to w. Let η i be the primitive tangent direction on e i in the chosen direction. Then cyc[Γ] is represented by the (1, 1)-chain γ 1 ⊗ η 1 + γ 2 ⊗ η 2 + γ 3 ⊗ η 3 , where γ i is any path that parametrizes e i from v to w. This is indeed a cycle. Its boundary is given by w ⊗ (η 1 + η 2 + η 3 ) − v ⊗ (η 1 + η 2 + η 3 ) , which vanishes: locally at v (respectively at w), the graph Γ looks like the star-shaped set depicted to the right in Figure 1, and the vectors η i are the (negatives of the) primitive generators of the rays of the star-shaped set. Since these sum to 0, the boundary is 0. 3.6. Identities in tropical homology. In [GS19] we studied various operations on tropical homology and cohomology and showed how to carry over identities known for singular homology to the tropical setting. For example, there are pull-backs of cohomology classes and push-forwards of homology classes along morphisms of boundaryless rational polyhedral spaces, there is a cup product " " on tropical cohomology and a cap product " " that makes the tropical homology groups a module over the tropical cohomology ring. There also are cross products "×" of both homology and cohomology classes. We will refer the reader to [GS19] for the details regarding these operations. For the reader's convenience, we have summarized the most important identities for the tropical cycle class map in the following theorem: Theorem 3.3 ([GS19]) . Let X, Y , and Z be closed rational polyhedral spaces, let f : X → Z be a proper morphism, let A ∈ Z * (X), B ∈ Z * (Y ) and D ∈ CDiv(X). Then we have If X is a closed rational polyhedral space, then the morphism from X to a point defines a morphism H 0,0 (X) → Z by identifying the (0, 0)-tropical homology group of a point with Z. The image of a tropical cycle α ∈ H 0,0 (X) is called the degree of α and denoted by X α. It is a direct consequence of the first equation in Theorem 3.3 that cyc( f * A) = f * cyc(A) ,X A = X cyc(A) for every A ∈ Z 0 (X). If X is a closed tropical manifold, then homology and cohomology are dual to each other, in the sense that the morphism H * , * (X) → H * , * (X), c → c cyc[X] is an isomorphism [JRS18,GS19]. In this context one says that c is Poincaré dual to c cyc[X]. Poincaré duality allows to define an intersection product for tropical homology classes on a closed tropical manifold X. More precisely, if α, β ∈ H * , * (X), and c ∈ H * , * (X) is Poincaré dual to α, then one defines α · β := c β . Remark 3.4. Both the intersection pairing between tropical Cartier divisors and tropical cycles, and the tropical cycle class map are not entirely free of choices. The intersection pairing depends on whether one measures incoming or outgoing slopes. When measuring incoming slopes, concave functions define effective principal divisors, whereas when measuring outgoing slopes, convex functions define effective principal divisors. Since minima of linear functions are concave, and maxima of linear functions are convex, one speaks of the "min"-and "max"-conventions, respectively. 
The cycle class map, on the other hands, depends on a consistent choice of isomorphisms k N ∼ = − → H k (N R , N R \ {0}; Z) for any lattice N of any rank k (see [GS19,§5]). If one wants Theorem 3.3 to hold, one has to make the choices involved in the definitions of the intersection pairing and the cycle class map consistently. In other words, the choice of either "min"-or "max"-convention will determine the sign of the cycle class map. In this paper, we will choose the "min"-convention, because it makes the formulas in §9 nicer, but the same formulas hold true in the "max"-convention after appropriately adjusting the sign. TROPICAL JACOBIANS In this section we review the definition of tropical Jacobians, closely following [MZ08]. Let Γ be a compact and connected smooth tropical curve. We write Ω Z (Γ) := H 0 (Γ, Ω 1 Γ ) for the group of global integral 1-forms, and Ω R (Γ) := Ω Z (Γ) ⊗ Z R for the group of (real) 1-forms. A 1-form on Γ is completely determined by its restrictions to the edges of Γ, and these restrictions are constant and completely determined by a real number and an orientation of the edge: it will be of the form rdx, where r ∈ R, and x is the chart on the edge determined by the orientation. Extracting the data of its restrictions to the edges out of a 1-form gives rise to a natural morphism Ω R (Γ) → C 1 (Γ; R). Since the outgoing primitive direction vectors at any point of Γ (in any chart around that point) sum to 0, the chains in the image of Ω R (Γ) will in fact be 1-cycles, that is they are mapped to 0 by the boundary morphism. It is not hard to see that the induced map Ω R (Γ) → H 1 (Γ; R) is an isomorphism. Remark 4.1. Another way to think of the elements of Ω Z (Γ) is as integral flows. Given ω ∈ Ω Z (Γ), we have already observed that the restriction ω| e to an open edge e is determined by a direction and a nonnegative integer. Conversely, a collection of directions and nonnegative integers for every edge in Γ will define a global 1-form if and only if this collection defines a flow. Global 1-forms on Γ can be integrated on singular 1-chains in Γ. We obtain a pairing (4.1) Ω R (Γ) ×C 1 (Γ; R) → R, (ω, c) → c ω , which can be shown to induce a morphism H 1 (X; R) → Ω R (Γ) * . Together with the isomorphism H 1 (Γ; R) ∼ = Ω R (Γ) from above, we obtain a natural bilinear form E on H 1 (Γ; R), which can be described explicitly. Namely, for two 1-cycles c 1 and c 2 , the pairing E(c 1 , c 2 ) is the weighted length of the intersection of c 1 and c 2 , where an oriented line segment occurring in c 1 and c 2 with weights λ and µ, respectively, contributes with weigh λ · µ. This bilinear form is clearly symmetric and positive definite. In particular, it is a perfect pairing, and hence the morphism H 1 (X; R) → Ω R (Γ) * we used to define it is an isomorphism. Via this isomorphism H 1 (Γ; Z) becomes a sublattice of Ω R (Γ) * of full rank, and the positive definite symmetric bilinear form E induces a positive definite symmetric bilinear form Q on Ω R (Γ) * . The full-rank sublattice of Ω R (Γ) * that has integer pairings with the elements of H 1 (X, Z) with respect to Q is precisely Ω Z (Γ) * . Definition 4.2. The tropical Jacobian associated to the compact and connected smooth tropical curve Γ is the pair consisting of the real torus Jac(Γ) := Ω R (Γ) * /H 1 (Γ; Z) and the bilinear form Q that is defined on the universal cover Ω R (Γ) * of Jac(Γ). Remark 4.3. By the universal coefficient theorem, we also have an isomorphism H 1 (Γ; R) ∼ = H 1 (Γ; R) * . 
Together with the isomorphism Ω R (Γ) ∼ = H 1 (Γ; R) from above one obtains an isomorphism H 1 (Γ; R) ∼ = Ω R (Γ) * . It is therefore also possible to write the Jacobian of Γ as the quotient H 1 (Γ; R)/H 1 (Γ; Z). Now fix a base point q ∈ Γ. Given any other point p ∈ Γ there is a path γ p connecting q to p. As any other path from q to p differs from γ p by an integral 1-cycle, the class of γ p in C 1 (Γ; Z)/B 1 (Γ; Z) /H 1 (Γ; Z) is independent of the choice of γ p . Here, B 1 (Γ; Z) denotes the group of 1-boundaries. Using the pairing (4.1), we obtain an element in Jac(Γ) that only depends on the choice of q. This defines the Abel-Jacobi map Φ q : Γ → Jac(Γ) . Let p ∈ Γ, and let U be a sufficiently small connected open neighborhood of p. More precisely, U should be connected and U \ {p} should be disjoint from V (Γ). Then for every p ∈ U \ {p} there exists r > 0 and a geodesic path γ : [0, r] → U from p to p . Let e denote the unique open edge e of Γ containing p , and let η denote the primitive integral tangent vector on e pointing from p towards p . If x is any lift of Φ q (p) to the universal cover Ω R (Γ) * , then by definition, p lifts to x + r · δ , where δ is given by δ : Ω R (Γ) → R, ω → 1 r γ ω = ω| e , η . If we identify Ω R (Γ) with flows on Γ (as in Remark 4.1) then δ is the map assigning to a flow ω on Γ its flow on e in the direction specified by η. In particular, δ is integral, that is δ ∈ Ω Z (Γ) * . This shows that Φ q is, in fact, a morphism of boundaryless rational polyhedral spaces, and that its action on the tangent space of e is given by δ = (dΦ q )(η) . Example 4.4. Let Γ be the smooth tropical curve associated to the metric graph that consists of two vertices which are connected by three edges of length 1 (the graph of Example 2.7 with a = b = c = 1). It is depicted to the left in Figure 3. We choose one of the vertices as the base point q and orient the edges of Γ such that one edge, call it e 3 is oriented towards q and the other two edges, call them e 1 and e 2 , are oriented away from q. The orientations define two simple closed loops c 1 and c 2 in Γ, where c i first follows e i and then e 3 . These loops define a basis for H 1 (Γ; R), and hence for Ω Z (Γ). Let δ 1 , δ 2 ∈ Ω Z (Γ) * be the dual basis. Since the signed length of c i ∩ c j is 2 if i = j and 1 if i = j, the injection H 1 (Γ; Z) → Ω R (Γ) * maps c 1 to (2, 1) and c 2 to (1, 2) in the coordinates defined by the basis δ 1 , δ 2 . If follows that Jac(Γ) = R 2 Z 1 2 + Z 2 1 , where the integral structure is given by Z 2 ⊆ R 2 . The Abel-Jacobi map sends q to 0 in this quotient. If γ 1 is the geodesic path along e 1 that starts at q, then (dΦ q )(γ 1 (t)) = t · 1 0 + H 1 (Γ; Z) for all t ∈ [0, 1] because the path from q to γ 1 (t) along e 1 intersects c 1 in an edge segment of length t, and c 2 in a point (an edge segment of length 0). Similarly, if γ 2 is a geodesic path along e 2 , and γ 3 is a geodesic path along e 3 , both starting at q, then (dΦ q )(γ 2 (t)) = t · 0 1 + H 1 (Γ; Z) , and (dΦ q )(γ 3 (t)) = t · −1 −1 + H 1 (Γ; Z) for all t ∈ [0, 1]. ALGEBRAIC, HOMOLOGICAL, AND NUMERICAL EQUIVALENCE In this section we study different notions of equivalence for tropical cycles on boundaryless rational polyhedral spaces, with a focus on real tori. 5.1. Algebraic equivalence. Following [Zha15], we make the following definition. Definition 5.1. Let X be a boundaryless rational polyhedral space. 
Let R alg be the subgroup of Z * (X) generated by tropical cycles of the form p * (q * (t 0 − t 1 ) ·W ) , where W is a tropical cycle on X × Γ for some compact and connected smooth tropical curve Γ containing the two points t 0 ,t 1 ∈ Γ, and p : X × Γ → X and q : X × Γ → Γ are the natural projections. Note that because Γ is smooth, the difference t 0 − t 1 defines a tropical Cartier divisors on Γ (see §3.1) and tropical Cartier divisors can be pulled-back along any morphism of boundaryless rational polyhedral spaces. We say that two tropical cycles A, B ∈ Z * (X) are algebraically equivalent, denoted by A ∼ alg B, if their classes in Z * (X)/R alg coincide. Proposition 5.2. Let X be a boundaryless tropical manifold, let A, B,C ∈ Z * (X) be tropical cycles on X, and assume that A ∼ alg B. Then A ·C ∼ alg B ·C . Proof. By the definition of algebraic equivalence, we may assume that there exists a compact and connected smooth tropical curve Γ, two points t 0 ,t 1 ∈ Γ, and a tropical cycle W on X × Γ such that A − B = p * (q * (t 0 − t 1 ) ·W ) , where p and q denote the projection. Using the projection formula [FR13,Theorem 8.3 (1)], we see that A ·C − B ·C = (A − B) ·C = p * (q * (t 0 − t 1 ) · (W · (C × Γ))) . Applying the definition of algebraic equivalence with W replaced by W · (C × Γ), we obtain that A ·C and B ·C are algebraically equivalent. Definition 5.3. Let X = N R /Λ be a real torus. A spanning curve for X is a 1-dimensional polyhedral subset Γ ⊆ X such that there exists an effective tropical 1-cycle on X with support Γ, and such that the parallel transports to 0 of the direction vectors of the edges of Γ span T 0 X ∼ = N R . If such a curve exists, we say that X admits a spanning curve. Proposition 5.4. Let Γ be a compact and connected smooth tropical curve. Then its Jacobian Jac(Γ) admits a spanning curve. Proof. For any choice of base point q ∈ Γ, the image Φ q (Γ) of Γ under the Abel-Jacobi map is the support of the effective cycle Φ q * [Γ]. Using the explicit description given in §4 of the tangent directions in Jac(Γ) of the images of the edges of Γ, it follows directly that Φ q (Γ) is a spanning curve for Jac(Γ). Proposition 5.5. Let X = N R /Λ be a real torus that admits a spanning curve Γ. Let x ∈ X, and recall that we denote by t x : X → X the translation by x. Then for every tropical cycle A ∈ Z * (X) we have A ∼ alg (t x ) * A . Proof. By the assumptions on Γ, the point x is in the subgroup of X generated by the differences y − y for pairs y, y ∈ Γ contained in the same edge of Γ. Therefore, it suffices to show that (t x ) * A ∼ alg (t x ) * A for any pair of points x, x contained in the same edge of Γ. Let Γ x be the component of Γ containing x. Even though Γ x is not smooth, it still determines a metric graph G. After a choice of weights that makes Γ into a tropical 1-cycle, the metric graph G is equipped with weights m : E(G) → Z >0 induced by the weights on Γ. Let G be the metric graph obtained from G replacing each edge e of G by an edge of length (e)/m(e), where (e) denotes the length of e in the metric graph G. If Γ denotes the smooth tropical curve associated to the graph G (see §2.3), then there is a natural morphism f : Γ → |Γ x | of rational polyhedral spaces, which is a bijection of the underlying spaces. Let t,t ∈ Γ be the unique points with f (t) = x and f (t ) = x . Now let g : X × Γ → X × Γ, (x, s) → (x + f (s), s) , and denote W = g * (A × [Γ]) ∈ Z * (X × Γ) . 
By construction, if p : X × Γ → X and q : X × Γ → Γ denote the projections, we have p * (q * (t) ·W ) = (t x ) * (A) and p * (q * (t ) ·W ) = (t x ) * (A) , finishing the proof. Homological equivalence. Definition 5.6. Let X be a closed rational polyhedral space. We say that two tropical cycles A and B are homologically equivalent, if cyc(A) = cyc(B). Example 5.7. Let Γ be a compact and connected smooth tropical curve. By definition, we have H 0,0 (Γ) ∼ = H 0 (Γ; Z) ∼ = Z. It follows that the degree morphism H 0,0 (Γ) → Z is an isomorphism. Therefore, the homological equivalence class of a tropical 0-cycle is uniquely determined by its degree. Let D ∈ CDiv(Γ) be a Cartier divisor on Γ. By Proof. By the definition of algebraic equivalence, we may assume that there exists a compact and connected smooth tropical curve Γ, two points t 0 ,t 1 ∈ Γ, and a tropical cycle W on X × Γ such that A − B = p * (q * (t 0 − t 1 ) ·W ). Since t 0 − t 1 has degree 0, we have c 1 (L (t 0 − t 1 )) = 0 (see Example 5.7). Therefore, by Theorem 3.3, we have cyc(A) − cyc(B) = cyc(A − B) = p * q * c 1 (L (t 0 − t 1 )) cyc(W ) = 0 , finishing the poof. Theorem 5.9. Let X be a real torus admitting a spanning curve, and let A, B ∈ Z * (X) be tropical cycles. Then we have cyc(A · B) = cyc(A) · cyc(B) . Proof. As both sides are bilinear in A and B, we may assume that A and B are puredimensional, say of dimensions k and l, respectively. By Proposition 5.5 and Proposition 5.8, we may replace A by a general translate. Therefore, we can assume that A and B meet transversally, that is that |A| ∩ |B| is either empty or of pure dimension k + l − n, cyc(A · B) = cyc (a · H 1 · · · H n−k ) · (b · H 1 · · · H n−l ) · [X] = = a · c 1 (L (H 1 )) . . . c 1 (L (H n−k )) b · c 1 (L (H 1 )) . . . c 1 (L (H n−l )) cyc[X] = = α cyc(B) = cyc(A) · cyc(B) , where the last equality holds by the definition of the intersection product of tropical homology classes. This finishes the proof. Numerical equivalence. Definition 5.10. Let X be a closed tropical manifold. Then two tropical cycles A, B ∈ Z * (X) on X are numerically equivalent, for which we write A ∼ num B, if for every tropical cycle C ∈ Z * (X) on X we have X A ·C = X B ·C . Proposition 5.11. Let X be a real torus admitting a spanning curve, and let A, B ∈ Z * (X) with A ∼ hom B. Then A ∼ num B. Proof. Let C ∈ Z * (X). By Theorem 5.9, we have X A ·C = X cyc(A ·C) = X cyc(A) · cyc(C) = = X cyc(B) · cyc(C) = X cyc(B ·C) = X B ·C , from which the assertion follows. TROPICAL HOMOLOGY OF REAL TORI Let X = N R /Λ be a real torus. Then the group law and the tropical cross product endow the tropical homology groups with the additional structure of the Pontryagin product: Definition 6.1. Let X be a real torus with group law µ : X × X → X. The tropical Pontryagin product is defined as the pairing (α, β ) → α β := µ * (α × β ) , where α and β are either elements of Z * (X) or of H * , * (X). We thus obtain morphisms : Z i (X) ⊗ Z Z k (X) → Z i+k (X) : H i, j (X) ⊗ Z H k,l (X) → H i+k, j+l (X) for all choices of natural numbers i, j, k, l. It is not hard to see that makes Z * (X) into a graded abelian group, and H * , * (X) into a bigraded abelian group. Proposition 6.2. Let X be a real torus. Then the tropical cycle class map respects Pontryagin products, that is the diagram Z i (X) ⊗ Z Z j (X) Z i+ j (X) H i,i (X) ⊗ Z H j, j (X) H i+ j,i+ j (X) cyc ⊗ cyc cyc is commutative for all i, j ∈ Σ Proof. 
Since the Pontryagin product is defined as the push-forward of a cross product, this follows immediately from the compatibility of the tropical cycle class map with cross products and push-forwards stated in Theorem 3.3. For the real torus X = N R /Λ, we will now describe the group H * , * (X) and the Pontryagin product on it explicitly. First we note that the sheaf Ω 1 X is the constant sheaf M X associated to the lattice M = Hom(N, Z), and since X reg = X, we have Ω k X ∼ = k M X for all integers k. By definition of singular tropical homology, we thus have a canonical graded isomorphism H * , * (X) ∼ = H * X; * N ∼ = H * (X; Z) ⊗ Z * N. The restriction of the Pontryagin product to the first factor H * (X; Z) ∼ = H 0, * (X) is precisely the classical Pontryagin product one obtains when one views X is a topological group. But, as a topological group, X is a product of 1-spheres. So using the Künneth theorem one sees that H * (X; Z) is isomorphic to H 1 (X; Z). This is, in fact, an isomorphism of rings, the multiplication of H * (X; Z) being the Pontryagin product. Finally, because X is the quotient of its universal covering space N R by the action of Λ, we obtain a natural isomorphism H 1 (X; Z) ∼ = Λ. If a tropical 1-cycle in H 1 (X; Z) is represented by a loop γ : [0, 1] → X then the corresponding element of Λ is given by γ(1) − γ(0) for any lift γ : [0, 1] → N R of γ to the universal cover. We obtain an isomorphism (6.1) H * , * (X) ∼ = * Λ ⊗ Z * N . It is straightforward to check that with this identification, the tropical Pontryagin product on H * , * (X) satisfies (α ⊗ ω) (β ⊗ ξ ) = (α ∧ β ) ⊗ (ω ∧ ξ ) . By a similar argument, one obtains a description for the tropical cohomology of X that is dual to the description of tropical homology in (6.1). More precisely, one sees that (6.2) H * , * (X) ∼ = * Λ * ⊗ Z * M , and that, with this identification, the tropical cup product on H * , * (X) satisfies (α ⊗ ω) (β ⊗ ξ ) = (α ∧ β ) ⊗ (ω ∧ ξ ) . With the descriptions of the tropical homology and the tropical cohomology given in (6.1) and (6.2), the tropical cap product can also be expressed explicitly. More precisely, we have (6.3) (α ⊗ ω) (β ⊗ ξ ) = (α β ) ⊗ (ω ξ ) , where " " denotes the interior product on the exterior algebra. In bidegree (1, 1) our description of the tropical cohomology of X produces an isomorphism H 1,1 (X) ∼ = Λ * ⊗ Z M . We can further identify the right side with Hom(Λ ⊗ Z N, Z), that is with bilinear forms on N R that have integer values on Λ × N. Convention 6.3. From now on we will always identify, according to the identifications in this section, the cohomology group H 1,1 (N R /Λ) with the group of bilinear forms on N R that have integer values on Λ × N. LINE BUNDLES ON REAL TORI 7.1. Factors of automorphy. Let N be a lattice, let Λ ⊆ N R be a lattice of full rank, and let X = N R /Λ be the real torus associated to N and Λ. To describe the tropical line bundles on X we recall from §3.2 that they form a group, canonically identified with H 1 (X, Aff X ). Invoking the results from [Mum08, Appendix to §2], together with the fact that the pull-back π −1 Aff X ∼ = Aff N R along the quotient morphism π : N R → N R /Λ = X has trivial cohomology on N R , we obtain the identification H 1 (X, Aff X ) ∼ = H 1 (Λ, Γ(N R , Aff N R )) , where the right side is the first group cohomology group of Γ(N R , Aff N R ), equipped with its natural Λ-action. 
This is very much akin to the case of complex tori: an element of H 1 (Λ, Γ(N R , Aff N R )) can be represented by a tropical factor of automorphy, that is a family of integral affine functions indexed by Λ, that, if we represent it as a function a : Λ × N R → R, satisfies (7.1) a(λ + µ, x) = a(λ , µ + x) + a(µ, x) for all µ, λ ∈ Λ and x ∈ N R . Two factors of automorphy represent the same element of H 1 (Λ, Γ(N R , Aff N R )) if and only if they differ by a factor of automorphy of the form (λ , x) → l(x + λ ) − l(x) for some integral affine function l ∈ Γ(N R , Aff N R ), which happens if and only if they differ by a factor of automorphy of the form (λ , x) → m R (λ ) ,a E,l (λ , x) = l(λ ) − E(λ , x) − 1 2 E(λ , λ ) is a tropical factor of automorphy. We denote the associated tropical line bundle on X by L (E, l). The following proposition shows that the first Chern class recovers E from L (E, l). Proposition 7.1. Let E be a symmetric bilinear form on N R with E(Λ × N) ⊆ Z, and let l ∈ Hom(Λ, R). Then c 1 (L (E, l)) = E, where we identify H 1,1 (X) with the group of bilinear forms on N R with integer values on Λ × N according to Convention 6.3. Proof. Let U = {U α } α be an open cover of X such that each preimage π −1 U α is a union of disjoint open subsets of N R that map homeomorphically onto U α . For each α, choose a continuous section s α : U α → π −1 U α of π. Furthermore, we choose a (necessarily non-continuous) section s : X → N R of π. By construction, the line bundle L (E, l) is represented by theČech cocycle (U α,β x → a E,l (s β (x) − s α (x), s α (x))) ∈Č 1 (U, Aff X ) . Note that s β − s α has values in Λ and is therefore constant on the connected components of U α,β = U α ∩ U β by continuity. In particular, the functions x → a E,l (s β (x) − s α (x), s α (x)) are indeed integral affine. By definition, the first Chern class of L (E, l) is represented by theČech cocycle obtained by differentiating the transition functions for all α and β . Using the definition of a E,l , it follows that c 1 (L (E, l)) is represented by the cocycle (7.2) (U α,β x → −E(s β (x) − s α (x))) ∈Č 1 (U, Ω 1 X ) , where we consider E as a function Λ → N * . To compute what this corresponds to under the identification of H 1 (X, Ω 1 X ) with H 1 (X; N * ) ∼ = Λ * ⊗ N * , we consider the double complex (Č i (U, C j (X; N * )), d i j , ∂ i j ) , where C i (X; N * ) denotes the sheafification of the presheaf U → C i (U; N * ) and we set C −1 (U; N * ) = N * andČ −1 (U, F ) = Γ(X, F ) for any sheaf F . We follow the cocycle of formula 7.2 through the double complex in the zig-zag from the (1, −1) entry to the (−1, 1) entry indicated by the solid arrows in the following diagram: 0 0 0 0 N * C 0 (X; N * ) C 1 (X; N * ) · · · 0Č 0 (U, (N * ) X )Č 0 (U, C 0 (X; N * ))Č 0 (U, C 1 (X; N * )) · · · 0Č 1 (U, (N * ) X )Č 1 (U, C 0 (X; N * ))Č 1 (U, C 1 (X; N * )) · · · . . . . . . . . . First we apply the differential coming from singular cohomology and obtain ((U α,β x ← − {0}) → −E(s β (x) − s α (x))) ∈Č 1 (U, C 0 (X; N * )) . Clearly, this is the image under the differential coming fromČech cohomology of the cochain ((U α x ← − {0}) → −E(s α (x) − s(x))) ∈Č 0 (U, C (X; N * )) . Applying the differential of singular cohomology again we obtain (1))−s(σ (1))−s α (σ (0))+s(σ (0)))) ∈Č 0 (U, C 1 (X; N * )) . ((U α σ ← − [0, 1]) → −E(s α (σ This can be lifted to a singular 1-cochain. 
Namely, for an arbitrary 1-simplex σ : [0, 1] → X we choose a lift σ : [0, 1] → N R and then assign to σ the value −E(σ (1) − s(σ (1)) − σ (0) + s(σ (0))) . This is clearly independent of the choice of σ . In particular, if the image of σ is contained in U α , we may choose σ = s α • σ and obtain the same cocycle on U α as before. It is also clear that any loop in X which is the image of a path in N R from 0 to λ ∈ Λ is mapped to −E(λ ) by this 1-cochain. Therefore, we have c 1 (L (E, l)) = E when identifying H 1,1 (X) with Λ * ⊗ Z N * according to Convention 6.3. Proof. We have already seen in §7.1 that there exists a tropical factor of automorphy a : Λ × N R → R such that L is the line bundle associated to a(−, −). For every λ ∈ Λ, the function a(λ , −) is integral affine, hence its differential E(λ ) := −da(λ , −) defines an element in Hom(N, Z). Differentiating (7.1), we see that the map λ → E(λ ) is linear. In other words, E defines a bilinear map on Λ × N → Z. Therefore, for a suitable function b : Λ → R, we have a(λ , x) = −E(λ , x) + b(λ ) for all λ ∈ Λ and x ∈ N R . Plugging this into (7.1), we see that E(λ , µ) = E(µ, λ ) for all λ , µ ∈ Λ, that is that E is, in fact, symmetric. The tropical factor of automorphy a − a E,0 is then a family of constant functions, that is we have (a − a E,0 )(λ , x) = l(λ ) for some function l : Λ → R. Applying (7.1) once more we see that l is, in fact, linear. It follows that a = (a − a E,0 ) + a E,0 = a E,l . In particular, we have L ∼ = L (E, l). Now assume we are given a second choice of linear function l ∈ Hom(Λ, R) and symmetric form E on N R with E (Λ × N) ⊆ Z such that L (E , l ) ∼ = L . We have already seen in §7.1 that this happens if and only if a E,l − a E ,l is of the form a 0,m R | Λ for some linear function m : N → Z. By Proposition 7.1, we have E = c 1 (L (E , l )) = c 1 (L (E, l)) = E . Therefore, we have a E,l − a E ,l = a 0,l−l and it follows that (l − l ) R has integer values on N. Remark 7.3. It follows directly from the tropical Appell-Humbert theorem that there is a bijection between the group of all tropical line bundles with trivial first Chern class and Λ * R /N * , which is called the dual real torus to X for that reason. 7.3. Translations of line bundles. Proposition 7.4. Let X = N R /Λ be a real torus, let l ∈ Hom(Λ, R), and let E be a symmetric bilinear form on N R with E(Λ × N) ⊆ Z. Furthermore, let π : N R → X be the projection, and let y ∈ N R . Then we have t * π(y) L (E, l) ∼ = L (E, l − E(−, y)) . In particular, if the bilinear form E is nondegenerate and L is any line bundle on N R /Λ with c 1 (L ) = E, then there exists x ∈ X such that L ∼ = t * x L (E, l). If, moreover, E restricts to a perfect pairing Λ × N → Z, then x is unique. Proof. We recall from above that L (E, l) can be defined as the quotient of the trivial bundle N R × R by the Λ-action given by λ .(x, b) = (x + λ , b + a (E,l) (λ , x)). Since the morphism t y : N R → N R x → x + y that induces t π(y) on the quotient N R /Λ is Λ-equivariant, the pull-back t * π(y) L (E, l) can be represented as the quotient of t * y (N R × R) ∼ = N R × R by the pulled back Λ-action. The action of λ ∈ Λ on (x, b) under the pulled back action is obtained by first applying t y to the first coordinate, yielding (x + y, b), then applying the Λ-action defined by a (E,l) , yielding (x + y + λ , b + a (E,l) (λ , x + y)), and finally applying t −1 y to the first coordinate, yielding (x + λ , b + a (E,l) (λ , x + y)). 
So in total, the pulled back action is given by λ .(x, b) = (x + λ , b + a (E,l) (λ , x + y)) = x + λ , b + l(λ ) − E(λ , x + y) − 1 2 E(λ , λ ) = x + λ , b + l(λ ) − E(λ , y) − E(λ , x) − 1 2 E(λ , λ ) = (x + λ , b + a (E,l−E(−,y)) ) . which is precisely the action on the trivial bundle defined by the factor of automorphy a (E,l−E(−,y)) . This shows that t * π(y) L (E, l) = L (E, l − E(−, y)). Now assume that E is nondegenerate and that L is any line bundle on X with c 1 (L ) = E. By Theorem 7.2 and Proposition 7.1, there exists a linear form l : Λ → R such that L ∼ = L (E, l ). Since E is nondegenerate and Λ R ∼ = N R , there exists x ∈ N R such that l − l = E(−, x). By what we have shown above, we have 7.4. Rational sections of line bundles. Let E : Λ × N → Z be bilinear such that E R is a symmetric bilinear form on N R , and let l : Λ → R be linear. As mentioned above, the tropical line bundle L (E, l) on X is a quotient of the trivial bundle N R × R by the Λ-action defined by E and l. In particular, the global rational sections of L (E, l) are precisely those global rational sections of N R × R that are invariant under the Λ-action. More precisely, the global rational sections of L (E, l) are in bijection with the piecewise linear function φ ∈ Γ(N R , M N R ) such that L ∼ = L (E, l − E(−, x)) ∼ = t * π( x) L (E, l) = t * x L (E, l) , where x = π( x). If x ∈ X is another point such that t * x L (E, l) ∼ = L , and x ∈ N R is chosen such that π( x ) = x , then we have L (E, l − E(−, x)) ∼ = L (E, l − E(−, x ))(7.3) φ (x + λ ) = φ (x) + l(λ ) − E(λ , x) − 1 2 E(λ , λ ) . The divisor associated to the section of L (E, l) corresponding to φ is precisely the quotient of div(φ ) by the Λ-action. In particular, this divisor is effective if and only if div(φ ) is effective, that is if φ is concave. Together, concavity and (7.3) put strong constraints on φ , or rather its Legendre transform. In fact, it has been shown in [MZ08,Theorem 5.4] that if E is a perfect pairing and E R is positive definite, these constraints completely determine φ up to an additive constant. More precisely, φ is given by φ (x) = min E(λ , x) + 1 2 E(λ , λ ) − l(λ ) λ ∈ Λ + const in this case (note that this only differs from the formula in [MZ08] because we are using the "min"-convention, see Remark 3.4). By the tropical Appell-Humbert theorem it follows that for every line bundle L on X with c 1 (L ) = E there exists a unique effective divisor D ∈ CDiv(X) with L (D) = L . Proposition 7.6. Let X = N R /Λ be the real torus associated to a pair of lattices N and Λ ⊂ N R , and let D, D ∈ CDiv(X) be two effective divisors such that cyc [D] = cyc[D ] is Poincaré dual to E ∈ H 1,1 (X) for some perfect pairing E : Λ × N → Z such that E R is a positive definite symmetric bilinear form on N R , where we identify H 1,1 (X) with Hom(Λ, N * ) according to Convention 6.3. Then there exits a unique x ∈ X such that t * By Proposition 7.4, there exists a unique point x ∈ X such that t * x (L (D)) ∼ = L (t * x (D)) is isomorphic to L (D ). It follows that the two divisors t * x (D) and D correspond to two concave rational sections of L (D ). But, since c 1 (L (D )) = E, these two rational sections differ by a constant. Therefore, D = t * x (D). 
TAUTOLOGICAL CYCLES ON TROPICAL JACOBIANS Classically, the ring of tautological classes on the Jacobian of an algebraic curve is the smallest subring of its Chow group that contains the image of the curve under the Abel-Jacobi map and is invariant under intersection products, Pontryagin products, translations, and the involution map. We will now introduce the most important tropical tautological cycles on a tropical Jacobian. Throughout this section, Γ will denote a compact connected smooth tropical curve of genus g. We will also fix a base point q ∈ Γ with respect to which we define the Abel-Jacobi map. 8.1. Effective loci and semibreak divisors. Using the group structure on the Jacobian, the Abel-Jacobi map induces morphisms Φ d q : Γ d → Jac(Γ) for all nonnegative integers d. Definition 8.1. For every integer 0 ≤ d ≤ g we define W d := Φ d q (Γ d ) . Because Φ d q is a proper morphism of boundaryless rational polyhedral spaces, we know that W d is an at most d-dimensional boundaryless rational polyhedral subspace of Jac(Γ). By definition, To show that W d in fact is purely d-dimensional we will use the the identification of Jac(Γ) with the Pic 0 (Γ) given by the tropical Abel-Jacobi theorem [MZ08]. Here, Pic(Γ) denotes the quotient of CDiv(Γ) by the subgroup consisting of all principal divisors, and Pic d (Γ) denotes the subgroup of Pic(Γ) consisting of the all classes of divisors of degree d. The statement of the tropical Abel-Jacobi theorem is that the Abel-Jacobi map Φ q induces a bijections Pic d (Γ) → Jac(Γ) for d = 0, and hence for any d. If W d denotes the preimage of W d in Pic d (Γ) under the bijection Pic d (Γ) → Jac(Γ), then W d is precisely the set of the classes of effective divisors of degree d. In particular W d is independent of the base point q. Together with L. Tóthmérész, we have proved the following theorem. (Φ d q ) * [Γ d ](Φ d q ) * [Γ d ]. To do this, we will need the notion of a break and semibreak divisors. A break divisor on Γ is an effective divisor B such that there exist g open edge segments e 1 , . . . , e g ⊆ Γ and points q i ∈ e i such that Γ \ i e i is contractible and B = ∑ i (q i ). A semibreak divisor is an effective divisor that is dominated by a break divisor, that is an effective divisor D such that there exists an effective divisor E for which D + E is a break divisor (cf. [GST18] By the definition of the push-forward, we now have to show that for any x ∈ W d such that (Φ d q ) −1 {x} is finite and contained in (Γ d ) reg , the value of (Φ d q ) * [Γ d ] at x is d!. Let σ be a component of (Γ d ) reg . Then there exist open edges e 1 , . . . , e d of Γ such that σ = e 1 × . . . × e d . We choose an orientation on each of these d edges. This determines a unique primitive tangent vector η k on each each edge e k . These d tangent vectors form a basis of the integral tangent space of the product e 1 × . . . × e d . As already noted in §4, the image of η k in the tangent space Ω Z (Γ) * of Jac(Γ) is given by (dΦ q )(η k ) : Ω Z (Γ) → Z, ω → ω| e k , η k . If we identify Ω Z (Γ) with integral flows on Γ, as explained in Remark 4.1, then (dΦ q )(η k ) is the map assigning to an integral flow ω on Γ its flow on e k in the direction specified by the chosen orientation. Because Φ d q is defined as the d-fold sum of Φ q , we have (dΦ d q )(η k ) = (dΦ q )(η k ). In particular, if e k = e l for k = l, then (dΦ d q )(η k ) = (dΦ d q )(η l ) which means that Φ d q is not injective on σ and x / ∈ Φ d q (σ ). We may thus assume that all e k are distinct. 
If Γ \ e k is disconnected, then there exists an 1 ≤ l ≤ d such that Γ \ ∪ l k=1 e k has precisely two components C 1 and C 2 . For 1 ≤ k ≤ l let α k be equal 1 if e k is oriented such that it leads from C 1 to C 2 , and let α k be equal to −1 if it is oriented the other way. Since the total flow from C 1 to C 2 in any integral flow on Γ is 0, we have l ∑ k=1 α k (dΦ d q )(η k ) = 0 , which means that dΦ d q is not injective on the tangent spaces of σ . Therefore, Φ d q is not injective on σ and again x / ∈ Φ d q (σ ). If Γ \ e k is connected, then for each 1 ≤ k ≤ d there is a simple closed loop in Γ that passes through e k but not through e l for l = k. It follows that for every assignment of values f : {1, . . . , d} → Z there is an integral flow ω ∈ Ω Z (Γ) whose flow on e k is f (k). This implies that the vectors (dΦ d q )(η 1 ), . . . , (dΦ d q )(η i ) span a saturated rank-i sublattice of Ω Z (Γ) * . Therefore, every point of (Φ d q ) −1 {x} ∩ σ contributes to the weight of (Φ d q ) * [Γ d ] with multiplicity one, and by [GST18,Lemma 8.1] there is at most one of these points. In fact, if (Φ d q ) −1 {x} ∩ σ is nonempty, then [GST18, Lemma 8.1] tells us that all other components σ of (Γ d ) reg with (Φ d q ) −1 {x} ∩ σ = / 0 are obtained from σ via a permutation of coordinates. As there are exactly d! of these permutations, the weight at x is d!, finishing the proof. As an immediate consequence of Proposition 8.3 we obtain the following corollary. coming from the fact that Jac(Γ) = Ω R (Γ) * /H 1 (Γ; Z) is defined by taking a quotient of a real vector space by H 1 (Γ; Z). Lemma 8.5. The morphism (Φ q ) * : H 1 (Γ; Z) → H 1 (Jac(Γ); Z) ∼ = H 1 (Γ; Z) is the identity. Proof. Let α be a cycle on Γ representing a class in H 1 (Γ; Z). We need to show that (Φ q ) * [α] = [α] . By the Hurewicz theorem, we may assume that it is represented by a loop γ : [0, 1] → Γ starting and ending at the base point q. By the definition of the Abel-Jacobi map, the path γ : [0, 1] → Ω R (Γ) * , t → ω → γ| [0,t] ω lifts the composite Φ q • γ. Therefore, (Φ q ) * γ ∈ H 1 (Jac(Γ); Z) is identified with the element γ(1) − γ(0) = γ(1) ∈ H 1 (Γ; Z) . But this is equal to the image of γ under the embedding H 1 (Γ; Z) → Ω R (Γ) * . 8.2. The tropical Riemann theta divisor. Recall from §4 that the tropical Jacobian Jac(Γ) = Ω R (Γ) * /H 1 (Γ; Z) of a smooth tropical curve Γ comes equipped with a positive definite symmetric form Q on its universal cover Ω R (Γ) * which restricts to a perfect pairing Ω Z (Γ) * × H 1 (Γ; Z) → Z. By Proposition 7.1, the first Chern class of the line bundle L (Q, 0) is given by Q. As explained in §7.4, this implies that L (Q, 0) has, up to an additive constant, a unique concave rational section, the Riemann theta function, which defines a unique effective divisor Θ ∈ CDiv(Jac(Γ)) with L (Θ) = L (Q, 0). For further details about the Riemann theta function see [MZ08], and see [FRSS18] for the connection to the non-archimedean Riemann theta function. Definition 8.6. The unique effective divisor Θ ∈ CDiv(Jac(Γ)) with L (Θ) = L (Q, 0) is called the tropical Riemann theta divisor on Jac(Γ). Note that by construction, we have c 1 (L (Θ)) = Q. Example 8.7. Figure 3 shows the Θ-divisor for the curve Γ from Example 4.4. It is the image in Jac(Γ) of the boundaries of the Voronoi cells of the lattice points H 1 (Γ; Z) in Ω R (Γ) * with respect to the metric defined by Q. THE TROPICAL POINCARÉ FORMULA We are finally in a position to prove the Poincaré formula. 
Our strategy is to give explicit formulas for both sides of the equation. More precisely, we will introduce coordinates on the tropical homology groups of the tropical Jacobian, and will compare the coefficients of both sides of the equation in these coordinates. Throughout this section, Γ will denote a compact and connected smooth tropical curve of genus g, and e 1 , . . . , e g will denote distinct open edges of Γ such that Γ \ ( k e k ) is contractible. Furthermore, we will assume that we have chosen an orientation on each of the edges e 1 , . . . , e g . 9.1. Bases for the tropical (co)homology of Jac(Γ). Recall from §6 that there is an isomorphism of rings H * , * (Jac(Γ)) ∼ = H 1 (Γ; Z) ⊗ Ω Z (Γ) * , where the ring structure on the left side is given by the Pontryagin product. Using this isomorphism, a choice of bases for H 1 (Γ; Z) and Ω Z (Γ) * will induce a basis for H * , * (Jac(Γ)). We will use our choice of open edges e 1 , . . . , e g to define bases for these lattices. Let 1 ≤ k ≤ g. The orientation on e k defines a start and an end point for e k . Since T is contractible and therefore a tree, there is a path in T from the end to the start point of e k , and this path is unique up to homotopy. Together with any path in e k from its start to its end point, this defines a fundamental circuit c k ∈ H 1 (Γ; Z) that traverses e k but is disjoint from e l for l = k. It is well known, and straightforward to check, that the the fundamental circuits c 1 , . . . , c g form a basis of H 1 (Γ; Z). To obtain a basis for Ω Z (Γ) * , let η k denote the primitive tangent vector on e k in the direction specified by the orientation, and let δ k = (dΦ q )(η k ). As we observed in §4, δ k can be described as the morphism Ω Z (Γ) → Z assigning to an integral flow on Γ its flow through e k in the direction specified by the orientation. By definition of the bilinear from Q on Ω R (Γ), we have Q(c k , δ l ) = 1 if k = l and Q(c k , δ l ) = 0 if k = l, that is δ 1 , . . . , δ g is dual to the basis c 1 , . . . , c g with respect to Q. We noticed in §4 that Ω Z (Γ) * is precisely the set of vectors in Ω R (Γ) * that have integral pairing with respect to Q with all elements of H 1 (Γ; Z). It follows directly that δ 1 , . . . , δ g is a basis for Ω * Z (Γ). Similarly, by the isomorphism H * , * (Jac(Γ)) ∼ = H 1 (Γ; Z) * ⊗ Ω Z (Γ) of rings discussed in §6, bases for H 1 (Γ; Z) * and Ω Z (Γ) induce a basis for H * , * (Jac(Γ)). The bases we will use for these lattices are the dual bases (c * k ) k and (δ * k ) k to the bases (c k ) k and (δ k ) k . Note that both H * , * (Jac(Γ)) and H * , * (Jac(Γ)) are tensor products of skew commutative graded rings. We will use the following notation for elements of special form in groups of this type. Notation 9.1. Let R 1 and R 2 be two skew-commutative graded rings, let J be a finite set, and let a : J → R 1 and b : J → R 2 be maps such that for every j ∈ J the elements a( j) and b( j) are homogeneous of the same degree. Then for any injective map σ : {1, . . . , k} → J, the element k ∏ l=1 a(σ (l)) ⊗ k ∏ l=1 b(σ (l)) of R 1 ⊗ Z R 2 only depends on the image I := σ ({1, . . . , k}). We denote it by ∏ i∈I a(i) ⊗ ∏ i∈I b(i) . Cycle classes of tautological cycles. Proposition 9.2. We have cyc[ W 1 ] = g ∑ k=1 c k ⊗ δ k . Proof. Choose an orientation for every edge e of Γ that coincides with the orientation we have already chosen if e = e k for some k. Let η e the primitive tangent vector of e in the direction specified by the orientation, and let δ e = (dΦ q )(η e ). 
By construction, we have δ e k = δ k for all 1 ≤ k ≤ g. It follows immediately from the definition of the tropical cycle class map and Theorem 3.3 that cyc[ W 1 ] is represented by the (1, 1) -cycle ∑ e∈E(Γ) (Φ q ) * (e) ⊗ δ e ∈ C 1,1 (X) , where we view the oriented closed edge e as a singular 1-simplex by choosing a parametrization compatible with the given orientation. Using that the c k and the δ k form dual bases with respect to the bilinear form Q, we see that the above equals ∑ e∈E(Γ) (Φ q ) * (e) ⊗ g ∑ i=1 Q(c i , δ e ) · δ i = g ∑ i=1 ∑ e∈E(Γ) Q(c i , δ e ) · (Φ q ) * (e) ⊗ δ i . Since Q(c i , δ e ) is 1 whenever e is on the loop c i , and 0 otherwise, we have ∑ e∈E(Γ) Q(c i , δ e )(Φ q ) * (e) = (Φ q ) * c i , which is equal to c i by Lemma 8.5. This finishes the proof. Remark 9.3. It follows immediately from Proposition 9.2 that the expression ∑ k c k ⊗ δ k ∈ H 1 (Γ; Z) ⊗ Ω Z (Γ) * is independent of the choice of spanning tree used to define the elements c k and δ k . On a closer look, it turns out that this independence is more of a feature of linear algebra than a feature of spanning trees. To see this, we observe that the natural isomorphism H 1 (Γ; Z) ∼ = Ω Z (Γ) identifies the basis (δ k ) k with the dual basis of (c k ) k . Therefore, ∑ k c k ⊗ δ k is identified with the identity endomorphism on H 1 (Γ; Z) under the composite H 1 (Γ; Z) ⊗ Ω Z (Γ) * ∼ = H 1 (Γ; Z) ⊗ H 1 (Γ; Z) * ∼ = End(H 1 (Γ; Z)) , which is an invariant of H 1 (Γ; Z) rather than of Γ. Proof. Since Jac(Γ) = W g , we have (9.1) cyc[Jac(Γ)] = 1≤k≤g c k ⊗ 1≤k≤g δ k by Lemma 9.4. Note that for every α ∈ H 1 (Jac(Γ); Z) * , x ∈ i H 1 (Jac(Γ); Z), and y ∈ j H 1 (Jac(Γ); Z) we have α (x ∧ y) = (α x) ∧ x + (−1) i x ∧ (α y) by the properties of the interior product (see [Eis95, Proposition A 2.8]). Similarly for every a ∈ Ω Z (Γ), b ∈ i Ω Z (Γ) * , and c ∈ j Ω Z (Γ) * we have a (b ∧ c) = (a b) ∧ c + (−1) i b ∧ (a c) . Using induction, we conclude that is Poincaré dual to c 1 (L (Θ)) g−d . By Lemma 9.5 we know that c 1 (L (Θ)) = g ∑ i=1 c * i ⊗ δ * e i . With the description of the cap-product on H * , * (X) given in §6, we obtain Theorem 9.8. The Poincaré formula holds tropically, that is we have (g − d)![ W d ] ∼ hom [Θ] g−d . Remark 9.9. The Poincaré formula is more commonly expressed as [ W d ] ∼ hom 1 (g − d)! [Θ] g−d , where we the right side is defined after an extension of scalars to Q. Because the tropical homology groups of Jacobians are torsion-free, this is indeed an equivalent expression of the formula. Proof. This follows directly from Theorem 9.8 and Proposition 5.11. Remark 9.11. We have proved Theorem 9.8 under the assumption that the smooth tropical curve is boundaryless. If Γ is a compact and connected smooth tropical curve with boundary as described in Remark 2.5, then the Poincaré formula holds as well, and the proof in this seemingly more general case can easily be reduced to the boundaryless case. Namely, if Γ denotes the boundaryless smooth tropical curve obtained from Γ by removing the leaves from Γ, then Γ and Γ have identical Jacobians, and their theta divisors coincide by definition. Furthermore, the Abel-Jacobi map associated to Γ contracts all the leaves of Γ , so that the loci W d associated to Γ and Γ coincide as well. 9.5. Consequences of the Poincaré formula. The tropical Poincaré formula has some interesting immediate consequences. One of them is a tropical version of Riemann's theorem. The statement has appeared before [MZ08], with a different (combinatorial) proof. 
To state the theorem, recall from §8 that the Abel-Jacobi map induces a bijection Pic 0 (Γ) → Jac(Γ). Because all contributions from the chosen base point q cancel in degree 0, this bijection is independent of all choices. In particular, we can view Θ as a divisor on Pic 0 (Γ) in a natural way. Also recall from §8 that while W d ⊆ Jac(Γ) depends on q, the image W d of Γ d in Pic d (Γ) does not. Proof. It suffices to show that there exists a unique µ ∈ Jac(Γ) such that [ W g−1 ] = (t µ ) * [Θ] when considering Θ as a divisor on Jac(Γ). Since [ W g−1 ] is a codimension-1 tropical cycles on the tropical manifold Jac(Γ), we can view W g−1 as a tropical Cartier divisor as well (see §3.1)). Applying the Poincaré formula (Theorem 9.8) with d = g − 1 yields cyc[ W g−1 ] = cyc[Θ] . By definition of Θ, the cycle class cyc[Θ] is Poincaré dual to the element in H 1,1 (Jac(Γ)) corresponding to the linear form Q. As Q restricts to a perfect pairing H 1 (Γ; Z) × Ω Z (Γ) * → Z, Proposition 7.6 applies and there a unique µ ∈ Jac(Γ) such that t * µ W g−1 = Θ. This is, of course, equivalent to the equality (t µ ) * [ W g−1 ] = [Θ]. Corollary 9.13. For every 0 ≤ d ≤ g, we have Jac(Γ) [ W d ] · [ W g−d ] = g d . Remark 9.14. In the special case d = 1 we recover the formula (a) There exists a unique µ ∈ Pic g−1 (Γ) such that [W g−1 ] = [Θ] + µ . (b) The effective tropical 0-cycle obtained from the stable intersection of [ W d ] and [ W g−d ] has degree g d . (c) The tropical 0-cycle [Θ] g has degree g!. cyc(A × B) = cyc(A) × cyc(B) , and cyc(D · A) = c 1 (L (D)) cyc(A) . FIGURE 3 . 3A tropical curve of genus g, the universal cover of its Jacobian, and the sets W 1 and Θ lifted to the universal cover (see §8.2 for the definition of Θ). Theorem 3.3, we have cyc[D] = c 1 (L (D)) [Γ], and by Poincaré duality this implies that c 1 (L (D) = 0 if and only if cyc[D] = 0. By what we just saw, we have cyc[D] = 0 if and only if the degree of D is 0. We see that if D ∈ CDiv(Γ) is another Cartier divisor, then [D] and [D ] are homologically equivalent if and only if c 1 (L (D)) = c 1 (L (D )), which holds if and only if D and D have the same degree.Proposition 5.8. Algebraic equivalence implies homological equivalence: if A and B are tropical cycles on a closed rational polyhedral space X with A ∼ alg B, then A ∼ hom B. where n = dim(X), and (|A| ∩ |B|) reg = |A| reg ∩ |B| reg . As explained in [GS19, Remark 5.5], we can view cyc(A) as an element of the Borel-Moore homology group H BM k,k (|A|, X) with supports on |A|, and similarly cyc(B) ∈ H BM l,l (|A|, X) and cyc(A · B) ∈ H BM k+l−n,k+l−n (|A ∩ B|, X) . Using Verdier duality [GS19, Theorem D], the cycle class cyc(A) is Poincaré dual to a cohomology class with support on |A|, that is to an element in H n−k,n−k |A| (X). Therefore, the intersection product cyc(A) · cyc(B) is also represented by an element in H BM k+l−n,k+l−n (|A ∩ B|, X) and it suffices to prove the equality cyc(A · B) = cyc(A) · cyc(B) in H BM k+l−n,k+l−n (|A ∩ B, X). For dimension reasons, both sides are uniquely determined by their restrictions to H BM k+l−n,k+l−n (|A ∩ B| ∩ U,U), where U is an open subset of X with U ∩ |A ∩ B| = |A ∩ B| reg [GS19, Lemma 4.8 (b)]. Combining the facts that V → H BM k+l−n,k+l−n (|A ∩ B| ∩V,V ) satisfies the sheaf axioms [GS19, Lemma 4.8 (b)], X is locally isomorphic to open subsets of R n , and (|A| ∩ |B|) reg = |A| reg ∩ |B| reg allows us to further reduce to the case where U = R n and A and B are linear subspaces of R n . In this case, there exist hyperplanes H 1 , . . 
. , H n−k and H 1 . . . , H n−l , and integers a, b ∈ Z such that A = a · H 1 · · · H n−k andB = b · H 1 · · · H n−l .Let α ∈ H n−k,n−k |A| (X) be the Poincaré dual to cyc(A). Applying [GS19, Proposition 5.12] (see also [GS19, Remark 5.13]) yields where m R is the R-linear extension of a linear form m : N → Z.Any factor of automorphy a(−, −) defines a group action λ .(x, b) = (x + λ , b + a(λ , x)) of Λ on the trivial line bundle N R × R on N R . The tropical line bundle on X corresponding to a(−, −) is the quotient (N R × R)/Λ.7.2. The Appell-Humbert Theorem.It is easy to check that for every morphism l ∈ Hom(Λ, R) and every symmetric bilinear form E on N R with E(Λ × N) ⊆ Z, the family of integral affine functions on N R defined by Theorem 7. 2 ( 2Tropical Appell-Humbert Theorem). Let L be a tropical line bundle on the real torus X = N R /Λ. Then there exists l ∈ Hom(Λ, R) and a symmetric form E on N R with E(Λ × N) ⊆ Z such that L ∼ = L (E, l). Moreover, if we are given another choice of l ∈ Hom(Λ, R) and symmetric form E on N R with E (Λ × N) ⊆ Z, then L ∼ = L (E , l ) if and only if E = E and the linear form (l − l ) R : N R → R has integer values on N. by what we have shown above. This happens if and only if E(−, x − x ) has integer values on N by Theorem 7.2. If E restricts to a perfect pairing on Λ × N, this happens if and only if x − x ∈ Λ, that is if and only if x = x .Remark 7.5. If we call two line bundles on a real torus tropically equivalent if they have the same first Chern class, then Proposition 7.4 shows that two tropical line bundles which are translates of each other are tropically equivalent, with the converse being true if their first Chern class is nondegenerate. This is completely analogous to the situation on complex tori, where two line bundles are analytically equivalent if they have the same first Chern class [BL04, Proposition 2.5.3]. If two line bundles on a complex torus are translates of each other, then they are analytically equivalent, with the converse being true if their first Chern class is nondegenerate [BL04, Corollary 2.5.4]. We have cyc[D] = cyc(D · [X]) = c 1 (L (D)) cyc[X], so cyc[D] is Poincaré dual to c 1 (L (D)), and similarly cyc[D ] is Poincaré dual to c 1 (L (D )). By assumption, it follow that c 1 (L (D)) = c 1 (L (D )) = E . Corollary H 1 ( 1Jac(Γ); Z) ∼ = H 1 (Γ; Z) Lemma 9 . 4 . 94We have cyc[ W d ] = ∑ I⊆{1,...,g} |I|=d sign being the same on the right-hand sides of the two equations as long as we order the sets I and {1, . . . , g} consistently in both equations. Combining these identities with the expression (9.1) for the fundamental class of Jac(Γ) and the identity (6.precisely what we needed to show.The following result is the tropical analogue of [BL04, Theorem 4.10.4]. Lemma 9.7. We have cyc([Θ] g−d ) = (g − d)! ∑ I⊆{1,...Since intersections with divisors is compatible with the tropical cycle class map by Theorem 3.3, we have cyc([Θ] g−d ) = c 1 (L (Θ)) g−d [Jac(Γ)] , that is cyc([Θ] g−d ) . The proof of the tropical Poincaré formula. Corollary 9 . 12 ( 912Tropical Riemann's Theorem). (cf. [MZ08, Corollary 8.6]) There exists a unique µ ∈ Pic g−1 (Γ) such that [W g−1 ] = µ + [Θ] ,where we consider [Θ] as a tropical cycle in Pic 0 (Γ). 2.3. Tropical curves. A tropical curve is a purely 1-dimensional boundaryless rational polyhedral space. With this definition, the underlying space of a tropical curve Γ is a topological graph. 
In particular, it has a finite set of vertices (branch points) V (Γ) where Γ does not locally look like an open interval in R, and a set of open edges E(Γ), which are the connected components of Γ \V (Γ). The closed edges of Γ are the closures of its open edges and an open edge segment is a connected open subset of an open edge. A tropical curve is smooth (see is a tropical d-cycle on W d . Note that this does not mean that W d has dimension d or that it is pure-dimensional as (Φ d q ) * [Γ d ] could be 0. All we can say a priori is that the support of (Φ d q ) * [Γ d ] is precisely the subset of points of W d where the local dimension of W d is equal to d. Theorem 8.2 ([GST18, Theorem 8.3]). The subset W d of Pic d (Γ) is purely d-dimensional. follows immediately that W d is purely d-dimensional as well, and hence that the tropical cycle (Φ d q ) * [Γ d ] has support W d . We will now show that W d has a fundamental cycle [ W d ] which we will relate toIt ) . )Proposition 8.3. Let 0 ≤ d ≤ g.Then W d has a fundamental cycle [ W d ], and the equality(Φ d q ) * [Γ d ] = d![ W d ] hold in Z * (Jac(Γ)). Proof. It suffices to show that (Φ d q ) * [Γ d ] has weight d! on all components of W reg d . Indeed, if that is the case then 1 d! (Φ q ) * [Γ d ] is a tropical cycle with support W d and weight 1 on all components of W reg d . But this implies that W d has a fundamental cycle and that (Φ d q ) * [Γ d ] = d![ W d ]. 8.4. The equality of tropical cycles d k=1 [ W 1 ] = d![ W d ] holds in Z * (Jac(Γ)). Proof. This follows directly from the formulas for [ W d ] and [ W 1 ] given in Proposition 8.3, and the fact that Φ d q is the d-fold sum of Φ q . We have a morphism H 1 (Γ; Z) → H 1 (Jac(Γ); Z) induced by the (continuous) Abel-Jacobi map. As noticed in §6, there is a natural identification Proof. By Lemma 9.4 we have cyc[ W d ] = ∑ I⊆{1,...,g} |I|=d On the other hand, by Lemma 9.7 we have cyc([Θ] g−d ) = (g − d)! ∑ It follows immediately that cyc((g − d)![ W d ]) = cyc([Θ] g−d ) , which is equivalent to saying that [ W d ] and [Θ] g−d are homologically equivalent. Corollary 9.10. We have (g − d)![ W d ] ∼ num [Θ] g−d .k∈I c k ⊗ k∈I δ k . I⊆{1,...,g} |I|=d k∈I c k ⊗ k∈I δ k . Jac(Γ)[ W 1 ] · [Θ] = g stated in [MZ08, Theorem 6.5]. Also note that the intersection product [ W d ] · [ W g−d ] is effective since one can locally apply the fan displacement rule. Using the description of the Pontryagin product from §6, we can rewrite this aswhere the sum is over all maps σ : {1, . . . , d} → {1, . . . , g}.Since Ω Z (Γ) is skewcommutative, only an injective σ would contribute to the sum. If I is the image of an injective σ then, using our Notation 9.1, we haveThe result follows after dividing both sides by d!. This division is allowed because the tropical homology groups of Jac(Γ) are torsion-free.9.3. Tropical cycle classes of powers of the theta divisor.Lemma 9.5. We haveProof. As already observed in §8.2, we have c 1 (L (Θ)) = Q, where we identifywith Hom(H 1 (Γ; Z) ⊗ Ω Z (Γ), Z). Because (c k ) k and (δ k ) k are dual bases with respect to Q, the assertion follows.Lemma 9.6. Let I ⊆ {1, . . . , g} with |I| = d. ThenProof. We apply Poincaré formula (Theorem 9.8) three times, and obtain a chain of equalitiesthat hold modulo homological equivalence. Taking the degree yields the result.Corollary 9.15. We have Jac(Γ)[Θ] g = g! .Proof. By the tropical Poincaré formula (Theorem 9.8), we have Jac(Γ)Remark 9.16. 
Classically, the statement of Corollary 9.15 also follows from the geometric Riemann-Roch theorem for Abelian varieties [BL04, Theorem 3.6.3]. Tropically, it is also possible to prove the statement using the duality of Voronoi and Delaunay decompositions. E Arbarello, M Cornalba, P A Griffiths, J Harris, Grundlehren der Mathematischen Wissenschaften. New YorkSpringer-VerlagIE. Arbarello, M. Cornalba, P. A. Griffiths, and J. Harris, Geometry of algebraic curves. Vol. I, Grundlehren der Mathematischen Wissenschaften, vol. 267, Springer-Verlag, New York, 1985. MR770932 ↑1, 2 First steps in tropical intersection theory. Lars Allermann, Johannes Rau, MR2591823 ↑8Math. Z. 2643Lars Allermann and Johannes Rau, First steps in tropical intersection theory, Math. Z. 264 (2010), no. 3, 633-670. MR2591823 ↑8 Algebraic cycles on Jacobian varieties. Arnaud Beauville, Compos. Math. 1403Arnaud Beauville, Algebraic cycles on Jacobian varieties, Compos. Math. 140 (2004), no. 3, 683-688. MR2041776 ↑3 Complex abelian varieties, Second. Christina Birkenhake, Herbert Lange, Grundlehren der Mathematischen Wissenschaften. BerlinSpringer-Verlag30235Christina Birkenhake and Herbert Lange, Complex abelian varieties, Second, Grundlehren der Mathematischen Wissenschaften, vol. 302, Springer-Verlag, Berlin, 2004. MR2062673 ↑1, 2, 3, 24, 32, 35 Riemann-Roch and Abel-Jacobi theory on a finite graph. Matthew Baker, Serguei Norine, MR2355607 ↑2. 215Matthew Baker and Serguei Norine, Riemann-Roch and Abel-Jacobi theory on a finite graph, Adv. Math. 215 (2007), no. 2, 766-788. MR2355607 ↑2 A tropical proof of the Brill-Noether theorem. Filip Cools, Jan Draisma, Sam Payne, Elina Robeva, MR2914965 ↑3. 230Filip Cools, Jan Draisma, Sam Payne, and Elina Robeva, A tropical proof of the Brill-Noether theorem, Adv. Math. 230 (2012), no. 2, 759-776. MR2914965 ↑3 C is not algebraically equivalent to C − in its Jacobian. G Ceresa, MR690847 ↑3. G. Ceresa, C is not algebraically equivalent to C − in its Jacobian, Ann. of Math. (2) 117 (1983), no. 2, 285-291. MR690847 ↑3 . David Eisenbud, Commutative Algebra, Graduate Texts in Mathematics. 150Springer-VerlagWith a view toward algebraic geometry. MR1322960 ↑32David Eisenbud, Commutative algebra, Graduate Texts in Mathematics, vol. 150, Springer- Verlag, New York, 1995. With a view toward algebraic geometry. MR1322960 ↑32 The diagonal of tropical matroid varieties and cycle intersections. Georges François, Johannes Rau, Collect. Math. 64215MR3041763Georges François and Johannes Rau, The diagonal of tropical matroid varieties and cycle intersections, Collect. Math. 64 (2013), no. 2, 185-210. MR3041763 ↑8, 15 Georges François, MR2996952 ↑9Cocycles on tropical varieties via piecewise polynomials. 141Georges François, Cocycles on tropical varieties via piecewise polynomials, Proc. Amer. Math. Soc. 141 (2013), no. 2, 481-497. MR2996952 ↑9 Non-Archimedean and tropical theta functions. Tyler Foster, Joseph Rabinoff, Farbod Shokrieh, Alejandro Soto, MR3880286 ↑28. 372Tyler Foster, Joseph Rabinoff, Farbod Shokrieh, and Alejandro Soto, Non-Archimedean and tropical theta functions, Math. Ann. 372 (2018), no. 3-4, 891-914. MR3880286 ↑28 Principles of algebraic geometry. Phillip Griffiths, Joseph Harris, John Wiley & Sons, IncNew YorkReprint of the 1978 original. MR1288523 ↑1, 2Phillip Griffiths and Joseph Harris, Principles of algebraic geometry, Wiley Classics Library, John Wiley & Sons, Inc., New York, 1994. Reprint of the 1978 original. MR1288523 ↑1, 2 A Riemann-Roch theorem in tropical geometry. 
Andreas Gathmann, Michael Kerber, 217-230. MR2377750 ↑2Math. Z. 2591Andreas Gathmann and Michael Kerber, A Riemann-Roch theorem in tropical geometry, Math. Z. 259 (2008), no. 1, 217-230. MR2377750 ↑2 A sheaf-theoretic approach to tropical homology. Andreas Gross, Farbod Shokrieh, 1118Preprint available at arXiv:1906.09245. ↑2, 4, 5, 8, 10Andreas Gross and Farbod Shokrieh, A sheaf-theoretic approach to tropical homology, 2019. Preprint available at arXiv:1906.09245. ↑2, 4, 5, 8, 10, 11, 12, 17, 18 Effective divisor classes on metric graphs. Andreas Gross, Farbod Shokrieh, Lilla Tóthmérész, arXiv:1807.00843.↑32627Preprint available atAndreas Gross, Farbod Shokrieh, and Lilla Tóthmérész, Effective divisor classes on metric graphs, 2018. Preprint available at arXiv:1807.00843. ↑3, 26, 27 . Ilia Itenberg, Ludmil Katzarkov, Grigory Mikhalkin, Ilia Zharkov, Tropical Homology, Math. Ann. 3741-2MR3961331Ilia Itenberg, Ludmil Katzarkov, Grigory Mikhalkin, and Ilia Zharkov, Tropical Homology, Math. Ann. 374 (2019), no. 1-2, 963-1006. MR3961331 ↑2, 8 Lefschetz (1, 1)-theorem in tropical geometry. Philipp Jell, Johannes Rau, Kristin Shaw, Epijournal Geom. Algébrique. 212Art. 11, 27. MR3894860 ↑4, 5Philipp Jell, Johannes Rau, and Kristin Shaw, Lefschetz (1, 1)-theorem in tropical geometry, Epijournal Geom. Algébrique 2 (2018), Art. 11, 27. MR3894860 ↑4, 5, 8, 12 Algebraic cycles and the Weil conjectures, Dix exposés sur la cohomologie des schémas. S L Kleiman, MR292838 ↑2S. L. Kleiman, Algebraic cycles and the Weil conjectures, Dix exposés sur la cohomologie des schémas, 1968, pp. 359-386. MR292838 ↑2 The j-invariant of a plane tropical cubic. Eric Katz, Hannah Markwig, Thomas Markwig, MR2457725 ↑6. 320Eric Katz, Hannah Markwig, and Thomas Markwig, The j-invariant of a plane tropical cubic, J. Algebra 320 (2008), no. 10, 3832-3848. MR2457725 ↑6 Numerical and homological equivalence of algebraic cycles on Hodge manifolds. David I Lieberman, MR230336 ↑4Amer. J. Math. 90David I. Lieberman, Numerical and homological equivalence of algebraic cycles on Hodge manifolds, Amer. J. Math. 90 (1968), 366-374. MR230336 ↑4 A note on Brill-Noether theory and rank-determining sets for metric graphs. Chang Mou Lim, Sam Payne, Natasha Potashnik, MR2999150 ↑3. 23Chang Mou Lim, Sam Payne, and Natasha Potashnik, A note on Brill-Noether theory and rank-determining sets for metric graphs, Int. Math. Res. Not. IMRN 23 (2012), 5484-5504. MR2999150 ↑3 Tautological cycles on Jacobian varieties. Giambattista Marini, 167-190. MR2414143 ↑3Collect. Math. 592Giambattista Marini, Tautological cycles on Jacobian varieties, Collect. Math. 59 (2008), no. 2, 167-190. MR2414143 ↑3 On symmetric products of curves. Arthur Mattuck, MR0136608 ↑2Proc. Amer. Math. Soc. 13Arthur Mattuck, On symmetric products of curves, Proc. Amer. Math. Soc. 13 (1962), 82-87. MR0136608 ↑2 Ben Moonen, Relations between tautological cycles on Jacobians. 84Ben Moonen, Relations between tautological cycles on Jacobians, Comment. Math. Helv. 84 (2009), no. 3, 471-502. MR2507251 ↑3 Abelian varieties. David Mumford, Tata Institute of Fundamental Research Studies in Mathematics. C. P. Ramanujam and Yuri Manin5Hindustan Book AgencyCorrected reprint of the second (1974) edition. MR2514037 ↑20David Mumford, Abelian varieties, Tata Institute of Fundamental Research Studies in Math- ematics, vol. 5, Hindustan Book Agency, New Delhi, 2008. With appendices by C. P. Ra- manujam and Yuri Manin, Corrected reprint of the second (1974) edition. 
MR2514037 ↑20 Tropical curves, their Jacobians and theta functions, Curves and abelian varieties. Grigory Mikhalkin, Ilia Zharkov, 2834Grigory Mikhalkin and Ilia Zharkov, Tropical curves, their Jacobians and theta functions, Curves and abelian varieties, 2008, pp. 203-230. MR2457739 ↑2, 3, 6, 12, 25, 26, 28, 34 Tropical eigenwave and intermediate Jacobians, Homological mirror symmetry and tropical geometry. Grigory Mikhalkin, Ilia Zharkov, MR3330789. 4Grigory Mikhalkin and Ilia Zharkov, Tropical eigenwave and intermediate Jacobians, Ho- mological mirror symmetry and tropical geometry, 2014, pp. 309-349. MR3330789 ↑4, 8 Special divisors on marked chains of cycles. Nathan Pflueger, J. Combin. Theory Ser. A. 1503Nathan Pflueger, Special divisors on marked chains of cycles, J. Combin. Theory Ser. A 150 (2017), 182-207. MR3645573 ↑3 Universal algebraic equivalences between tautological cycles on Jacobians of curves. A Polishchuk, MR2190148 ↑3Math. Z. 2514A. Polishchuk, Universal algebraic equivalences between tautological cycles on Jacobians of curves, Math. Z. 251 (2005), no. 4, 875-897. MR2190148 ↑3 A tropical intersection product in matroidal fans. Kristin M Shaw, 459-491. MR3032930 ↑8SIAM J. Discrete Math. 271Kristin M. Shaw, A tropical intersection product in matroidal fans, SIAM J. Discrete Math. 27 (2013), no. 1, 459-491. MR3032930 ↑8 C is not equivalent to C − in its Jacobian: a tropical point of view. Ilia Zharkov, Int. Math. Res. Not. IMRN. 315MR3340338Ilia Zharkov, C is not equivalent to C − in its Jacobian: a tropical point of view, Int. Math. Res. Not. IMRN 3 (2015), 817-829. MR3340338 ↑4, 15
[]
[ "Self-Supervised Sim-to-Real Adaptation for Visual Robotic Manipulation", "Self-Supervised Sim-to-Real Adaptation for Visual Robotic Manipulation" ]
[ "Rae Jeong ", "Yusuf Aytar ", "David Khosid ", "Yuxiang Zhou ", "Jackie Kay ", "Thomas Lampe ", "Konstantinos Bousmalis ", "Francesco Nori " ]
[]
[]
Collecting and automatically obtaining reward signals from real robotic visual data for the purposes of training reinforcement learning algorithms can be quite challenging and time-consuming. Methods for utilizing unlabeled data can have a huge potential to further accelerate robotic learning. We consider here the problem of performing manipulation tasks from pixels. In such tasks, choosing an appropriate state representation is crucial for planning and control. This is even more relevant with real images where noise, occlusions and resolution affect the accuracy and reliability of state estimation. In this work, we learn a latent state representation implicitly with deep reinforcement learning in simulation, and then adapt it to the real domain using unlabeled real robot data. We propose to do so by optimizing sequence-based selfsupervised objectives. These exploit the temporal nature of robot experience, and can be common in both the simulated and real domains, without assuming any alignment of underlying states in simulated and unlabeled real images. We propose Contrastive Forward Dynamics loss, which combines dynamics model learning with time-contrastive techniques. The learned state representation that results from our methods can be used to robustly solve a manipulation task in simulation and to successfully transfer the learned skill on a real system. We demonstrate the effectiveness of our approaches by training a vision-based reinforcement learning agent for cube stacking. Agents trained with our method, using only 5 hours of unlabeled real robot data for adaptation, shows a clear improvement over domain randomization, and standard visual domain adaptation techniques for sim-to-real transfer.
10.1109/icra40945.2020.9197326
[ "https://arxiv.org/pdf/1910.09470v1.pdf" ]
204,800,473
1910.09470
377079e2a90e19b1e667853fc9aa4caf3d899489
Self-Supervised Sim-to-Real Adaptation for Visual Robotic Manipulation Rae Jeong Yusuf Aytar David Khosid Yuxiang Zhou Jackie Kay Thomas Lampe Konstantinos Bousmalis Francesco Nori Self-Supervised Sim-to-Real Adaptation for Visual Robotic Manipulation Collecting and automatically obtaining reward signals from real robotic visual data for the purposes of training reinforcement learning algorithms can be quite challenging and time-consuming. Methods for utilizing unlabeled data can have a huge potential to further accelerate robotic learning. We consider here the problem of performing manipulation tasks from pixels. In such tasks, choosing an appropriate state representation is crucial for planning and control. This is even more relevant with real images where noise, occlusions and resolution affect the accuracy and reliability of state estimation. In this work, we learn a latent state representation implicitly with deep reinforcement learning in simulation, and then adapt it to the real domain using unlabeled real robot data. We propose to do so by optimizing sequence-based selfsupervised objectives. These exploit the temporal nature of robot experience, and can be common in both the simulated and real domains, without assuming any alignment of underlying states in simulated and unlabeled real images. We propose Contrastive Forward Dynamics loss, which combines dynamics model learning with time-contrastive techniques. The learned state representation that results from our methods can be used to robustly solve a manipulation task in simulation and to successfully transfer the learned skill on a real system. We demonstrate the effectiveness of our approaches by training a vision-based reinforcement learning agent for cube stacking. Agents trained with our method, using only 5 hours of unlabeled real robot data for adaptation, shows a clear improvement over domain randomization, and standard visual domain adaptation techniques for sim-to-real transfer. I. INTRODUCTION Learning-based approaches, and specifically the ones that utilize the recent advances of deep learning, have shown strong generalization capacity and the ability to learn relevant features for manipulation of real objects [1], [2], [3], [4], [5]. These features can be used to avoid explicit object pose estimation [6] which is often inaccurate, even for known objects, in the presence of occlusions and noise. Furthermore, parameterization of the environment state with positions in R 3 and rotations in SO (3) is not necessarily the best state representation for every task. Deep learning can provide task-relevant features and state representation directly from data. However, deep learning, and especially deep reinforcement learning (RL), requires a significant amount of data, which is a critical challenge for robotics [5]. For this reason, sim-to-real transfer is an important area of research for vision-based robotic control as simulations offer an abundance of labeled data. Authors are with DeepMind London, UK. {raejeong, yusufaytar, dkhosid, yuxiangzhou, kayj, thomaslampe, konstantinos, fnori}@google.com. Qualitative results can be found in our supplementary video: https://youtu. be/pmLASU_MW_o Fig. 1: First step of our method trains a state-based and a vision-based agents in simulation. Using unlabeled real data we then perform domain adaptation with a sequencebased self-supervised objective, that are computable on both simulated and real data. The simulated and real robot setups we used are displayed at the bottom. 
Pixel-based agents trained in simulation do not generalize naively to the real world. However, recent sim-toreal transfer techniques have shown significant promise in reducing real-world sample complexity. Such techniques either randomize the simulated environment in ways that help with generalization [7], [8], [9], use domain adaptation [10], or both [11]. Our work falls in the scope of unsupervised domain adaptation techniques, i.e. methods that are able to utilize both labeled simulated and unlabeled real data. These have been successfully used both in computer vision [12] and in vision-based robot learning for manipulation [11] and locomotion [10]. The contribution of our work is two-fold: (a) we investigate the use of sequence-based self-supervision as a way to improve sim-to-real transfer; and (b) we develop contrastive forward dynamics (CFD), a self-supervised objective to achieve that. We propose a two-step procedure (see Fig. 1) for such sequence-based self-supervised adaptation. In the first step, we use the simulated environment to learn a policy that solves the task in simulation using synthetic images and proprioception as observations. In the second step, we use synthetic and unlabeled real image sequences to adapt the state representation to the real domain. Besides the task objective on the simulated images, this step also uses sequence-based self-supervision as a way to provide a common objective for representation learning that applies in both simulation and reality without the need for paired or aligned data. Our CFD objective additionally combines dynamics model learning with time-contrastive techniques to better utilize the structure of sequences in real robot data. We demonstrate the effectiveness of our approach by training a vision-based cube stacking RL agent. Our agent interacts with the real world with 20Hz closed-loop Cartesian velocity control from vision which makes our method applicable to a large set of manipulation tasks. The cube stacking task also emphasizes the generality of our approach for long horizon manipulation tasks. Most importantly, our method is able to make better use of the available unlabeled real world data resulting in higher stacking performance, compared to domain randomization [13] and domain-adversarial neural networks [14]. II. RELATED WORK a) Manipulation: challenges and approaches: It is well acknowledged that both planning and state estimation become challenging when performed in cluttered environments [15]. During execution, continuously tracking the pose of manipulated objects becomes increasingly more difficult in presence of occlusions, often caused by the gripper itself. Surveys reveal that pose estimation is still an essential component in many approaches to grasping [1, fig. 3-5-7]; proposed approaches rely on some sort of supervision, either in the form of model-based grasp quality measure [16], [17], [18], or in the form of heuristics for grasp stability [1, fig. 18-19], or finally in the form of labelled data for learning [1, fig. 9]. b) Sim-to-Real Transfer for Robotic Manipulation: Sim-to-real transfer learning aims to bridge the gaps between simulation and reality, which consist of differences in the dynamics and observation models such as image rendering. Sim-to-real transfer techniques can be grouped by the amount and kind of real world data they use. Techniques like domain randomization [9], [13] focus on zero-shot transfer. 
Others are able to utilize real data in order to adapt to the real world via system identification or domain adaptation. Similar to system identification in classical control [19], recent techniques like SimOpt [20] utilize real data to learn policies that are robust under different transition dynamics. Unsupervised domain adaptation [12] has been successfully used for sim-to-real transfer in vision-based robotic grasping [11]. Semi-supervised domain adaptation additionally utilizes any labeled data that might be available, as was done by [11]. In many ways, zero-shot transfer, system identification, domain adaptation-with or without labeled data in the real world-are complementary groups of techniques. c) Cube Stacking Task: Recent work on efficient multitask deep reinforcement learning [21] has shown the difficulty of cube stacking task even in simulated environments as the task requires several core abilities such as grasping, lifting and precise placing. Sim-to-real method has also been applied for cube stacking task from vision where combination of domain randomization and imitation learning was used to perform zero-shot sim-to-real transfer of the cube stacking task [22]. However, the resulting policy only obtained a success rate of 35% over 20 trials in a limited number of configurations reconfirming the difficulty of the cube stacking task. d) Unsupervised Domain Adaptation: Unsupervised domain adaptation techniques are either feature-based or pixel-based. Pixel-based adaptation is possible by changing the observations to match those from the real environment with image-based GANs [23]. Feature-based adaptation is done either by learning a transformation over fixed simulated and real feature representations, as done by [24] or by learning a domain-invariant feature extractor, also represented by a neural network [25], [26]. The latter has been shown to be more effective [26], and we employ a feature-level domain adversarial method [25] as a baseline. e) Sequence-based Self Supervision: Sequence-based self-supervision is commonly applied for video representation learning, particularly making use of local [27] and global [28] temporal structures. Time-contrastive networks (TCN) [29] utilize two temporally synchronous camera views to learn view-independent high-level representations. By predicting temporal distance between frames, Aytar et al. [30] learn a representation that can handle small domain gaps (i.e. color changes and video artifacts) for the purpose of imitating YouTube gameplays in an Atari environment. To the best of our knowledge, sequence-based self-supervision for handling large visual domain gaps in sim-to-real transfer for robotic learning have not been considered before. III. OUR METHOD In this section, we provide the detailed description of our method for enabling sim-to-real transfer of visual robotic manipulation. We propose a two stage training process. In the first stage, state-based and vision-based agents are trained simultaneously in simulation with domain randomization. We then collect unlabeled robot data by executing the vision-based agent on the real robot. In the second stage, we perform self-supervised domain adaptation by tuning the visual perception module with the help of sequence-based self-supervised objectives optimized over simulation and real world data jointly. 
Our method optimizes three main loss functions: (a) L RL is the reinforcement learning (RL) objective optimized by the state-based and vision-based agents in simulation, (b) L BC is the behavioral cloning loss utilized by the visionbased agent to speed up learning by imitating the state-based agent, and (c) L SS is the sequence-based self-supervised objective optimized on both simulation and real robot data. The purpose of L SS is to align the agent's perception of real and simulated visuals by solving a common objective using a shared encoder. Our system is composed of four main neural networks: (a) an image representation encoder with parameters φ = {φ i } L i composed of L layers which embeds any visual observation o t to a latent space as z t = φ(o t ), (b) a vision-based deep policy network with parameters θ which combines the output of the visual encoder with the proprioceptive observations and outputs an action, (c) a state-based policy network with parameters τ which takes the simulation state and outputs an action, and (d) a self-supervised objective network with parameters ψ which takes the encoded visual observation z t (and action a t if necessary) as input and directly computes the loss L SS . Fig. 1 presents a visual description of these components. In the remainder of this section, we discuss the two stages of our method and present an objective for sequence-based self-supervision. A. First stage: Learning in simulation In this stage we train a state-based agent and a visionbased agent with a shared experience replay. Our goal is to speed up the learning process by leveraging the privileged information in simulation through the state-based agent, and distilling the learned skills into the vision-based agent using a shared replay buffer. Both of the agents are trained with an off-policy reinforcement learning objective, L RL . We use a state of the art continuous control RL algorithm, Maximum a Posteriori Policy Optimization (MPO) [31], which uses an expectation-maximization-style policy optimization with an approximate off-policy policy evaluation algorithm. As shown in Fig. 1, the state-based agent has access to the simulator state, which allows it to learn much faster than the vision-based agent that uses raw pixel observations. In essence, the state-based agent is an asymmetric behavior policy, which provides diverse and relevant data for reinforcement learning of the vision-based agent. This idea leverages the flexibility of off-policy RL, which has been shown to improve sample complexity in a single-domain setting [32]. Additionally, we also utilize the behavioral cloning (BC) objective [33] for the vision-based agent to imitate the state-based agent. L BC provides reliable training and further improves sample efficiency in the learning process, as we show in Sect. V. We additionally employ DDPGfD [34] which injects human demonstrations to the replay buffer and asymmetric actor-critic for our stacking experiments. Our final objective in the first stage can be written as follows: B. Second stage: Self-supervised sim-to-real adaptation min φ,θ,τ L RL + L BC(1) Although our vision-based agent can perform reasonably well when transferred to the real robot, there is still significant room for improvement, mostly due to the large domain gap between simulation and the real robot. Our main objective in this stage is to mitigate the negative effects of the domain gap by utilizing the unlabeled robot data collected by our simulation-trained agent for domain adaptation. 
In addition to well-explored domain adversarial training [25], which we present as a strong baseline, we investigate the use of sequence-based self-supervised objectives for sim-toreal domain adaptation. Modality tuning [35], freezing the higher-level weights of a trained network and adapting only the initial layers for a new modality (or domain), is a method shown to successfully align multiple modalities (i.e. natural images, line drawings and text descriptions), though it requires class labels in all modalities. In our context, it would require rewards for the real-world data which we do not have. Instead, we utilize a self-supervised objective while performing modality tuning (i.e. simulation-to-reality adaptation) which can be readily applied both in simulation and reality. However, there is no guarantee that this alignment learned using a L SS objective would indeed successfully transfer the vision-based policy from simulation to the real world. In fact, different L SS objectives would result in different transfer performances. Finding a suitable L SS objective for better transfer of the learned policy is of major importance as well. In the context of our neural network architecture, while applying the modality tuning, we freeze the vision-based agent's policy network parameters θ and the encoder parameters φ except for the first layer φ 1 . This allows the system to adapt its visual perception to the real world without making major changes in the policy logic, which we expect to be encoded in the higher layers of the neural network. We also continue optimizing the L RL and L BC objectives along with L SS to ensure that as φ 1 is adapting itself to solve the L SS , it also maintains good performance for the manipulation task. In other words, φ 1 is forced to adapt itself without compromising the performance of the vision-based agent. The final objective in the second stage is: min φ1,ψ L RL + L BC + L SS(2) Due to its wide adoption in the robotics settings, we employ the Time-Contrastive Networks (TCN) [29] objective for L SS in our self-supervised sim-to-real adaptation method, though any other sequence-based self-supervised objective can also be used here. In the next subsection we introduce an alternative loss for L SS which makes use of domainspecific properties of robotics, therefore potentially result in better transferable alignment. C. Contrastive Forward Dynamics Time-Contrastive Networks (TCN) [29], which we use as a baseline, and other sequence-based self-supervision methods [30], [36], [37], mainly exploit the temporal structure of the observations. However, with robot data we also have physical dynamics of the real world probed by actions and perceived through observations. In this section we describe the contrastive forward dynamics (CFD) objective, which is able to utilize both observations and actions by learning a forward dynamics model in a latent space. Essentially we are learning the latent transition dynamics of the environment which has strong connections to the model-based optimal control approaches [38]. Therefore we can expect that the alignment achieved through our CFD objective potentially better transfers the learned policy from simulation to real world. We formally define the CFD objective below. Assume we are given a dataset of sequences where each sequence s = {(o t , a t )} T t is of length T . o t denotes observations and a t denotes the actions at time t. 
Any observation o t is embedded into a latent space as z t = φ(o t ) through the encoder network φ. Given a transition (z t , a t , z t+1 ) in the latent space, the forward dynamics model predicts the next latent state asẑ t+1 = f (z t , a t ) where f is the prediction network. Instead of learning f by minimizing the prediction error ||ẑ t+1 − z t+1 ||, which has a trivial solution achieved by setting the latents to zero, we minimize a contrastive prediction loss. A contrastive loss [39], [40] takes pairs of examples as input and predicts whether the two elements in the pair are from the same class or not. It can also be implemented as a multi-class classification objective comparing one positive pair and multiple negative pairs [41], creating an embedding space by pushing representations from the same "class" together and ones from different "classes" apart. In our context, (ẑ i , z i ) is our positive pair and any other nonmatching pairs (ẑ i , z k ) where k = i are the negative pairs. With CFD, we solve such a multi-class classification problem by minimizing the cross-entropy loss for any given latent observation z i and its predictionẑ i as follows: Fig. 3: Rollouts of the multi-step future predictions in the learned latent space. For instance,ẑ t+2 andẑ t+2 are one and two step predictions of z t+2 , respectively. In our experiments, we use 5 step prediction for a trajectory length of 32. min φ,f − log e −||ẑi−zi|| e −||ẑi−zi|| + k =i e −||ẑi−z k ||(3) In practice, while forming the negative pairs we pick all the other latent observations in the same mini-batch, which also contains observations from the same sequence. To further enforce the prediction quality, we perform multistep future predictions by continuously applying the forward dynamics model. These longer horizon predictions optimize the same objective given in Eq. 3 whereẑ i is replaced with any multi-step prediction of z i . Fig. 3 illustrates how multistep predictions are obtained using a single forward dynamics model. IV. SIMULATED AND REAL ENVIRONMENTS AND TASKS The primary manipulation task we have used in this work is vision-based stacking of one cube on top of another. However, as this is a particularly hard task to solve [21] from pixels from scratch with off-the-shelf RL algorithms, we studied the ablation effects of different components of our proposed RL framework on the easier problem of visionbased lifting instead. As lifting is an easier task, and a required skill towards achieving stacking, we focused on the latter for the rest of our experimental analysis in simulation and for all our real world evaluations. Fig. 1 shows our real robot setup, which is composed of a 7-DoF Sawyer robotic arm, a basket and two cubes. The agent receives the front left and right 64 × 64 RGB camera images as observations, shown in Fig. 2. The two cameras are positioned in a way that can help disambiguate 3D positions of the arm and the objects. In addition to these images, our observations also consist of the pose of the cameras, endeffector position and angle, and the gripper finger angle. The action space of the agent is 4D Cartesian velocity control of the end effector, with an additional action for actuating the gripper. The real environment is modelled in simulation using the MuJoCo [42] simulator. Fig 1 also shows the simulated version of our environment. Unless mentioned otherwise, all of our policies are trained in simulation with domain randomization and a shaped reward functions. 
The shaped reward function for lifting is a combination of reaching, touching and lifting rewards. Let d gripper be the Euclidean distance of a target object from the pinch . Otherwise, r stack = 2+r reach (x b )+1(ho(xt)>ho(x b )∨||xt−x b || 2 <0.07) 5 , to account for bringing the cubes closer to each other. In practice we set r reach (x b ) = 1 if it's greater than 0.75. In the real world, the cubes are fitted with AR tags that are only used for the purposes of fair and consistent evaluation of our resulting policies: the 3D poses of the cubes are never available to an RL agent during training or testing. At the beginning of every episode, the cubes are placed in a random position by a hand-crafted controller. All real world evaluations referred to in the rest of the section are on the stacking task and consist of 50 episodes. A real world episode is considered a success if the green cube is on top of the yellow cube at any point throughout the episode. Episodes are of length 200 with 20Hz control rate for both simulated and real environments. V. EXPERIMENTAL RESULTS AND DISCUSSION In this section, we discuss the details of our experiments, and attempt to answer the following questions: (a) Can sequence-based self-supervision be used as a common auxiliary objective for simulated and real data without degrading task performance in simulation? (b) Does doing so improve final task performance in the real world? (c) How does using sequence-based self supervision for visual domain alignment between simulation and reality compare with domainadversarial adaptation? (d) Is the use of actions in such a selfsupervised loss important for bridging the sim-to-real domain gap? (e) What is the performance difference of modality tuning in our two-stage approach versus a one-stage end-toend approach? and (f) What are the effects of the different components of our RL framework in solving manipulation tasks from scratch, i.e. without the shared replay buffer or behavior cloning, in simulation? A. Self-Supervised Sim-to-Real Adaptation We evaluated the following methods on our vision-based cube stacking task: domain randomization [44], unsupervised domain adaptation with a domain adversarial (DANN) [14] loss, and self-supervised domain adaptation (SSDA) with two sequence-based self-supervised objectives: the timecontrastive networks (TCN) [29] loss, and the contrastive forward dynamics (CFD) loss we proposed in Sect. III-C. We ablate two different training methods for domain adaptation, end-to-end and two-stage. The end-to-end training method simply optimize Eq. 2 from Sect. III-B with respect to all parameters, without the two-stage procedure described in Sect. III-B. This means that all of the losses are jointly optimized without freezing any part of the neural network. Two-stage training procedure is described in Sect. III and employs modality tuning [35]. Table I shows the quantitative results from evaluating task success on the real robot. These experiments show that DANN improves on top of the domain randomization baseline by a small margin. However, end-to-end adaptation with the TCN loss results in degradation of performance. This is likely due to insufficient sharing of the encoder between the self-supervised objective using simulated data and real data. On the other hand, the two-stage self-supervised domain adaptation with TCN significantly improves over the end-to-end variant and domain randomization baselines. 
This reconfirms that modality tuning used in the two-stage training method results in significantly better sharing of the encoder. Finally, the two-stage self-supervised adaptation with our CFD objective, which utilizes both the temporal structure of the observations and the actions, performs significantly better when compared to all other methods, yielding a 62 % task success. We also evaluated the importance of jointly optimizing the RL and BC objectives in Eq. 2 for the two-stage selfsupervised domain adaptation. As one can see in Table II, only optimizing L SS without the task objective significantly reduces the performance. Fig. 4 further shows how the task performance in simulation degrades when optimizing only the self-supervised objective. In essence, by only optimizing the self-supervised loss, the network catastrophically forgets [45] how to solve the manipulation task. B. Ablations for different components of our RL framework In order to assess the necessity and efficacy of the different components of our framework, described in Sect. III-A, we provide ablation experimental results. Specifically we examined the effects of the state-based agent that share a replay buffer with the vision-based agent, and the addition of an auxiliary behavior cloning objective for the vision-based agent to imitate the state-based agent. Fig. 5 shows these effects on the cube lifting task. A vision-based agent trained with MPO [31], the state-of-the-art continuous control RL Fig. 6: Simulation performance on our vision-based stacking task of our RL framework with and without behavior cloning (BC). Using BC results in faster training that maintains stability with the addition of auxiliary adaptation objectives. method at the core of our framework, struggles with solving this task, contrary to an MPO agent with access to the full state information. By sharing the replay buffer between the state-based agent and the vision-based agent, one can see that the vision-based agent is able to solve lifting in a reasonable amount of time. The addition of the behavior cloning (BC) objective further improves the speed and stability of training. Fig. 6 shows the even more profound effect our BC objective has on learning our vision-based cube stacking task. Furthermore, one can also observe the stability of the method persists even when jointly training, end-to-end, with the TCN loss, or the DANN loss with real world data. VI. CONCLUSION In this work, we have presented our self-supervised domain adaptation method, which uses unlabeled real robot data to improve sim-to-real transfer learning. Our method is able to perform domain adaptation for sim-to-real transfer learning of cube stacking from visual observations. In addition to our domain adaptation method, we developed contrastive forward dynamics (CFD), which combines dynamics model learning with time-contrastive techniques to better utilize the structure available in unlabeled robot data. We demonstrate that using our CFD objective for adaptation yields a clear improvement over domain randomization, other self-supervised adaptation techniques and domain adversarial methods. Through our experiments, we discovered that optimizing only the first visual layers of the policy network in combination with jointly optimizing the reinforcement learning, behavior cloning and self-supervised loss was necessary for a successful application of self-supervised learning for simto-real transfer for robotic manipulation. 
Finally, the use of sequence-based self-supervised loss by leveraging the dynamical structure in the robotic system ultimately resulted in the best domain adaptation for our manipulation task. Fig. 2 : 2Left and right pixel observations in both real and domain randomized simulated environments. II: Cube stacking performance on the real system for two-stage self-supervised domain adaptation (SSDA) with CFD optimized with and without the task objective.site of the end effector, and h t , h o be the target height and object height from the ground in meters. Our reach reward is defined as r reach = 1(d gripper < 0.01), where 1 is the indicator function. In practice we use reward shaping with the Gaussian tolerance reward function as defined in the DeepMind Control Suite[43], with bounds [0, 0.01] and a margin of 0.25. Our touch reward r touch = 1 contact is binary and provided by our simulator upon contact with the object. Our lift reward is r lif t = 1(|h t − h o | < 0.1 ∧ h o < 0.01) and the final shaped version we use during training: r lif t shaped = r reach + 1+r touch 2 + r lif t .As before, in practice the distance |h t − h o | is passed through the same tolerance function as above, with bounds [0, 0.1] and a margin of 15. For stacking we now have a top and a bottom target objects with positions x t , x b . If the cubes are in contact and on top of each other, the reward is 1. Otherwise, we have additional shaping to aid with training. More specifically, if h o (x t ) ≤ 0.025m we revert to a normalized lift reward for the top object r stack = r lif t shaped (xt) 5 Fig. 4 : 4Cube stacking performance in simulation for twostage self-supervised domain adaptation (SSDA) with CFD jointly optimized with and without the task objective. Fig. 5 : 5Ablation of techniques used in conjunction with RL for cube lifting task in simulation. The plot shows the average return for the lifting task with and without shared replay buffer and behavior cloning (BC). RL from state and RL from vision are trained only with the RL objective. TABLE I : ISim-to-real transfer performance for vision-based cube stacking agent with unsupervised domain adaptation using DANN, self-supervised domain adaptation (SSDA) using TCN and CFD for the end-to-end and two-stage methods. Method Task Success SSDA without Task Objective 12.0 % SSDA with Task Objective (Ours) 62.0 % TABLE Data-driven grasp synthesis-a survey. J Bohg, A Morales, T Asfour, D Kragic, IEEE Transactions on Robotics. J. Bohg, A. Morales, T. Asfour, and D. Kragic, "Data-driven grasp synthesis-a survey," IEEE Transactions on Robotics, 2014. Learning a visuomotor controller for real world robotic grasping using easily simulated depth images. U Viereck, A T Pas, K Saenko, R Platt, U. Viereck, A. t. Pas, K. Saenko, and R. Platt, "Learning a visuomotor controller for real world robotic grasping using easily simulated depth images," CoRL, 2017. Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics. J Mahler, J Liang, S Niyaz, M Laskey, R Doan, X Liu, J A Ojea, K Goldberg, RSS. J. Mahler, J. Liang, S. Niyaz, M. Laskey, R. Doan, X. Liu, J. A. Ojea, and K. Goldberg, "Dex-Net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics," in RSS, 2017. Learning handeye coordination for robotic grasping with deep learning and largescale data collection. S Levine, P Pastor, A Krizhevsky, D Quillen, abs/1603.02199CoRR. S. Levine, P. Pastor, A. Krizhevsky, and D. 
Quillen, "Learning hand- eye coordination for robotic grasping with deep learning and large- scale data collection," CoRR, vol. abs/1603.02199, 2016. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. D Kalashnikov, A Irpan, P Pastor, J Ibarz, A Herzog, E Jang, D Quillen, E Holly, M Kalakrishnan, V Vanhoucke, S Levine, CoRR. D. Kalashnikov, A. Irpan, P. Pastor, J. Ibarz, A. Herzog, E. Jang, D. Quillen, E. Holly, M. Kalakrishnan, V. Vanhoucke, and S. Levine, "Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation," CoRR, vol. abs/1806.10293, 2018. B Siciliano, O Khatib, Springer Handbook of Robotics. Secaucus. NJ, USASpringer-Verlag New York, IncB. Siciliano and O. Khatib, Springer Handbook of Robotics. Secau- cus, NJ, USA: Springer-Verlag New York, Inc., 2007. Sim-toreal transfer of robotic control with dynamics randomization. X B Peng, M Andrychowicz, W Zaremba, P Abbeel, abs/1710.06537CoRR. X. B. Peng, M. Andrychowicz, W. Zaremba, and P. Abbeel, "Sim-to- real transfer of robotic control with dynamics randomization," CoRR, vol. abs/1710.06537, 2017. Sim-to-real via simto-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks. S James, P Wohlhart, M Kalakrishnan, D Kalashnikov, A Irpan, J Ibarz, S Levine, R Hadsell, K Bousmalis, abs/1812.07252CoRR. S. James, P. Wohlhart, M. Kalakrishnan, D. Kalashnikov, A. Irpan, J. Ibarz, S. Levine, R. Hadsell, and K. Bousmalis, "Sim-to-real via sim- to-sim: Data-efficient robotic grasping via randomized-to-canonical adaptation networks," CoRR, vol. abs/1812.07252, 2018. Learning dexterous in-hand manipulation. M Openai, B Andrychowicz, M Baker, R Chociej, B Józefowicz, J W Mcgrew, J Pachocki, A Pachocki, M Petron, G Plappert, A Powell, J Ray, S Schneider, J Sidor, P Tobin, L Welinder, W Weng, Zaremba, abs/1808.00177CoRR. OpenAI, M. Andrychowicz, B. Baker, M. Chociej, R. Józefowicz, B. McGrew, J. W. Pachocki, J. Pachocki, A. Petron, M. Plappert, G. Powell, A. Ray, J. Schneider, S. Sidor, J. Tobin, P. Welinder, L. Weng, and W. Zaremba, "Learning dexterous in-hand manipula- tion," CoRR, vol. abs/1808.00177, 2018. Genesis-rt: Generating synthetic images for training secondary real-world tasks. G J Stein, N Roy, 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEEG. J. Stein and N. Roy, "Genesis-rt: Generating synthetic images for training secondary real-world tasks," in 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 7151-7158. Using simulation and domain adaptation to improve efficiency of deep robotic grasping. K Bousmalis, A Irpan, P Wohlhart, Y Bai, M Kelcey, M Kalakrishnan, L Downs, J Ibarz, P Pastor, K Konolige, S Levine, V Vanhoucke, abs/1709.07857CoRR. K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakr- ishnan, L. Downs, J. Ibarz, P. Pastor, K. Konolige, S. Levine, and V. Vanhoucke, "Using simulation and domain adaptation to improve efficiency of deep robotic grasping," CoRR, vol. abs/1709.07857, 2017. Domain adaptation for visual applications: A comprehensive survey. G Csurka, arxiv:1702.05374G. Csurka, "Domain adaptation for visual applications: A comprehen- sive survey," arxiv:1702.05374, 2017. Domain randomization for transferring deep neural networks from simulation to the real world. J Tobin, R Fong, A Ray, J Schneider, W Zaremba, P Abbeel, abs/1703.06907CoRR. J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. 
Abbeel, "Domain randomization for transferring deep neural networks from simulation to the real world," CoRR, vol. abs/1703.06907, 2017. Y Ganin, E Ustinova, H Ajakan, P Germain, H Larochelle, F Laviolette, M Marchand, V Lempitsky, Domain-Adversarial Training of Neural Networks. arXiv e-printsY. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Lavio- lette, M. Marchand, and V. Lempitsky, "Domain-Adversarial Training of Neural Networks," arXiv e-prints, May 2015. Trends and challenges in robot manipulation. A Billard, D Kragic, Science. 36464468414A. Billard and D. Kragic, "Trends and challenges in robot manipula- tion," Science, vol. 364, no. 6446, p. eaat8414, 2019. Constructing Force-Closure Grasps. V.-D Nguyen, IJRRV.-D. Nguyen, "Constructing Force-Closure Grasps," IJRR, 1988. From caging to grasping. A Rodriguez, M T Mason, S Ferry, IJRR. A. Rodriguez, M. T. Mason, and S. Ferry, "From caging to grasping," IJRR, 2012. A Survey of Robotic Caging and its Applications. S Makita, W Wan, Advanced Robotics. 00S. Makita and W. Wan, "A Survey of Robotic Caging and its Applications," Advanced Robotics, vol. 0, no. 0, pp. 1-15, 2017. Physically consistent state estimation and system identification for contacts. S Kolev, E Todorov, 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids). S. Kolev and E. Todorov, "Physically consistent state estimation and system identification for contacts," in 2015 IEEE-RAS 15th Interna- tional Conference on Humanoid Robots (Humanoids), Nov 2015, pp. 1036-1043. Closing the sim-to-real loop: Adapting simulation randomization with real world experience. Y Chebotar, A Handa, V Makoviychuk, M Macklin, J Issac, N D Ratliff, D Fox, abs/1810.05687CoRR. Y. Chebotar, A. Handa, V. Makoviychuk, M. Macklin, J. Issac, N. D. Ratliff, and D. Fox, "Closing the sim-to-real loop: Adapting simulation randomization with real world experience," CoRR, vol. abs/1810.05687, 2018. Learning by playing solving sparse reward tasks from scratch. M Riedmiller, R Hafner, T Lampe, M Neunert, J Degrave, T Wiele, V Mnih, N Heess, J T Springenberg, International Conference on Machine Learning. M. Riedmiller, R. Hafner, T. Lampe, M. Neunert, J. Degrave, T. Wiele, V. Mnih, N. Heess, and J. T. Springenberg, "Learning by playing solving sparse reward tasks from scratch," in International Conference on Machine Learning, 2018, pp. 4341-4350. Reinforcement and imitation learning for diverse visuomotor skills. Y Zhu, Z Wang, J Merel, A Rusu, T Erez, S Cabi, S Tunyasuvunakool, J Kramr, R Hadsell, N Freitas, N Heess, Proceedings of Robotics: Science and Systems. Robotics: Science and SystemsPittsburghPennsylvaniaY. Zhu, Z. Wang, J. Merel, A. Rusu, T. Erez, S. Cabi, S. Tun- yasuvunakool, J. Kramr, R. Hadsell, N. de Freitas, and N. Heess, "Reinforcement and imitation learning for diverse visuomotor skills," in Proceedings of Robotics: Science and Systems, Pittsburgh, Pennsyl- vania, June 2018. Unsupervised pixel-level domain adaptation with generative adversarial networks. K Bousmalis, N Silberman, D Dohan, D Erhan, D Krishnan, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionK. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan, "Unsupervised pixel-level domain adaptation with generative adver- sarial networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 3722-3731. 
Beyond the shortest path: Unsupervised Domain Adaptation by Sampling Subspaces Along the Spline Flow. R Caseiro, J F Henriques, P Martins, J Batista, CVPR. R. Caseiro, J. F. Henriques, P. Martins, and J. Batista, "Beyond the shortest path: Unsupervised Domain Adaptation by Sampling Subspaces Along the Spline Flow," in CVPR, 2015. Domain-adversarial training of neural networks. Y Ganin, E Ustinova, H Ajakan, P Germain, H Larochelle, F Laviolette, M Marchand, V Lempitsky, The Journal of Machine Learning Research. 171Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Lavio- lette, M. Marchand, and V. Lempitsky, "Domain-adversarial training of neural networks," The Journal of Machine Learning Research, vol. 17, no. 1, pp. 2096-2030, 2016. Domain separation networks. K Bousmalis, G Trigeorgis, N Silberman, D Krishnan, D Erhan, NIPS. K. Bousmalis, G. Trigeorgis, N. Silberman, D. Krishnan, and D. Erhan, "Domain separation networks," in NIPS, 2016. Self-supervised video representation learning with odd-one-out networks. B Fernando, H Bilen, E Gavves, S Gould, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionB. Fernando, H. Bilen, E. Gavves, and S. Gould, "Self-supervised video representation learning with odd-one-out networks," in Pro- ceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 3636-3645. Learning and using the arrow of time. D Wei, J J Lim, A Zisserman, W T Freeman, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionD. Wei, J. J. Lim, A. Zisserman, and W. T. Freeman, "Learning and using the arrow of time," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8052-8060. Time-contrastive networks: Self-supervised learning from multi-view observation. P Sermanet, C Lynch, J Hsu, S Levine, abs/1704.06888CoRR. P. Sermanet, C. Lynch, J. Hsu, and S. Levine, "Time-contrastive net- works: Self-supervised learning from multi-view observation," CoRR, vol. abs/1704.06888, 2017. Playing hard exploration games by watching youtube. Y Aytar, T Pfaff, D Budden, T Paine, Z Wang, N De Freitas, Advances in Neural Information Processing Systems. Y. Aytar, T. Pfaff, D. Budden, T. Paine, Z. Wang, and N. de Freitas, "Playing hard exploration games by watching youtube," in Advances in Neural Information Processing Systems, 2018, pp. 2930-2941. Maximum a posteriori policy optimisation. A Abdolmaleki, J T Springenberg, Y Tassa, R Munos, N Heess, M A Riedmiller, abs/1806.06920CoRR. A. Abdolmaleki, J. T. Springenberg, Y. Tassa, R. Munos, N. Heess, and M. A. Riedmiller, "Maximum a posteriori policy optimisation," CoRR, vol. abs/1806.06920, 2018. Simultaneously learning vision and feature-based control policies for real-world ball-in-a-cup. D Schwab, J T Springenberg, M F Martins, T Lampe, M Neunert, A Abdolmaleki, T Hertweck, R Hafner, F Nori, M A Riedmiller, abs/1902.04706CoRR. D. Schwab, J. T. Springenberg, M. F. Martins, T. Lampe, M. Ne- unert, A. Abdolmaleki, T. Hertweck, R. Hafner, F. Nori, and M. A. Riedmiller, "Simultaneously learning vision and feature-based control policies for real-world ball-in-a-cup," CoRR, vol. abs/1902.04706, 2019. Overcoming exploration in reinforcement learning with demonstrations. A Nair, B Mcgrew, M Andrychowicz, W Zaremba, P Abbeel, abs/1709.10089CoRR. A. Nair, B. McGrew, M. Andrychowicz, W. Zaremba, and P. 
Abbeel, "Overcoming exploration in reinforcement learning with demonstra- tions," CoRR, vol. abs/1709.10089, 2017. Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards. M Vecerik, T Hester, J Scholz, F Wang, O Pietquin, B Piot, N Heess, T Rothörl, T Lampe, M A Riedmiller, abs/1707.08817CoRR. M. Vecerik, T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. Heess, T. Rothörl, T. Lampe, and M. A. Riedmiller, "Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards," CoRR, vol. abs/1707.08817, 2017. Cross-modal scene networks. Y Aytar, L Castrejon, C Vondrick, H Pirsiavash, A Torralba, IEEE transactions. Y. Aytar, L. Castrejon, C. Vondrick, H. Pirsiavash, and A. Torralba, "Cross-modal scene networks," IEEE transactions on pattern analysis and machine intelligence, 2017. Shuffle and learn: unsupervised learning using temporal order verification. I Misra, C L Zitnick, M Hebert, European Conference on Computer Vision. SpringerI. Misra, C. L. Zitnick, and M. Hebert, "Shuffle and learn: un- supervised learning using temporal order verification," in European Conference on Computer Vision. Springer, 2016, pp. 527-544. Representation learning with contrastive predictive coding. A V Oord, Y Li, O Vinyals, arXiv:1807.03748arXiv preprintA. v. d. Oord, Y. Li, and O. Vinyals, "Representation learning with contrastive predictive coding," arXiv preprint arXiv:1807.03748, 2018. L Grne, J Pannek, Nonlinear Model Predictive Control: Theory and Algorithms. Springer Publishing CompanyIncorporatedL. Grne and J. Pannek, Nonlinear Model Predictive Control: Theory and Algorithms. Springer Publishing Company, Incorporated, 2013. Learning a similarity metric discriminatively, with application to face verification. S Chopra, R Hadsell, Y Lecun, CVPR. S. Chopra, R. Hadsell, Y. LeCun, et al., "Learning a similarity metric discriminatively, with application to face verification," in CVPR (1), 2005, pp. 539-546. Dimensionality reduction by learning an invariant mapping. R Hadsell, S Chopra, Y Lecun, 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06). IEEE2R. Hadsell, S. Chopra, and Y. LeCun, "Dimensionality reduction by learning an invariant mapping," in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 2. IEEE, 2006, pp. 1735-1742. Improved deep metric learning with multi-class n-pair loss objective. K Sohn, Advances in Neural Information Processing Systems. K. Sohn, "Improved deep metric learning with multi-class n-pair loss objective," in Advances in Neural Information Processing Systems, 2016, pp. 1857-1865. Mujoco: A physics engine for model-based control. E Todorov, T Erez, Y Tassa, Intelligent Robots and Systems (IROS). E. Todorov, T. Erez, and Y. Tassa, "Mujoco: A physics engine for model-based control," in Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEEIEEE/RSJ International Conference on. IEEE, 2012, pp. 5026-5033. Deepmind control suite. Y Tassa, Y Doron, A Muldal, T Erez, Y Li, D De Las, D Casas, A Budden, J Abdolmaleki, A Merel, T P Lefrancq, M A Lillicrap, Riedmiller, abs/1801.00690ArXiv. Y. Tassa, Y. Doron, A. Muldal, T. Erez, Y. Li, D. de Las Casas, D. Bud- den, A. Abdolmaleki, J. Merel, A. Lefrancq, T. P. Lillicrap, and M. A. Riedmiller, "Deepmind control suite," ArXiv, vol. abs/1801.00690, 2018. CAD2RL: Real single-image flight without a single real image. F Sadeghi, S Levine, RSS. F. 
Sadeghi and S. Levine, "CAD2RL: Real single-image flight without a single real image." in RSS, 2017. Overcoming catastrophic forgetting in neural networks. J Kirkpatrick, R Pascanu, N C Rabinowitz, J Veness, G Desjardins, A A Rusu, K Milan, J Quan, T Ramalho, A Grabska-Barwinska, D Hassabis, C Clopath, D Kumaran, R Hadsell, Proceedings of the National Academy of Sciences of the United States of America. the National Academy of Sciences of the United States of America114J. Kirkpatrick, R. Pascanu, N. C. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell, "Overcoming catastrophic forgetting in neural networks," Proceedings of the Na- tional Academy of Sciences of the United States of America, vol. 114 13, pp. 3521-3526, 2016.
[]
[ "THERMALISATION FOR SMALL RANDOM PERTURBATIONS OF DYNAMICAL SYSTEMS", "THERMALISATION FOR SMALL RANDOM PERTURBATIONS OF DYNAMICAL SYSTEMS" ]
[ "Gerardo And Barrera ", "Milton Jara " ]
[]
[]
We consider an ordinary differential equation with a unique hyperbolic attractor at the origin, to which we add a small random perturbation. It is known that under general conditions, the solution of this stochastic differential equation converges exponentially fast to an equilibrium distribution. We show that the convergence occurs abruptly: in a time window of small size compared to the natural time scale of the process, the distance to equilibrium drops from its maximal possible value to near zero, and only after this time window the convergence is exponentially fast. This is what is known as the cut-off phenomenon in the context of Markov chains of increasing complexity. In addition, we are able to give general conditions to decide whether the distance to equilibrium converges in this time window to a universal function, a fact known as profile cut-off.2000 Mathematics Subject Classification. 60K35,60G60,60F17,35R60.
10.1214/19-aap1526
[ "https://export.arxiv.org/pdf/1510.09207v5.pdf" ]
204,872,420
1510.09207
5ae08fc3099deedb6ab016da6192ddd2138f4c44
THERMALISATION FOR SMALL RANDOM PERTURBATIONS OF DYNAMICAL SYSTEMS 5 Sep 2020 Gerardo And Barrera Milton Jara THERMALISATION FOR SMALL RANDOM PERTURBATIONS OF DYNAMICAL SYSTEMS 5 Sep 2020 We consider an ordinary differential equation with a unique hyperbolic attractor at the origin, to which we add a small random perturbation. It is known that under general conditions, the solution of this stochastic differential equation converges exponentially fast to an equilibrium distribution. We show that the convergence occurs abruptly: in a time window of small size compared to the natural time scale of the process, the distance to equilibrium drops from its maximal possible value to near zero, and only after this time window the convergence is exponentially fast. This is what is known as the cut-off phenomenon in the context of Markov chains of increasing complexity. In addition, we are able to give general conditions to decide whether the distance to equilibrium converges in this time window to a universal function, a fact known as profile cut-off.2000 Mathematics Subject Classification. 60K35,60G60,60F17,35R60. Introduction Our main goal is the study of the convergence to equilibrium for a family of stochastic small random perturbations of a given dynamical system in R d . Consider an ordinary differential equation with a unique hyperbolic global attractor. Without loss of generality, we assume that the global attractor is located at the origin. Under general conditions, as time goes to infinity, any solution of this differential equation approaches the origin exponentially fast. We perturb the deterministic dynamics by a Brownian motion of small intensity. It is well known that, again under very general conditions, as time goes to infinity, any solution of this stochastic differential equation converges in distribution to an equilibrium law. The convergence can be improved to hod with respect to the total variation distance. The theory of Lyapunov functions allows to show that this convergence, for each fixed perturbation, is again exponentially fast. We show that the convergence occurs abruptly: when the intensity of the noise goes to zero, the total variation distance between the law of the stochastic dynamics and the law of its equilibrium in a time window around the cut-off time decreases from one to near zero abruptly, and only after this time window the convergence is exponentially fast. This fact is known as cut-off phenomenon. Moreover, when a properly normalised ω-limit set of the initial datum of the deterministic differential equation is contained in a sphere, we are able to prove convergence of the distance to equilibrium to a universal function, a fact known as profile cut-off or profile thermalisation in the context of ergodic Markov processes. To be more precise, we are concerned about the abrupt convergence to equilibrium in the total variation distance for systems of the form: dx ǫ (t) = −F (x ǫ (t))dt + √ ǫdB(t) for t ≥ 0, x ǫ (0) = x 0 . (1.1) where F is a given vector field with a unique hyperbolic fixed point and {B(t) : t ≥ 0} is a standard Brownian motion. Notice that systems described by the stochastic differential equation (1.1) are not necessarily reversible. In statistical physics, equation (1.1) is known as an overdamped Langevin dynamics, and it is used to model fluctuations of stationary states. 
In the small noise asymptotics, the stochastic dynamics (1.1) fluctuates around the attractor of the deterministic dynamics which is called relaxation dynamics or zero-noise dynamics. Assuming that the deterministic dynamics is strongly coercive together with some growth condition on F , when the intensity ǫ of the noise goes to zero, in a time windows of small size compared to the natural time scale of the process, the total variation distance to equilibrium drops from near one to near zero. Dynamical systems subjected to small Gaussian perturbations have been studied extensively, see the book of M. Freidlin & A. Wentzell [31] which discusses this problem in great detail; see also M. Freidlin & A. Wentzell [29], [30], M. Day [32], [33] and W. Siegert [44]. This treatment has inspired many works and considerable effort was concerned about purely local phenomena, i.e., on the computation of exit times and exit probabilities from neighbourhoods of fixed points that are carefully stipulated not to contain any other fixed point of the deterministic dynamics. The theory of large deviations allows to solve the exit problem from the domain of attraction of a stable point. It turns out that the mean exit time is exponentially large in the small noise parameter, and its logarithmic rate is proportional to the height of the potential barrier that the trajectories have to overcome. Consequently, for a multi-well potential one can obtain a series of exponentially non-equivalent time scales given by the wells-mean exit times. Moreover, the normalised exit times are asymptotically exponentially distributed and have a memoryless property, for further details see A. Galves, E. Olivieri & M. Vares [5], E. Olivieri & M. Vares [19] and C. Kipnis & C. Newman [12]. There are situations in which the analysis at the level of large deviations is not enough, and it is necessary the study of distributional scaling limits for the exit distributions, for more details see Y. Bakhtin [46] and [47]. The cut-off phenomenon was extensively studied in the eighties to describe the phenomenon of abrupt convergence that appears in models of card shuffling, Ehrenfest urns and random transpositions, see for instance D. Aldous & P. Diaconis [14] and [15]. In general, it is a challenging problem to prove that an specific family of stochastic models exhibit or does not exhibit a cut-off phenomenon. It requires a complete understanding of the dynamics of the specific random process. Since the appearance of [14] many families of stochastic processes have been shown to have similar properties. Various notions of cut-off have been proposed; see J. Barrera & B. Ycart [24] and P. Diaconis [34] for an account. We refer to the book of D. Levin et al. ( [16], Chapter 18) for an introduction of the subject in the Markov chain setting, L. Saloff-Coste [28] provides an extensive list of random walks for which the cut-off phenomenon holds, P. Diaconis [34] for a review on the finite Markov chain case, S. Martínez and B. Ycart [41] for the case of Markov chains with countably infinite state space, G. Chen and L. Saloff-Coste [23] for Brownian motions on a compact Riemann manifold, B. Lachaud [8] and G. Barrera [21] for Ornstein-Uhlenbeck processes on the line and G. Barrera and M. Jara [22] for stochastic small perturbations of one-dimensional dynamical systems. 
Roughly speaking, thermalisation or window cut-off holds for a family of stochastic systems, when convergence to equilibrium happens in a time window which is small compared to the total running time of the system. Before a certain "cut-off time" those processes stay far from equilibrium with respect to some suitable distance; in a time window of smaller order the processes get close to equilibrium, and after a time window that convergence to equilibrium happens exponentially fast. Alternative names are threshold phenomenon and abrupt convergence. When the distance to equilibrium at the time window can be well approximated by some profile function, we speak about profile cut-off. Sequences of stochastic processes for which an explicit profile cut-off can be determined are scarce. Explicit profiles are usually out of reach, in particular for the total variation distance. In general, the existence of the phenomenon is proven through a precise estimation of the sequence of cut-off times and this precision comes at a high technical price, for more details see J. Barrera, O. Bertoncini & R. Fernández [25]. The main result of this article, Theorem 2.2, states that when the deterministic dynamics is strongly coercive and satisfies some growth condition, the family of perturbed dynamics presents a thermalisation (windows cutoff) as we describe in Section 2. Moreover, in Corollary 2.9 and Corollary 2.11 we give a necessary and sufficient condition for having profile thermalisation (profile cut-off). We point out that our condition is always satisfied by reversible dynamics; i.e., when F (x) = ∇V (x), x ∈ R d , and also for a large class of dynamics that are non-reversible. Non-reversible dynamics naturally appear for example in polymeric fluid dynamics or Wigner-Fokker-Planck equations, see A. Arnold, J. Carrillo & C. Manzini [1] and B. Jourdain, C. Le Bris, T. Lelièvre & F. Otto [7]. Non-reversible systems arise in the theory of activated process in glasses and other disordered materials, chemical reactions far from equilibrium, stochastic modelled computer networks, evolutionary biology and theoretical ecology, see R. Maier & D. Stein [37] and [38]. Notice that the set of symmetric matrices is not open. In particular, reversibility is not a generic property of dynamical systems. On the other hand, hyperbolicity is an open property, meaning that it is stable under small perturbations of the vector field. Moreover, in general for the non-reversible case, there is not an explicit formula for the invariant measure of the random dynamics (1.1) as in the reversible case. For reversible dynamics, analytic methods from quantum mechanics have been used to compute asymptotic expansions in the diffusivity √ ǫ. The strong point is that full asymptotic expansions in √ ǫ and sharp estimates can be done. However, so far only applicable for reversible diffusion process. For more details, see [2] and [3]. Therefore, it is desirable to have a treatment that does not rely on these properties, namely reversibility and/or explicit knowledge of invariant measures. Our idea is to carry out this asymptotic expansion in √ ǫ by probabilistic methods. It turns out that the hyperbolicity of the underlying dynamics can be used to show that a second-order expansion gives a description of the original dynamics which is good for times much larger than the time at which equilibration occurs. This expansion is the same introduced in [22]. Notice that in the one-dimensional case, the stochastic dynamics is always reversible. 
Therefore, profile cut-off always holds and a more refined analysis of this expansion is needed in order to be able to discern whether profile cut-off holds or not. A consequence of our analysis is an L 1 -version of the local central limit theorem for the invariant measure of (1.1), which could be of independent interest. This material is organized as follows. Section 2 describes the model and states the main result besides establishing the basic notation and definitions. Section 3 provides sharp estimates on the asymptotics of related linear approximations which are the main ingredient in order to prove the main result in the end of this section. Finally, we provide an Appendix which is divided in three sections as follows: Section A gives useful properties for the total variation distances between Gaussian distributions. Section B and C provide the rigorous arguments about the deterministic dynamics and the stochastic dynamics, respectively, that we omit in Section 3 to make the presentation more fluid. Notation and results In this section we rigorously state the family of stochastically perturbed dynamical systems that we are considering and the results we prove. 2.1. The dynamical system. Let F : R d → R d be a vector field of class C 2 (R d , R d ). For each x ∈ R d , let {ϕ(t, x) : t ∈ [0, τ x )} be the solution of the deterministic differential equation: d dt ϕ(t) = −F (ϕ(t)) for 0 ≤ t < τ x , ϕ(0) = x (2.1) where τ x denotes the explosion time. Since F is smooth, this equation has a unique solution. Since we have not imposed any growth condition on F , τ x may be finite. We denote by · the Euclidean norm in R d and by ·, · the standard inner product of R d . Under the condition sup z∈R d z, −F (z) 1 + z 2 < +∞, an straightforward application of the Lemma C.7 (Gronwall's inequality) implies that the explosion time τ x is infinite for any x ∈ R d . Later on, we will make stronger assumptions on F , so we will assume that the explosion time is always infinite without further comments. We call the family {ϕ(t, x) : t ≥ 0, x ∈ R d } the dynamical system associated to F . We say that a point y ∈ R d is a fixed point of (2.1) if F (y) = 0. In that case ϕ(t, y) = y for any t ≥ 0. Let y be a fixed point of (2.1). We say that x ∈ R d belongs in the basin of attraction of y if lim t→+∞ ϕ(t, x) = y. We say that y is an attractor of (2.1) if the set U y = {x ∈ R d : x is in the basin of attraction of y} contains an open ball centered at y. If U y = R d we say that y is a global attractor of (2.1). We say that y is a hyperbolic fixed point of (2.1) if Re(λ) = 0 for any eigenvalue λ of the Jacobian matrix DF (y). By the Hartman-Grobman Theorem (see Theorem (Hartman) page 127 of [27] or the celebrated paper of P. Hartman [35]), a hyperbolic fixed point y of (2.1) is an attractor if and only if Re(λ) > 0 for any eigenvalue λ of the matrix DF (y). From now on, we will always assume that 0 is a hyperbolic attractor of (2.1). In that case, for any x ∈ U 0 the asymptotic behaviour of ϕ(t, x) as t → +∞ can be described in a very precise way. A sufficient condition for 0 to be a global attractor of (2.1) is the following coercivity condition: there exists a positive constant δ such that x, F (x) ≥ δ x 2 for any x ∈ R d . (C) Notice that d dt ϕ(t) 2 = 2 ϕ(t), d dt ϕ(t) = ϕ(t), −F (ϕ(t)) ≤ −2δ ϕ(t) 2 for any t ≥ 0. Then Lemma C.7 allows us to deduce that ϕ(t, x) ≤ x e −δt for any x ∈ R d and any t ≥ 0. (2.2) In other words, ϕ(t, x) converges to 0 exponentially fast as t → +∞. 
Notice that the eigenvalues of the Jacobian matrix of F at zero, DF (0), might be complex numbers. Recall that for any λ ∈ C and v ∈ C d , λv = (Re(λ) + i Im(λ))(Re(v) + i Im(v)) = Re(λ) Re(v) − Im(λ) Im(v) + i (Im(λ) Re(v) + Re(λ) Im(v)) . From (C) we have Re(λ) ≥ δ for any eigenvalue λ of DF (0). Let v ∈ C d an eigenvector associated to the eigenvalue λ of DF (0). Then −(Re(λ)−δ) Im(v) 2 ≤ Im(λ) Re(v), Im(v) ≤ (Re(λ)−δ) Re(v) 2 . (2.3) Particularly, from (2.3) we have that (C) does not allow to control the imaginary part of the eigenvalues of DF (0). Typically and roughly speaking, the dynamical system associated to (2.1) is an "uniformly contracting spiral". The following Lemma provides us the asymptotics of ϕ(t) as t goes to +∞. It will be important for determining the cut-off time and time window. Lemma 2.1. Assume that (C) holds. Then for any x 0 ∈ R d \{0} there exist λ := λ(x 0 ) > 0, ℓ := ℓ(x 0 ), m := m(x 0 ) ∈ {1, . . . , d}, θ 1 := θ 1 (x 0 ), . . . , θ m := θ m (x 0 ) ∈ [0, 2π), v 1 := v 1 (x 0 ), . . . , v m := v m (x 0 ) in C d linearly independent and τ := τ (x 0 ) > 0 such that lim t→+∞ e λt t ℓ−1 ϕ(t + τ, x 0 ) − m k=1 e iθ k t v k = 0. This lemma will be proved in Appendix B, where we give more detailed description of the constants and vectors appearing in this lemma. We can anticipate that the numbers λ ± iθ k , k = 1, . . . , m are eigenvalues of DF (0) and that the vectors v k ∈ C d , k = 1, . . . , m are elements of the Jordan decomposition of the matrix DF (0). 2.2. The cut-off phenomenon. Let µ, ν be two probability measures in (R d , B(R d )). We say that a probability measure π in ( R d × R d , B(R d × R d )) is a coupling between µ and ν if for any Borel set B ∈ B(R d ), π(B × R d ) = µ(B) and π(R d × B) = ν(B). In that case we say that π ∈ C(µ, ν). The total variation distance between µ and ν is defined as d TV (µ, ν) = inf π∈C(µ,ν) π (x, y) ∈ R d × R d : x = y . Notice that the diameter with respect to d TV (·, ·) of the set M + 1 (R d , B(R d )) of probability measures defined in (R d , B(R d )) is equal to 1. If X and Y are two random variables in R d which are defined in the same measurable space (Ω, F), we write d TV (X, Y ) instead of d TV (P(X ∈ ·), P(Y ∈ ·)). For simplicity, we also write d TV (X, µ Y ) in place of d TV (X, Y ), where µ Y is the distribution of the random variable Y . For an account of the equivalent formulations of the total variation distance (normalised or not normalised), we recommend the book of A. Kulik ([6], Chapter 2). For any ǫ ∈ (0, 1], let x ǫ be the continuous time stochastic process {x ǫ (t) : t ≥ 0}. We say that a family of stochastic processes {x ǫ } ǫ∈(0,1] has thermalisation at position {t ǫ } ǫ∈(0,1] , window {ω ǫ } ǫ∈(0,1] and state {µ ǫ } ǫ∈(0,1] if i) lim ǫ→0 t ǫ = +∞ and lim ǫ→0 ω ǫ t ǫ = 0, ii) lim c→+∞ lim sup ǫ→0 d TV (x ǫ (t ǫ + cω ǫ ), µ ǫ ) = 0, iii) lim c→−∞ lim inf ǫ→0 d TV (x ǫ (t ǫ + cω ǫ ), µ ǫ ) = 1. If for any ǫ ∈ (0, 1], x ǫ is a Markov process with a unique invariant measure and µ ǫ is the invariant measure of the process x ǫ we say that the family {x ǫ } ǫ∈(0,1] presents thermalisation or window cut-off. If in addition to i) there is a continuous function G : R → [0, 1] such that G(−∞) = 1, G(+∞) = 0 and ii') lim ǫ→0 d TV (x ǫ (t ǫ + cω ǫ ), µ ǫ ) =: G(c) for any c ∈ R, we say that there is profile thermalisation or profile cut-off. 
Notice that ii') implies ii) and iii), and therefore profile thermalisation (respectively profile cut-off) is a stronger notion than thermalisation (respectively window cutoff). 2.3. The overdamped Langevin dynamics. Let {B(t) : t ≥ 0} be a standard Brownian motion in R d and let ǫ ∈ (0, 1] be a scaling parameter. Let x 0 ∈ U 0 \ {0} and let {x ǫ (t, x 0 ) : t ≥ 0} be the solution of the following stochastic differential equation: dx ǫ (t) = −F (x ǫ (t))dt + √ ǫdB(t) for t ≥ 0, x ǫ (0) = x 0 . (2.4) Stochastic differential equation (2.4) is used in molecular modelling. In that context ǫ = 2κτ , where τ is the temperature of the system and κ is the Boltzmann constant. In statistical physics, equation (2.4) has a computational interest to modelling a sample of a Gibbs measure in highdimensional Euclidean spaces. Denote by (Ω, F, P) the probability space where {B(t) : t ≥ 0} is defined and denote by E the expectation with respect to P. Notice that (2.4) has a unique strong solution (see Remark 2.1.2 page 57 of [44] or Theorem 10.2.2 of [17]), and therefore {x ǫ (t, x 0 ) : t ≥ 0} can be taken as a stochastic process in the same probability space (Ω, F, P). In order to avoid unnecessary notation, we write {x ǫ (t) : t ≥ 0} instead of {x ǫ (t, x 0 ) : t ≥ 0} and {ϕ(t) : t ≥ 0} instead of {ϕ(t, x 0 ) : t ≥ 0}. Since ǫ ∈ (0, 1], for simplicity, we write lim ǫ→0 instead of lim ǫ→0 + . Our aim is to describe in detail the asymptotic behaviour of the law of x ǫ (t) for large times t, as ǫ → 0. In particular, we are interested in the law of x ǫ (t) for times t of order O(log(1/ǫ)), where thermalisation or window cut-off phenomenon appears. Under (C), for any ǫ ∈ (0, 1], the process {x ǫ (t) : t ≥ 0} is uniquely ergodic with stationary measure µ ǫ , see Lemma C.3 for details. Moreover, the process is strongly Feller. In particular, the process visits infinitely often every non-empty open set of the state space R d . The stationary measure µ ǫ is absolutely continuous with respect to the Lebesgue measure in R d . The density ρ ǫ of µ ǫ is smooth and solves the stationary Fokker-Planck equation: ǫ 2 d j=1 ∂ 2 ∂x 2 j (ρ ǫ (x)) + d j=1 ∂ ∂x j (F j (x)ρ ǫ (x)) = 0 for any x ∈ R d , where F = (F 1 , . . . , F d ) T , for details see [44] (pages 60-63). When the process is reversible, i.e., F (x) = ∇V (x), x ∈ R d , for some scalar function V (also called potential), the stationary measure µ ǫ is of the Gibbs type: µ ǫ (dx) = 1 Z ǫ e − 2V (x) ǫ dx, where Z ǫ = R d e − 2V (x) ǫ dx < +∞. (2.5) The normalised constant Z ǫ is called the partition function. If the vector field F can be decomposed as F (x) = ∇V (x) + b(x) for any x ∈ R d , where V : R d → R is a scalar function and b : R d → R d is a vector field which satisfies the divergence-free condition: div e − 2 ǫ V (x) b(x) = 0 for any x ∈ R d , then under some appropriate assumptions on V at infinity, i.e., 1 2 ∇V (x) 2 − ∆V (x) → +∞ as x → +∞, the probability measure µ ǫ given by (2.5) remains stationary for (2.4). For details see [11], [13], [40] and [42]. In this situation, using the Laplace Method, asymptotics as ǫ → 0 for µ ǫ can be obtained, see [10] and [26] for further details. In general, the equilibrium measure can be expressed as an integral of a Green function, but aside from a few simple cases, there are no closed expressions for it. 
In this case, the Freidlin-Wentzell theory implies that the non-Gibbs measure µ ǫ is equivalent to a Gibbs measure with a "quasi-potential"Ṽ playing the role of the potential energy, see for instance [9], [37], [39] and [49]. However, the study of the regularity of the quasi-potential is a non-trivial mathematical issue, for details see [45]. For our purposes, no transverse condition on the vector field F is assumed and also we do not need that the Gibbs measure remains stationary for (2.4), for further details see [20] and the references therein. In many theoretical or applied problems involving ergodic processes, it is important to estimate the time until the distribution of the process is close to its equilibrium distribution. Under some strong coercivity condition and growth condition that we will state precisely in Section 2.4, we will prove that the law of x ǫ (t) converges in total variation distance to µ ǫ in a time window w ǫ := 1 λ + o(1) (2.6) of order O(1) around the cut-off time t ǫ mix := 1 2λ ln (1/ǫ) + ℓ − 1 λ ln (ln (1/ǫ)) + τ,(2.7) where λ, ℓ and τ are the positive constants associated to x 0 in Lemma 2.1. If we only assume that 0 is a hyperbolic attractor of (2.1), we can not rule out the existence of other attractors. These attractors are accessible to the stochastic dynamics (2.4) (a large part of the celebrated book of M. Freidlin & A. Wentzell [31] is devoted to the study of this problem). However, in this situation the other attractors are not accessible until times of order O(e c /ǫ ), where c is a positive constant, and we prove that the law of x ǫ (t) converges to a Gaussian random variable on a time window of order O(1) around t ǫ mix . The exact way on which this convergence takes place is the content of the following section. Recall that for any y ∈ R d , DF (y) denotes the Jacobian matrix of F at y. A sufficient condition that allows to uniformly push back to the origin the dynamics of (2.4) is the following strong coercivity condition: there exists δ > 0 such that x, DF (y)x ≥ δ x 2 for any x, y ∈ R d .(H) At the beginning of Section 3 we will see that (H) implies (C). To control the growth of the vector field F around infinity, we assume the following growth condition: there exist positive constants c 0 and c 1 such that F (x) ≤ c 0 e c 1 x 2 for any x ∈ R d . (G) In the case of a stochastic perturbation of a dynamical system satisfying the strongly coercivity condition (H) and the growth condition (G) we prove thermalisation. Theorem 2.2. Assume that (H) and (G) hold. Let {x ǫ (t, x 0 ) : t ≥ 0} be the solution of (2.4) and denote by µ ǫ the unique invariant probability measure for the evolution given by (2.4). Denote by the total variation distance between the law of the random variable x ǫ (t, x 0 ) and its invariant probability µ ǫ . Consider the cut-off time t ǫ mix given by (2.7) and the time window given by (2.6). Let x 0 = 0. Then for any c ∈ R we have lim ǫ→0 d ǫ (t ǫ mix + cw ǫ ) − D ǫ (t ǫ mix + cw ǫ ) = 0, where D ǫ (t) = d TV G (t − τ ) ℓ−1 e λ(t−τ ) √ ǫ Σ − 1 /2 m k=1 e iθ k (t−τ ) v k , I d , G(0, I d ) (2.8) for any t ≥ τ with m, λ, ℓ, τ , θ 1 , . . . , θ m , v 1 , . . . , v m are the constants and vectors associated to x 0 in Lemma 2.1, and the matrix Σ is the unique solution of the matrix Lyapunov equation: DF (0)X + X(DF (0)) * = I d . (2.9) Remark 2.3. 
The last theorem tells us that the total variation distance between the law of x ǫ (t) and its equilibrium µ ǫ can be well approximated in a time window (2.6) around the cut-off time (2.7) by the total variation distance between two Gaussian distributions (2.8). Remark 2.4. From Lemma A.2 we deduce an "explicit" formula for the distance (2.8), i.e., D ǫ (t) = 2 π m ǫ (t) /2 0 e − x 2 2 dx, where m ǫ (t) = (t−τ ) ℓ−1 e λ(t−τ ) √ ǫ Σ − 1 /2 m k=1 e iθ k (t−τ ) v k for any t ≥ τ . Remark 2.5. Since the linear differential equation dx(t) = −DF (0)x(t)dt for any t ≥ 0 is asymptotically stable, then the matrix Lyapunov equation (2.9) has a unique solution Σ which is symmetric and positive definite and it is given by the formula: Σ = ∞ 0 e −DF (0)s e −(DF (0)) * s ds. For more details, see Theorem 1, page 443 of [36]. If in addition, DF (0) is symmetric then Σ is easily computable and it is given by Σ = 1 2 (DF (0)) −1 . From Theorem 2.2 we have the following consequences that we write as corollaries. To made the presentation more fluent, in all the corollaries below, we will assume the same hypothesis of Theorem 2.2 and keep the same notation. For any ǫ ∈ (0, 1] and x 0 ∈ R d , denote by x ǫ,x 0 the Markov process {x ǫ (t, x 0 ) : t ≥ 0}. Corollary 2.6. Suppose that x 0 = 0. Window thermalisation for the distance D ǫ at cut-off time t ǫ mix and time window w ǫ is equivalent to window thermalisation for the distance d ǫ at cut-off time t ǫ mix and time window w ǫ . The same holds true for profile thermalisation. Proof. It follows easily from Theorem 2.2 and the following inequalities Proof. From Corollary 2.6, we only need to analyse the distance D ǫ . Notice D ǫ (t ǫ mix + cw ǫ ) ≤ |D ǫ (t ǫ mix + cw ǫ ) − d ǫ (t ǫ mix + cw ǫ )| + d ǫ (t ǫ mix + cw ǫ ) and d ǫ (t ǫ mix + cw ǫ ) ≤ |D ǫ (t ǫ mix + cw ǫ ) − d ǫ (t ǫ mix + cw ǫ )| + D ǫ (t ǫ mix + cw ǫ ).0 < L := lim inf t→+∞ m k=1 e iθ k (t−τ ) v k ≤ lim sup t→+∞ m k=1 e iθ k (t−τ ) v k ≤ m k=1 v k =: U, where first inequality follows from the Cantor diagonal argument and the fact that v 1 , . . . , v m are linearly independent. From Remark 2.4 we have D ǫ (t) = 2 π m ǫ (t) /2 0 e − x 2 2 dx, where m ǫ (t) = (t−τ ) ℓ−1 e λ(t−τ ) √ ǫ Σ − 1 /2 m k=1 e iθ k (t−τ ) v k for any t ≥ τ . Notice that Le −c ≤ lim inf ǫ→0 m ǫ (t ǫ mix + cw ǫ ) ≤ lim sup ǫ→0 m ǫ (t ǫ mix + cw ǫ ) ≤ U e −c for any c ∈ R. From Lemma A.6 and Lemma A.2 we deduce 2 π Le −c /2 0 e − x 2 2 dx ≤ lim inf ǫ→0 D ǫ (t ǫ mix + cw ǫ ) ≤ lim sup ǫ→0 D ǫ (t ǫ mix + cw ǫ ) ≤ 2 π U e −c /2 0 e − x 2 2 dx for any c ∈ R. Therefore lim c→−∞ lim inf ǫ→0 D ǫ (t ǫ mix + cw ǫ ) = 1 and lim c→+∞ lim sup ǫ→0 D ǫ (t ǫ mix + cw ǫ ) = 0. Remark 2.8. Recall that {v 1 , . . . , v m } are linearly independent in C. If in addition lim t→+∞ Σ − 1 /2 m k=1 e iθ k t v k is well defined, then lim t→+∞ Σ − 1 /2 m k=1 e iθ k t v k = Σ − 1 /2 m k=1 v k > 0. In this case, we define r(x 0 ) := Σ − 1 /2 m k=1 v k > 0.lim t→+∞ Σ − 1 /2 m k=1 e iθ k t v k is well defined. Proof. Suppose that there is profile thermalisation for {x ǫ,x 0 } ǫ∈(0,1] . Then lim ǫ→0 d ǫ (t ǫ mix + cw ǫ ) exists for any c ∈ R. From Corollary 2.6 we have that lim ǫ→0 D ǫ (t ǫ mix + cw ǫ ) also exists for any c ∈ R. From Remark 2.4 we deduce lim t→+∞ Σ − 1 /2 m k=1 e iθ k t v k is well defined. On the other hand, if lim t→+∞ Σ − 1 /2 m k=1 e iθ k t v k = r(x 0 ), from Remark 2.8 we have that r(x 0 ) > 0 and from Remark 2.4 we get lim ǫ→0 D ǫ (t ǫ mix + cw ǫ ) = 2 π e −c r(x 0 ) 2 0 e − x 2 2 dx for any c ∈ R. 
The latter together with Corollary 2.6 imply profile thermalisation for {x ǫ,x 0 } ǫ∈(0,1] . The following corollary includes the case when the dynamics is reversible, i.e., F = ∇V for some scalar function V : R d → R. Proof. The proof follows from Corollary 2.9 observing that θ j = 0 for any j = 1, . . . , m and the fact that {v 1 , . . . , v m } are linearly independent in C. Moreover, in [22], we study the case when d = 1 which follows immediately from Corollary 2.10. We also have a dynamical characterisation of profile thermalisation. Define "a normalised" ω-limit set of x 0 as follows: ω(x 0 ) := y ∈ R d : there exists a sequence of positive numbers {t n : n ∈ N} such that lim n→+∞ t n = +∞ and lim n→+∞ e λtn t ℓ−1 n Σ − 1 /2 e −DF (0)tn x 0 = y . From Lemma B.1, it is not hard to see that Σ − 1 /2 m k=1 v k ∈ ω(x 0 ) . When all the eigenvalues of DF (0) are real, then again by Lemma B.1, we get that ω(x 0 ) consists of a non-zero element which is given by Σ − 1 /2 m k=1 v k . Corollary 2.11. Suppose that x 0 = 0. The family of stochastic Markov processes {x ǫ,x 0 } ǫ∈(0,1] has profile thermalisation if and only if ω(x 0 ) is con- tained in a d-sphere with radius r(x 0 ) := Σ − 1 /2 m k=1 v k , i.e., ω(x 0 ) ⊂ S d−1 (r(x 0 )), where S d−1 (r(x 0 )) := x ∈ R d : x = r(x 0 ) . Proof. Suppose that {x ǫ,x 0 } ǫ∈(0,1] has profile thermalisation. By Corollary 2.9 we have lim t→+∞ Σ − 1 /2 m k=1 e iθ k t v k is well defined. From Remark 2.8 we know lim t→+∞ Σ − 1 /2 m k=1 e iθ k t v k = r(x 0 ) > 0. The latter together with Lemma B.1 allows to deduce that lim t→+∞ e λt t ℓ−1 Σ − 1 /2 e −DF (0)t x 0 = r(x 0 ). Consequently, ω(x 0 ) ⊂ S d−1 (r(x 0 )). On the other hand, suppose that ω(x 0 ) ⊂ S d−1 (r(x 0 )). Then lim t→+∞ e λt t ℓ−1 e −DF (0)t x 0 = r(x 0 ). From Lemma B.1 we get lim t→+∞ Σ − 1 /2 m k=1 e iθ k t v k = r(x 0 ). The latter together with Corollary 2.9 allow us to deduce the statement. In dimension 2 and 3, we can state a spectral characterisation of profile thermalisation. Remind that if all the eigenvalues of DF (0) are real, we have profile thermalisation as Corollary 2.10 stated, so we avoid that case. Corollary 2.12. Suppose that x 0 = 0 and d = 2. Let γ be a complex eigenvalue of DF (0) with non-zero imaginary part and let u 1 + iu 2 be its eigenvector, where u 1 , u 2 ∈ R 2 . Then the family of Markov stochastic processes {x ǫ,x 0 } ǫ∈(0,1] has profile thermalisation if and only if u 1 , Σ −1 u 1 = u 2 , Σ −1 u 2 and u 1 , Σ −1 u 2 = 0. Proof. Write γ = λ + iθ, where λ > 0 with θ = 0. To the eigenvalue γ we associated an eigenvector u 1 + iu 2 , where u 1 , u 2 ∈ R 2 . An straightforward computation shows e λt e −DF (0)t x 0 = (c 1 cos(θt) − c 2 sin(θt))u 1 + (c 1 sin(θt) + c 2 cos(θt))u 2 for any t ≥ 0, where c 1 := c 1 (x 0 ) and c 2 := c 2 (x 0 ) are not both zero. Notice that c := c 2 1 + c 2 2 > 0 and let cos(α) = c 1/c and sin(α) = c 2/c. Then e λt e −DF (0)t x 0 = c cos(θt + α)u 1 + c sin(θt + α)u 2 for any t ≥ 0. Therefore, Σ − 1 /2 e λt e −DF (0)t x 0 2 = c 2 cos 2 (θt + α) u 1 , Σ −1 u 1 + c 2 sin 2 (θt + α) u 2 , Σ −1 u 2 + 2c 2 cos(θt + α) sin(θt + α) u 1 , Σ −1 u 2 (2.10) for any t ≥ 0. If u 1 , Σ −1 u 1 = u 2 , Σ −1 u 2 and u 1 , Σ −1 u 2 = 0 then Σ − 1 /2 e λt e −DF (0)t x 0 2 = c 2 u 1 , Σ −1 u 1 for any t ≥ 0. Notice that u 1 , Σ −1 u 1 = 0 since u 1 = 0 and Σ −1 is a positive definite symmetric matrix. The conclusion follows easily from Lemma B.1 and Corollary 2.9. 
On the other hand, if {x ǫ,x 0 } ǫ∈(0,1] has profile thermalisation then Lemma B.1 and Corollary 2.9 imply lim t→+∞ Σ − 1 /2 e λt e −DF (0)t x 0 2 is well defined. Now, using (2.10) and taking different subsequences we deduce u 1 , Σ −1 u 1 = u 2 , Σ −1 u 2 and u 1 , Σ −1 u 2 = 0. Notice that in dimension 3, at least one eigenvalue of DF (0) is real. Therefore the interesting case is when the others eigenvalues are complex numbers with non-zero imaginary part. Let γ 1 be a real eigenvalue of DF (0) with eigenvector v ∈ R 3 . Let γ be a complex eigenvalue of DF (0) with nonzero imaginary part and let u 1 + iu 2 be its eigenvector, where u 1 , u 2 ∈ R 3 . In this case, e −DF (0)t x 0 =c(x 0 )e −γ 1 t v + e −λt (c 1 cos(θt) − c 2 sin(θt))u 1 + (c 1 sin(θt) + c 2 cos(θt))u 2 ) for any t ≥ 0, where c 0 := c 0 (x 0 ), c 1 := c 1 (x 0 ) and c 2 := c 2 (x 0 ) are not all zero. Notice that c := c 2 1 + c 2 2 > 0 and let cos(α) = c 1/c and sin(α) = c 2/c. Then e −DF (0)t x 0 =c 0 e −γ 1 t v + e −λt (c cos(θt + α)u 1 + c sin(θt + α)u 2 ) (2.11) for any t ≥ 0. Corollary 2.13. Suppose that x 0 = 0 and d = 3. Let γ 1 be a real eigenvalue of DF (0) with eigenvector v ∈ R 3 . Let γ be a complex eigenvalue of DF (0) with non-zero imaginary part and let u 1 + iu 2 be its eigenvector, where u 1 , u 2 ∈ R 3 . Let c 0 , c and α the constants that appears in (2.11). i) Assume c 0 = 0. {x ǫ,x 0 } ǫ∈(0,1] has profile thermalisation if and only if u 1 , Σ −1 u 1 = u 2 , Σ −1 u 2 and u 1 , Σ −1 u 2 = 0. ii) Assume c 0 = 0 and γ 1 < λ. Then the family {x ǫ,x 0 } ǫ∈(0,1] has profile thermalisation. iii) Assume c 0 = 0, γ 1 = λ. The family {x ǫ,x 0 } ǫ∈(0,1] has profile thermal- isation if and only if u 1 , Σ −1 u 1 = u 2 , Σ −1 u 2 and u 1 , Σ −1 u 2 = v, Σ −1 u 1 = v, Σ −1 u 2 = 0. iv) Assume c 0 = 0, γ 1 > λ. The family {x ǫ,x 0 } ǫ∈(0,1] has profile thermal- isation if and only if u 1 , Σ −1 u 1 = u 2 , Σ −1 u 2 and u 1 , Σ −1 u 2 = 0. Proof. i) This case can be deduced using the same arguments as Corollary 2.12. ii) Using relation (2.11) we obtain lim t→+∞ Σ − 1 /2 e γ 1 t e −DF (0)t x 0 = c 0 Σ − 1 /2 v = 0. The latter together with Corollary 2.9 allows us to deduce profile thermalisation. iii) Using relation (2.11) for any t ≥ 0 we get Σ − 1 /2 e γ 1 t e −DF (0)t x 0 2 =c 2 0 v, Σ − 1 /2 v + c 2 cos 2 (θt + α) u 1 , Σ − 1 /2 u 1 + c 2 sin 2 (θt + α) u 2 , Σ − 1 /2 u 2 + 2c 0 c cos(θt + α) v, Σ − 1 /2 u 1 + 2c 0 c sin(θt + α) v, Σ − 1 /2 u 2 + 2c 2 cos(θt + α) sin(θt + α) u 1 , Σ − 1 /2 u 2 . (2.12) If u 1 , Σ −1 u 1 = u 2 , Σ −1 u 2 and u 1 , Σ −1 u 2 = v, Σ −1 u 1 = v, Σ −1 u 2 = 0, then from (2.12) we deduce Σ − 1 /2 e γ 1 t e −DF (0)t x 0 = c 2 0 v, Σ − 1 /2 v + c 2 u 1 , Σ −1 u 1 > 0. The latter together with Lemma B.1 and Corollary 2.9 imply profile thermalisation. On the other hand, if {x ǫ,x 0 } ǫ∈(0,1] has profile thermalisation then Lemma B.1 and Corollary 2.9 imply lim t→+∞ Σ − 1 /2 e λt e −DF (0)t x 0 2 is well defined. Now, using (2.12) and taking different subsequences we deduce u 1 , Σ −1 u 1 = u 2 , Σ −1 u 2 and u 1 , Σ −1 u 2 = v, Σ −1 u 1 = v, Σ −1 u 2 = 0. iv) Using relation (2.11) for any t ≥ 0 we have Σ − 1 /2 e λt e −DF (0)t x 0 2 =c 2 0 e −2(γ 1 −λ)t v, Σ − 1 /2 v + c 2 cos 2 (θt + α) u 1 , Σ − 1 /2 u 1 + c 2 sin 2 (θt + α) u 2 , Σ − 1 /2 u 2 + 2c 0 ce −(γ 1 −λ)t cos(θt + α) v, Σ − 1 /2 u 1 + 2c 0 ce −(γ 1 −λ)t sin(θt + α) v, Σ − 1 /2 u 2 + 2c 2 cos(θt + α) sin(θt + α) u 1 , Σ − 1 /2 u 2 . 
(2.13) If u 1 , Σ −1 u 1 = u 2 , Σ −1 u 2 and u 1 , Σ −1 u 2 = 0 from (2.13) we obtain lim t→+∞ Σ − 1 /2 e λt e −DF (0)t x 0 = c 2 u 1 , Σ −1 u 1 = 0 which together with Corollary 2.9 imply profile thermalisation. On the other hand, if {x ǫ,x 0 } ǫ∈(0,1] has profile thermalisation then Lemma B.1 and Corollary 2.9 imply lim t→+∞ Σ − 1 /2 e λt e −DF (0)t x 0 2 is well defined. Now, using (2.13) and taking different subsequences one can deduce u 1 , Σ −1 u 1 = u 2 , Σ −1 u 2 and u 1 , Σ −1 u 2 = 0. When Σ is the identity matrix, roughly speaking Corollary 2.12 and Corollary 2.13 state that profile thermalisation is equivalent to "norm" preserving and orthogonality of the real and imaginary parts of the eigenvectors of DF (0). When Σ is not the identity, the latter still true under a change of basis. The multiscale analysis In this section, we prove that the process {x ǫ (t) : t ≥ 0} can be well approximated by the solution of a linear non-homogeneous process in a time window that will include the time scale on which we are interested. It is not hard to see that (H) basically says that (C) is satisfied around any point y. In fact, writing F (y) − F (x) = 1 0 d dt F (x + t(y − x))dt = 1 0 DF (x + t(y − x))(y − x)dt we obtain the seemingly stronger condition y − x, F (y) − F (x) ≥ δ y − x 2 for any x, y ∈ R d . The latter is basically saying that (C) is satisfied around any point y ∈ R d . A good example of a vector field F satisfying (H) and (G) is F (x) = Ax+H(x), x ∈ R d , where A is a matrix, H is a vector valued function such that F satisfies (H) and it satisfies H(0) = 0, DH(0) = 0, DH ∞ < +∞ and D 2 H ∞ < +∞. In dimension one, a good example to keep in mind is F (x) = n j=1 a j x 2j−1 for any x ∈ R,(3.1) where n ∈ N, a 1 > 0 and a j ≥ 0 for any j ∈ {2, . . . , n}. It is fairly easy to see that (3.1) satisfies (H) and (G). If a j > 0 for some j ∈ {2, . . . , n} then (3.1) is not globally Lipschitz continuous. Recall that F ∈ C 2 (R d , R d ). Notice that for any x, y ∈ R d we have F (x) − F (y) = 1 0 DF (x + t(y − x))(y − x)dt. Therefore, for any x, y ∈ R d we get F (x) − F (y) − DF (y)(x − y) = 1 0 (DF (y + t(x − y)) − DF (y))(x − y)dt. Note DF (y + t(x − y)) − DF (y) = 1 0 D 2 F (y + st(x − y))t(x − y)ds for any x, y ∈ R d . For any r 0 > 0 and r 1 > 0, define C := sup |z|≤2r 1 +r 0 D 2 F (z) . Then F (x) − F (y) − DF (y)(x − y) ≤ C x − y 2 (3.2) for any x ≤ r 0 and y ≤ r 1 . Inequality (3.2) will allow us to control the random dynamics {x ǫ (t) : t ≥ 0} on compacts sets and it will be very useful in our apriori estimates. 3.1. Zeroth-order approximations. It is fairly easy to see that for any t ≥ 0, as ǫ → 0, x ǫ (t) converges to ϕ(t). The convergence can be proved to be almost surely uniform in compacts. But for our purposes, we need a quantitative estimate on the distance between x ǫ (t) and ϕ(t). The idea is fairly simple: (H) says that the dynamical system (2.1) is uniformly contracting. Therefore, it is reasonable that fluctuations are pushed back to the solution of (2.1) and therefore the difference between x ǫ (t) and ϕ(t) has a short-time dependence on the noise {B(s) : 0 ≤ s ≤ t}. This heuristics can be made precise computing the Itô derivative of x ǫ (t) − ϕ(t) 2 as follows: d x ǫ (t) − ϕ(t) 2 = −2 x ǫ (t) − ϕ(t), F (x ǫ (t)) − F (ϕ(t)) dt + 2 √ ǫ x ǫ (t) − ϕ(t), dB(t) + dǫdt ≤ −2δ x ǫ (t) − ϕ(t) 2 dt + 2 √ ǫ x ǫ (t) − ϕ(t), dB(t) + dǫdt,(3.3) where the last inequality follows from (H). 
After a localisation argument we get d dt E x ǫ (t) − ϕ(t) 2 ≤ −2δE x ǫ (t) − ϕ(t) 2 + ǫd for any t ≥ 0. From Lemma C.7, we obtain the following uniform bound E x ǫ (t) − ϕ(t) 2 ≤ dǫ 2δ (1 − e −2δt ) ≤ dǫ 2δ for any t ≥ 0. (3.4) We call this bound the zeroth order approximation of x ǫ (t). We have just proved that the distance between x ǫ (t) and ϕ(t) is of order O( √ ǫ), uniformly in t ≥ 0. However, this estimate is meaningful only while ϕ(t) ≫ √ ǫ. By Lemma 2.1, ϕ(t) is of order O(t ℓ−1 e −λt ) , which means that (3.4) is meaningful for times t of order o(t ǫ mix ), which fall just short of what we need. This is very natural, because at times of order t ǫ mix we expect that fluctuations play a predominant role. 3.2. First-order approximations. Notice that (3.4) can be seen as a law of large numbers for x ǫ (t). In fact, E[x ǫ (t)] = ϕ(t) for every t ≥ 0 and for t ≪ t ǫ mix , ǫ/ ϕ(t) 2 → 0. By the second-moment method, x ǫ (t) satisfies a law of large numbers when properly renormalised. Therefore, it is natural to look at the corresponding central limit theorem. Define {y ǫ (t) : t ≥ 0} as y ǫ (t) = x ǫ (t) − ϕ(t) √ ǫ for any t ≥ 0. As above, it is not very difficult to prove that for every T > 0, the process {y ǫ (t) : t ∈ [0, T ]} converges in distribution to the solution {y(t) : t ∈ [0, T ]} of the linear non-homogeneous stochastic differential equation (also known as non-homogeneous Ornstein-Uhlenbeck process): dy(t) = −DF (ϕ(t))y(t)dt + dB(t) for t ≥ 0, y(0) = 0. (3.5) Notice that this equation is linear and in particular y(t) has a Gaussian law for any t > 0. As in the previous section, our aim is to obtain good quantitative bounds for the distance between y ǫ (t) and y(t). First, we notice that the estimate (3.4) can be rewritten as E y ǫ (t) 2 ≤ d 2δ for any t ≥ 0. (3.6) We will also need an upper bound for E y ǫ (t) 4 . From the Itô formula and (H) we have d y ǫ (t) 4 = −4 y ǫ (t) 2 y ǫ (t), DF (ϕ(t))y ǫ (t) dt + 4 y ǫ (t) 2 y ǫ (t), dB(t) + (2d + 4) y ǫ (t) 2 dt ≤ −4δ y ǫ (t) 4 dt + 4 y ǫ (t) 2 y ǫ (t), dB(t) + (2d + 4) y ǫ (t) 2 dt. After a localisation argument we obtain d dt E y ǫ (t) 4 ≤ −4δE y ǫ (t) 4 + (2d + 4)E y ǫ (t) 2 . From (3.6) and Lemma C.7 we get the uniformly bound E y ǫ (t) 4 ≤ d(d + 2) 4δ 2 1 − e −4δt ≤ d(d + 2) 4δ 2 for any t ≥ 0. (3.7) Notice that x ǫ (t) = ϕ(t)+ √ ǫy ǫ (t) for any t ≥ 0 and the difference y ǫ (t)−y(t) has bounded variation. Then d dt (y ǫ (t) − y(t)) = − 1 √ ǫ F (x ǫ (t)) − F (ϕ(t)) − √ ǫDF (ϕ(t))y(t) = − 1 √ ǫ F (ϕ(t) + √ ǫy ǫ (t)) − F (ϕ(t) + √ ǫy(t)) − 1 √ ǫ F (ϕ(t) + √ ǫy(t)) − F (ϕ(t)) − √ ǫDF (ϕ(t))y(t) . Define h ǫ (t) := F (ϕ(t) + √ ǫy(t)) − F (ϕ(t)) − √ ǫDF (ϕ(t))y(t) for any t ≥ 0. Therefore, using the chain rule for y ǫ (t) − y(t) 2 we obtain the differential equation: d dt y ǫ (t) − y(t) 2 = 2 y ǫ (t) − y(t), d dt (y ǫ (t) − y(t)) = − 2 √ ǫ y ǫ (t) − y(t), F (ϕ(t) + √ ǫy ǫ (t)) − F (ϕ(t) + √ ǫy(t)) − 2 √ ǫ y ǫ (t) − y(t), h ǫ (t) ≤ −2δ y ǫ (t) − y(t) 2 − 2 √ ǫ y ǫ (t) − y(t), h ǫ (t) ,(3.8) where the last inequality follows from (H). From the Cauchy-Schwarz inequality we observe |( 2 / √ ǫ) y ǫ (t) − y(t), h ǫ (t) | ≤ ( 2 / √ ǫ) y ǫ (t) − y(t) h ǫ (t) . (3.9) Recall the well known Young type inequality 2|ab| ≤ ̺a 2 + ( 1 /̺)b 2 for any a, b ∈ R and ρ > 0. From inequality (3.9) we have |( 2 / √ ǫ) y ǫ (t) − y(t), h ǫ (t) | ≤ δ y ǫ (t) − y(t) 2 + 1 ǫδ h ǫ (t) 2 . (3.10) From inequality (3.8) and inequality (3.10) we deduce d dt y ǫ (t) − y(t) 2 ≤ −δ y ǫ (t) − y(t) 2 + 1 ǫδ h ǫ (t) 2 . 
By taking expectation in both sides of the last inequality, we obtain d dt E y ǫ (t) − y(t) 2 ≤ −δE y ǫ (t) − y(t) 2 + 1 ǫδ E h ǫ (t) 2 . (3.11) Define H ǫ (t) := E h ǫ (t) 2 for any t ≥ 0. From inequality (3.11) and Lemma C.7 we deduce E y ǫ (t) − y(t) 2 ≤ (1 − e −δt ) ǫδ 2 t 0 H ǫ (s)ds for any t ≥ 0. Therefore, we need to get an upper bound for t 0 H ǫ (s)ds for any t ≥ 0. From Lemma 3.1 we have t 0 H ǫ (s)ds ≤ C(η, x 0 , d, δ)ǫ 2 t +C(η, x 0 , d, δ)ǫ 3 /2 t 7 /4 for any η > 0 and t ∈ 0, η 2 2ǫd , where C(η, x 0 , d, δ) andC(η, x 0 , d, δ) are positive constants that only depend on η, x 0 , d and δ. The latter implies E x ǫ (t) − (ϕ(t) + √ ǫy(t)) 2 ≤ 1 δ 2 C(η, x 0 , d, δ)ǫ 2 t +C(η, x 0 , d, δ)ǫ 3 /2 t 7 /4 (3.12) for any t ∈ 0, η 2 2ǫd . We call this bound the first-order approximation of x ǫ (t). Roughly speaking, for t = O(ln( 1 /ǫ)) we have just proved that the distance between x ǫ (t) and ϕ(t) + √ ǫy(t) is of order O ǫ 3 /4−℘ for any ℘ ∈ (0, 3 /4) which will be enough for our purposes. Lemma 3.1. Assume that (H) and (G) hold. Let ǫ ∈ 0, δ 32c 1 . For any η > 0 and t ∈ 0, η 2 2ǫd we have t 0 H ǫ (s)ds ≤ C(η, x 0 , d, δ)ǫ 2 t +C(η, x 0 , d, δ)ǫ 3 /2 t 7 /4 , where C(η, x 0 , d, δ) andC(η, x 0 , d, δ) only depend on η, x 0 , d and δ. Moreover, for any ℘ ∈ (0, 6 /7) we have lim ǫ→0 sup 0≤t≤O( 1 /ǫ ℘ ) E x ǫ (t) − (ϕ(t) + √ ǫy(t)) 2 = 0. Proof. Recall that H ǫ (t) = E h ǫ (t) 2 , where h ǫ (t) = F (ϕ(t) + √ ǫy(t)) − F (ϕ(t)) − √ ǫDF (ϕ(t))y(t) for any t ≥ 0. Take any η > 0 and t > 0 and define the event A η,ǫ,t = sup 0≤s≤t y(s) ≤ η √ ǫ . By inequality (3.2) we have E h ǫ (t) 2 1 Aη,ǫ,t ≤ C 2 0 (η, x 0 )ǫ 2 E y(t) 4 for any t ≥ 0, where the positive constant C 0 (η, x 0 ) depends on η and x 0 . By a similar argument using in inequality (3.7) we deduce E y(t) 4 ≤ d(d+2) 4δ 2 for any t ≥ 0. Then E h ǫ (t) 2 1 Aη,ǫ,t ≤ C 2 0 (η, x 0 ) d(d + 2) 4δ 2 ǫ 2 for any t ≥ 0. On the other hand, recall the well-known inequality (x + y + z) 2 ≤ 4(x 2 + y 2 + z 2 ) for any x, y, z ∈ R. Then E h ǫ (t) 2 1 A c η,ǫ,t ≤4E F (ϕ(t) + √ ǫy(t)) 2 1 A c η,ǫ,t + 4E F (ϕ(t)) 2 1 A c η,ǫ,t + 4ǫE DF (ϕ(t))y(t) 2 1 A c η,ǫ,t for any t ≥ 0. We will analyse the upper bound of the last inequality. Since ϕ(s) ≤ x 0 for any s ≥ 0, then E F (ϕ(t)) 2 1 A c η,ǫ,t ≤ C 2 1 ( x 0 )P A c η,ǫ,t , for any t ≥ 0, where C 1 ( x 0 ) is a positive constant that only depends on x 0 . We also observe that E DF (ϕ(t))y(t) 2 1 A c η,ǫ,t ≤ C 2 2 ( x 0 )E y(t) 2 1 A c η,ǫ,t for any t ≥ 0, where C 2 ( x 0 ) is a positive constant that only depends on x 0 . From the Cauchy-Schwarz inequality we get E y(t) 2 1 A c η,ǫ,t =E y(t) 2 1 A c η,ǫ,t 1 A c η,ǫ,t ≤ E y(t) 4 1 A c η,ǫ,t 1 /2 P A c η,ǫ,t 1 /2 ≤ E y(t) 8 1 /4 P A c η,ǫ,t 3 /4 . for any t ≥ 0. Following similar computations as we did in (3.7) or by item ii) of Proposition C.2 we deduce E y(t) 8 ≤ d(d + 2)(d + 4)(d + 6) 16δ 4 for any t ≥ 0. Therefore E DF (ϕ(t))y(t) 2 1 A c η,ǫ,t ≤ C 2 3 ( x 0 , δ, d) P A c η,ǫ,t 3 /4 for any t ≥ 0, where C 3 ( x 0 , δ, d) is a positive constant that only depends on x 0 , δ and d. Finally, we analise E F (ϕ(t) + √ ǫy(t)) 2 1 A c η,ǫ,t . From (G) we have E F (ϕ(t) + √ ǫy(t)) 2 1 A c η,ǫ,t ≤ c 2 0 e 4c 1 x 0 2 E e 4c 1 ǫ y(t) 2 1 A c η,ǫ,t for any t ≥ 0. From the Cauchy-Schwarz inequality we deduce E e 4c 1 ǫ y(t) 2 1 A c η,ǫ,t 1 A c η,ǫ,t ≤ E e 8c 1 ǫ y(t) 2 1 A c η,ǫ,t 1 /2 P A c η,ǫ,t 1 /2 ≤ E e 16c 1 ǫ y(t) 2 1 /4 P A c η,ǫ,t 3 /4 for any t ≥ 0. 
From item iv) of Proposition C.2, for any ǫ ∈ 0, δ 32c 1 we have E e 16c 1 ǫ y(t) 2 ≤ e 16c 1 dǫt for any t ≥ 0. Therefore, E h ǫ (t) 2 ≤C 2 4 (η, x 0 , d, δ)ǫ 2 + 4C 2 1 ( x 0 )P A c η,ǫ,t + 4 C 2 3 ( x 0 , δ, d) + e 4c 1 dǫt P A c η,ǫ,t 3 /4 ≤ C 2 4 (η, x 0 , d, δ)ǫ 2 + 4 C 2 1 ( x 0 ) + C 2 3 ( x 0 , δ, d) + e 4c 1 dǫt P A c η,ǫ,t 3 /4 , where C 2 4 (η, x 0 , d, δ) = C 2 0 (η, x 0 ) d(d+2) 4δ 2 . From item ii) of Lemma C.1 we have P A c η,ǫ,t ≤ 2dǫ 2 t δ(η 2 − ǫdt) 2 for any 0 ≤ t < η 2 ǫd . Notice that P A c η,ǫ,t ≤ 4dǫ 2 t δη 2 for any 0 ≤ t < η 2 2ǫd . Consequently, E h ǫ (t) 2 ≤C 2 4 (η, x 0 , d, δ)ǫ 2 + 4 C 2 1 ( x 0 ) + C 2 3 ( x 0 , δ, d) + e 2c 1 η 2 4dǫ 2 t δη 2 3 /4 for any 0 ≤ t < η 2 2ǫd . The second part follows immediately from inequality (3.12). In Lemma C.6, we will prove that the linear non-homogeneous process {y(t) : t ≥ 0} has a limiting, non-degenerate law which is Gaussian with mean vector zero and covariance matrix Σ which is the unique solution of the Lyapunov matrix equation (2.9). 3. 3. An ǫ /3 proof. We approximate the process {x ǫ (t) : t ≥ 0} by a linear non-homogeneous process {z ǫ (t) := ϕ(t) + √ ǫy(t) : t ≥ 0} in which we can carry out "explicit" computations. Since we need to compare solutions of various Stochastic Differential Equations with different initial conditions, we introduce some notation. Let ξ be a random variable in R d and let T > 0. Let {ϕ(t, ξ) : t ≥ 0} denote the solution of dϕ(t, ξ) = −F (ϕ(t, ξ))dt for any t ≥ 0, ϕ(0, ξ) = ξ. Let {y(t, ξ, T ) : t ≥ 0} be the solution of the stochastic differential equation dy(t, ξ, T ) = −DF (ϕ(t, ξ))y(t, ξ, T )dt + dB(t + T ) for any t ≥ 0, y(0, ξ, T ) = 0 and define {z ǫ (t, ξ, T ) : t ≥ 0} as z ǫ (t, ξ, T ) := ϕ(t, ξ) + √ ǫy(t, ξ, T ) for any t ≥ 0. Let c ∈ R. In what follows, we will always take T = t ǫ mix + cw ǫ > 0 for every ǫ > 0 small enough, so for simplicity, we will omit it from the notation. Let δ ǫ > 0 such that δ ǫ = o(1). For ǫ ≪ 1 define t ǫ shift := t ǫ mix − δ ǫ > 0. (3.13) The following lemma is the key of the proof. Roughly speaking, from Lemma 3.1 we see that the processes {x ǫ (t) : t ≥ 0} and {z ǫ (t) : t ≥ 0} are close enough for times of order O(ln( 1 /ǫ)). Therefore we can shift the processes for a small time δ ǫ and then we coupled the remainder differences in a small time interval [0, δ ǫ ]. Since {z ǫ (t) : t ≥ 0} is linear, then thermalisation (window cut-off) will be concluded from it. Lemma 3.2. For any c ∈ R and ǫ ≪ 1 we have |d TV (x ǫ (t ǫ mix + cw ǫ , x 0 ), µ ǫ ) − d TV (z ǫ (δ ǫ , z ǫ (t ǫ shift + cw ǫ , x 0 )), G(0, ǫΣ))| ≤ d TV (x ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )), z ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )))+ d TV (z ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )), z ǫ (δ ǫ , z ǫ (t ǫ shift + cw ǫ , x 0 ))) + d TV (G(0, ǫΣ), µ ǫ ). (3.14) Proof. Notice that d TV (x ǫ (t ǫ shift + cw ǫ + δ ǫ , x 0 ), µ ǫ ) = d TV (x ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )), µ ǫ ) ≤ d TV (x ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )), z ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )))+ d TV (z ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )), z ǫ (δ ǫ , z ǫ (t ǫ shift + cw ǫ , x 0 )))+ d TV (z ǫ (δ ǫ , z ǫ (t ǫ shift + cw ǫ , x 0 )), G(0, ǫΣ)) + d TV (G(0, ǫΣ), µ ǫ ). On the other hand, G(0, ǫΣ)). 
d TV (z ǫ (δ ǫ , z ǫ (t ǫ shift + cw ǫ , x 0 )), G(0, ǫΣ)) ≤ d TV (z ǫ (δ ǫ , z ǫ (t ǫ shift + cw ǫ , x 0 )), z ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )))+ d TV (z ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )), x ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )))+ d TV (x ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )), µ ǫ ) + d TV (µ ǫ , Gluing both inequalities we deduce |d TV (x ǫ (t ǫ shift + cw ǫ + δ ǫ , x 0 ), µ ǫ ) − d TV (z ǫ (δ ǫ , z ǫ (t ǫ shift + cw ǫ , x 0 )), G(0, ǫΣ))| ≤ d TV (x ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )), z ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )))+ d TV (z ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )), z ǫ (δ ǫ , z ǫ (t ǫ shift + cw ǫ , x 0 ))) + d TV (G(0, ǫΣ), µ ǫ ). In what follows, we will prove that the upper bound of inequality (3.14) is negligible as ǫ → 0. 3.3.1. Short-time coupling. A natural question arising is how to obtain explicit "good" bounds for the total variation distance between x ǫ (t) and z ǫ (t). Using the celebrated Cameron-Martin-Girsanov Theorem, a coupling on the path space can be done and it is possible to establish bounds on the total variation distance using the Pinsker inequality of such diffusions. This method only provides a coupling over short time intervals. For more details see [4], [22], [48] and the references therein. On the other hand, "explicit" bounds for the total variation distance between transition probabilities of diffusions with different drifts are derived using analytic arguments. This approach also works for the stationary measures of the diffusions. For further details see [43] and the references therein. In order to avoid homogenisation arguments for F , we use the Hellinger approach developed in [48] for obtain an upper bound for the total variation distance between the non-linear model x ǫ (t) with the linear nonhomogeneous model z ǫ (t) in a short time interval. That upper bound is enough for our purposes. As we can notice in Theorem 5.1 in [48], we need to carry out second-moment estimates of the distance between the vector fields associated to the diffusions {x ǫ (t) : t ≥ 0} and {z ǫ (t) : t ≥ 0}, respectively. It is exactly the estimate that we did in Lemma 3.1. ∈ R lim ǫ→0 d TV (x ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )), z ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 ))) = 0, where t ǫ shift is given by (3.13). Proof. Let T ǫ = t ǫ shift + cw ǫ > 0 for ǫ ≪ 1. Notice that d TV (x ǫ (δ ǫ , x ǫ (T ǫ , x 0 )), z ǫ (δ ǫ , x ǫ (T ǫ , x 0 ))) ≤ R d d TV (x ǫ (δ ǫ , u), z ǫ (δ ǫ , u))P(x ǫ (T ǫ , x 0 ) ∈ du). For short, denote by P ǫ (du) the probability measure P(x ǫ (T ǫ , x 0 ) ∈ du). Let K be a positive constant. Then d TV (x ǫ (δ ǫ , x ǫ (T ǫ , x 0 )), z ǫ (δ ǫ , x ǫ (T ǫ , x 0 ))) ≤ u ≤K d TV (x ǫ (δ ǫ , u), z ǫ (δ ǫ , u))P ǫ (du) + P ( x ǫ (T ǫ , x 0 ) > K) . (3.15) Now, we prove that the upper bound of (3.15) is negligible as ǫ → 0. From the Markov inequality we get P ( x ǫ (T ǫ , x 0 ) > K) ≤ E x ǫ (T ǫ , x 0 ) 2 K 2 . Recall the well-known inequality (x + y) 2 ≤ 2(x 2 + y 2 ) for any x, y ∈ R. Then E x ǫ (T ǫ , x 0 ) 2 ≤2E x ǫ (T ǫ , x 0 ) − ϕ(T ǫ , x 0 ) 2 + 2 ϕ(T ǫ , x 0 ) 2 . From inequality (3.4) and inequality (2.2) we have E x ǫ (T ǫ , x 0 ) 2 ≤ ǫd δ + 2e −2δT ǫ x 0 2 , which allows to deduce lim ǫ→0 P ( x ǫ (T ǫ , x 0 ) > K) = 0. (3.16) Now, we analyse u ≤K d TV (x ǫ (δ ǫ , u), z ǫ (δ ǫ , u))P ǫ (du). 
From the Theorem 5.1 in [48] we obtain u ≤K d TV (x ǫ (δ ǫ , u), z ǫ (δ ǫ , u))P ǫ (du) ≤ 1 ǫ u ≤K δǫ 0 E I ǫ (s, u) 2 dsP ǫ (du), where I ǫ (s, u) := F (x ǫ (s, u)−DF (ϕ(s, u))z ǫ (s, u)+DF (ϕ(s, u))ϕ(s, u)−F (ϕ(s, u)) for any s ≥ 0 and u ∈ R d . Following the same argument using in Lemma 3.1, for any u ∈ R d such that u ≤ K and 0 ≤ s ≤ δ ǫ we deduce E I ǫ (s, u) 2 ≤ 1 δ 2 C(K, d, δ)ǫ 2 δ ǫ +C(K, d, δ)ǫ 3 /2 δ 7 /4 ǫ , where C(K, d, δ) andC(K, d, δ) only depend on K, d and δ. Therefore, lim ǫ→0 u ≤K d TV (x ǫ (δ ǫ , u), z ǫ (δ ǫ , u))P ǫ (du) = 0. 3.3.2. Linear non-homogeneous coupling. In this part, we couple two nonhomogeneous solutions z ǫ (t, x) and z ǫ (t, y) for short time t ≪ 1 and initials conditions x and y such that x − y small enough. Proposition 3.4. Assume that (H) and (G) hold. Let δ ǫ = ǫ θ for some θ ∈ (0, 1 /2). Then for any c ∈ R lim ǫ→0 d TV (z ǫ (δ ǫ , x ǫ (t ǫ shift + cw ǫ , x 0 )), z ǫ (δ ǫ , z ǫ (t ǫ shift + cw ǫ , x 0 ))) = 0, where t ǫ shift is given by (3.13). Proof. Let T ǫ = t ǫ shift + cw ǫ > 0 for ǫ ≪ 1. Notice that d TV (z ǫ (δ ǫ , x ǫ (T ǫ , x 0 )), z ǫ (δ ǫ , z ǫ (T ǫ , x 0 ))) ≤ R d ×R d d TV (z ǫ (δ ǫ , u), z ǫ (δ ǫ ,ũ))P(x ǫ (T ǫ , x 0 ) ∈ du, z ǫ (T ǫ , x 0 ) ∈ dũ), For short, we denote by P ǫ (du, dũ) for the coupling P(x ǫ (T ǫ , x 0 ) ∈ du, z ǫ (T ǫ , x 0 ) ∈ dũ). Let K andK any positive constants. Then Since the stochastic differential equation associated to {y(t, u, T ǫ ) : t ≥ 0} is linear then the variation of parameters formula allows us to deduce that d TV (z ǫ (δ ǫ , x ǫ (T ǫ , x 0 )), z ǫ (δ ǫ , z ǫ (T ǫ , x 0 ))) ≤ u ≤K, ũ ≤K d TV (z ǫ (δ ǫ , u), z ǫ (δ ǫ ,ũ))P ǫ (du, dũ)+ P ( x ǫ (T ǫ , x 0 ) > K) + P z ǫ (T ǫ , x 0 ) >K .z ǫ (δ ǫ , u) = Φ(δ ǫ )u + √ ǫ(Φ(δ ǫ )) δǫ 0 (Φ(s)) −1 d(B(T ǫ + s) − B(T ǫ )) for any u ∈ R d , where {Φ(t) : t ≥ 0} is the solution of the matrix differential equation: d dt Φ(t) = −Φ(t)DF (ϕ(t + T ǫ )) for t ≥ 0, Φ(0) = I d . Observe that for any υ ∈ R d , z ǫ (δ ǫ , υ) has Gaussian distribution with mean vector ϕ(δ ǫ , υ) and covariance matrix ǫΣ(δ ǫ ), where Σ(δ ǫ ) is the covariance matrix of the random vector (Φ(δ ǫ )) δǫ 0 (Φ(s)) −1 d(B(T ǫ + s) − B(T ǫ )) which does not depend on υ. Moreover, using the Itô formula we deduce Σ(δ ǫ ) = Φ(δ ǫ ) δǫ 0 (Φ(s)) −1 (Φ(s)) −1 * ds(Φ(δ ǫ )) * From the Lebesgue Differentiation Theorem we obtain lim ǫ→0 Σ(δ ǫ ) δ ǫ = I d . The latter allows us to deduce that (Σ(δ ǫ ) ) − 1 /2 ≤ (δ ǫ ) − 1 /2 C(d), where C(d) is an absolute positive constant that only depends on d. From item iii) of Lemma A.1 and Lemma A.2 we have d TV (z ǫ (δ ǫ , u), z ǫ (δ ǫ ,ũ)) ≤ 1 √ 2πǫ (Σ(δ ǫ )) − 1 /2 Φ(δ ǫ ) (u −ũ) for any u,ũ ∈ R d . Then d TV (z ǫ (δ ǫ , u), z ǫ (δ ǫ ,ũ)) ≤ C 1 (d) √ ǫδ ǫ u −ũ for any u,ũ ∈ R d , where C 1 (d) is an absolute positive constant that only depends on d. Therefore, u ≤K, ũ ≤K d TV (z ǫ (δ ǫ , u), z ǫ (δ ǫ ,ũ))P ǫ (du, dũ) ≤ C 1 (d) √ ǫδ ǫ E [ x ǫ (T ǫ , x 0 ) − z ǫ (T ǫ , x 0 ) ] ≤ C 1 (d) √ ǫδ ǫ E x ǫ (T ǫ , x 0 ) − z ǫ (T ǫ , x 0 ) 2 1 /2 . From Lemma 3.1 we deduce lim ǫ→0 u ≤K, ũ ≤K d TV (z ǫ (δ ǫ , u), z ǫ (δ ǫ ,ũ))P ǫ (du, dũ) = 0. Putting all pieces together, we get the statement. 3.3.3. Window Cut-off. Remind that z ǫ (t) = ϕ(t) + √ ǫy(t), t ≥ 0, where {y(t) : t ≥ 0} satisfies the linear non-homogeneous stochastic differential equation: dy(t) = −DF (ϕ(t))y(t)dt + dB(t) for t ≥ 0, y(0) = 0. 
Therefore, for any t > 0, z ǫ (t) has Gaussian distribution with zero mean vector ϕ(t) and covariance matrix ǫΣ(t), where {Σ(t) : t ≥ 0} is the the solution to the deterministic matrix differential equation: d dt Σ(t) = −DF (ϕ(t))Σ(t) − Σ(t)(DF (ϕ(t))) * + I d for t ≥ 0, Σ(0) = 0. Under (H), we can prove that ϕ(t) → 0 and Σ(t) → Σ as t → +∞, where Σ is a symmetric and positive definite matrix (See Lemma C.6). Therefore, z ǫ (t) converges in distribution to a random vector z ǫ (∞) as t → +∞, where z ǫ (∞) has Gaussian law with zero mean vector and covariance matrix ǫΣ. Using item iii) of Lemma A.1, Lemma A.3, Lemma A.5 the convergence can be easily improved to be in total variation distance. Bearing all this in mind, we can measure how abrupt is the convergence to its equilibrium. Definē D ǫ (t) := d TV (z ǫ (t), z ǫ (∞)) = d TV (G(ϕ(t), ǫΣ(t)), G(0, ǫΣ)) for any t > 0. Proposition 3.5. Assume that (H) holds. Let δ ǫ ≥ 0 such that δ ǫ = o (1). For any c ∈ R we have lim ǫ→0 D ǫ (t ǫ shift + δ ǫ + cw ǫ ) − D ǫ (t ǫ shift + δ ǫ + cw ǫ ) = 0, where t ǫ shift is given by (3.13), D ǫ (t) := d TV G (t − τ ) ℓ−1 e λ(t−τ ) √ ǫ Σ − 1 /2 m k=1 e iθ k (t−τ ) v k , I d , G(0, I d ) for any t ≥ τ , with λ, ℓ, τ , θ 1 , . . . , θ m ∈ [0, 2π), v 1 , . . . , v m are the constants and vectors associated to x 0 in Lemma 2.1, and the matrix Σ is the unique solution of the matrix Lyapunov equation: DF (0)X + X(DF (0)) * = I d . Proof. Let t > 0. From the triangle inequality and item ii), item iii) of Lemma A.1 we obtain D ǫ (t) ≤d TV (G(ϕ(t), ǫΣ(t)), G(ϕ(t), ǫΣ)) + d TV (G(ϕ(t), ǫΣ), G(0, ǫΣ)) ≤ d TV (G(0, Σ(t)), G(0, Σ)) + d TV G 1 √ ǫ ϕ(t), Σ , G(0, Σ) . A similar argument allows us to deduce that D ǫ (t) − d TV G 1 √ ǫ ϕ(t), Σ , G(0, Σ) ≤ d TV (G(0, Σ(t)) , G(0, Σ)). (3.19) Using Lemma A.5 we get lim t→+∞ d TV (G(0, Σ(t)), G(0, Σ)) = 0. Therefore, the cut-off phenomenon can be deduced from the distancê D ǫ (t) := d TV G 1 √ ǫ ϕ(t), Σ , G(0, Σ) = d TV G Σ − 1 /2 1 √ ǫ ϕ(t), I d , G(0, I d ) for any t > 0, where the last equality follows from item iii) of Lemma A.1. Using the constants and vectors associated to x 0 in Lemma 2.1, for any t > τ define D ǫ (t) := d TV G Σ − 1 /2 (t − τ ) ℓ−1 e λ(t−τ ) √ ǫ m k=1 e iθ k (t−τ ) v k , I d , G(0, I d ) and R ǫ (t) := d TV G Σ − 1 /2 1 √ ǫ ϕ(t), I d , G Σ − 1 /2 (t − τ ) ℓ−1 e λ(t−τ ) √ ǫ m k=1 e iθ k (t−τ ) v k , I d . From item ii) of Lemma A.1 we deduce that R ǫ (t) = d TV G 1 √ ǫ Σ − 1 /2 ϕ(t) − (t − τ ) ℓ−1 e λ(t−τ ) m k=1 e iθ k (t−τ ) v k , I d , G (0, I d ) . A similar argument for obtain inequality (3.19) allows to show that D ǫ (t) − D ǫ (t) ≤ R(t) for any t > τ. (3.20) From inequalities (3.19) and (3.20) we obtain D ǫ (t) −D ǫ (t) ≤ R ǫ (t) + d TV (G(0, Σ(t)), G(0, Σ)) for any t > τ . Straightforward computations led us to lim ǫ→0 e −λ(t ǫ shift +δǫ+cw ǫ −τ ) (t ǫ shift + δ ǫ + cw ǫ − τ ) ℓ−1 √ ǫ = (2λ) 1−ℓ e −c (3.21) for any c ∈ R. Therefore, Lemma 2.1 together relation (3.21) allow to deduce that lim ǫ→0 R ǫ (t ǫ shift + δ ǫ + cw ǫ ) = 0 for any c ∈ R. Consequently, we obtain the statement. 3.3.4. The invariant measure. In this section, we prove that the invariant measure µ ǫ of the evolution (2.4) is well approximated in total variation distance by a Gaussian distribution with zero mean vector and covariance matrix ǫΣ, where Σ is the unique solution of the matrix Lyapunov equation: Proof. Recall that z ǫ (t) = ϕ(t) + √ ǫy(t) for any t ≥ 0. 
Note that for any s, t ≥ 0 and x ∈ R d we have d TV (G(0, ǫΣ), µ ǫ ) ≤d TV (G(0, ǫΣ), z ǫ (s + t, x))+ d TV (z ǫ (s + t, x), x ǫ (s + t, x)) + d TV (x ǫ (s + t, x), µ ǫ ). (3.22) Observe that d TV (G(0, ǫΣ), z ǫ (s + t, x)) = d TV (G(0, ǫΣ), G(ϕ(s + t, x), ǫΣ(s + t))), DF (0)X + X(DF (0)) * = I d . where Σ(t) is the covariance matrix of y(t). Therefore, using the triangle inequality together with item ii) and item iii) of Lemma A.1 we obtain d TV (G(0, ǫΣ), z ǫ (s + t, x)) ≤d TV (G(0, Σ), G(0, Σ(s + t)))+ d TV G ϕ(s + t, x) √ ǫ , Σ , G(0, Σ) . (3.23) Let s ǫ ≪ ǫ 1 /2019 and t ǫ mix ≪ t ǫ := 1 ǫ 1 /8 . By Lemma A.5 and Lemma B.2 we obtain lim ǫ→0 d TV (G(0, Σ), G(0, Σ(s ǫ + t ǫ ))) = 0. From (H) we obtain ϕ(s ǫ + t ǫ , x) ≤ x e −δ(s ǫ +t ǫ ) . Straightforward computations led us to deduce that lim ǫ→0 ϕ(s ǫ + t ǫ , x) √ ǫ = 0. The latter together with item iii) of Lemma A.1 imply lim ǫ→0 d TV G ϕ(s ǫ + t ǫ , x) √ ǫ , Σ , G(0, Σ) = 0. Therefore, from inequality (3.23) we obtain lim ǫ→0 d TV (G(0, ǫΣ), z ǫ (s ǫ + t ǫ , x)) = 0 Now, using the same ideas as in Proposition 3.3 (much easier since t ǫ ≫ t ǫ mix ) we deduce lim ǫ→0 d TV (z ǫ (s ǫ + t ǫ , x), x ǫ (s ǫ + t ǫ , x)) = 0. From inequality (3.22), it remains to prove that lim ǫ→0 d TV (x ǫ (s ǫ + t ǫ , x), µ ǫ ) = 0. Notice that d TV (x ǫ (s + t, x), µ ǫ ) ≤ R d d TV (x ǫ (s + t, x), x ǫ (s + t,x))µ ǫ (dx). Since the stochastic differential equation associated to {y ǫ (t) : t ≥ 0} is not homogeneous we should improve the notation as we did in the beginning of Subsection 3.3. Following such notation, we always use T = t ǫ . Therefore, for simplicity, we can omit as we did in Proposition 3.3. Then z ǫ (t,x)), x ǫ (s, x ǫ (t,x)))µ ǫ (dx). R d d TV (x ǫ (s + t, x), x ǫ (s + t,x))µ ǫ (dx) ≤ d TV (x ǫ (s, x ǫ (t, x)), z ǫ (s, x ǫ (t, x))) + d TV (z ǫ (s, x ǫ (t, x)), z ǫ (s, z ǫ (t, x)))+ R d d TV (z ǫ (s, z ǫ (t, x)), z ǫ (s, z ǫ (t,x)))µ ǫ (dx)+ R d d TV (z ǫ (s, Again, using the same ideas as in Proposition 3.3 and Proposition 3.4 (much easier since t ǫ ≫ t ǫ mix ) we deduce lim ǫ→0 d TV (x ǫ (s ǫ , x ǫ (t ǫ , x)), z ǫ (s ǫ , x ǫ (t ǫ , x))) = 0. and lim ǫ→0 d TV (z ǫ (s ǫ , x ǫ (t ǫ , x)), z ǫ (s ǫ , z ǫ (t ǫ , x))) = 0. Fix R > 0. We split the remainders integrals as follows R d d TV (z ǫ (s, z ǫ (t, x)), z ǫ (s, z ǫ (t,x)))µ ǫ (dx) ≤ x ≤R d TV (z ǫ (s, z ǫ (t, x)), z ǫ (s, z ǫ (t,x)))µ ǫ (dx) + µ ǫ ( x > R) and R d d TV (z ǫ (s, z ǫ (t,x)), x ǫ (s, x ǫ (t,x)))µ ǫ (dx) ≤ x ≤R d TV (z ǫ (s, z ǫ (t,x)), x ǫ (s, x ǫ (t,x)))µ ǫ (dx) + µ ǫ ( x > R). Notice that x ≤R d TV (z ǫ (s, z ǫ (t, x)), z ǫ (s, z ǫ (t,x)))µ ǫ (dx) ≤ κ(R, x ) 1 √ ǫs e −δ(t+s) , where κ(R, x ) is a non-negative constant and δ > 0 comes from (H). By taking t ǫ = 1 ǫ 1 /8 ≫ t ǫ mix and s ǫ = 1 ǫ 1 /2019 we obtain e −δt ǫ = o( √ ǫs ǫ ). Therefore, lim ǫ→0 x ≤R d TV (z ǫ (s ǫ , z ǫ (t ǫ , x)), z ǫ (s ǫ , z ǫ (t ǫ ,x)))µ ǫ (dx) = 0. Following the same ideas as in the proof of Proposition 3.3, we deduce lim ǫ→0 x ≤R d TV (z ǫ (s ǫ , z ǫ (t ǫ ,x)), x ǫ (s ǫ , x ǫ (t ǫ ,x)))µ ǫ (dx) = 0. Now, we only need to prove that µ ǫ ( x > R) is negligible when ǫ → 0. Following the same ideas in [18] (page 122, Section 5, Step 1), the invariant measure µ ǫ has finite p-moments for any p ≥ 0. Moreover, we have R d x 2 µ ǫ (dx) ≤ ǫd δ . Indeed, from inequality (3.4) we have E x ǫ (t, x) 2 ≤ x 2 e −2δt + dǫ 2 2δ for any t ≥ 0 and x ∈ R d . For any two numbers a, b ∈ R, denote by a ∧ b the minimum between a and b. Recall that i) If a ≤ b then a ∧ c ≤ b ∧ c for any c ∈ R. 
ii) (a + b) ∧ c ≤ a ∧ c + b ∧ c for any a, b, c ≥ 0. Notice that for any t ≥ 0, n ∈ N and x ∈ R d we have E x ǫ (t, x) 2 ∧ n ≤ E x ǫ (t, x) 2 ∧ n. Then E x ǫ (t, x) 2 ∧ n ≤ x 2 e −2δt + dǫ 2 2δ ∧ n ≤ ( x 2 e −2δt ) ∧ n + dǫ 2 2δ ∧ n for any t ≥ 0, n ∈ N and x ∈ R d . Integrating this inequality against µ ǫ (dx) we obtain R d x 2 ∧ n µ ǫ (dx) ≤ R d ( x 2 e −2δt ) ∧ n µ ǫ (dx) + dǫ 2 2δ ∧ n for any t ≥ 0 and n ∈ N. Passing to the limit first as t → ∞ and using the Dominated Convergence Theorem we have R d x 2 ∧ n µ ǫ (dx) ≤ dǫ 2 2δ ∧ n for any n ∈ N. Now, taking n → ∞ and using the Monotone Convergence Theorem we have R d x 2 µ ǫ (dx) ≤ dǫ 2 2δ . The latter together with the Chebyshev inequality imply Lemma 3.7. Assume that (H) and (G) hold. Let {x ǫ (t, x 0 ) : t ≥ 0} be the solution of (2.4) and denote by µ ǫ the unique invariant probability measure for the evolution given by (2.4). Denote by µ ǫ ( x ≥ R) ≤ dǫ 22Rd ǫ (t) = d TV (x ǫ (t, x 0 ), µ ǫ ) for any t ≥ 0 the total variation distance between the law of the random variable x ǫ (t, x 0 ) and its invariant probability µ ǫ . Consider the cut-off time t ǫ mix given by (2.7) and the time window given by (2.6). Let x 0 = 0. Then for any c ∈ R we have lim ǫ→0 d ǫ (t ǫ mix + cw ǫ ) − D ǫ (t ǫ mix + cw ǫ ) = 0, where D ǫ (t) = d TV G (t − τ ) ℓ−1 e λ(t−τ ) √ ǫ Σ − 1 /2 m k=1 e iθ k (t−τ ) v k , I d , G(0, I d ) for any t ≥ τ with m, λ, ℓ, τ , θ 1 , . . . , θ m , v 1 , . . . , v m are the constants and vectors associated to x 0 in Lemma 2.1, and the matrix Σ is the unique solution of the matrix Lyapunov equation: DF (0)X + X(DF (0)) * = I d . Proof. Firstly, from Lemma C.3 we have that there exists a unique invariant probability measure for the evolution (2.4). Let call the invariant measure by µ ǫ . From Lemma 3.2 together with Proposition 3.3, Proposition 3.4 and Proposition 3.6 we deduce d ǫ (t ǫ mix + cw ǫ ) −D ǫ (t ǫ mix + cw ǫ ) = o(1) as ǫ → 0. From the triangle inequality we obtain d ǫ (t ǫ mix +cw ǫ ) − D ǫ (t ǫ mix + cw ǫ ) ≤ D ǫ (t ǫ mix + cw ǫ ) −D ǫ (t ǫ mix + cw ǫ ) + o(1) as ǫ → 0. The latter together with Proposition 3.5 allows to deduce the statement. Appendix A. Properties of the Total Variation Distance for Gaussian Distributions Recall that G(v, Ξ) denotes the Gaussian distribution in R d with vector mean v and positive definite covariance matrix Ξ. Since the proofs are straightforward, we left most of details to the interested reader. Lemma A.1. Let v,ṽ ∈ R d be two fixed vectors and Ξ,Ξ be two fixed symmetric positive definite d × d matrices. Then i) For any scalar c = 0 we have d TV G cv, c 2 Ξ , G cṽ, c 2Ξ = d TV G(v, Ξ), G ṽ,Ξ . ii) d TV G(v, Ξ), G ṽ,Ξ = d TV G(v −ṽ, Ξ), G 0,Ξ . iii) d TV (G(v, Ξ), G(ṽ, Ξ)) = d TV G Ξ − 1 /2 v, I d , G Ξ − 1 /2ṽ , I d . iv) d TV G(0, Ξ), G 0,Ξ = d TV G 0,Ξ − 1 /2 ΞΞ − 1 /2 , G(0, I d ) . Proof. The proofs follow from the characterisation of the total variation distance between two probability measures with densities, i.e., d TV (P 1 , P 2 ) = 1 2 R d |f 1 (x) − f 2 (x)| dx, where f 1 and f 2 are the densities of P 1 and P 2 , respectively, and using the Change of Variable Theorem. Lemma A.2. For any v ∈ R d we have d TV (G(v, I d ), G(0, I d )) = 2 π v /2 0 e − x 2 2 dx ≤ 1 √ 2π v . Proof. The proof in dimension one is a straightforward computation. We left the details to the interested reader. For dimension bigger than one, the idea is to reduce the proof to dimension one. 
To do that, we use the following fact: for any v,ṽ ∈ R d such that v = ṽ there exists an orthogonal matrix where the last equality follows from the characterisation of the total variation distance between two probability measures with densities and the Change of Variable Theorem. The latter allows us to reduce the proof to dimension one by observing that the vectors v and ( v , 0, . . . , 0) * ∈ R d have the same norm and the statement follows from a straightforward computation. Lemma A.3. Let {v ǫ : ǫ > 0} ⊂ R d such that lim ǫ→0 v ǫ = v ∈ R d . Then lim ǫ→0 d TV (G(v ǫ , I d ), G(0, I d )) = d TV (G(v, I d ), G(0, I d )) . Proof. The idea of the proof follows from Lemma A.2 together with the Dominated Convergence Theorem. Proof. The proof follows from the characterisation of the total variation distance between two probability measures with densities together with the Scheffé Lemma. For m ∈ R, N (m, 1) denotes the Gaussian distribution on R with mean m and unit variance . Lemma A.6. Let {v t : t ≥ 0} ⊂ R d . i) If lim sup t→+∞ v t ≤ C 0 ∈ [0, +∞) then lim sup t→+∞ d TV (G(v t , I d ), G(0, I d )) ≤ d TV (N (C 0 , 1), N (0, 1)). ii) If lim inf t→+∞ v t ≥ C 1 ∈ [0, +∞) then lim inf t→+∞ d TV (G(v t , I d ), G(0, I d )) ≥ d TV (N (C 1 , 1), N (0, 1)). Proof. From Lemma A.2 we deduce d TV (G(v t , I d ), G(0, I d )) = d TV (N ( v t , 1), N (0, 1)) = 2 π v t /2 0 e − x 2 2 dx which allows to reduce the proof for d = 1. The proof proceeds from the following straightforward argument: after passing a subsequence, we use the continuity of the total variation distance (Lemma A.3 and Lemma A.5) and the monotonicity property: d TV (N (m 1 , 1), N (0, 1)) ≤ d TV (N (m 2 , 1), N (0, 1)) for any 0 ≤ |m 1 | ≤ |m 2 | < +∞ in order to deduce item i) and item ii) of the statement. Appendix B. The deterministic dynamical system In this section we present a proof of Lemma 2.1. We start analysing the linear differential equation associated to the linearisation of the non-linear deterministic differential equation (2.1) around the hyperbolic fixed point 0. Proof. Write Λ = DF (0) and let t ≥ 0. We will use the Putzer spectral method to "compute" e −Λt x 0 . By (C), all eigenvalues of Λ have positive real part. Denote by {φ(t, x) : t ≥ 0} the solution of the linear system: d dt φ(t) = −Λφ(t) for t ≥ 0, φ(0) = x. Let (w j,k : j = 1, . . . , N ; k = 1, . . . , N j ) be a Jordan basis of −Λ, that is, −Λw j,k = −λ j w j,k + w j,k+1 for any j = 1, . . . , N ; k = 1, . . . , N j . In this formula we use the convention w j,N j +1 = 0. Since (w j,k : j = 1, . . . , N ; k = 1, . . . , N j ) is a basis of R d , the decomposition φ(t, x) = N j=1 N j k=1 φ j,k (t, x)w j,k defines the functions φ j,k (t, x) in a unique way. Then N j=1 N j k=1 d dt φ j,k (t, x)w j,k = N j=1 N j k=1 φ j,k (t, x) − λ j w j,k + w j,k+1 , and the aforementioned uniqueness implies d dt φ j,k (t, x) = −λ j φ j,k (x, t) + φ j,k−1 (t, x) for any j = 1, . . . , N ; k = 1, . . . , N j , where we use the convention φ j,0 (t, x) = 0. In addition, we have that φ j,k (0, x) = x j,k , where x = N j=1 N j k=1 x j,k w j,k . For each j, the system of equations for {φ j,k (t, x) : k = 1, . . . , N j } is autonomous, as well as the equation for φ j,1 (t, x). Notice that φ j,1 (t, x) = x j,1 e −λ j t and by the method of variation of parameters, for k = 2, . . . , N j we have φ j,k (t, x) = x j,k e −λ j t + t 0 e −λ j (t−s) φ j,k−1 (s, x)ds. 
Applying this formula for k = 2 we see φ j,2 (t, x) = x j,2 e −λ j t + x j,1 te −λ j t and from this expression we can guess and check the formula φ j,k (t, x) = k i=1 x j,i t k−i e −λ j t (k − i)! . We conclude that φ(t, x) = N j=1 N j k=1 k i=1 t k−i e −λ j t (k − i)! x j,i w j,k . (B.1) With this expression in hand, we are ready to prove Lemma B.1. Let x 0 ∈ R d be fixed. Assume that x 0 = 0 and write x 0 = N j=1 N j k=1 x 0 j,k w j,k . Take λ = min{Re(λ j ) : x 0 j,k = 0 for some k} and define J 0 = {j : Re(λ j ) = λ and x 0 j,k = 0 for some k}. In other words, we identify in (B.1) the smallest exponential rate of decay and we collect in J 0 all the indices with that exponential decay. Now, define ℓ = max{N j − k : j ∈ J 0 and x 0 j,k = 0} and J = {j ∈ J 0 : x 0 j,N j −ℓ = 0}. We see that for j ∈ J, lim t→∞ φ j,N j (t, x 0 ) e λt t ℓ = x j,N j −ℓ ℓ! , while for j / ∈ J and k arbitrary or j ∈ J and k = N j , lim t→∞ φ j,k (t, x 0 ) e λt t ℓ = 0. Therefore, lim t→∞ e λt t ℓ φ(t, x 0 ) − j∈J e −(λ j −λ)t ℓ! x j,N j −ℓ w j,N j = 0. Let m = #J and let σ : {1, . . . , m} → J be a numbering of J. By definition of λ and J, the numbers λ j − λ are imaginary. Therefore, Lemma B.1 is proved choosing θ k = i(λ σ k − λ) and v k = x σ k ,Nσ k −ℓ w σ k ,Nσ k ℓ! . Now, we are ready to prove Lemma 2.1. The proof is based in the Hartman-Grobman Theorem (see Theorem (Hartman) page 127 of [27] or the celebrated paper of P. Hartman [35]) that guarantees that the conjugation around the hyperbolic fixed point 0 of (2.1) is C 1 -local diffeomorphism under some resonance conditions which are fulfilled when all the eigenvalues of the matrix DF (0) have negative (or positive) real part. Lemma B.2. Assume that (C) holds. Then for any x 0 ∈ R d \ {0} there exist λ := λ(x 0 ) > 0, ℓ := ℓ(x 0 ), m := m(x 0 ) ∈ N, θ 1 := θ 1 (x 0 ), . . . , θ m := θ m (x 0 ) ∈ [0, 2π), v 1 := v 1 (x 0 ), . . . , v m := v m (x 0 ) in C d linearly independent and τ := τ (x 0 ) > 0 such that lim t→+∞ e λt t ℓ−1 ϕ(t + τ, x 0 ) − m k=1 e iθ k t v k = 0. Proof. Since all the eigenvalues of DF (0) have real positive real part, there exist open sets U, V around the hyperbolic fixed point zero and h : U → V a C 1 (U, V ) homeomorphism such that h(0) = 0 and h(x) = x + o( x ) as x → 0 such that ϕ(t, x) = h −1 (e −DF (0)t h(x) ) for any t ≥ 0 and x ∈ U . From (C) we obtain ϕ(t, x) ≤ x e −δt for any x ∈ R d and any t ≥ 0. Observe that there exists τ := τ (x 0 ) > 0 such that ϕ(t, x 0 ) ∈ U for any t ≥ τ . Then ϕ(t + τ, x 0 ) = ϕ(t, x τ ) = h −1 (e −DF (0)t h(x τ )) for any t ≥ 0. From the triangle inequality we obtain e λt t ℓ−1 ϕ(t + τ, x 0 ) − m k=1 e iθ k t v k ≤ e λt t ℓ−1 ϕ(t + τ, x 0 ) − e λt t ℓ−1 e −DF (0)tx + e λt t ℓ−1 e −DF (0)tx − m k=1 e iθ k t v k . (B.2) Observe that e λt t ℓ−1 ϕ(t + τ, x 0 ) − e −DF (0)tx = e λt t ℓ−1 h −1 (e −DF (0)t h(x τ )) − e −DF (0)tx = e λt t ℓ−1 o e −DF (0)tx = e λt e −DF (0)tx t ℓ−1 o(1) ≤ e λt t ℓ−1 e −DF (0)tx − m k=1 e iθ k t v k o(1) + m k=1 v k o(1), where o(1) goes to zero as t goes by. The latter together with inequality (B.2) and Lemma B.1 allows us to deduce lim t→+∞ e λt t ℓ−1 ϕ(t + τ, x 0 ) − m k=1 e iθ k t v k = 0. Lemma B.3. Assume that (C) holds. Let δ ǫ = o(1). Then lim ǫ→0 δ ǫ ϕ(t ǫ mix + δ ǫ + cw ǫ , x 0 ) 2 ǫ = 0 for any c ∈ R. Proof. Remember that t ǫ mix = 1 2λ ln (1/ǫ) + ℓ − 1 λ ln (ln (1/ǫ)) + τ, and w ǫ = 1 λ + o(1), where λ, ℓ and τ are the constants associated to x 0 in Lemma 2.1 and o(1) goes to zero as ǫ → 0. Define t ǫ := t ǫ mix − τ + δ ǫ + cw ǫ . 
Note 1 √ ǫ ϕ(t ǫ + τ, x 0 ) ≤ (t ǫ ) ℓ−1 e λt ǫ √ ǫ e λt ǫ (t ǫ ) ℓ−1 ϕ(t ǫ + τ, x 0 ) − m k=1 e iθ k tǫ v k + (t ǫ ) ℓ−1 e λt ǫ √ ǫ m k=1 v k . From the last inequality, using the fact that lim Appendix C. The stochastic dynamical system In this Appendix we analyse the zeroth and first order approximations for the Itô diffusion {x ǫ (t) : t ≥ 0}. Recall that {y(t) : t ≥ 0} is the solution of the stochastic differential equation (3.5), and δ > 0 is the constant that appears in (H). Lemma C.1. Assume that (H) holds. For any η > 0 and t ∈ 0, η 2 ǫd we have P sup 0≤s≤t x ǫ (t) − ϕ(t) ≥ η ≤ 2dǫ 2 t δ (η 2 − ǫdt) 2 and P sup 0≤s≤t √ ǫy(t) ≥ η ≤ 2dǫ 2 t δ (η 2 − ǫdt) 2 . Proof. Let ǫ > 0 and t ≥ 0 be fixed. From (3.3) we have d x ǫ (t) − ϕ(t) 2 ≤ − 2δ x ǫ (t) − ϕ(t) 2 dt + 2 √ ǫ (x ǫ (t) − ϕ(t)), dB(t) + dǫdt. (C.1) Let M ǫ (t) := 2 √ ǫ (x ǫ (t) − ϕ(t)) * for every t ≥ 0. Notice that N ǫ (t) := t 0 M ǫ (s)dB(s) : t ≥ 0 is a local martingale. Then, there exists a sequence of increasing stopping times {τ ǫ n } n∈N such that almost surely τ ǫ n ↑ ∞ as n goes to infinity and for each n ∈ N, {N ǫ,n (t) = N ǫ (min{τ ǫ n , t}) : t ≥ 0} is a true martingale. Taking expectation on (C.1) and using the fact that {N ǫ,n (t) : t ≥ 0} is a zero-mean martingale, we deduce E x ǫ (min{τ ǫ n , t}) − ϕ (min{τ ǫ n , t}) 2 ≤ ǫdmin{τ ǫ n , t} ≤ ǫdt for every t ≥ 0. Consequently, by the well-known Fatou Lemma we obtain E x ǫ (t) − ϕ(t) 2 ≤ ǫdt for any t ≥ 0. The latter implies N ǫ (t) = t 0 M ǫ (s)dB(s) : t ≥ 0 is a true martingale. From inequality (C.1) we have x ǫ (t) − ϕ(t) 2 ≤ ǫdt + N ǫ (t) for any t ≥ 0. For any η > 0 and 0 ≤ t < η 2 /(ǫd) we have P sup 0≤s≤t x ǫ (s) − ϕ(s) 2 ≥ η 2 ≤ P sup 0≤s≤t N ǫ (s) ≥ η 2 − ǫdt . From the Doob inequality for submartingales we obtain P sup 0≤s≤t N ǫ (s) ≥ η 2 − ǫdt ≤ E N ǫ (t) 2 (η 2 − ǫdt) 2 . The Itô isometry allows us to deduce that E N ǫ (t) 2 = 4ǫ t 0 E x ǫ (s) − ϕ(s) 2 ds. From inequality (3.4) we obtain E N ǫ (t) 2 ≤ 2dǫ 2 t /δ. Therefore P sup 0≤s≤t x ǫ (s) − ϕ(s) ≥ η ≤ 2dǫ 2 t δ(η 2 − ǫdt) 2 for 0 ≤ t < η 2 /(ǫd). The proof for the second part proceeds from the same ideas as the first part. We left the details to the interested reader. Proposition C.2. Assume that (H) holds. For t ≥ 0, write W (t) := sup 0≤s≤t B(s) . For any t ≥ 0, the following holds true: i) E x ǫ (t) − ϕ(t) 2 ≤ dǫ 2δ and E y(t) 2 ≤ d 2δ . ii) For each n ∈ N, define c n := n−1 j=0 (d + 2j). Then E x ǫ (t) − ϕ(t) 2n ≤ c n ǫ n 2 n δ n and E y(t) 2n ≤ c n 2 n δ n . iii) For any 0 ≤ r < δ we have E exp r x ǫ (t) − ϕ(t) 2 ǫ < +∞ and E exp r y(t) 2 < +∞. iv) Let δ ǫ ∈ (0, δ /2]. Then E exp δ ǫ x ǫ (t) − ϕ(t) 2 ǫ ≤ exp (dδ ǫ t) and E exp δ ǫ y(t) 2 ≤ exp (dδ ǫ t) . Proof. i) The first part follows from inequality (3.4). The second part follows exactly as inequality (3.4). We left the details to the interested reader. ii) We provide the proof for the first part. The second part proceeds exactly as the first part and we left the details to the interested reader. Let ǫ > 0 and t ≥ 0 be fixed. Notice that DF (ϕ(s) + θ (x ǫ (s) − ϕ(s)))dθ. We will use the induction method. The induction basis had already proved in item i) of this proposition. Consider f n+1 (x) = x 2(n+1) , x ∈ R d . By the Itô formula, it follows that d x ǫ (t) − ϕ(t) 2(n+1) = 2(n + 1) x ǫ (t) − ϕ(t) 2n x ǫ (t) − ϕ(t), A ǫ (t) (x ǫ (t) − ϕ(t)) dt + ǫ(d + 2n)(n + 1) x ǫ (t) − ϕ(t) 2n dt + 2(n + 1) √ ǫ x ǫ (t) − ϕ(t) 2n x ǫ (t) − ϕ(t), dB(t) . 
x ǫ (t) − ϕ(t) = − From (H) we obtain d x ǫ (t) − ϕ(t) 2(n+1) ≤ −2δ(n + 1) x ǫ (t) − ϕ(t) 2(n+1) dt + ǫ(d + 2n)(n + 1) x ǫ (t) − ϕ(t) 2n dt + 2(n + 1) √ ǫ x ǫ (t) − ϕ(t) 2n x ǫ (t) − ϕ(t), dB(t) . After a localisation argument, we can take expectation in both sides of the last differential inequality and deduce that d dt E x ǫ (t) − ϕ(t) 2(n+1) ≤ − 2δ(n + 1)E x ǫ (t) − ϕ(t) 2(n+1) + ǫ(d + 2n)(n + 1)E x ǫ (t) − ϕ(t) 2n . By the induction hypothesis we have E x ǫ (t) − ϕ(t) 2n ≤ c n ǫ n 2 n δ n for any t ≥ 0. Then d dt E x ǫ (t) − ϕ(t) 2(n+1) ≤ − 2δ(n + 1)E x ǫ (t) − ϕ(t) 2(n+1) + (n + 1) c n+1 ǫ n+1 2 n δ n . From Lemma C.7 we obtain E x ǫ (t) − ϕ(t) 2(n+1) ≤ c n+1 ǫ n+1 2 n+1 δ n+1 for any t ≥ 0. ii) We provide the proof for the first part. The second part follows exactly as the first part and again we left the details to the interested reader. Let ǫ > 0 and t ≥ 0 be fixed. By the Monotone Convergence Theorem it follows that E e r x ǫ (t)−ϕ(t) 2 ǫ = ∞ n=0 E r n x ǫ (t) − ϕ(t) 2n ǫ n n! . By item i) of this Proposition, we have ∞ n=0 E r n x ǫ (t) − ϕ(t) 2n ǫ n n! ≤ 1 + ∞ n=1 r n c n 2 n δ n n! . Since ∞ n=1 r n cn 2 n δ n n! < +∞ when 0 ≤ r < δ, then we deduce the statement. iii) We give the proof for the first part. The second part proceeds exactly as the first part and again we left the details to the interested reader. Let ǫ > 0 and t ≥ 0 be fixed. We will use the Itô formula for the function g ǫ (x) = e κǫ x 2 , x ∈ R d , where κ ǫ := δǫ ǫ . Then de κǫ x ǫ (t)−ϕ(t) 2 = − 2κ ǫ e κǫ x ǫ (t)−ϕ(t) 2 A ǫ (t)(x ǫ (t) − ϕ(t)), x ǫ (t) − ϕ(t) dt + ǫ 2κ 2 ǫ e κǫ x ǫ (t)−ϕ(t) 2 x ǫ (t) − ϕ(t) 2 + κ ǫ de κǫ x ǫ (t)−ϕ(t) 2 dt + 2d √ ǫκ ǫ e κǫ x ǫ (t)−ϕ(t) 2 x ǫ (t) − ϕ(t), dB(t) . Using (H) we obtain de κǫ x ǫ (t)−ϕ(t) 2 ≤ − 2κ ǫ δe κǫ x ǫ (t)−ϕ(t) 2 x ǫ (t) − ϕ(t) 2 dt + ǫ 2κ 2 ǫ e κǫ x ǫ (t)−ϕ(t) 2 x ǫ (t) − ϕ(t) 2 + κ ǫ de κǫ x ǫ (t)−ϕ(t) 2 dt + 2d √ ǫκ ǫ e κǫ x ǫ (t)−ϕ(t) 2 x ǫ (t) − ϕ(t), dB(t) . Since 0 < δ ǫ ≤ δ 2 then de κǫ x ǫ (t)−ϕ(t) 2 ≤ −κ ǫ δe κǫ x ǫ (t)−ϕ(t) 2 x ǫ (t) − ϕ(t) 2 dt + ǫκ ǫ de κǫ x ǫ (t)−ϕ(t) 2 dt + 2d √ ǫκ ǫ e κǫ x ǫ (t)−ϕ(t) 2 x ǫ (t) − ϕ(t), dB(t) . By item i) and item ii) of this proposition and using a localisation argument we deduce d dt E e κǫ x ǫ (t)−ϕ(t) 2 ≤ ǫκ ǫ dE e κǫ x ǫ (t)−ϕ(t) 2 for any t ≥ 0. Now, using Lemma C.7 we obtain E e δǫ x ǫ (t)−ϕ(t) 2 ǫ ≤ e dδǫt for any t ≥ 0. Lemma C.3 (Uniquely Ergodic). Assume that (C) holds. For any ǫ ∈ (0, 1] there exists a unique invariant measure µ ǫ for the dynamics (2.4). The unique probability invariant measure µ ǫ has exponential moments R d e β y µ ǫ (dy) < +∞ for any β ≥ 0. In addition, for any β > 0 there exist positive constants C ǫ,β 1 and C ǫ,β 2 such that for any initial condition x 0 ∈ R d we have Proof. This follows immediately from Theorem 3.3.4 page 91 of [6]. Lemma C.4. Assume that (C) holds. Consider the matrix differential equation: (Σ i,j (t) − Σ i,j ) d dt Σ i,j (t) = After rearrangement the sums and using (C) we have d dt r(t) ≤ −4δr(t) for t ≥ 0, r(0) = Σ 0 − Σ 2 . By Lemma C.7 we deduce Σ(t) − Σ 2 ≤ e −4δt Σ 0 − Σ 2 for any t ≥ 0 which implies the statement. Remark C.5. Let A be a d-squared matrix such that F (x) = Ax satisfies (C). If we take F (x) = Ax in the stochastic differential equation (2.4), the covariance matrix associated to the solution of (2.4) satisfies the matrix differential equation (C.2) with initial datum Σ 0 the zero-matrix. Lemma C.6. Assume that (H) holds. 
The covariance matrix of y(t) converge as t → +∞ to a non-degenerate covariance matrix Σ, where Σ is the unique solution of the Lyapunov matrix equation: DF (0)X + X(DF (0)) * = I d . Proof. For any t ≥ 0, let Λ(t) be the covariance matrix of the y(t). This matrix satisfies the matrix differential equation: d dt Λ(t) = −DF (ϕ(t))Λ(t) − Λ(t)(DF (ϕ(t))) * + I d for t ≥ 0, Σ(0) = 0. Let K x 0 := {x ∈ R d : x ≤ x 0 }. By (H) we have ϕ(x, t) ∈ K x 0 for any x ∈ K x 0 and t ≥ 0. Since F ∈ C 2 (R d , R d ) there exists a constant L := L x 0 > 0 such that DF (x) − DF (0) ≤ L x for any x ∈ K x 0 . Take η > 0 and τ η := 1 δ ln x 0 η such that DF (ϕ(t)) − DF (0) ≤ L ϕ(t) ≤ L x 0 e −δt ≤ Lη (C.4) for every t ≥ τ η . Call τ := τ η . Then, d dt ∆(t) = −DF (0) ∆(t) − ∆(t)(DF (0)) * + I d for t ≥ 0, ∆(0) = Λ(τ ). Let Π(t) = Λ(t + τ ) − ∆(t), t ≥ 0. Then d dt Λ(t) = −DF (ϕ(t + τ ))Π(t) − Π(t)(DF (ϕ(t + τ ))) * + g(t, τ ) for t ≥ 0, Π(0) = 0, where g(t, τ ) := (DF (0)−DF (ϕ(t+τ )))∆(t)+∆(t)(DF (0)−DF (ϕ(t+τ ))) * for t ≥ 0. Therefore d dt Π(t) 2 = 2 d i,j=1 Π i,j (t) d dt Π i,j (t) = 2 d i,j=1 Π i,j (t) − d k=1 DF (ϕ(t + τ )) i,k Π k,j (t) − d k=1 Π i,k (t)DF (ϕ(t + τ )) j,k + R i,j (t) , was done. Also, he would like to express his gratitude to FORDECyT-CONACyT-México for the travel support to the Summer postdoctoral stay 2017 at IMPA. 2. 4 . 4Results. Denote by G(v, Ξ) the Gaussian distribution in R d with vector mean v and positive definite covariance matrix Ξ. Let I d be the identity d × d-matrix. Given a matrix A, denote by A * the transpose matrix of A. Corollary 2. 7 ( 7Thermalisation). Suppose that x 0 = 0. Theorem 2.2 implies thermalisation for the family of stochastic Markov processes {x ǫ,x 0 } ǫ∈(0,1] . Corollary 2. 9 ( 9Profile thermalisation). Suppose that x 0 = 0. There is profile thermalisation for the family of stochastic Marvok processes {x ǫ,x 0 } ǫ∈(0,1] if and only if Corollary 2 . 10 . 210Suppose that x 0 = 0. If all the eigenvalues of DF (0) are real then the family of stochastic processes {x ǫ,x 0 } ǫ∈(0,1] has profile thermalisation. Proposition 3 . 3 . 33Assume that (H) and (G) hold. Let δ ǫ > 0 such that δ ǫ = o(1). Then for any c 3.15) together with relations (3.16) and (3.17) allow us to deduce the desired result. we prove that the upper bound of (3.18) is negligible as ǫ → 0. From relation (3.16) we have lim ǫ→0 P ( x ǫ (T ǫ , x 0 ) > K) = 0. Similar ideas as in the proof of relation (3.16) and item ii) of Proposition C.2 yield lim ǫ→0 P z ǫ (T ǫ , x 0 ) >K = 0. Proposition 3. 6 . 6Assume that (H) and (G) hold. Thenlim ǫ→0 d TV (G(0, ǫΣ), µ ǫ ) = 0. A such thatṽ = A(v). Recall that the law of G(0, I d ) is invariant under orthogonal transformations, i.e., OG(0, I d ) = G(0, I d ) for any orthogonal matrix O. Then for any v,ṽ ∈ R d we have d TV (G(ṽ, I d ), G(0, I d )) = d TV (G(Av, I d ), G(0, I d )) = d TV (G(Av, I d ), AG(0, I d )) = d TV (AG(v, I d ), AG(0, I d )) = d TV (G(v, I d ), G(0, I d )) , Lemma A. 4 . 4Let {v ǫ : ǫ > 0} ⊂ R d such that lim ǫ→0 v ǫ = +∞. Thenlim ǫ→0 d TV (G(v ǫ , I d ), G(0, I d )) = 1. Proof. The idea of the proof follows from Lemma A.2 together with the Dominated Convergence Theorem. Lemma A.5. Let S d denotes the set of d×d symmetric and positive definite matrices. Let {Ξ ǫ : ǫ > 0} ⊂ S d such that lim ǫ→0 Ξ ǫ = Ξ ∈ S d . Then lim ǫ→0 d TV (G(0, Ξ ǫ ), G(0, Ξ)) = 0. Lemma B. 1 . 1Assume that (C) holds. Then for any x 0 ∈ R d \{0} there exist λ := λ(x 0 ) > 0, ℓ := ℓ(x 0 ), m := m(x 0 ) ∈ {1, . . . , d}, θ 1 := θ 1 (x 0 ), . . . 
, θ m := θ m (x 0 ) ∈ [0, 2π) and v 1 := v 1 (x 0 ), . . . , v m := v m (x 0 ) in C d linearly independent such that lim t→+∞ e λt t ℓ−1 e −DF (0)t x 0 − m k=1 e iθ k t v k = 0. Letx := h(x τ ). By Lemma B.1 there exist λ(x) := λ > 0, ℓ(x) := ℓ, m(x) := m ∈ N, θ 1 (x) := θ 1 , . . . , θ m (x) := θ m ∈ [0, 2π) and v 1 (x) := v 1 , . . . , v m (x) := v m in C d linearly independent such that lim t→+∞ e λt t ℓ−1 e −DF (0)tx − m k=1 e iθ k t v k = 0. ǫ→0 (t ǫ ) ℓ− 1 e ǫ→01λt ǫ √ ǫ = e −c (2λ) ℓ−1 and Lemma B.2 we deduce the desired result. A ǫ (s) (x ǫ (s) − ϕ(s)) ds + √ ǫB(t), dd TV (x ǫ (t, x 0 ), µ ǫ ) ≤ C ǫ,β 1 e −tC ǫTV (x ǫ (t, x 0 ), µ ǫ ) = 0. ( dt Σ(t) = −DF (0)Σ(t) − Σ(t)(DF (0)) * + I d for t ≥ 0, Σ(0) = Σ 0 , (C.2) where Σ 0 is any d × d matrix. Then lim t→+∞ Σ(t) − Σ = 0,where Σ is the unique solution of the Lyapunov matrix equation:DF (0)X + X(DF (0)) * = I d . (C.3)Proof. Write Λ = DF (0) and let t ≥ 0. Notice that all eigenvalues of Λ have positive real part. Denote by {φ(t, x) : t ≥ 0} the solution of the linear system:d dt φ(t) = −Λφ(t) for t ≥ 0, φ(0) = x.Then {φ(t, x) : t ≥ 0} is globally asymptotic stable and consequently the Lyapunov matrix equation (C.3) has a unique positive definite solution Σ. From (C.3) it follows that Σ is a symmetric matrix. Letr(t) := Σ(t) − Σ Σ i,j (t) − Σ i,j ) 2 for any t ≥ 0.Let δ i,j = 1 if i = j and δ i,j = 0 if i = j. ,k Λ j,k = δ i,j for any i, j ∈ {1, . . . , d}. d ǫ (t) = d TV (x ǫ (t, x 0 ), µ ǫ ) for any t ≥ 0 d i,j=1 (Σ i,j (t) − Σ i,j ) − d k=1 Λ i,k (Σ k,j (t) − Σ k,j ) − d k=1 (Σ i,k (t) − Σ i,k )Λ j,k . for any t ≥ 0.Moreover, using Lipschitz condition (C.4) and Lemma C.4 we obtainwhere C is a positive constant. A priori we can take 0 < η < 3δ C and using Lemma C.7 we obtainLetting t → +∞ and after η → 0 we deduce that lim Refined long-time asymptotics for some polymeric fluid flow models. A Arnold, J Carrillo, &amp; C Manzini, Communications in Mathematical Sciences. 83A. Arnold, J. Carrillo & C. Manzini. Refined long-time asymptotics for some polymeric fluid flow models, Communications in Mathematical Sciences, Volume 8, Number 3, 2010, 763-782. Metastability in reversible diffusion processes I. Sharp asymptotics for capacities and exit times. A Bovier, M Eckhoff, V Gayrard, &amp; M Klein, Journal European Mathematical Society. 6A. Bovier, M. Eckhoff, V. Gayrard & M. Klein. Metastability in reversible diffusion processes I. Sharp asymptotics for capacities and exit times, Journal European Math- ematical Society, Volume 6, 2004, 399-424. Metastability in reversible diffusion processes II. Precise asymptotics for small eigenvalues. A Bovier, V Gayrard, &amp; M Klein, Journal European Mathematical Society. 7A. Bovier, V. Gayrard & M. Klein. Metastability in reversible diffusion processes II. Precise asymptotics for small eigenvalues, Journal European Mathematical Society, Volume 7, 2005, 69-99. Sticky couplings of multidimensional diffusions with different drifts, to appear in Annales de l'Institut Henri Poincaré (B) Probability and Statistics. A Eberle, &amp; R Zimmer, arXiv:1612.06125Available atA. Eberle & R. Zimmer. Sticky couplings of multidimensional diffusions with different drifts, to appear in Annales de l'Institut Henri Poincaré (B) Probability and Statistics. Available at arXiv:1612.06125. Metastability for a class of dynamical systems subject to small random perturbations. A Galves, E Olivieri, &amp; M Vares, Annals of Probability. 15A. Galves, E. Olivieri & M. Vares. 
Metastability for a class of dynamical systems subject to small random perturbations, Annals of Probability, Volume 15, 1987, 1288- 1305. Ergodic behavior of Markov processes with applications to limit theorems. A Kulik, De Gruyter Studies in Mathematics. A. Kulik. Ergodic behavior of Markov processes with applications to limit theorems, De Gruyter Studies in Mathematics, 2008. Long-time asymptotics of a multiscale model for polymeric fluid flows, Archive for Rational Mechanics and Analysis. B Jourdain, C Le Bris, T Lelièvre, &amp; F Otto, 181B. Jourdain, C. Le Bris, T. Lelièvre & F. Otto, Long-time asymptotics of a multiscale model for polymeric fluid flows, Archive for Rational Mechanics and Analysis, Volume 181, Number 1, 2006, 97-148. Cut-off and hitting times of a sample of Ornstein-Uhlenbeck process and its average. B Lachaud, Journal of Applied Probability. 424B. Lachaud. Cut-off and hitting times of a sample of Ornstein-Uhlenbeck process and its average, Journal of Applied Probability, Volume 42, Number 4, 2005, 1069-1080. The exit problem for randomly perturbed dynamical systems. B Matkowsky, &amp; Z Schuss, SIAM Journal on Applied Mathematics. 332B. Matkowsky & Z. Schuss. The exit problem for randomly perturbed dynamical sys- tems, SIAM Journal on Applied Mathematics, Volume 33, Number 2, 1977, 365-382. Laplace's method revisited: weak convergence of probability measures. C Hwang, Annals of Probability. 86C. Hwang. Laplace's method revisited: weak convergence of probability measures, An- nals of Probability, Volume 8, Number 6, 1980, 1177-1182. Accelerating Gaussian diffusions. C Hwang, S Hwang-Ma, &amp; S Sheu, Annals of Applied Probability. 33C. Hwang, S. Hwang-Ma & S. Sheu. Accelerating Gaussian diffusions, Annals of Ap- plied Probability, Volume 3, Number 3, 1993, 897-913. The metastable behavior of infrequently observed, weakly random, one-dimensional diffusion processes. C Kipnis, &amp; C Newman, SIAM Journal on Applied Mathematics. 456C. Kipnis & C. Newman. The metastable behavior of infrequently observed, weakly random, one-dimensional diffusion processes, SIAM Journal on Applied Mathematics, Volume 45, Number 6, 1985, 972-982. . C Villani, Hypocoercivity, Memoirs of the American Mathematical Society. 202950C. Villani. Hypocoercivity, Memoirs of the American Mathematical Society, Volume 202, Number 950, 2009. Shuffling cards and stopping times. D Aldous, &amp; P Diaconis, American Mathematical Monthly. 935D. Aldous & P. Diaconis. Shuffling cards and stopping times, American Mathematical Monthly, Volume 93, Number 5, 1986, 333-348. Strong uniform times and finite random walks. D Aldous, &amp; P Diaconis, Advances in Applied Mathematics. 81D. Aldous & P. Diaconis. Strong uniform times and finite random walks, Advances in Applied Mathematics, Volume 8, Number 1, 1987, 69-97. Markov Chains and Mixing Times. D Levine, Y Peres, &amp; E Wilmer, American Mathematical SocietyProvidence Rhode IslandD. Levine, Y. Peres & E. Wilmer. Markov Chains and Mixing Times, American Math- ematical Society, Providence Rhode Island, 2009. Multidimensional diffusion processes. D Stroock, &amp; S Varadhan, Springer-VerlagBerlinD. Stroock & S. Varadhan. Multidimensional diffusion processes, Springer-Verlag, Berlin, 1997. Exponential ergodicity and regularity for equations with Lévy noise. E Priola, A Shirikyan, L Xu, &amp; J Zabczyk, Stochastic Processes and their Applications. 122E. Priola, A. Shirikyan, L. Xu & J. Zabczyk. 
Exponential ergodicity and regularity for equations with Lévy noise, Stochastic Processes and their Applications, Volume 122, 2012, 106-133. Large deviations and metastability. E Olivieri, &amp; M Vares, Cambridge University PressE. Olivieri & M. Vares. Large deviations and metastability, Cambridge University Press, 2004. Generalisation of the Eyring-Kramers transition rate formula to irreversible diffusion processes. F Bouchet, &amp; J Reygner, Annales Henri Poincaré. 17F. Bouchet & J. Reygner. Generalisation of the Eyring-Kramers transition rate for- mula to irreversible diffusion processes, Annales Henri Poincaré, Volume 17, 2016, 3499-3532. Abrupt convergence for a family of Ornstein Uhlenbeck processes. G Barrera, Brazilian Journal of Probability and Statistics. 321G. Barrera. Abrupt convergence for a family of Ornstein Uhlenbeck processes, Brazilian Journal of Probability and Statistics, Volume 32, Number 1, 2018, 188-199. Abrupt convergence for stochastic small perturbations of one dimensional dynamical systems. G Barrera, &amp; M Jara, Journal of Statistical Physics. 1631G. Barrera & M. Jara. Abrupt convergence for stochastic small perturbations of one dimensional dynamical systems, Journal of Statistical Physics, Volume 163, Number 1, 2016, 113-138. The cutoff phenomenon for ergodic Markov processes. G Chen, &amp; L Saloff-Coste, Electronic Journal of Probability. 133G. Chen & L. Saloff-Coste. The cutoff phenomenon for ergodic Markov processes, Electronic Journal of Probability, Volume 13, Number 3, 2008, 26-78. Bounds for left and right window cutoffs. J Barrera, &amp; B Ycart, ALEA-Latin American Journal of Probability and Mathematical Statistics. 112J. Barrera & B. Ycart, Bounds for left and right window cutoffs, ALEA-Latin Ameri- can Journal of Probability and Mathematical Statistics, Volume 11, Number 2, 2014, 445-458. Abrupt convergence and escape behavior for birth and death chains. J Barrera, O Bertoncini, &amp; R Fernández, Journal of Statistical Physics. 1374J. Barrera, O. Bertoncini & R. Fernández. Abrupt convergence and escape behavior for birth and death chains, Journal of Statistical Physics, Volume 137, Number 4, 2009, 595-623. Gibbs measures asymptotics. K Athreya, &amp; C Hwang, Indian Journal of Statistics. 721K. Athreya & C. Hwang. Gibbs measures asymptotics, Indian Journal of Statistics, Volume 72-A, Part 1, 2018, 191-207. Differential equations and dynamical systems. L Perko, Third Edition. SpringerL. Perko. Differential equations and dynamical systems, Third Edition, Springer, 2001. L Saloff-Coste, Random walks on finite groups, Probability & Discrete Structures. SpringerL. Saloff-Coste. Random walks on finite groups, Probability & Discrete Structures, Springer, 2004, 263-346. On small random perturbations of dynamical systems. M Freidlin, &amp; A Wentzell, Russian Mathematical Surveys. 25M. Freidlin & A. Wentzell. On small random perturbations of dynamical systems, Russian Mathematical Surveys, Volume 25, 1970, 1-55. Some problems concerning stability under small random perturbations. M Freidlin, &amp; A Wentzell, Theory Probability Applied. 17M. Freidlin & A. Wentzell. Some problems concerning stability under small random perturbations, Theory Probability Applied, Volume 17, 1972, 269-283. Random perturbations of dynamical systems. M Freidlin, &amp; A Wentzell, SpringerM. Freidlin & A. Wentzell. Random perturbations of dynamical systems, Springer, 2012. Exponential leveling of stochastically perturbed dynamical systems. 
M Day, SIAM Journal on Mathematical Analysis. 13M. Day. Exponential leveling of stochastically perturbed dynamical systems, SIAM Journal on Mathematical Analysis, Volume 13, 1982, 532-540. On the exponential exit law in the small parameter exit problem, Stochastics. M Day, 8M. Day. On the exponential exit law in the small parameter exit problem, Stochastics, Volume 8, 1983, 297-323. The cut-off phenomenon in finite Markov chains. P Diaconis, Proceedings of the National Academy of Sciences, USA. 93P. Diaconis. The cut-off phenomenon in finite Markov chains, Proceedings of the National Academy of Sciences, USA, Volume 93, 1996, 1659-1664. On local homeomorphisms of Euclidean spaces. P Hartman, Bulletin of Mexican Mathematical Society. 5P. Hartman. On local homeomorphisms of Euclidean spaces, Bulletin of Mexican Math- ematical Society, Volume 5, 1960, 220-241. The theory of matrices with applications. P Lancaster, &amp; M Tismenetsky, Computer Science and Scientific Computing. Academic PressSecond EditionP. Lancaster & M. Tismenetsky, The theory of matrices with applications, Academic Press, Computer Science and Scientific Computing, Second Edition, 1985. Escape problem of irreversible systems. R Maier, &amp; D Stein, Physical Review E. 482R. Maier & D. Stein. Escape problem of irreversible systems, Physical Review E, Volume 48, Number 2, 1993, 931-938. Transition-Rate theory for nongradient drift fields. R Maier, &amp; D Stein, Physical Review Letters. 6926R. Maier & D. Stein. Transition-Rate theory for nongradient drift fields, Physical Review Letters, Volume 69, Number 26, 1993, 3691-3695. Limiting exit location distributions in the stochastic exit problem. R Maier, &amp; D Stein, SIAM Journal on Applied Mathematics. 573R. Maier & D. Stein. Limiting exit location distributions in the stochastic exit problem, SIAM Journal on Applied Mathematics, Volume 57, Number 3, 1997, 752-790. Asymptotic behavior of the second eigenvalue of Kolmogorov's process (in French). S Jacquot, Journal of Multivariate Analysis. 402S. Jacquot. Asymptotic behavior of the second eigenvalue of Kolmogorov's process (in French), Journal of Multivariate Analysis, Volume 40, Number 2, 1992, 335-347. Decay rates and cutoff for convergence and hitting times of Markov chains with countably infinite state space. S Martínez, &amp; B Ycart, Advances in Applied Probability. 331S. Martínez & B. Ycart. Decay rates and cutoff for convergence and hitting times of Markov chains with countably infinite state space, Advances in Applied Probability, Volume 33, Number 1, 2001, 188-205. Optimal non-reversible linear drift for the convergence to equilibrium of a diffusion. T Lelièvre, F Nier, &amp; G Pavliotis, Journal of Statistical Physics. 1522T. Lelièvre, F. Nier & G. Pavliotis. Optimal non-reversible linear drift for the conver- gence to equilibrium of a diffusion, Journal of Statistical Physics, Volume 152, Number 2, 2013, 237-174. Distances between transition probabilities of diffusions and applications to nonlinear Fokker-Planck-Kolmogorov equations. V Bogachev, M Röckner, &amp; S Shaposhnikov, Journal of Functional Analysis. 2715V. Bogachev, M. Röckner & S. Shaposhnikov. Distances between transition probabili- ties of diffusions and applications to nonlinear Fokker-Planck-Kolmogorov equations, Journal of Functional Analysis, Volume 271, Number 5, 2016, 1262-1300. Local Lyapunov exponents: sublimiting growth rates of linear random differential equations. W Siegert, SpringerW. Siegert. 
Local Lyapunov exponents: sublimiting growth rates of linear random dif- ferential equations, Springer, 2009. Singularities in large deviation functions. Y Baek, &amp; Y Kafri, Journal of Statistical Mechanics: Theory and Experiment. 08026Y. Baek & Y. Kafri. Singularities in large deviation functions, Journal of Statistical Mechanics: Theory and Experiment, P 08026, 2015, 1-31. Small noise limit for diffusions near heteroclinic networks. Y Bakhtin, Dynamical Systems. 253Y. Bakhtin. Small noise limit for diffusions near heteroclinic networks, Dynamical Systems, Volume 25, Number 3, 2010, 413-431. Noisy heteroclinic networks. Y Bakhtin, Probability Theory and Related Fields. 150Y. Bakhtin. Noisy heteroclinic networks, Probability Theory and Related Fields, Vol- ume 150, 2011, 1-42. On the variation distance for probability measures defined on a filtered space. Y Kabanov, R Liptser, &amp; A Shiryaev, Probability Theory and Related Fields. 711Y. Kabanov, R. Liptser & A. Shiryaev. On the variation distance for probability mea- sures defined on a filtered space, Probability Theory and Related Fields, Volume 71, Number 1, 1986, 19-35. The exit problem: a new approach to diffusion across potential barriers. Z Schuss, &amp; B Matkowsky, SIAM Journal on Applied Mathematics. 363Z. Schuss & B. Matkowsky. The exit problem: a new approach to diffusion across potential barriers, SIAM Journal on Applied Mathematics, Volume 36, Number 3, 1979, 604-623. E-mail address: [email protected] Instituto de Matemática Pura e Aplicada, IMPA. Estrada Dona Castorina 110. Jardim Botânico. Postal Code. University of Alberta, Department of Mathematical and Statistical SciencesCentral Academic Building. 116 Street and 85 AvenueUniversity of Alberta, Department of Mathematical and Statistical Sci- ences. Central Academic Building. 116 Street and 85 Avenue. Postal Code: T6G-2G1. Edmonton, Alberta, Canada. E-mail address: [email protected] Instituto de Matemática Pura e Aplicada, IMPA. Estrada Dona Castorina 110, Jardim Botânico. Postal Code: 22460-320. Rio de Janeiro, Rio de Janeiro, Brazil. E-mail address: [email protected]
[]
[ "Syntax-Infused Transformer and BERT models for Machine Translation and Natural Language Understanding", "Syntax-Infused Transformer and BERT models for Machine Translation and Natural Language Understanding", "Syntax-Infused Transformer and BERT models for Machine Translation and Natural Language Understanding", "Syntax-Infused Transformer and BERT models for Machine Translation and Natural Language Understanding" ]
[ "Dhanasekar Sundararaman [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Vivek Subramanian [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Guoyin Wang [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Shijing Si [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Dinghan Shen [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Dong Wang [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Lawrence Carin [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Dhanasekar Sundararaman [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Vivek Subramanian [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Guoyin Wang [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Shijing Si [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Dinghan Shen [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Dong Wang [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n", "Lawrence Carin [email protected] \nDepartment of Electrical & Computer Engineering\nDuke University Durham\n27708NC\n" ]
[ "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC", "Department of Electrical & Computer Engineering\nDuke University Durham\n27708NC" ]
[]
Attention-based models have shown significant improvement over traditional algorithms in several NLP tasks. The Transformer, for instance, is an illustrative example that generates abstract representations of tokens inputted to an encoder based on their relationships to all tokens in a sequence. Recent studies have shown that although such models are capable of learning syntactic features purely by seeing examples, explicitly feeding this information to deep learning models can significantly enhance their performance. Leveraging syntactic information like part of speech (POS) may be particularly beneficial in limited training data settings for complex models such as the Transformer. We show that the syntax-infused Transformer with multiple features achieves an improvement of 0.7 BLEU when trained on the full WMT '14 English to German translation dataset and a maximum improvement of 1.99 BLEU points when trained on a fraction of the dataset. In addition, we find that the incorporation of syntax into BERT fine-tuning outperforms baseline on a number of downstream tasks from the GLUE benchmark.
null
[ "https://arxiv.org/pdf/1911.06156v1.pdf" ]
208,006,052
1911.06156
beb91a773677872fc21f08722bdcc737bf5917b5
Syntax-Infused Transformer and BERT models for Machine Translation and Natural Language Understanding Dhanasekar Sundararaman [email protected] Department of Electrical & Computer Engineering Duke University Durham 27708NC Vivek Subramanian [email protected] Department of Electrical & Computer Engineering Duke University Durham 27708NC Guoyin Wang [email protected] Department of Electrical & Computer Engineering Duke University Durham 27708NC Shijing Si [email protected] Department of Electrical & Computer Engineering Duke University Durham 27708NC Dinghan Shen [email protected] Department of Electrical & Computer Engineering Duke University Durham 27708NC Dong Wang [email protected] Department of Electrical & Computer Engineering Duke University Durham 27708NC Lawrence Carin [email protected] Department of Electrical & Computer Engineering Duke University Durham 27708NC Syntax-Infused Transformer and BERT models for Machine Translation and Natural Language Understanding Attention-based models have shown significant improvement over traditional algorithms in several NLP tasks. The Transformer, for instance, is an illustrative example that generates abstract representations of tokens inputted to an encoder based on their relationships to all tokens in a sequence. Recent studies have shown that although such models are capable of learning syntactic features purely by seeing examples, explicitly feeding this information to deep learning models can significantly enhance their performance. Leveraging syntactic information like part of speech (POS) may be particularly beneficial in limited training data settings for complex models such as the Transformer. We show that the syntax-infused Transformer with multiple features achieves an improvement of 0.7 BLEU when trained on the full WMT '14 English to German translation dataset and a maximum improvement of 1.99 BLEU points when trained on a fraction of the dataset. In addition, we find that the incorporation of syntax into BERT fine-tuning outperforms baseline on a number of downstream tasks from the GLUE benchmark. Introduction Attention-based deep learning models for natural language processing (NLP) have shown promise for a variety of machine translation and natural language understanding tasks. For word-level, sequence-to-sequence tasks such as translation, paraphrasing, and text summarization, attention-based models allow a single token (e.g., a word or subword) in a sequence to be represented as a combination of all tokens in the sequence (Luong, Pham, and Manning, 2015). The distributed context allows attention-based models to infer rich representations for tokens, leading to more robust performance. One such model is the Transformer, which features a multi-headed self-and cross-attention mechanism that allows many different representations to be learned for a given token in parallel (Vaswani et al., 2017). The encoder and decoder arms each contain several identical subunits that are chained together to learn embeddings for tokens in the source and target vocabularies. Though the Transformer works well across a variety of different language pairs, such as (English, German) and (English, French), it consists of a large number of parameters and relies on a significant amount of data and extensive training to accurately pick up on syntactic and semantic relationships. 
Previous studies have shown that an NLP model's performance improves with the ability to learn underlying grammatical structure of a sentence (Kuncoro et al., 2018;Linzen, Dupoux, and Goldberg, 2016). In addition, it has been shown that simultaneously training models for machine translation, part of speech (POS) tagging, and named entity recognition provides a slight improvement over baseline on each task for small datasets (Niehues and Cho, 2017). Inspired by these previous efforts, we propose to utilize the syntactic features that are inherent in natural language sequences, to enhance the performance of the Transformer model. We suggest a modification to the embeddings fed into the Transformer architecture, that allows tokens inputted into the encoder to attend to not only other tokens but also syntactic features including POS, case, and subword position. These features are identified using a separate model (for POS) or are directly specified (for case and subword position) and are appended to the one-hot vector encoding for each token. Embeddings for the tokens and their features are learned jointly during the Transformer training process. As the embeddings are passed through the layers of the Transformer, the representation for each token is synthesized using a combination of word and syntactic features. We evaluate the proposed model on English to German (EN-DE) translation on the WMT '14 dataset. For the EN-DE translation task, we utilize multiple syntactic features including POS, case and subword tags that denote the relative position of subwords within a word (Sennrich and Haddow, 2016). Like POS, case is a categorical feature, which can allow the model to distinguish common words from important ones. Subword tags can help bring cohesion among subwords of a complex word (say, "amalgamation") so that their identity as a unit is not compromised by tokenization. We prove that the incorporation of these features improves the translation performance in the EN-DE task with a number of different experiments. We show that the BLEU score improvements of the feature-rich syntaxinfused Transformer uniformly outperforms the baseline Transformer as a function of the training data size. Examining the attention weights learned by the proposed model further justifies the effectiveness of incorporating syntactic features. We also experiment with this modification of embeddings on the BERT BASE model on a number of General Language Understanding Evaluation (GLUE) benchmarks and observe considerable improvement in performance on multiple tasks. With the addition of POS embeddings, the BERT BASE + POS model outperforms BERT BASE on 4 out of 8 downstream tasks. To summarize, our main contributions are as follows: 1. We propose a modification to the trainable embeddings of the Transformer model, incorporating explicit syntax information, and demonstrate superior performance on EN-DE machine translation task. 2. We modify pretrained BERT BASE embeddings by feeding in syntax information and find that the performance of BERT BASE + POS outperforms BERT BASE on a number of GLUE benchmark tasks. Background Baseline Transformer The Transformer consists of encoder and decoder modules, each containing several subunits that act sequentially to generate abstract representations for words in the source and target sequences (Vaswani et al., 2017). As a preprocessing step, each word is first divided into subwords of length less than or equal to that of the original word (Sennrich, Haddow, and Birch, 2015). 
These subwords are shared between the source and target vocabularies. where E ∈ R D×N is a trainable matrix with column m constituting the embedding for subword m, N is the total number of subwords in the shared vocabulary, and x m ∈ {0, 1} N : i x mi = 1 is a one-hot vector corresponding to subword m. These embeddings are passed sequentially through six encoder subunits. Each of these subunits features a self-attention mechanism, that allows subwords in the input sequence to be represented as a combination of all subwords in the sequence. Attention is accomplished using three sets of weights: the key, query, and value matrices (K, Q, and V, respectively). The key and query matrices interact to score each subword in relation to other subwords, and the value matrix gives the weights to which the score is applied to generate output embedding of a given subword. Stated mathematically, K = HW K Q = HW Q V = HW V A = softmax QK √ ρ V(2) where H = [h 1 h 2 · · · h M ] ∈ R M ×D are the Ddimensional embeddings for a sequence of M subwords indexed by m; W K , W Q , and W V all ∈ R D×P are the projection matrices for keys, queries, and values, respectively; ρ is a scaling constant (here, taken to be P ) and A ∈ R M ×P is the attention-weighted representation of each subword. Note that these are subunit-specific -a separate attention-weighted representation is generated by each subunit and passed on to the next. Moreover, for the first layer, h m := e m . The final subunit then passes its information to the decoder, that also consists of six identical subunits that behave similarly to those of the encoder. One key difference between the encoder and decoder is that the decoder not only features self-attention but also cross-attention; thus, when generating new words, the decoder pays attention to the entire input sequence as well as to previously decoded words. BERT While the Transformer is able to generate rich representations of words in a sequence by utilizing attention, its decoder arm restricts it to be task-specific. The word embeddings learned by the Transformer encoder, however, can be fine-tuned to perform a number of different downstream tasks. Bidirectional encoder representations of Transformers (BERT) is an extension of the Transformer model that allows for such fine-tuning. The BERT model is essentially a Transformer encoder (with number of layers l, embedding dimension D, and number of attention heads α) which is pre-trained using two methods: masked language modeling (MLM) and next-sentence prediction (NSP). Subsequently, a softmax layer is added, allowing the model to perform various tasks such as classification, sequence labeling, question answering, and language inference. According to (Devlin et al., 2018), BERT significantly outperforms previous state-of-the-art models on the eleven NLP tasks in the GLUE benchmark (Wang et al., 2018). Model Syntax-infused Transformer Syntax is an essential feature of grammar that facilitates generation of coherent sentences. For instance, POS dictates how words relate to one another (e.g., verbs represent the actions of nouns, adjectives describe nouns, etc.). Studies have shown that when trained for a sufficiently large number of steps, NLP models can potentially learn underlying patterns about text like syntax and semantics, but this knowledge is imperfect (Jawahar et al., 2019). 
However, works such as (Kuncoro et al., 2018;Linzen, Dupoux, and Goldberg, 2016) show that NLP models that acquire even a weak understanding of syntactic structure through training demonstrate improved performance relative to baseline. Hence, we hypothesize that explicit prior knowledge of syntactic information can benefit NLP models in a variety of tasks. To aid the Transformer in more rapidly acquiring and utilizing syntactic information for better translation, we (i) employ a pretrained model 1 to tag words in the source sequence with their POS, (ii) identify the case of each word, and (iii) identify the position of each subword relative to other subwords that are part of the same word (subword tagging). We then append trainable syntax embedding vectors to the token embeddings, resulting in a combined representation of syntactic and semantic elements. Specifically, each word in the source sequence is first associated with its POS label according to syntactic structure. After breaking up words into their corresponding subwords (interchangeably denoted as tokens), we assign each subword the POS label of the word from which it originated. For example, if the word sunshine is broken up into subwords sun, sh, and ine, each subword would be assigned the POS NOUN. The POS embeddings are then extracted from a trainable embedding matrix using a look-up table, in a manner similar to that of the subword embeddings (see Figure 1). The POS embeddings f P m of each subword (indexed by m) are then concatenated with the subword embeddings e m ∈ R D−d to create a combined embedding where d is the dimension of the feature embedding. In a similar manner, we incorporate case and subword position features. For case, we use a binary element z c m ∈ {0, 1} to look up a feature embedding f c m for each subword, depending on whether the original word is capitalized. For subword position, we use a categorical element z s m ∈ {B, M, E, O} to identify a feature embedding f s m for each subword depending on whether the subword is at the beginning (B), middle (M ), or end (E) of the word; if the subword comprises the full word, it is given a tag of O. These are then added onto the POS embedding. Mathematically, in the input stage, h m becomes: of M subwords and ⊕ denotes either the concatenation or summation operation. [e m f m ] = h m ∈ R D where f m = f P m ⊕ f c m ⊕ f s m ∈ R d is We conjecture that our syntax-infused Transformer model can boost translation performance by injecting grammatical relationships, without having to learn them from examples. Syntax-infused BERT Adding syntactic features to the BERT model is a natural extension of the above modification to the Transformer. As mentioned above, embeddings trained by BERT can be utilized for a variety of downstream tasks. We hypothesize that infusing BERT with syntactic features is beneficial in many of these tasks, especially those involving semantic structure. Many of the datasets on which we evaluate our modified BERT model are low-resource (as few as 2.5k sentences) relative to those on which we evaluate the syntax-infused Transformer; hence, we choose to utilize only POS as a syntactic feature for BERT. We consider two approaches for combining POS features with the pre-trained embeddings in BERT, a model we denote as BERT BASE + POS : (1) addition of the trainable POS embedding vector of dimension d = D to the token embedding and (2) concatenation of the POS embedding with the token embedding. 
To make a fair comparison with BERT BASE , the input dimension D of the encoder must match that of BERT BASE (D = 768). Thus, if option 2 is used, the concatenated embedding must be passed through a trainable affine transformation with weight matrix of size (D + d) × D . While this option provides a more robust way to merge POS and word embeddings, it requires learning a large matrix, which is problematic for downstream tasks with very little training data. Hence, to facilitate training for these tasks and to standardize the comparison across different downstream tasks, we choose to use the first approach. Therefore, for a given token, its input representation is con-structed by summing the corresponding BERT token embeddings with POS embeddings (see Figure 2). Mathematically, the input tokens h m ∈ R D are given by h m = e m + f P m , where e m is the BERT token embedding and f P m is the POS embedding for token m. For single sequence tasks, m = 1, 2, . . . , M , where M is the number of tokens in the sequence; while for paired sequence tasks, m = 1, 2, . . . , M 1 + M 2 , where M 1 and M 2 are the number of tokens in each sequence. As is standard with BERT, for downstream classification tasks, the final embedded representationŷ CLS of the first token (denoted as [CLS]) is passed through a softmax classifer to generate a label. Datasets and Experimental Details For translation, we consider WMT '14 EN-DE dataset. The WMT '14 dataset consists of 4.5M training sentences. Validation is performed on newstest2013 (3000 sentences) and testing is on the newstest2014 dataset (2737 sentences, (Zhang, Titov, and Sennrich, 2019)). Parsers that infer syntax from EN sentences are typically trained on a greater number and variety of sentences and are therefore more robust than parsers for other languages. Since one of the key features of our models is to incorporate POS features into the source sequence, we translate from EN to DE. While incorporating all linguistic features described above is generally beneficial to NLP models, adding features may compromise the model by restricting the number of dimensions allocated to word embeddings, which still the play the primary role. We consider this tradeoff in greater detail below. Machine translation We train both the baseline and syntax-infused Transformer for 100,000 steps. All hyperparameter settings of the baseline Transformer, including embedding dimensions of the encoder and decoder, match those of (Vaswani et al., 2017). We train the syntax-infused Transformer model using 512dimensional embedding vectors. In the encoder, D = 492 dimensions are allocated for word embeddings while d = 20 for feature embeddings (chosen by hyperparameter tuning). In the decoder, all 512 dimensions are used for word embeddings (since we are interested only in decoding words, not word-POS pairs). The model architecture consists of six encoder and six decoder layers, with eight heads for multi-headed attention. Parameters are initialized with Glorot (Glorot and Bengio, 2010). We use a dropout rate of 0.1 and batch size of 4096. We utilize the Adam optimizer to train the model with β 1 = 0.9 and β 2 = 0.998; gradients are accumulated for two batches before updating parameters. A label-smoothing factor of 0.1 is employed. The context and size of the EN-DE translation dataset is quite different compared that of the datasets on which POS tagging methods are typically trained, implying that the POS tagging model may not generalize well. 
Hence, we include not only POS but also case and subword tag features. The training procedure is identical to that of (Vaswani et al., 2017) except that, for the syntax-infused Transformer, the dimension d of features f m is chosen to be 20 by doing a grid search over the range of 8 to 64. Natural language understanding The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) . For a summary of these datasets, see (Devlin et al., 2018). We use POS as the syntactic feature for BERT for these tasks. Aside from the learning rate, we use identical hyperparameter settings to fine-tune both the BERT BASE and BERT BASE + POS models for each task. This includes a batch size of 32 and 3 epochs of training for all tasks. For each model, we also choose a task-specific learning rate among the values {5, 4, 3, 2} × 10 −5 , which is standard for BERT BASE . Experimental Results Machine translation We evaluate the impact of infusing syntax into the baseline Transformer for the EN-DE translation task. We add three features namely POS, subword tags, and case to aid Transformer model learn underlying patterns about the sentences. With more than one feature, there are multiple ways to incorporate feature embeddings into the word embeddings. For a fair comparison to the Transformer baseline, we use a total of 512 dimensions for representing both the word embeddings as well as feature embeddings. One important tradeoff is that as the dimensionality of the syntax information increases, the dimensionality for actual word embeddings decreases. Since POS, case, and subword tags have only a limited number of values they can take, dedicating a high dimensionality for each feature proves detrimental (experimentally found). We find that the total feature dimension for which the gain in BLEU score is maximized is 20 (found through grid search). This means that (1) each feature embedding dimension can be allocated to 20 and summed together or (2) the feature embeddings can be concatenated to each other such that their total dimensionality is 20. Therefore, in order to efficiently learn the feature embeddings Figure 3: Comparison of attention for example sentences translated by baseline and POS Transformer models (obtained from the last layer). Rows depict the attention score for a given target subword to each of the subwords in the source sequence. In syntax-infused models for EN-DE translation, we find that attention is more widely distributed across subwords. For instance, the subword "Vater" (the German word for "father") attends mostly to the nearby subwords "his" and "father" in the base model while "Vater" also attends to the more distant words "Bwelle" (a person) and "escorting" in the syntax-infused model. This suggests that the syntax-infused model is able to better connect disparate parts of a sentence to aid translation. Note that the number of rows in the baseline and syntax-infused Transformer are different because each produces different predictions. while also not sacrificing the word embedding dimensionality, we find that summing the embeddings for all three different features of d = 20 and concatenating the sum to the word embeddings of D = 492 gives the maximum performance on translation. We also find that incorporation of a combination of two features among {POS, case, subword tags} does not perform as well as having all the three features. 
In Table 1, we vary the proportion of data used for training and observe the performance of both the baseline and syntax-infused Transformer. The syntax-infused model markedly outperforms the baseline model, offering an improvement of 0.57, 1.99, 1, 0.52, 0.33, and 0.7 points, respectively, for 1, 5, 10, 25, 50, and 100% of the data. It is notable that the syntax-infused model translates the best rel-ative to the baseline when only a fraction of the dataset is used for training. Specifically, the maximum improvement is 1.99 BLEU points when only 10% of the training data is used. This shows that explicit syntax information is most helpful under limited training data conditions. As shown in Figure 3(a)-(b), the syntax-infused model is better able to capture connections between tokens that are far apart yet semantically related, resulting in improved translation performance. In addition, Table 3 shows a set of sample German predictions made by the baseline and syntax-infused Transformer. Natural language understanding The results obtained for the BERT BASE + POS model on the GLUE benchmark test set are presented in Table 2. BERT BASE + POS outperforms BERT BASE on 4 out of the 8 tasks. The improvements range from marginal to significant, with a maximum improvement of 0.8 points of the POS model over BERT BASE on CoLA. Fittingly, CoLA is a task which assesses the linguistic structure of a sentence, which is explictly informed by POS embeddings. Moreover, BERT BASE + POS outperforms BERT BASE on tasks that are concerned with evaluating semantic relatedness. For examples of predictions made on the RTE dataset, see Table 4. Related Works Previous work has sought to improve the self-attention module to aid NLP models. For instance, introduced a Gaussian bias to model locality, to enhance model ability to capture local context while also maintaining the long-range dependency. Instead of absolute positional embeddings, (Shaw, Uszkoreit, and Vaswani, 2018) experimented with relative positional embeddings or distance between sequences and found that it led to a drastic improvement in performance. Adding linguistic structure to models like the Transformer can be thought of as a way of improving the attention mechanism. The POS and subword tags act as a form of relative positional embedding by enforcing the sentence structure. (Li et al., 2018) encourages different attention heads to learn about different information like position and representation by introducing a disagreement regularization. In order to model the local dependency between words more efficiently, (Im and Cho, 2017) introduced distance between words and incorporated that into the self-attention. Patek kann noch seinen Satz an rufen . Patek mag sein Urteil noch Berufung ein legen . Table 3: Translation examples of baseline Transformer vs. syntax-infused Transformer on the EN-DE dataset. The text highlighted in blue represents words correctly predicted by the syntax-infused model but not by the baseline Transformer. Sentence 1 Sentence 2 True label The Qin (from which the name China is derived) established the approximate boundaries and basic administrative system that all subsequent dynasties were to follow . Qin Shi Huang was the first Chinese Emperor . Not entailment In Nigeria, by far the most populous country in sub-Saharan Africa, over 2.7 million people are infected with HIV . 2.7 percent of the people infected with HIV live in Africa . 
Table 4: Examples of randomly chosen sentences from the RTE dataset (for evaluation of entailment between pairs of sentences) that were misclassified by BERT BASE and correctly classified by BERT BASE + POS . Not entailment Previous literature also has sought to incorporate syntax into deep learning NLP models. (Bastings et al., 2017) used syntax dependency tree information on a bidirectional RNN on translation systems by modeling the trees using Graph Convolutional Networks (GCNs) (Kipf and Welling, 2016). Modeling source label syntax information has helped significantly in the Chinese-English translation (Li et al., 2017) by linearizing parse trees to obtain drastic performance improvements. Adding a syntax-based distance constraint on the attention module, to generate a more semantic context vector, has proven to work for translation systems in the Chinese-English as well as English-German tasks. These works affirm that adding syntax information can help the NLP models to translate better from one language to another and also achieve better performance measures. Conclusions We have augmented the Transformer network with syntax information for machine translation. The syntax-infused Transformer improvements were highest when a subset of the training data is used. We then distinguish the syntaxinfused and baseline Transformer models by providing an interpretation of attention visualization. Additionally, we find that the syntax-infused BERT model performs better than baseline on a number of GLUE downstream tasks. It is an open question whether the efficiency of these sophisticated models can further be improved by creating an architecture that is enabled to model the language structure more inherently than using end to end models. Future work may extend toward this direction. For all m ∈ {1, 2, . . . , M }, where M is the length of the source sequence, the encoder embedding layer first converts subwords x m into embeddings e m : e m = Ex m Figure 1 : 1Formation of attention matrices (K, Q, and V) with syntactic information. The left column shows the word embedding matrix; the embedding matrices for the various features are shown on top. Embeddings for the chosen features are either concatenated or summed together (denoted by ⊕) and finally, concatenated to the word embeddings. Matrix multiplication with learned weights results in K, Q, and V. The attention matrices are double shaded to indicate the mix of word and syntax information. Figure 2 : 2the learned embedding for the syntactic features of subword m in the sequence 1 https://spacy.io/ The BERT BASE + POS model. Token embeddings are combined with trainable POS embeddings and fed into the BERT encoder. The final embedding of the [CLS] token is fed into a softmax classifer for downstream classification tasks. The model is illustrated as taking in a pair of sequences but single sequence classification is also possible. is a collection of different natural language understanding tasks evaluated on eight datasets: Multi-Genre Natural Language Inference (MNLI), Quora Question Pairs (QQP), Question Natural Language Inference (QNLI), Stanford Sentiment Treebank (SST-2), The Corpus of Linguistic Acceptability (CoLA), The Semantic Textual Similarity Benchmark (STS-B), Microsoft Research Paraphrase Corpus (MRPC), and Recognizing Textual Entailment (RTE) Table 2 : 2GLUE test results scored using the GLUE evaluation server. The number below each task denotes the number of training examples. 
The scores in bold denote the tasks for which BERT BASE + POS outperforms BERT BASE .Instead , B well e spent years escort ing his father to over cro w ded clinic s and hospital s , getting whatever treatment they could get . Stattdessen verbracht e B well e Jahre damit , seinen Vater zu über füll ten Klinik en und Krankenhäuser n zu begleiten , um jede Behandlung zu erhalten . </s> 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 (a) Baseline (EN-DE) Instead , B well e spent years escort ing his father to over cro w ded clinic s and hospital s , getting whatever treatment they could get . Stattdessen verbracht e B well e Jahre damit , seinen Vater zu über füll ten Klinik en und Krankenhäuser n zu begleiten , um jede Behandlung zu bekommen , die sie bekommen konnten . </s> 0.1 0.2 0.3 0.4 (b) Syntax-infused (EN-DE) ReferenceBaseline Transformer Syntax-infused Transformer Parken in Frankfurt könnte bald empfindlich teurer werden . Das Personal war sehr freundlich und hilfsbereit . Parken in Frankfurt könnte bald spürbar teurer sein . Die zurückgerufenen Modelle wurden zwischen dem 1. August und 10. September hergestellt . Zwischen August 1 und September 10. Die zurückgerufenen Modelle wurden zwischen dem 1. August und 10. September gebaut Stattdessen verbrachte Bwelle Jahre damit , seinen Vater inüberfüllte Kliniken und Hospitäler zu begleiten , um dort die Behandlung zu bekommen , die sie zu bieten hatten . Stattdessen verbrachte Bwelle Jahre damit , seinen Vater mitüber füllten Kliniken und Krankenhqusern zu beherbergen . Stattdessen verbrachte Bwelle Jahre damit , seinen Vater zuüberfüllten Kliniken und Krankenhäusern zu begleiten , um jede Behandlung zu bekommen , die sie bekommen konnten . Patek kann gegen sein Urteil noch Berufung ein legen . Graph convolutional encoders for syntax-aware neural machine translation. J Bastings, I Titov, W Aziz, D Marcheggiani, K Sima&apos;an, arXiv:1704.04675arXiv preprintBastings, J.; Titov, I.; Aziz, W.; Marcheggiani, D.; and Sima'an, K. 2017. Graph convolutional encoders for syntax-aware neural machine translation. arXiv preprint arXiv:1704.04675. J Devlin, M.-W Chang, K Lee, K Toutanova, arXiv:1810.04805Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprintDevlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805. Understanding the difficulty of training deep feedforward neural networks. X Glorot, Y Bengio, Proceedings of the thirteenth international conference on artificial intelligence and statistics. the thirteenth international conference on artificial intelligence and statisticsGlorot, X., and Bengio, Y. 2010. Understanding the dif- ficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, 249-256. Distance-based self-attention network for natural language inference. J Im, S Cho, arXiv:1712.02047arXiv preprintIm, J., and Cho, S. 2017. Distance-based self-attention network for natural language inference. arXiv preprint arXiv:1712.02047. What does bert learn about the structure of language?. G Jawahar, B Sagot, D Seddah, S Unicomb, G Iñiguez, M Karsai, Y Léo, M Karsai, C Sarraute, E Fleury, 57th Annual Meeting of the Association for Computational Linguistics (ACL). Florence, ItalyJawahar, G.; Sagot, B.; Seddah, D.; Unicomb, S.; Iñiguez, G.; Karsai, M.; Léo, Y.; Karsai, M.; Sarraute, C.; Fleury, E.; et al. 2019. 
What does bert learn about the structure of language? In 57th Annual Meeting of the Association for Computational Linguistics (ACL), Florence, Italy. Semi-supervised classification with graph convolutional networks. T N Kipf, M Welling, arXiv:1609.02907arXiv preprintKipf, T. N., and Welling, M. 2016. Semi-supervised classifi- cation with graph convolutional networks. arXiv preprint arXiv:1609.02907. Lstms can learn syntax-sensitive dependencies well, but modeling structure makes them better. A Kuncoro, C Dyer, J Hale, D Yogatama, S Clark, P Blunsom, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsLong Papers1Kuncoro, A.; Dyer, C.; Hale, J.; Yogatama, D.; Clark, S.; and Blunsom, P. 2018. Lstms can learn syntax-sensitive de- pendencies well, but modeling structure makes them bet- ter. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), 1426-1436. Modeling source syntax for neural machine translation. J Li, D Xiong, Z Tu, M Zhu, M Zhang, G Zhou, arXiv:1705.01020arXiv preprintLi, J.; Xiong, D.; Tu, Z.; Zhu, M.; Zhang, M.; and Zhou, G. 2017. Modeling source syntax for neural machine translation. arXiv preprint arXiv:1705.01020. Multi-head attention with disagreement regularization. J Li, Z Tu, B Yang, M R Lyu, T Zhang, arXiv:1810.10183arXiv preprintLi, J.; Tu, Z.; Yang, B.; Lyu, M. R.; and Zhang, T. 2018. Multi-head attention with disagreement regularization. arXiv preprint arXiv:1810.10183. Assessing the ability of lstms to learn syntax-sensitive dependencies. T Linzen, E Dupoux, Y Goldberg, Transactions of the Association for Computational Linguistics. 4Linzen, T.; Dupoux, E.; and Goldberg, Y. 2016. Assessing the ability of lstms to learn syntax-sensitive dependencies. Transactions of the Association for Computational Lin- guistics 4:521-535. M.-T Luong, H Pham, C D Manning, arXiv:1508.04025Effective approaches to attention-based neural machine translation. arXiv preprintLuong, M.-T.; Pham, H.; and Manning, C. D. 2015. Effec- tive approaches to attention-based neural machine trans- lation. arXiv preprint arXiv:1508.04025. Exploiting linguistic resources for neural machine translation using multi-task learning. J Niehues, E Cho, arXiv:1708.00993arXiv preprintNiehues, J., and Cho, E. 2017. Exploiting linguistic re- sources for neural machine translation using multi-task learning. arXiv preprint arXiv:1708.00993. Linguistic input features improve neural machine translation. R Sennrich, B Haddow, arXiv:1606.02892arXiv preprintSennrich, R., and Haddow, B. 2016. Linguistic input fea- tures improve neural machine translation. arXiv preprint arXiv:1606.02892. Neural machine translation of rare words with subword units. R Sennrich, B Haddow, A Birch, arXiv:1508.07909arXiv preprintSennrich, R.; Haddow, B.; and Birch, A. 2015. Neural ma- chine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. P Shaw, J Uszkoreit, A Vaswani, arXiv:1803.02155Self-attention with relative position representations. arXiv preprintShaw, P.; Uszkoreit, J.; and Vaswani, A. 2018. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155. Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, Ł Kaiser, I Polosukhin, Advances in neural information processing systems. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. 
N.; Kaiser, Ł.; and Polosukhin, I. 2017. At- tention is all you need. In Advances in neural information processing systems, 5998-6008. A Wang, A Singh, J Michael, F Hill, O Levy, S R Bowman, arXiv:1804.07461Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprintWang, A.; Singh, A.; Michael, J.; Hill, F.; Levy, O.; and Bowman, S. R. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Modeling localness for self-attention networks. B Yang, Z Tu, D F Wong, F Meng, L S Chao, T Zhang, arXiv:1810.10182arXiv preprintYang, B.; Tu, Z.; Wong, D. F.; Meng, F.; Chao, L. S.; and Zhang, T. 2018. Modeling localness for self-attention networks. arXiv preprint arXiv:1810.10182. Improving deep transformer with depth-scaled initialization and merged attention. B Zhang, I Titov, R Sennrich, arXiv:1908.11365arXiv preprintZhang, B.; Titov, I.; and Sennrich, R. 2019. Improving deep transformer with depth-scaled initialization and merged attention. arXiv preprint arXiv:1908.11365.
[]
[ "Causality matters in medical imaging", "Causality matters in medical imaging" ]
[ "Daniel C Castro \nDepartment of Computing\nBiomedical Image Analysis Group\nImperial College London\nSouth Kensington CampusSW7 2AZLondonUK\n", "Ian Walker \nDepartment of Computing\nBiomedical Image Analysis Group\nImperial College London\nSouth Kensington CampusSW7 2AZLondonUK\n", "Ben Glocker [email protected] \nDepartment of Computing\nBiomedical Image Analysis Group\nImperial College London\nSouth Kensington CampusSW7 2AZLondonUK\n" ]
[ "Department of Computing\nBiomedical Image Analysis Group\nImperial College London\nSouth Kensington CampusSW7 2AZLondonUK", "Department of Computing\nBiomedical Image Analysis Group\nImperial College London\nSouth Kensington CampusSW7 2AZLondonUK", "Department of Computing\nBiomedical Image Analysis Group\nImperial College London\nSouth Kensington CampusSW7 2AZLondonUK" ]
[]
Causal reasoning can shed new light on the major challenges in machine learning for medical imaging: scarcity of high-quality annotated data and mismatch between the development dataset and the target environment. A causal perspective on these issues allows decisions about data collection, annotation, preprocessing, and learning strategies to be made and scrutinized more transparently, while providing a detailed categorisation of potential biases and mitigation techniques. Along with worked clinical examples, we highlight the importance of establishing the causal relationship between images and their annotations, and offer stepby-step recommendations for future studies.T remendous progress has been achieved in predictive analytics for medical imaging. With the advent of powerful machine-learning (ML) approaches such as deep learning, staggering improvements in predictive accuracy have been demonstrated for applications such as computer-aided diagnosis 1 or assisting radiotherapy planning and monitoring of disease progression via automatic contouring of anatomical structures 2 . However, two of the main obstacles for translating these successes to more applications and into wider clinical practice remain: data scarcity, concerning the limited availability of high-quality training data required for building predictive models; and data mismatch, whereby a model trained in a lab environment may fail to generalise to real-world clinical data.Let us exemplify with a hypothetical scenario how these obstacles may arise in practice and pose real threats to the success of research projects. Suppose a team of academic radiologists is excited about the opportunities artificial intelligence seems to offer for their discipline. In a recent study, the clinical team was able to demonstrate the effectiveness of using human interpretation of magnetic resonance imaging (MRI) for diagnosis of prostate cancer, yielding higher sensitivity and specificity than a conventional diagnostic test, as confirmed via groundtruth labels from histopathology. Motivated by these results, the team decides to approach a ML research lab with the idea of developing a tool for automated, MRI-based diagnosis of prostate cancer. Because reading MRI requires advanced training and experience, they hope such a system may facilitate widespread adoption of MRI as a novel, accurate, and cost-effective tool for early diagnosis, especially in locations with lower availability of the required human expertise.The clinicians still have access to their previous study data, and are confident this may be used for ML development. Unfortunately, the sample size is small-there are insufficient pairs of images and diagnosis labels to train a state-of-the-art deep learning image classification method. However, the clinicians have access to large amounts of (unlabelled) routine MRI scans. The ML researchers are hopeful they can additionally leverage this data in a so-called semi-supervised learning strategy. After a pilot phase of development, the team is planning to evaluate their method in a large multi-centre study.
10.1038/s41467-020-17478-w
null
209,386,852
1912.08142
cecfd0db011df3826ef8b5267b00c31542d4c785
Causality matters in medical imaging Daniel C Castro Department of Computing Biomedical Image Analysis Group Imperial College London South Kensington CampusSW7 2AZLondonUK Ian Walker Department of Computing Biomedical Image Analysis Group Imperial College London South Kensington CampusSW7 2AZLondonUK Ben Glocker [email protected] Department of Computing Biomedical Image Analysis Group Imperial College London South Kensington CampusSW7 2AZLondonUK Causality matters in medical imaging 10.1038/s41467-020-17478-wPERSPECTIVE OPEN Causal reasoning can shed new light on the major challenges in machine learning for medical imaging: scarcity of high-quality annotated data and mismatch between the development dataset and the target environment. A causal perspective on these issues allows decisions about data collection, annotation, preprocessing, and learning strategies to be made and scrutinized more transparently, while providing a detailed categorisation of potential biases and mitigation techniques. Along with worked clinical examples, we highlight the importance of establishing the causal relationship between images and their annotations, and offer stepby-step recommendations for future studies.T remendous progress has been achieved in predictive analytics for medical imaging. With the advent of powerful machine-learning (ML) approaches such as deep learning, staggering improvements in predictive accuracy have been demonstrated for applications such as computer-aided diagnosis 1 or assisting radiotherapy planning and monitoring of disease progression via automatic contouring of anatomical structures 2 . However, two of the main obstacles for translating these successes to more applications and into wider clinical practice remain: data scarcity, concerning the limited availability of high-quality training data required for building predictive models; and data mismatch, whereby a model trained in a lab environment may fail to generalise to real-world clinical data.Let us exemplify with a hypothetical scenario how these obstacles may arise in practice and pose real threats to the success of research projects. Suppose a team of academic radiologists is excited about the opportunities artificial intelligence seems to offer for their discipline. In a recent study, the clinical team was able to demonstrate the effectiveness of using human interpretation of magnetic resonance imaging (MRI) for diagnosis of prostate cancer, yielding higher sensitivity and specificity than a conventional diagnostic test, as confirmed via groundtruth labels from histopathology. Motivated by these results, the team decides to approach a ML research lab with the idea of developing a tool for automated, MRI-based diagnosis of prostate cancer. Because reading MRI requires advanced training and experience, they hope such a system may facilitate widespread adoption of MRI as a novel, accurate, and cost-effective tool for early diagnosis, especially in locations with lower availability of the required human expertise.The clinicians still have access to their previous study data, and are confident this may be used for ML development. Unfortunately, the sample size is small-there are insufficient pairs of images and diagnosis labels to train a state-of-the-art deep learning image classification method. However, the clinicians have access to large amounts of (unlabelled) routine MRI scans. The ML researchers are hopeful they can additionally leverage this data in a so-called semi-supervised learning strategy. 
After a pilot phase of development, the team is planning to evaluate their method in a large multi-centre study. What are the chances of success for their project, and how could a causal analysis help them to identify potential issues in advance? Regarding the limited availability of annotated data, here the team may be lucky in successfully exploiting the unlabelled data thanks to the anticausal direction between images and confirmed diagnosis labels (as we will discuss later in more detail). However, major obstacles may arise due to data mismatch between the ML development and clinical validation stage, resulting from specific inclusion criteria (selection bias), varying patient populations (e.g. changes in demographics), and prevalence of disease (e.g. due to environmental factors). Identifying these issues is important for properly designing prospective validation studies. Although researchers are generally aware of the adverse effects of such differences in aspects of the data, they may be unaware that causal reasoning provides tools for laying out any underlying assumptions about the data-generating process in a clear and transparent fashion, such that any issues can be more easily identified beforehand and possibly resolved by employing suitable data collection, annotation and ML strategies. In this article, we discuss how causal considerations in medical imaging can shed new light on the above challenges-illustrated with cartoon examples in Fig. 1-and help in finding appropriate solutions. In particular, we demonstrate how the causal structure of a task can have profound, and sometimes surprising, consequences on the soundness of the employed ML approach and resulting analysis. We highlight that being aware of causal relationships, and related issues such as dataset shift and selection bias, allows for systematic reasoning about what strategies to prefer or to avoid. Here, the language of causal diagrams provides explicit means to specify assumptions, enabling transparent scrutiny of their plausibility and validity 3 . It is in fact a natural way of defining the relationships between variables of interest, because it reflects the expert's knowledge of the biological and logistical processes involved in the generation and collection of data, and has been successfully applied for building models for decision-making in healthcare, for example 4,5 . We hope our work can serve as a practical guide and inspire new directions for research in medical imaging. Causality matters Before diving into details of the challenges of data scarcity and data mismatch, the causal properties of the core predictive task must be analysed. In particular, one must pay close attention to the relationship between the inputs and targets of the devised model. Readers less familiar with causal reasoning may refer to Box 1 for a brief background and introductory references. Predictive analytics in medical imaging. The focus of this article is on predictive modelling: given an image X, train a model to predict some given annotation Y. Specifically, we wish to estimate the conditional probability distribution P(Y|X) by fitting a statistical model with a suitable objective function. This formulation encompasses a variety of common medical image analysis tasks, such as semantic segmentation (i.e. contouring of structures of interest), disease classification, outcome prediction, and many more. 
In this context, it is worth clarifying some terminology regarding the data that is used for development and after deployment, in order to avoid confusion of some terms that are sometimes used differently in clinical and ML communities. We refer to an annotated dataset with pairs (X, Y) as the development data, which is used to train and test a predictive model in a lab environment. In ML workflows, the development data is typically split into training, validation and hold-out test sets. The training set is used to learn the model parameters (e.g. the weights in a convolutional neural network), whereas the validation set is used during training to monitor the learning progress and avoid overfitting to the training set. The test set is used only after training is completed, in order to quantify the performance of the model on 'unseen' data. It is prudent to avoid re-using the test data in development cycles as it can lead to unrealistic performance estimates 6 . Importantly, the assumption that the performance of a trained model on the development test set is representative of the performance on new clinical data after deployment in varying environments is often violated due to differences in data characteristics, as discussed earlier. It is therefore absolutely critical to be able to clearly formalise and communicate the underlying assumptions regarding the data-generating processes in the lab and real-world environments, which in turn can help anticipate and mitigate failure modes of the predictive system. Causality in medical imaging. Given the specification of the input images, X, and the prediction targets, Y, it is imperative to determine which is the cause and which is the effect. Using the categorisation in ref. 7 , we wish to establish whether a task is • Causal: estimate P(Y|X), when X → Y (predict effect from cause); or • Anticausal: estimate P(Y|X), when Y → X (predict cause from effect). The answer is crucial to all further causal analysis of the problem, and has a strong impact on the applicability of semi-supervised learning 8,9 (discussed later) and on whether generative or discriminative models should be preferred 10 . Recall the definitions of cause and effect: if the annotation could have been different by digitally editing the image beforehand, then one can conclude that the image causes the annotation. For example, manual segmentation masks are drawn over the image by visual inspection and would evidently be influenced by certain pixel changes. On the other hand, a pathology lab result would be unaffected by such manipulations. Images and targets may alternatively be confounded, i.e. descend from a common cause. This relationship is often treated similarly to the anticausal case 7 . It is generally possible to discern causal structures only when we are aware of the acquired data's background details, as metainformation plays a fundamental role in understanding the data generation and collection processes. A recently compiled ontology of medical imaging meta-information 11 contains several attributes that can help characterise the predictive causal direction in an imaging study, such as field of application and task category (e.g. lesion detection for screening, segmentation for treatment planning), as well as details about the annotation process (manual vs. (semi-)automatic vs. laboratory; image-wide vs. pixel-wise annotations; factors affecting reliability; etc.). Let us further illustrate this discussion with two practical examples, depicted in Fig. 2. 
P(X ) P tr (X _Y=1) P tr (X _Y=0) P te (X _Y =1) P te (X _Y =0) P tr (X ) P te (X ) P te (X ) P tr (X ) Worked clinical examples. Consider a skin lesion classification task, wherein a set of dermoscopic images (X) is collected along with histopathology diagnosis for melanoma following biopsy (Y). Here, Y is a gold-standard proxy for the true presence of skin cancer, and as such can be considered as a cause of the visual appearance of the lesion, X. This task is therefore anticausal (note the arrow directions in Fig. 2a). Further, routine dermoscopic examination of pigmented skin lesions typically results in a 'benign', 'suspicious', or 'malignant' label. Prediction of such labels would instead be causal, as they are obtained visually and could be affected if the images were digitally manipulated. Box 1 | Brief background on causal reasoning A B C a A B D b B A E c A D B E C d Causal reasoning is the process of analysing the data-generating process in terms of cause-effect relationships. One can formalise causation as follows: a variable A is a direct cause of variable B, written A → B, if forcing A to different values changes the likelihood of B, all else held constant 31 . In other words, B (say, an outcome) is assumed to have a mechanistic dependence on A (say, exposure) and potentially also on other factors and on independent noise 64 . Crucially, A → B entails that the distribution of the cause, P(A), does not inform or influence the conditional P(B|A), a principle known as independence of cause and mechanism 64,67 . Taking this a step further, the postulated causal links between multiple variables form a directed acyclic graph (DAG), called a causal diagram. Such graphs encode assumptions about direct and indirect causal links and capture probabilistic information about variables such as conditional independences. To illustrate this more concretely, let us analyse some canonical relationships between three variables. If A (say, exposure to sunlight) affects a variable C (say, skin cancer) indirectly through its impact on B (damage to the skin cells), illustrated with the causal diagram in panel a, we say B is a mediator and A is an indirect cause of C. Here, B completely screens off the effect of A on C, meaning A ⫫ C|B (read 'A is conditionally independent of C, given B'). Alternatively, assume A is a common cause of B and D (say, vitamin D levels), represented by the causal diagram in panel b. In this case, A is known as a confounder, producing an association between B and D, thus B ⊥̸ D (read 'B is not independent of D'). However, controlling for A makes them independent: B ⫫ D| A. Finally, consider the case wherein B is a common effect of A and E (say, genetic predisposition), illustrated in panel c. Here, B is called a collider. Unlike the two situations above, this configuration implies A and E are independent a priori. On the other hand, conditioning on B introduces an association between A and E, as they may now 'explain away' the effect of each other on the observed outcome, B (i.e. A ⊥̸ E|B) 31 . For more general graph structures, such as the full example diagram in panel d, one should reason in terms of paths (i.e. chains of nodes connected by edges pointing in any direction), as they are the conduits for correlations propagated across the graph. Any path that does not contain a collider is said to be unblocked or open, and implies a potential statistical association between its endpoints. 
Conversely, a path containing a collider is said to be blocked or closed, and does not carry any indirect causal influence between its endpoints a priori 65 . If there are no unblocked paths between two variables, we conclude they are independent. As mentioned above, however, conditioning on a collider (or on a descendant of one) may unblock previously blocked paths. A purely statistical perspective would be unable to distinguish all three configurations in panels a-c, making it difficult to decide what to control for. The causal perspective, on the other hand, requires us to be clear about our assumptions and immediately reveals possible confounding. Under this model, for example, although vitamin D levels are predictive of skin cell damage, taking vitamin supplements would be assumed to have no effect on the sundamaged DNA molecules. The fact that causal models allow us to enquire about the effects of interventions is what sets them apart from pure statistical models, which are limited to studying correlations. This illustrates that careful considerations may be required when making decisions about the data collection, sample selection, and subsequent analysis. With the ability to formalise causal concepts in clear mathematical and probabilistic terms, causal reasoning opens the door for researchers to go beyond association by allowing them to incorporate domain expertise when answering fundamental scientific questions. We refer the reader to 'Methods' section for a more detailed treatment of causality theory, including advice on using domain knowledge to build their own causal graphs. Here we additionally highlight the direction of the predictive task. Skin cancer NATURE COMMUNICATIONS | https://doi.org/10.1038/s41467-020-17478-w PERSPECTIVE Now recall our earlier example where a team of radiologists had developed a new MRI-based diagnostic tool for prostate cancer. This time the team aims to improve the cancer treatment via radiotherapy by automating the planning process. Currently, the patient MRI scans (X) need to be manually segmented by carefully contouring the tumour regions and any organs-at-risk (Y). This annotation is done by visual inspection and evidently depends on image content, resolution, and contrast, for example, whereas manually editing the segmentation masks would have no effect on the images. These considerations allow us to conclude that image segmentation is a case of causal prediction (X → Y; Fig. 2b). For the two examples above, establishing the causal direction between images and prediction targets seemed reasonably straightforward. This is not always the case, and arguably in many settings identifying whether the relationship is causal or anticausal can be non-trivial, particularly if crucial metainformation is missing. Consider the case when prediction targets are extracted from radiology reports. At first, one may conclude that the report reflects purely the radiologist's reading of a medical image, hence image causes report. However, their conclusions could be based on additional information-potentially even more important than the findings in the images-such as blood tests or other diagnostic test results. In the context of segmentation, an annotator's knowledge about the grade of a tumour might influence how certain boundaries will be contoured, in which case an additional arrow from 'prostate cancer' to 'segmentation' could be included. 
This would however not alter the fact that the segmentations are a consequence of the images (and diagnoses), thus the task remains causal. Or what if image-derived diagnosis labels determined by an expert with long years of experience are nearly identical to biopsy results? Could these labels serve as proxies for the ground truth, configuring an anticausal relationship? These instances highlight the importance of investigating and modelling the full data-generating process to make informed decisions about the causal relationships underlying the data. As there may not always be a single correct answer, it is crucial to clearly communicate the assumptions we make so these can be open to scrutiny. Data scarcity One of the notorious challenges in medical image analysis is the scarcity of labelled data, in great part due to the high costs of acquiring expert annotations or expensive lab tests, e.g. to confirm initial diagnosis. The techniques often used to circumvent this shortage, namely semi-supervised learning and data augmentation, have markedly different properties under the lens of causality. Tackling data scarcity via semi-supervision. Semi-supervised learning (SSL) aims to leverage readily available unlabelled data in the hope of producing a better predictive model than is possible using only the scarce annotated data. Given this ambitious goal, it is perhaps unsurprising that strong requirements need to be met. Namely, the distribution of inputs needs to carry relevant information about the prediction task-otherwise it would be pointless to collect additional unlabelled data. This idea is typically articulated in terms of specific assumptions about the data which can be intuitively summarised as follows 8 : similar inputs (images in our case) are likely to have similar labels and will naturally group into clusters with high density in the input feature space. Lower density regions in that space in-between clusters are assumed to be ideal candidates for fitting decision boundaries of predictive models. In this context, considering large amounts of unlabelled data together with the scarce labelled data may reveal such low density regions and may lead to better decision boundaries than using labelled data alone. Note how this idea insinuates an interplay between the distribution of inputs, P(X), and the label conditional, P(Y|X). Now recall that, by independence of cause and mechanism, if the prediction task is causal (X → Y), then P(X) is uninformative with respect to P(Y|X), and SSL is theoretically futile in this case 8,9 . Since typical semantic segmentation tasks are causal, as illustrated in our prostate cancer example, there is likely very little hope that semantic segmentation can fundamentally benefit from unlabelled data, which may relate to recent concerns raised in the literature 12 . Intuitively, a model trained on image-derived annotations will attempt to replicate the (most often manual) annotation process, rather than to predict some pre-imaging ground truth (e.g. 'true' anatomy). It is plausible, then, that seeing more raw images without corresponding anatomical information provides no new insight about the annotation mechanism. Conversely, if Y → X as for skin lesions, then these distributions may be dependent, and semi-supervision has a chance of success 9 . We conjecture that, in practice, anticausal problems are more likely than causal ones to comply with the SSL assumptions outlined above, as observed, e.g. among the datasets analysed in ref. 10 . 
That is not to say that SSL is completely useless for causal tasks, as there can be practical algorithmic benefits. Under certain conditions, unlabelled data can be shown to have a regularising effect, potentially boosting the accuracy of an imperfect model by lowering its variance 13, and it may reduce the amount of labelled data required to achieve a given performance level 14,15. To the best of our knowledge, there have been no empirical studies systematically investigating the efficacy of SSL in causal and anticausal medical imaging tasks, especially for segmentation; hence, further work is needed to validate its gains. A recent comprehensive empirical study 12 reported that properly tuned, purely supervised models and models pretrained on related labelled datasets (i.e. transfer learning) are often competitive with or outperform their semi-supervised counterparts. It also demonstrated that SSL can hurt classification performance under target shift (discussed later as prevalence shift) between the labelled and unlabelled sets. This suggests that practitioners willing to apply SSL should be cautious of potential target distribution mismatch between labelled and unlabelled sets, e.g. unequal proportions of cases and controls, or the presence of different pathologies.

Tackling data scarcity via data augmentation. In contrast with SSL, data augmentation produces additional plausible (x, y) pairs by systematically applying random, controlled perturbations to the data. Because it provides information about the joint distribution P(X, Y), rather than only the marginal P(X), it is suitable for both causal and anticausal tasks, without the theoretical impediments that semi-supervised learning faces for causal prediction. This now ubiquitous technique is a powerful way of injecting domain knowledge to improve model robustness to variations one expects to find in the test environment. Notably, we can distinguish between augmentations encouraging invariance and equivariance. Many tasks require predictions to be insensitive to certain types of variation. Examples include image intensity augmentations, such as histogram manipulations or the addition of noise, and spatial augmentations (e.g. affine or elastic transformations) for image-level tasks (e.g. regression or classification, as in the skin lesion example). As these augmentations apply uniformly to all inputs x without changing the targets y, their benefits stem from a refined understanding of the conditional P(X|Y), while contributing no new information about P(Y). For other tasks, such as segmentation or localisation, predictions must change similarly to the inputs: e.g. a spatial transformation applied to an image x, such as mirroring or an affine or elastic deformation, should likewise be applied to the target y (e.g. spatial coordinates or segmentation masks, as in the prostate tumour example). Information is gained about the joint distribution via its shared spatial structure, related to e.g. anatomy and acquisition conditions.
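The distinction between the two augmentation regimes fits in a few lines of Python (a hedged sketch with toy arrays; in practice one would use a dedicated augmentation library):

    import numpy as np

    rng = np.random.default_rng(2)
    image = rng.normal(size=(4, 4))         # stand-in for an image x
    label = 1                                # image-level target y (e.g. lesion class)
    mask = (image > 0).astype(int)           # pixel-level target y (e.g. segmentation)

    def augment_invariant(x, y):
        # Perturbation applied to x only; the target is unchanged.
        # Informs P(X|Y) without adding information about P(Y).
        return np.fliplr(x) + rng.normal(scale=0.05, size=x.shape), y

    def augment_equivariant(x, y):
        # The same spatial transformation is applied to input and target alike,
        # exploiting the shared spatial structure of the joint distribution.
        return np.fliplr(x), np.fliplr(y)

    x_cls, y_cls = augment_invariant(image, label)
    x_seg, y_seg = augment_equivariant(image, mask)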
Data mismatch

The recurrent issue of mismatch between data distributions, typically between training and test sets or between development and deployment environments, tends to hurt the generalisability of learned models. In the generic case, when no assumptions can be made about the nature of these differences, any form of learning from the training set is arguably pointless, as the test-time performance can be arbitrarily poor. Nonetheless, causal reasoning enables us to recognise special situations in which direct generalisation is possible, and to devise principled strategies to mitigate estimation biases. In particular, two distinct mechanisms of distributional mismatch can be identified: dataset shift and sample selection bias. Learning about their differences is helpful for diagnosing when such situations arise in practice.

Data mismatch due to dataset shift. Dataset shift is any situation in which the training and test data distributions disagree due to exogenous factors, e.g. dissimilar cohorts or inconsistent acquisition processes. As before, let X be the input images and Y be the prediction targets. We use an indicator variable D for whether we are considering the training (P_tr(X, Y)) or the test domain (P_te(X, Y)):

P_tr(X, Y) := P_{D=0}(X, Y)  and  P_te(X, Y) := P_{D=1}(X, Y).  (1)

For simplicity, in the following exposition we will refer only to disparities between training and test domains. This definition can, however, extend to differences between the development datasets (training and test data) and the target population (after deployment), when the latter is not well represented by the variability in the test data. Moreover, when analysing dataset shift, it is helpful to conceptualise an additional variable Z, representing the unobserved physical reality of the subject's anatomy. We then interpret the acquired images X as imperfect and potentially domain-dependent measurements of Z, i.e. Z → X. Switching between domains may produce variations in the conditional relationships between X, Y and Z, or in some of their marginal distributions. Based on the predictive causal direction and on which factors of the joint distribution change or are invariant across domains, dataset shift can be classified into a variety of familiar configurations. Here we formulate the concepts of 'population shift', 'annotation shift', 'prevalence shift', 'manifestation shift' and 'acquisition shift'. These terms correspond roughly to particular dataset shift scenarios studied in the general ML literature, namely 'covariate shift', 'concept shift', 'target shift', 'conditional shift' and 'domain shift', respectively 16. However, we believe it is beneficial to propose specific nomenclature that is more vividly suggestive of the phenomena encountered in medical imaging. By also explicitly accounting for the unobserved anatomy, the proposed characterisation is more specific and enables distinguishing cases that would otherwise be conflated, such as population or manifestation shift versus acquisition shift. The basic structures are summarised in Fig. 3 in the form of selection diagrams (causal diagrams augmented with domain indicators) 3, and some examples are listed in Table 1. We hope this may empower researchers in our field to more clearly communicate dataset shift issues and to more easily assess the applicability of various solutions.

For causal prediction, we name population shift the case wherein only intrinsic characteristics (e.g. demographics) of the populations under study differ, i.e. P_tr(Z) ≠ P_te(Z). Fortunately, this is a scenario in which a well-specified predictive model may generalise directly, since the annotation mechanism P(Y|X) is unaffected by the shift 17. An underfitted model ('too simple') may, however, introduce spurious dependencies, for which importance reweighting with p_te(x)/p_tr(x) is a common mitigation strategy 18,19. This approach is not without limitations, however, as it requires access to P_te(X) and may rely on further assumptions in order to truly correct for changes in P(Z).
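A standard way to obtain such importance weights, sketched below with made-up numbers, is the density-ratio trick: train a probabilistic domain classifier to discriminate training from test inputs, since p_te(x)/p_tr(x) is proportional to P(D = test | x)/P(D = train | x). This is our own illustrative code, not taken from the original article; the one-dimensional 'age' feature stands in for image-derived covariates.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    # Toy population shift: the training cohort skews younger than the test cohort.
    x_tr = rng.normal(loc=55, scale=8, size=(2000, 1))
    x_te = rng.normal(loc=68, scale=8, size=(2000, 1))

    # Domain classifier: label 0 = training sample, 1 = test sample.
    X = np.vstack([x_tr, x_te])
    d = np.concatenate([np.zeros(len(x_tr)), np.ones(len(x_te))])
    clf = LogisticRegression().fit(X, d)

    p_test = clf.predict_proba(x_tr)[:, 1]
    w = p_test / (1 - p_test)        # estimates p_te(x) / p_tr(x) up to a constant
    w *= len(w) / w.sum()            # normalise weights to mean 1

    oldest = x_tr[:, 0] > np.quantile(x_tr, 0.9)
    print("mean weight on the oldest training decile:", round(w[oldest].mean(), 2))

Training examples resembling the test population (here, older patients) receive weights above 1, so a reweighted training loss approximates the test-domain risk.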
Moreover, learning in this scenario makes sense only if the variability in the training data covers the support of the test distribution 16; in other words, there are no guarantees about extrapolation performance to modes of variation that are missing from the training environment.

Under prevalence shift (for anticausal tasks), the differences between datasets relate to class balance: P_tr(Y) ≠ P_te(Y). This can arise, for example, from different predispositions in the training and test populations, or from variations in environmental factors. If the test class distribution P_te(Y) is known a priori (e.g. from an epidemiological study), generative models may reuse the estimated appearance model P_tr(X|Y) (= P_te(X|Y)) in Bayes' rule, and, for discriminative models, instances can be weighted by p_te(y)/p_tr(y) to correct the bias in estimating the training loss. Alternatively, more elaborate solutions based on the marginal P_te(X) are possible 18,20, or the unknown target prevalence P_te(Y) may be approximated using the confusion matrix of a trained predictive model 21.

Cases of annotation shift involve changes in class definitions, i.e. the same datum would tend to be labelled differently in each domain (P_tr(Y|X) ≠ P_te(Y|X)). For example, it is not implausible that some health centres involved in an international project could be operating slightly distinct annotation policies or grading scales, or employing annotators with varying levels of expertise (e.g. senior radiologists vs. trainees). Without explicit assumptions on the mechanism behind such changes, models trained to predict P_tr(Y|X) evidently cannot be expected to perform sensibly in the test environment, and no clear solution can be devised 22. A tedious and time-consuming calibration of labels, or (partial) re-annotation, may be required to correct for annotation shift.

Another challenging scenario is that of manifestation shift, under which the way anticausal prediction targets (e.g. disease status) physically manifest in the anatomy changes between domains. In other words, P_tr(Z|Y) ≠ P_te(Z|Y). As with annotation shift, this cannot be corrected without strong parametric assumptions on the nature of these differences.

We lastly discuss acquisition shift, resulting from the use of different scanners or imaging protocols, which is one of the most notorious and well-studied sources of dataset shift in medical imaging 23. Typical pipelines for alleviating this issue involve spatial alignment (normally via rigid registration and resampling to a common resolution) and intensity normalisation. In addition, the increasingly active research area of domain adaptation investigates data harmonisation by means of more complex transformations, such as extracting domain-invariant representations 24,25 or translating between imaging modalities 26 (e.g. synthesising MRI volumes from CT scans 27). Note that domain adaptation may fail or even be detrimental under changes in class prevalence 28.
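To make the prevalence-shift corrections above concrete, the following sketch (our own toy numbers) estimates an unknown target prevalence from a trained model's confusion matrix and then forms the per-class weights p_te(y)/p_tr(y). If C[i, j] = P(predict i | true class j) holds in both domains, the distribution q of the model's predictions on unlabelled test inputs satisfies C π = q, where π is the test prevalence:

    import numpy as np

    C = np.array([[0.90, 0.30],      # C[i, j] = P(predict class i | true class j),
                  [0.10, 0.70]])     # estimated on held-out labelled training data
    q_te = np.array([0.60, 0.40])    # distribution of model predictions on test inputs

    pi_te = np.linalg.solve(C, q_te)      # estimated test prevalence P_te(Y)
    pi_tr = np.array([0.80, 0.20])        # known training prevalence P_tr(Y)
    weights = pi_te / pi_tr               # per-class weights p_te(y) / p_tr(y)

    print("estimated P_te(Y):", pi_te.round(2))    # -> [0.5, 0.5]
    print("class weights:    ", weights.round(2))  # -> [0.62, 2.5]

The rare training class is up-weighted so that the reweighted training loss becomes unbiased for the test domain; in practice the linear system may be solved in a least-squares sense, and the confusion matrix must be well conditioned.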
Returning to the prostate cancer example, suppose our dataset was collected and annotated for research purposes, employing a high-resolution 3 T MRI scanner and containing a majority of younger patients, and that the trained predictive model is to be deployed for clinical use with conventional 1.5 T scanners. This is a clear case of dataset shift: firstly, because the images are expected to be of different quality (acquisition shift), and secondly, because the different age distribution in the target population entails variations in prostate size and appearance (population shift). In addition, the presence of both types of shift can lead to confounding (Fig. 2b): a model trained on this data may erroneously learn that image quality is predictive of the risk of prostate cancer.

Data mismatch due to sample selection bias. A fundamentally different process that also results in systematic data mismatch is sample selection. It is defined as the scenario wherein the training and test cohorts come from the same population, though each training sample is measured (S = 1) or rejected (S = 0) according to some selection process S that may be subject-dependent:

P_tr(X, Y) := P(X, Y | S = 1)  and  P_te(X, Y) := P(X, Y).  (2)

Some examples are presented in Table 2. The main difference to standard dataset shift is the data-dependent selection mechanism (Fig. 4), as opposed to external causes of distributional changes (Fig. 3). In other words, the indicator variables in sample selection concern alterations in the data-gathering process rather than in the data-generating process 19. Completely random selection simply corresponds to uniform subsampling, i.e. when the training data can be assumed to faithfully represent the target population (P_tr(X, Y) ≡ P_te(X, Y)). Since the analysis will incur no bias, the selection variable S can safely be ignored. We conjecture this will rarely be the case in practice, as preferential data collection is generally unavoidable without explicit safeguards and careful experimental design.

Selection can be affected by the appearance of each image in two different manners. We can select subjects based on anatomical features, viewing the image X as a proxy for the anatomy Z, which has similar implications to population shift. Alternatively, selection criteria may relate to image quality (e.g. excluding scans with noise, poor contrast, or artefacts), which is akin to acquisition shift 22. If selection is purely image-based (X → S; cf. Fig. 4b), we may exploit the conditional independence S ⫫ Y|X, which implies that the predictive relation is directly recoverable 29, i.e. P_te(Y|X) ≡ P_tr(Y|X). In a learning scenario, however, the objective function would still be biased, and methods for mitigating the corresponding cases of dataset shift can be employed. Relating back to the skin lesion example, patients are referred for biopsy only if dermoscopy raises suspicions. As inclusion in this study is image-dependent, a dataset with ground-truth biopsy labels is not representative of the overall distribution of pigmented skin lesions.

When selection is solely target-dependent (Y → S), we have P_te(X|Y) ≡ P_tr(X|Y), and it can be treated as prevalence shift. This will typically result from factors like hospital admission, recruitment or selection criteria in clinical trials, or annotation quality control. Notably, ML practitioners should be wary that it can also arise as a side effect of certain training strategies, such as class re-balancing or image patch selection for segmentation (e.g. picking only patches containing lesion pixels).

Sample selection can additionally introduce spurious associations when the selection variable S is a common effect of X and Y (or of causes of X and Y): implicitly conditioning on S unblocks an undesired causal path between X and Y (see Methods). This is the classic situation called selection bias 30 (cf. Berkson's paradox 31), and recovery is more difficult without assumptions on the exact selection mechanism. In general, it requires controlling for additional variables to eliminate the indirect influence of X on Y via conditioning on the collider S 3,29.
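A short simulation (again our own toy example, not from the original article) reproduces this effect: two findings that are independent in the population become negatively correlated in the collected dataset, simply because a patient enters the dataset when either finding looks worrying.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 200_000

    suspicion_img = rng.normal(size=n)   # image-based suspicion (X)
    suspicion_lab = rng.normal(size=n)   # independent lab finding (Y)
    # Inclusion S is a common effect of X and Y: selected if either looks worrying.
    s = (suspicion_img + suspicion_lab + rng.normal(scale=0.5, size=n)) > 1.5

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    print("population corr(X, Y):", round(corr(suspicion_img, suspicion_lab), 2))
    print("selected   corr(X, Y):", round(corr(suspicion_img[s], suspicion_lab[s]), 2))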
Discussion

This paper provides a fresh perspective on key challenges in machine learning for medical imaging using the powerful framework of causal reasoning. Not only do our causal considerations shed new light on the vital issues of data scarcity and data mismatch in a unifying approach, but the presented analysis can hopefully serve as a guide for developing new solutions. Perhaps surprisingly, causal theory also suggests that the common task of semantic segmentation may not fundamentally benefit from unannotated images via semi-supervision. This possibly controversial conclusion may prompt empirical research into validating the feasibility and practical limitations of this approach.

Other advanced topics could be worth exploring in future work for causally expressing more subtle facets of predictive modelling workflows. In particular, one recurring topic in epidemiology and sociology that is relevant to our imaging context is measurement bias 32,33. This is the study of the properties of proxy variables, which stand in for true variables of interest that are difficult or impossible to measure directly. Of particular note are the cases wherein proxies are additionally affected by other variables ('differential'), or when measurement errors for separate proxies are correlated ('dependent') 34. Measurement bias was explored here for the case of acquisition shift (images as proxies for anatomy, affected by the domain), and similar considerations could extend to other variables, e.g. patient records or pathology results.

A further pertinent topic is that of missingness. Whereas sample selection refers to the observability of full records, missingness concerns partial measurements, i.e. when some subjects may be missing observations of some variables. This is the context of semi-supervised learning, for example, as target labels are observed only for a subset of the data points. The classical characterisation distinguishes whether data is missing completely at random, missing at random, or missing not at random, according to whether the missingness of a measurement is independent of any of the variables of interest, dependent on observed variables, or dependent on the missing values, respectively 35. Causal diagrams again prove instrumental in identifying such structural assumptions about missingness mechanisms 36,37.
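The practical difference between the three missingness regimes can be seen in a short simulation (our own sketch; the 'biomarker' and 'severity' variables are hypothetical). Complete-case means are unbiased under MCAR, biased but correctable from observed covariates under MAR, and biased in a way that cannot be diagnosed from the observed data alone under MNAR:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000
    severity = rng.normal(size=n)                          # always observed
    biomarker = 10 + 1.5 * severity + rng.normal(size=n)   # sometimes missing (true mean 10)

    p_mcar = np.full(n, 0.5)                               # independent of everything
    p_mar = 1 / (1 + np.exp(-2 * severity))                # depends on observed severity
    p_mnar = 1 / (1 + np.exp(-(biomarker - 10)))           # depends on the missing value itself

    for name, p in [("MCAR", p_mcar), ("MAR", p_mar), ("MNAR", p_mnar)]:
        obs = rng.random(n) > p                            # records with the biomarker observed
        print(f"{name} complete-case mean: {biomarker[obs].mean():5.2f} (truth 10.00)")

    # Under MAR the bias is removable, because the missingness probability is a
    # function of observed variables only (here it could be estimated from severity):
    obs = rng.random(n) > p_mar
    w = 1 / (1 - p_mar[obs])
    print("MAR reweighted mean:", round(np.average(biomarker[obs], weights=w), 2))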
Finally, we highlight that our contribution is only the first step towards incorporating causality in medical image analysis. Here we introduce to this community purely the language of causal reasoning, hoping this will facilitate novel research directions exploiting causality theory to its full extent. Specifically, the endeavours of causal inference and causal discovery are so far largely unexplored in medical imaging. In this context, they could lead to the discovery of new imaging biomarkers and to exciting new applications such as personalised counterfactual predictions ('What if a patient were not a smoker?'). Large population imaging studies such as the UK Biobank 38,39 can greatly empower this kind of research, as they offer unique opportunities for extracting the relevant patterns of variation from sheer observational data.

Beside enabling new research directions, the incorporation of causal reasoning in medical image analysis aligns with a growing awareness among stakeholders of the need for responsible reporting in this field. There have been increasing efforts from regulatory bodies, such as the US Food and Drug Administration 40,41, the UK's Department of Health and Social Care 42, the National Institute for Health and Care Excellence 43, and NHSX 44, and even the World Health Organization 45, to outline best practices for the safe development and monitoring of AI-enabled medical technologies 46. Guidelines for designing and reporting traditional clinical trials are now also being specialised for AI-based interventions 47. This has been accompanied by a recent surge in discussion among the medical community about the opportunities and, crucially, the risks of deploying such tools in clinical practice 48-55. Most of the apprehension revolves around the external validity of these predictive models, i.e. their generalisability beyond the development environment, in terms of e.g. robustness to dataset shift 48,49 and selection bias 48,56, as discussed herein. Other important concerns involve data inaccuracy, inconsistency, and availability 48-50,53,56, and the alignment of the model training objective with the target clinical setting 49,51,53,54. In a similar yet complementary vein to the notable TRIPOD guidelines 57,58, our work ties precisely into this context of encouraging transparent reporting of predictive analytics in healthcare.

This debate also relates to parallel initiatives from within the machine learning community, specifically in the emerging field of fairness, accountability, and transparency (FAT). Scholars in FAT have proposed checklist-style guidelines for reporting datasets 59 and models 60, for example, and have been investigating sources of failure for ML models, among which is poor reporting 61. Interestingly, the same formalism of causal reasoning explored here was also shown to be especially well suited for expressing and addressing issues of unfairness (e.g. social biases) 62 and dataset shift 63 in other contexts.

Overall, the goal of this article has been to introduce to the medical imaging community the language of causal diagrams, and to demonstrate how it can illuminate common issues in predictive modelling. While causal reasoning by itself may not solve any of the data scarcity or mismatch problems, it provides a clear and precise framework for expressing assumptions about the data. Presenting such assumptions transparently in the form of causal diagrams makes them immediately recognisable by other researchers, and therefore easier to confirm or dispute. The real challenge lies in identifying these very assumptions, as they can often be unclear or ambiguous. To facilitate this task, we offer in Table 3 a step-by-step summary of our recommendations, and Fig. 5 presents a generic 'scaffold' diagram from which most typical workflows can be adapted. Readers may then refer to the other tables for help in identifying the components of their own diagram for the problem at hand. We believe that this exercise of building the full causal story of a dataset will encourage analysts to consider potential underlying biases more thoroughly, and that it may, like the TRIPOD checklist, lead to 'more comprehensive understanding, conduct, and analysis of prediction model studies' 58.

Methods

Fundamentals of causal reasoning.
Learning tasks can be broadly divided into three categories based on the causal information used: (i) prediction, in which observed data are used to infer the values of unobserved variables, e.g. image classification; (ii) interventions, where investigators study the impact of forcing a variable to attain a certain value, e.g. randomised controlled trials (RCTs) for drug testing; and (iii) counterfactual analysis, wherein observed data combined with a structural causal model are used to answer questions of the form 'What would have happened if individual I had received treatment T instead?'. While most are familiar with causal inference in the context of RCTs or scientific experiments, causal information is vital even in certain purely predictive tasks, as we discussed in the context of medical imaging.

Let us now illustrate the concept of causation and the principle of independence of cause and mechanism, presented earlier in the text. Consider the example wherein a radiologist makes a decision, B, for referral to further clinical testing (e.g. needle biopsy) based on any suspicious findings in the patient's medical scan, A. Given an image, the distribution over possible decisions is the conditional P(B|A). If the appearance of the scan changes, this referral distribution, reflecting the radiologist's judgement, changes as well. On the other hand, the mechanism that translates from a finding of a suspicious pattern in the scan A to a referral decision B is independent of how likely any individual scan is to appear in the real world, P(A). This is further justified as such a mechanism may typically be formed by rules from radiology guidelines. Here, the cause of the referral decision is clearly the medical scan, as altering the decision would not affect the scan's appearance.

In the above example, the correct graphical model would be A → B, as resolved via domain knowledge. If presented only with observational data of medical images and referrals, however, from a purely statistical perspective one would find it difficult to identify whether A → B or B → A. It may still be possible to identify the correct relationship if the gathered data were the result of two experiments, respectively manipulating A or B. Determining the presence and direction of causal relationships from data is the realm of causal discovery, an extremely challenging and active field of research that is beyond the scope of this article.

Causal graphical models. When multiple variables are involved, causal assumptions can be expressed as a simple directed acyclic graph (DAG; no loops, at most one edge between any pair of nodes), whose nodes represent variables of interest and whose edges indicate postulated direct causal influences. Such a causal graphical model, referred to as a causal diagram, embodies the causal Markov assumption (or local Markov): every node is statistically independent of its non-effects (non-descendants), given its direct causes (parents). Therefore, the joint probability distribution over all variables V_i in the graph can be factorised as a product of independent conditional mechanisms 64,65:

P(V_1, V_2, ..., V_N) = ∏_{i=1}^{N} P(V_i | pa(V_i)),  (3)

where pa(V_i) denotes the set of parents of variable V_i, i.e. the nodes with arrows pointing towards V_i. For those familiar with Bayesian networks, it may appear that there is nothing new here. However, Bayesian networks only encode conditional independence relationships, and they are thus not unique for a given observational distribution 31.
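As an illustration of factorisation (3), the sketch below (our own toy code, with made-up probabilities) samples the two-node graph A → B from the radiologist example as two independent mechanisms, P(A) and P(B|A). It also previews the notion of intervention discussed next: conditioning on a referral changes our belief about the scan, whereas forcing the referral does not.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 500_000

    # P(A, B) = P(A) * P(B|A): each factor is sampled as a separate mechanism.
    scan = rng.random(n) < 0.1                            # P(A): suspicious finding present
    referral = np.where(scan, rng.random(n) < 0.9,        # P(B|A=1): refer if suspicious
                              rng.random(n) < 0.05)       # P(B|A=0): rarely refer otherwise

    # Conditioning on B = 1 (observing a referral) updates our belief about A...
    print("P(A=1 | B=1):    ", round(scan[referral].mean(), 2))   # about 0.67
    # ...whereas do(B=1) replaces B's mechanism and leaves P(A) untouched,
    # since B is not a cause of A.
    print("P(A=1 | do(B=1)):", round(scan.mean(), 2))             # about 0.10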
In fact, although causal arguments often guide the construction of such models, any alignment between arrows in Bayesian networks and causality is merely coincidental. In particular, causal models differ from Bayesian networks in that, beside representing a valid factorisation of the joint probability distribution, they enable reasoning about interventions 31. In causal graphs, the values for each node are assumed to be determined via independent mechanisms (cf. independence of cause and mechanism) given their direct causes. An intervention is defined as any forced change to the value or distribution of a node, regardless of its direct causes, and results in a modified graph wherein this node is disconnected from its parents, though crucially all other mechanisms are unaffected. This can also be thought of as replacing the mechanism generating a variable with a function independent of its former direct causes (e.g. a constant). Incidentally, this is the principle behind randomised controlled trials: a treatment is assigned at random (an intervention on the 'treatment' variable), isolating its direct effect on the outcome by eliminating the influence of confounding factors (i.e. cutting the edges from common causes of treatment and outcome). Note that considering interventions on the image and the referral decision is also what allowed us to determine the causal direction in the example above.

Table 3 | Step-by-step recommendations.
1. Gather meta-information about the data collection and annotation processes to reconstruct the full story of the dataset.
2. Establish the predictive causal direction: does the image cause the prediction target or vice versa? If annotations are scarce and image → target, semi-supervised learning may be futile, while data augmentation remains a viable alternative.
3. Identify any evidence of mismatch between datasets (Table 1): if causal (image → target), consider population shift and annotation shift; if anticausal (target → image), consider prevalence shift and manifestation shift. When applicable, importance reweighting is a common mitigation strategy; see further specific advice in the text.
4. Verify what types of differences in image acquisition are expected, if any. Consider applying data harmonisation techniques and domain adaptation (if test images are available).
5. Determine whether the data collection was biased with respect to the population of interest, and whether selection was based on the images, the targets, or both (Table 2). Refer to the dataset shift guidance for mitigating the resulting biases.
6. Draw the full causal diagram, including the postulated direction, shifts and selections.

Fig. 5 | A 'scaffold' causal diagram summarising typical medical imaging workflows. We believe most practical cases can be adapted from this generic structure by removing or adding elements. Represented here are a variety of possible prediction targets (marked Y_1-Y_4): some anticausal (Y_1, Y_2) and others causal (Y_3, Y_4). 'Annotation' here refers to any image-derived data, such as lesion descriptions, regions of interest, spatial landmark coordinates, or segmentation maps. Note that annotators will often be aware of the patients' records and diagnoses, in which case there could be additional arrows from Y_1 or Y_2 towards Y_4. The diagram's nodes include the image (X), the anatomy (Z), the disease, the diagnosis (Y_1), patient characteristics (Y_2), referral (Y_3), annotation (Y_4), the train/test domain indicator (D), sample selection (S), and the acquisition and annotation conditions; the domain indicator marks population shift (causal), prevalence shift (anticausal), annotation shift and acquisition shift along the corresponding edges.

Building a causal diagram. The first step in constructing a causal model for a given system is to elicit the relevant variables to represent, which may be observed or not. These ought to be well defined: they should unambiguously correspond to real or postulated entities of the system, and separate variables must not have overlapping meanings 66. In the medical imaging context, variables normally correspond to the collected data elements, such as images, meta-information fields, labels, patient records, etc. Not all important variables need to be concrete and measurable, however. Other relevant abstract concepts can be instantiated if they help in describing complex processes, e.g. 'annotation policy', 'patient's health status', or 'proprietary image post-processing pipeline'.
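Such a diagram can be prototyped directly in code, which also makes the acyclicity requirement checkable. Below is a minimal sketch using networkx, encoding part of the scaffold of Fig. 5 (the exact edge set is illustrative, not prescriptive):

    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([
        ("disease", "anatomy"), ("anatomy", "image"),      # Y -> Z -> X (anticausal part)
        ("domain", "acquisition conditions"),
        ("acquisition conditions", "image"),               # acquisition shift pathway
        ("image", "annotation"),                           # causal prediction target
        ("annotation conditions", "annotation"),
        ("image", "selection"),                            # image-dependent selection
        ("disease", "diagnosis"),
    ])

    assert nx.is_directed_acyclic_graph(g)   # causal diagrams must contain no cycles
    print("direct causes of the image:", sorted(g.predecessors("image")))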
Secondly, the causal links between the defined variables must be determined. While each added arrow between two nodes in the graph corresponds to assuming causation, it is important to consider that the absence of an arrow also encodes a strong assumption: namely, that there is no direct causal effect, and that any marginal association between those variables is fully explained via mediator variables or common causes (see below). In addition, the granularity of 'direct effects' is only relative to the chosen level of abstraction 65. One may wish to detail the complete chain of effects between two causally linked variables, or to represent them by a single arrow (e.g. A → B_1 → B_2 → C vs. A → C).

In what is called a selection diagram 17, one also includes special indicator variables that identify the 'domain' or 'environment', e.g. training vs. testing, or which hospital in a multi-site study. Their direct causal effects (outgoing arrows) represent the specific mechanisms through which one assumes the observed populations differ, whereas the absence of a link from a domain selector to a variable implies that the latter's mechanism is invariant across environments 17. Domain indicators should normally be represented by root nodes in the diagram, with no incoming edges, as they embody exogenous changes to the data distributions. A causal diagram may additionally be augmented with selection variables, when the dataset is subject to preferential subsampling from the population (e.g. inclusion criteria for a clinical trial). The incoming arrows to such a node represent the various selection criteria (deliberate or otherwise) that impacted the collection of the dataset of interest.

Finally, note that this construction is an iterative process. Once a full version of the diagram is written, one must verify that the assumptions implied by the graph match the domain knowledge (see the following notes on interpretation), and corrections should be made as needed. Further, recall the diagram's intent as a communication tool when choosing its level of abstraction, as there is often a trade-off to be made between accuracy and clarity: the graph should be sufficiently detailed not to omit relevant variables and pathways, though no more complex than necessary 66.

Interpreting causal diagrams. Causal diagrams offer a clear language for describing and communicating assumptions made about the underlying data-generating processes. Direct and indirect causal links between variables can be read from a diagram by following directed paths, while any missing connections between variables are equally important indicators that no direct relationship is being assumed. Careful interpretation of a diagram gives insights about potential biases that are important to take into account when designing experimental studies and when drawing conclusions from statistical analysis. In causality, what is usually referred to as bias is any spurious correlation between two variables, contributed by unblocked paths beside the relationship of interest (Box 1). The 'classic' prototypical configurations inducing such biases are confounding (an unadjusted common cause; cf. Simpson's paradox 31) and collider bias (conditioning on a common effect; cf. selection bias, Berkson's paradox 31), and both are widely studied in the statistical literature 30. This article focused specifically on how dataset shift results from unblocked paths between domain indicators and relevant variables, and on the consequences of (implicitly) conditioning on selection variables. For example, in a multi-site study wherein age distributions vary across sites, it would be useful to include age alongside the image as inputs to the predictive model, to block the 'site → age → image' path causing population shift. This is what is normally meant in the context of predictive modelling, as in statistics and causal inference, by 'adjusting/controlling for' or 'conditioning on' a variable.
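A toy version of this multi-site scenario is sketched below (our own simulation; numbers and variable names are made up). An image feature and the target both depend on age, and the age distribution differs between sites; a model given age as an extra input learns the invariant mechanism and tends to transfer better than an image-only model, although exact figures will vary from run to run:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)

    def make_site(n, age_mean):
        age = rng.normal(age_mean, 6, n)
        feat = 0.05 * age + rng.normal(0, 1, n)        # image feature influenced by age
        y = ((0.08 * age + rng.normal(0, 1, n)) > 5.0).astype(int)
        return age, feat, y

    age_a, x_a, y_a = make_site(5000, 55)   # training site (younger cohort)
    age_b, x_b, y_b = make_site(5000, 70)   # test site (older cohort)

    img_only = LogisticRegression().fit(x_a[:, None], y_a)
    with_age = LogisticRegression().fit(np.c_[x_a, age_a], y_a)

    print("image-only  test accuracy:", round(img_only.score(x_b[:, None], y_b), 2))
    print("image + age test accuracy:", round(with_age.score(np.c_[x_b, age_b], y_b), 2))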
Though interpreting causal diagrams may require practice, it is a worthwhile endeavour that may help with the identification and mitigation of potential issues with the predictive model.

Fig. 1 | Key challenges in machine learning for medical imaging. a Data scarcity and b-d data mismatch. X represents images and Y, annotations (e.g. diagnosis labels). P_tr refers to the distribution of data available for training a predictive model, and P_te is the test distribution, i.e. data that will be encountered once the model is deployed. Dots represent data points with any label, while circles and crosses indicate images with different labels (e.g. cases vs. controls). Panels: a scarcity (decision boundary); b population shift; c prevalence shift; d selection.

Fig. 3 | Selection diagrams for dataset shift. a-c Causal and d-f anticausal scenarios, with the corresponding factorisations of the joint distribution P_D(X, Y, Z): P_D(Z)P(X|Z)P(Y|X), P(Z)P_D(X|Z)P(Y|X) and P(Z)P(X|Z)P_D(Y|X) for the causal cases; P_D(X|Z)P(Z|Y)P(Y), P(X|Z)P_D(Z|Y)P(Y) and P(X|Z)P(Z|Y)P_D(Y) for the anticausal cases. X is the acquired image; Y, the prediction target; Z, the unobserved true anatomy; and D, the domain indicator (0: 'train', 1: 'test'). An unfilled node means the variable is unmeasured.

Fig. 4 | Causal diagrams for different sample selection scenarios. a Random; b image-dependent; c target-dependent; d jointly dependent. S = 1 indicates an observed sample, and plain edges represent either direction.

Table 1 | Types of dataset shift.
Type | Direction | Change | Examples of differences
Population shift | Causal | P_D(Z) | Ages, sexes, diets, habits, ethnicities, genetics
Annotation shift | Causal | P_D(Y|X) | Annotation policy, annotator experience
Prevalence shift | Anticausal | P_D(Y) | Baseline prevalence, case-control balance, target selection
Manifestation shift | Anticausal | P_D(Z|Y) | Anatomical manifestation of the target disease or trait
Acquisition shift | Either | P_D(X|Z) | Scanner, resolution, contrast, modality, protocol

Table 2 | Types of sample selection.
Type | Causation | Examples of selection processes
Author contributions
D.C.C., I.W. and B.G. ideated the paper; D.C.C. and B.G. conceptualised, wrote, and edited the paper; D.C.C. designed the paper; and I.W. supported the drafting of the paper.

Competing interests
The authors declare no competing interests.

Additional information
Correspondence and requests for materials should be addressed to D.C.C. or B.G. Peer review information: Nature Communications thanks Niels Peek and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Reprints and permission information is available at http://www.nature.com/reprints. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

References
1. Esteva, A. et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115-118 (2017).
2. Menze, B. H. et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imag. 34, 1993-2024 (2015).
3. Bareinboim, E. & Pearl, J. Causal inference and the data-fusion problem. Proc. Natl Acad. Sci. USA 113, 7345-7352 (2016).
4. Lucas, P. J. F., van der Gaag, L. C. & Abu-Hanna, A. Bayesian networks in biomedicine and health-care. Artif. Intell. Med. 30, 201-214 (2004).
5. Cypko, M. A. et al. Validation workflow for a clinical Bayesian network model in multidisciplinary decision making in head and neck oncology treatment. Int. J. Computer Assist. Radiol. Surg. 12, 1959-1970 (2017).
6. Dwork, C. et al. The reusable holdout: preserving validity in adaptive data analysis. Science 349, 636-638 (2015).
7. Schölkopf, B. et al. On causal and anticausal learning. In Proc. 29th International Conference on Machine Learning (ICML 2012) 459-466 (2012).
8. Chapelle, O., Schölkopf, B. & Zien, A. (eds) Semi-Supervised Learning (MIT Press, Cambridge, MA, 2006).
9. Schölkopf, B. et al. in Empirical Inference (eds Schölkopf, B., Luo, Z. & Vovk, V.) Ch. 13, 129-141 (Springer, Berlin, Heidelberg, 2013).
10. Blöbaum, P., Shimizu, S. & Washio, T. in Advanced Methodologies for Bayesian Networks (AMBN 2015), Vol. 9505, 209-221 (Springer, Cham, 2015).
11. Maier-Hein, L. et al. Why rankings of biomedical image analysis competitions should be interpreted with care. Nat. Commun. 9, 5217 (2018).
12. Oliver, A., Odena, A., Raffel, C. A., Cubuk, E. D. & Goodfellow, I. in Advances in Neural Information Processing Systems Vol. 31 (NeurIPS 2018), 3235-3246 (2018).
13. Cozman, F. & Cohen, I. in Semi-Supervised Learning (eds Chapelle, O. et al.) Ch. 4, 57-72 (MIT Press, Cambridge, MA, 2006).
14. Singh, A., Nowak, R. & Zhu, X. in Advances in Neural Information Processing Systems Vol. 21 (NIPS 2008), 1513-1520 (2008).
15. Balcan, M.-F. & Blum, A. in Semi-Supervised Learning (eds Chapelle, O. et al.) Ch. 22, 397-419 (MIT Press, Cambridge, MA, 2006).
16. Quiñonero-Candela, J., Sugiyama, M., Schwaighofer, A. & Lawrence, N. D. (eds) Dataset Shift in Machine Learning (MIT Press, Cambridge, MA, 2009).
17. Pearl, J. & Bareinboim, E. External validity: from do-calculus to transportability across populations. Stat. Sci. 29, 579-595 (2014).
18. Storkey, A. J. in Dataset Shift in Machine Learning (eds Quiñonero-Candela, J. et al.) Ch. 1, 3-28 (MIT Press, Cambridge, MA, 2009).
19. Zhang, K., Gong, M. & Schölkopf, B. Multi-source domain adaptation: a causal view. In Proc. Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI 2015) 3150-3157 (AAAI, 2015).
20. Zhang, K., Schölkopf, B., Muandet, K. & Wang, Z. Domain adaptation under target and conditional shift. In Proc. 30th International Conference on Machine Learning (ICML 2013), Vol. 28, 819-827 (PMLR, 2013).
21. Lipton, Z. C., Wang, Y.-X. & Smola, A. J. Detecting and correcting for label shift with black box predictors. In Proc. 35th International Conference on Machine Learning (ICML 2018), Vol. 80, 3122-3130 (PMLR, 2018).
22. Moreno-Torres, J. G., Raeder, T., Alaiz-Rodríguez, R., Chawla, N. V. & Herrera, F. A unifying view on dataset shift in classification. Pattern Recognit. 45, 521-530 (2012).
23. Glocker, B., Robinson, R., Castro, D. C., Dou, Q. & Konukoglu, E. Machine learning with multi-site imaging data: an empirical study on the impact of scanner effects. Preprint at https://arxiv.org/abs/1910.04597 (2019).
24. Ganin, Y. et al. Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17, 1-35 (2016).
25. Kamnitsas, K. et al. in Information Processing in Medical Imaging, Vol. 10265, 597-609 (Springer, Cham, 2017).
26. Frangi, A. F., Tsaftaris, S. A. & Prince, J. L. Simulation and synthesis in medical imaging. IEEE Trans. Med. Imag. 37, 673-679 (2018).
27. Huo, Y. et al. SynSeg-Net: synthetic segmentation without target modality ground truth. IEEE Trans. Med. Imag. 38, 1016-1025 (2019).
28. Arjovsky, M., Bottou, L., Gulrajani, I. & Lopez-Paz, D. Invariant risk minimization. Preprint at https://arxiv.org/abs/1907.02893 (2019).
29. Bareinboim, E., Tian, J. & Pearl, J. Recovering from selection bias in causal and statistical inference. In Proc. Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2014) 2410-2416 (AAAI, 2014).
30. Hernán, M. A., Hernández-Díaz, S. & Robins, J. M. A structural approach to selection bias. Epidemiology 15, 615-625 (2004).
31. Pearl, J. Causality: Models, Reasoning, and Inference 2nd edn (Cambridge University Press, Cambridge, UK, 2009).
32. Hernán, M. A. & Cole, S. R. Causal diagrams and measurement bias. Am. J. Epidemiol. 170, 959-962 (2009).
33. Shahar, E. Causal diagrams for encoding and evaluation of information bias. J. Eval. Clin. Pract. 15, 436-440 (2009).
34. Lash, T. L. et al. Good practices for quantitative bias analysis. Int. J. Epidemiol. 43, 1969-1985 (2014).
35. Rubin, D. B. Inference and missing data. Biometrika 63, 581 (1976).
36. Daniel, R. M., Kenward, M. G., Cousens, S. N. & De Stavola, B. L. Using causal diagrams to guide analysis in missing data problems. Stat. Methods Med. Res. 21, 243-256 (2012).
37. Mohan, K., Pearl, J. & Tian, J. in Advances in Neural Information Processing Systems Vol. 26 (NIPS 2013), 1277-1285 (2013).
38. Miller, K. L. et al. Multimodal population brain imaging in the UK Biobank prospective epidemiological study. Nat. Neurosci. 19, 1523-1536 (2016).
39. Conroy, M. et al. The advantages of UK Biobank's open-access strategy for health research. J. Intern. Med. 286, 389-397 (2019).
40. US Food and Drug Administration. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD). https://www.fda.gov/media/122535/download (FDA, 2019).
41. Parikh, R. B., Obermeyer, Z. & Navathe, A. S. Regulation of predictive analytics in medicine. Science 363, 810-812 (2019).
42. UK Department of Health and Social Care. Code of Conduct for Data-Driven Health and Care Technology. https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology (2018).
43. UK National Institute for Health and Care Excellence. Evidence Standards Framework for Digital Health Technologies. https://www.nice.org.uk/about/what-we-do/our-programmes/evidence-standards-framework-for-digital-health-technologies (2019).
44. NHSX. Artificial Intelligence: How to Get it Right. Putting Policy into Practice for Safe Data-Driven Innovation in Health and Care. https://www.nhsx.nhs.uk/assets/NHSX_AI_report.pdf (NHSX, London, UK, 2019).
45. Wiegand, T. et al. WHO and ITU establish benchmarking process for artificial intelligence in health. Lancet 394, 9-11 (2019).
46. Editorial. Walking the tightrope of artificial intelligence guidelines in clinical practice. Lancet Digit. Health 1, e100 (2019).
47. Liu, X. et al. Reporting guidelines for clinical trials evaluating artificial intelligence interventions are needed. Nat. Med. 25, 1467-1468 (2019).
48. Ghassemi, M. et al. Practical guidance on artificial intelligence for health-care data. Lancet Digit. Health 1, e157-e159 (2019).
49. Prevedello, L. M. et al. Challenges related to artificial intelligence research in medical imaging and the importance of image analysis competitions. Radiol. Artif. Intell. 1, e180031 (2019).
50. Langlotz, C. P. et al. A roadmap for foundational research on artificial intelligence in medical imaging: from the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology 291, 781-791 (2019).
51. Wiens, J. et al. Do no harm: a roadmap for responsible machine learning for health care. Nat. Med. 25, 1337-1340 (2019).
52. Van Calster, B., Wynants, L., Timmerman, D., Steyerberg, E. W. & Collins, G. S. Predictive analytics in health care: how can we know it works? J. Am. Med. Inform. Assoc. 26, 1651-1654 (2019).
53. Shah, N. D., Steyerberg, E. W. & Kent, D. M. Big data and predictive analytics: recalibrating expectations. J. Am. Med. Assoc. 320, 27-28 (2018).
54. Shah, N. H., Milstein, A. & Bagley, S. C. Making machine learning models clinically useful. J. Am. Med. Assoc. 322, 1351 (2019).
55. Academy of Medical Royal Colleges. Artificial Intelligence in Healthcare. https://www.aomrc.org.uk/reports-guidance/artificial-intelligence-in-healthcare (Academy of Medical Royal Colleges, London, UK, 2019).
56. Hoffman, S. & Podgurski, A. The use and misuse of biomedical data: is bigger really better? Am. J. Law Med. 39, 497-538 (2013).
57. Collins, G. S., Reitsma, J. B., Altman, D. G. & Moons, K. G. M. Transparent reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD): the TRIPOD statement. Ann. Intern. Med. 162, 55 (2015).
58. Collins, G. S. & Moons, K. G. M. Reporting of artificial intelligence prediction models. Lancet 393, 1577-1579 (2019).
59. Gebru, T. et al. Datasheets for datasets. In Proc. 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2018) (2018).
60. Mitchell, M. et al. Model cards for model reporting. In Proc. 2019 Conference on Fairness, Accountability, and Transparency (FAT* 2019) 220-229 (ACM, 2019).
61. Saria, S. & Subbaswamy, A. Tutorial: safe and reliable machine learning. Preprint at https://arxiv.org/abs/1904.07204 (2019).
62. Chiappa, S. & Isaac, W. S. in Privacy and Identity Management. Fairness, Accountability, and Transparency in the Age of Big Data (Privacy and Identity 2018), Vol. 547, 3-20 (Springer, Cham, 2019).
63. Subbaswamy, A., Schulam, P. & Saria, S. Preventing failures due to dataset shift: learning predictive models that transport. In Proc. Twenty-Second International Conference on Artificial Intelligence and Statistics (AISTATS 2019), Vol. 89, 3118-3127 (PMLR, 2019).
64. Peters, J., Janzing, D. & Schölkopf, B. Elements of Causal Inference: Foundations and Learning Algorithms (MIT Press, Cambridge, MA, 2017).
65. Greenland, S. & Pearl, J. in International Encyclopedia of Statistical Science (ed. Lovric, M.) 208-216 (Springer, Berlin, Heidelberg, 2011).
66. Swanson, J. W. & Ibrahim, J. K. in Public Health Law Research: Theory and Methods (eds Wagenaar, C. A. & Burris, S.) Ch. 10, 217-236 (Jossey-Bass, 2013).
67. Daniušis, P. et al. Inferring deterministic causal relations. In Proc. Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI 2010) 143-150 (AUAI Press, 2010).
[]
[ "Automating Cryptographic Protocol Language Generation from Structured Specifications", "Automating Cryptographic Protocol Language Generation from Structured Specifications" ]
[ "Roberto Metere [email protected] ", "Luca Arnaboldi [email protected] ", "\nNewcastle University Newcastle upon Tyne\nUK\n", "\nThe University of Edinburgh Edinburgh\nUK\n" ]
[ "Newcastle University Newcastle upon Tyne\nUK", "The University of Edinburgh Edinburgh\nUK" ]
[]
Security of cryptographic protocols can be analysed by creating a model in a formal language and verifying the model in a tool. All such tools focus on the last part of the analysis, verification, and the interpretation of the specification is only explained in papers. Rather, we focus on the interpretation and modelling part by presenting a tool to aid the cryptographer throughout the process and automatically generating code in a target language. We adopt a data-centric approach where the protocol design is stored in a structured way rather than as textual specifications. Previous work shows how this approach facilitates the interpretation to a single language (for Tamarin) which required aftermath modifications. By improving the expressiveness of the specification data structure we extend the tool to export to an additional formal language, ProVerif, as well as a C++ fully running implementation. Furthermore, we extend the plugins to verify correctness in ProVerif and executability lemmas in Tamarin. In this paper we model the Diffie-Hellman key exchange, which is traditionally used as a case study; a demo is also provided for other commonly studied protocols, Needham-Schroeder and Needham-Schroeder-Lowe.CCS CONCEPTS• Software and its engineering → Application specific development environments; • Security and privacy → Formal security models; • Networks → Network protocol design.
10.1145/3524482.3527654
[ "https://arxiv.org/pdf/2105.09150v2.pdf" ]
247,958,438
2105.09150
f25e8f3395e87a42201cf8ad6dfd2d4cb308163f
Automating Cryptographic Protocol Language Generation from Structured Specifications Roberto Metere [email protected] Luca Arnaboldi [email protected] Newcastle University Newcastle upon Tyne UK The University of Edinburgh Edinburgh UK Automating Cryptographic Protocol Language Generation from Structured Specifications Protocol DesignAutomated Software DevelopmentFormal Secu- rity Models Security of cryptographic protocols can be analysed by creating a model in a formal language and verifying the model in a tool. All such tools focus on the last part of the analysis, verification, and the interpretation of the specification is only explained in papers. Rather, we focus on the interpretation and modelling part by presenting a tool to aid the cryptographer throughout the process and automatically generating code in a target language. We adopt a data-centric approach where the protocol design is stored in a structured way rather than as textual specifications. Previous work shows how this approach facilitates the interpretation to a single language (for Tamarin) which required aftermath modifications. By improving the expressiveness of the specification data structure we extend the tool to export to an additional formal language, ProVerif, as well as a C++ fully running implementation. Furthermore, we extend the plugins to verify correctness in ProVerif and executability lemmas in Tamarin. In this paper we model the Diffie-Hellman key exchange, which is traditionally used as a case study; a demo is also provided for other commonly studied protocols, Needham-Schroeder and Needham-Schroeder-Lowe.CCS CONCEPTS• Software and its engineering → Application specific development environments; • Security and privacy → Formal security models; • Networks → Network protocol design. INTRODUCTION Design and specification of cryptographic protocols are usually the first stage when creating a new protocol. Their implementation and verification is commonly deferred to a secondary stage, and often done by a separate set of people. At this second stage, the specification gets interpreted into a formal language able to run the protocols or verify security properties in the form of mechanised or automated theorems. We can appreciate that such interpretations are affected by (at least) two problems: first, the language of specification may be ambiguous or contain gaps that become noticeable only at later stages, and second, proposed interpretations are difficult to reuse as they exist only on papers. The former is a common concern when one models from specification [10,16]. The * Also with The Alan Turing Institute. † Equally collaborative work; work partly done whilst at Newcastle University. latter is a manual refinement from the specification language, that is often a mix of maths and natural language, to a formal language of choice, with limited semantics, that consequently may only capture a subset of the initial specification. The model interpreting the design of a protocol is the first mathematical artefact in the process of formal verification, but the interpretation process itself is not mathematical and currently manually done by experts. Hence, papers need to be written to convince the readers that a particular interpretation indeed captures the aspects relevant to the analysis. As a consequence, different researchers may formalise the same specification differently, even more likely if they choose different formal languages. 
The natural outcome of this process is that their output may show different results, depending on what security details are being modelled. Indeed, protocols proven correct by one interpretation [21] may be found to suffer from several vulnerabilities when formalised differently [13]. Nonetheless, both are valid interpretations of the same protocol. It is therefore important to analyse the same specification from multiple interpretations that can cover security aspects more exhaustively. Even though this concept sounds intuitive, all state-of-the-art tools exclusively focus their efforts on automating the last part of the verification process, i.e., after the model has been formalised. We see in a structured, centralised approach for specification an effective way to tame the above mentioned difficulties. A seminal prototype of such approach has been implemented in the tool MetaCP [4], which focuses on the automated modelling part of the process of formal verification, completely delegating the security proofs to external tools. A mechanised refinement from structured specification to formal languages can offer consistency, reusability and repeatability: as if a security aspect is specified in the same manner multiple times, it will always be formalised in the same manner. Not only does MetaCP improve a previously manual and bespoke process, but it also does so in record time and without the need of expert knowledge of multiple formal languages -although experts may still be required to check the final results or to adjust the exported code. We extend the core of the tool to support new plugins (ProVerif and C++) on top of previous work (Tamarin plugin); we also model executability and correctness as a first attempt to model security properties from specification 1 . Section 3 of this paper discusses in more details the architecture of MetaCP, Section 4 its workflow, and Sections 5.1 and 5.2 interpretation process. In the latter two, we illustrate one possible interpretation from the structured specification of MetaCP to ProVerif and an interpretation to C++. We emphasise that the tool supports for multiple interpretations across the same target language too; so, the interpretations we propose can easily coexist with many others to both different or same languages. RELATED WORK The tools that allow to mechanise security proofs and finally perform formal security evaluation, e.g. Tamarin [16], ProVerif [8], EasyCrypt [5] among others, have improved in the past decades and enjoy a wide spread use. However, these tools do not provide any means to relate to the whole design process, impacting the usability, reproducibility and replication of the evaluations. As it stands, it is very difficult for the casual user, and highly time-consuming for the security professional unaware of those formal languages, to ascertain the truthfulness of a formal protocol analysis, and how it relates to the original protocol. Witnessing the sensibility of the research community about this problem, projects such as CAPSL (Common Authentication Protocol Specification Language) [12], AVISPA (Automated Validation of Internet Security Protocols) [3] or AVANTSSAR (Automated Validation of Trust and Security of Service-Oriented Architectures) [2] have attempted to unify the verification process by presenting a single intermediate language of specification, and by automating the translation into various back-end tooling. 
These proposed approaches based themselves on the assumption that most people would be familiar with their intermediate language. Even with the integration of multiple verification options, research shows that protocols found to be secure by a tool (i.e., AVISPA) [21] were later found flawed when modelled manually in a different language (i.e., ProVerif) [13]; this is (also) due to the strict semantics of the intermediate language that makes it difficult (or impossible) to capture all the specification aspects relevant to security properties. Another approach is provided by ProScript [14] (developed as part of the tool Cryptocat -discountinued), where authors propose a new high level language for the specification of security protocols based on Javascript. ProScript is able to export to applied pi-calculus (without a sound translation and similar methodology to ours), whose result is automatically verifiable in ProVerif and manually in CryptoVerif. However, its closeness to the exported language strongly limits its expressiveness, hindering its very characteristic of being more general and, thus, impractical: in fact, the task of supporting new languages from it is as difficult as it would be starting from ProVerif code itself. Additionally, many typical specification aspects, e.g. number of bits of security parameters, cannot be even expressed in the language. All the above demonstrate the need for simplifying the integration to more tooling in the back-end verification as the field advances. ARCHITECTURE The crucial innovative aspect that we propose lies in a data-centric approach, where the protocol specification is stored in a structured way, as shown in Figure 1. The benefits of this approach are manifold and enable for unprecedented little effort in going from the design to formal verification of security protocols. The tool architecture is composed of three kinds of components: design, specification, and export. At the design level, an intuitive Graphical Design Editor (GDE) is provided which allows for the graphical design Plugins source code (C++, ...) Figure 1: MetaCP supports a data-centric approach where the specification is stored as structured information (PSV). The dashed arrows point to the targets that we might attempt in future extensions. creation, and dragging and dropping elements that will be later saved into the specification. The GDE is written in a modern web application framework, using ReactJs, Bootstrap, NodeJs, and Redux. At the specification level, a data structure written in XML language is provided and is meant to collect the information required to fully describe a security protocol. Such structure follows a minimal syntax described in Section 3.1, and its code can later be exported by means of a plugin. The tool provides two plugins towards formal verification languages, one for Tamarin [4] and one for ProVerif (presented in Section 5.1), that automatically interpret the protocol described in their syntax. Furthermore, a third plugin exports into C++ code for which parties can truly exchange messages over the Internet and cryptographic operations are done with the Crypto++ library: we briefly illustrate it in Section 5.2. We found it comfortable to write the new plugins in XSLT, but they can be written in any language of choice. All components, design, specification and export, can be developed independently, and their synergy provides a tool usable to kick-start projects from the design to formal verification languages. 
More details are explained in the following subsections. Protocol Specification and Verification Data Structure The ability for MetaCP to automatically translate into multiple verification languages resides in its description language, denoted as Protocol Specification and Verification data structure (PSV). A file in PSV format is effectively an XML file (although alternative formats such as JSON could be used) whose constraints are defined in a Document Type Definition (DTD) file. DTDs merely enforce some structure to XML files of reference without adding strong constraints to their semantics, thus not breaking the flexibility required by our approach. Such flexibility makes PSV format suitable to be easily extended and enjoy multiple interpretations. Our approach is sensibly different from all previous approaches, where researchers struggled to find a single semantics capable of embracing the semantics of all desired target languages [2,3,12,14]. The single generic semantics approach could work well for a few languages whose semantics were not too far apart, but would either fail, or find it very difficult, to capture the requirements of other languages. We illustrate how our approach is suitable for being a multilanguage translation tool, through a traditional example of reference in cryptography, the Diffie-Hellman key exchange (DHKE) protocol. Once we defined the protocol in PSV, we export it to ProVerif and C++, extending previous work where a prototype of a Tamarin plugin [4] was conceptually suggested. We implement correctness in ProVerif and, additionally, extend the Tamarin plugin to provide executability. The C++ plugin is able to generate compilable source code that allow parties to actually use the protocol through a real network, e.g., the Internet. 3.1.1 Basic structure of PSV. The tool plugins rely on the following basic structure and formalism. Generally speaking, a party in a protocol manipulates variables whose values match a type related to some mathematical set, prepares them to be sent to the other party (or parties) through a communication channel, and elaborates the input received from the channel. For a set , we use the notation * for the Kleene closure and • for ∪ {⊥} where ⊥ is considered as none. We start considering the set of non empty strings denoted as L = Σ * \ { }, a set of variable modifiers = {nonce, const, entity, var}, channel modifiers C = {insecure, auth, secure} and hints . Hints are labels providing suggestions on the semantic interpretation of various elements. For example, a variable may be labelled as private asymmetric key. We do not list all hints explicitly as it is unnecessary; it will be up to the exporting plugin to interpret the hints according to the semantics of the target language. We support probabilistic, pr, and deterministic, det, assignments; the former draws from a distribution over a set, the latter binds a value to an identifier. Table 1 shows syntactic elements in bottom-up description as sets (some are by definition mutually dependent). ⟨ / ⟩ L × • ⟨ ⟩ * × ⟨ ⟩ L × * × • ⟨ ⟩ × × × • ⟨// [@] ⟩ L × • ⟨ ⟩ × • ( * ∪ ) ⟨ ⟩ L × * × ( , ∪ ) ⟨ ⟩ C L × C ⟨ ℎ ⟩ × {det, pr} × ( ∪ ∪ ) ⟨ ⟩ • × × * ⟨ ⟩ • * × * × * × C × * × * ⟨ ⟩ Π * × * × • × • ⟨ ⟩ M • * × • × • × • × Π ⟨ ⟩ Support to security properties is still immature in MetaCP, and the tool defines only executability (Tamarin plugin) and correctness (ProVerif plugin). 
Correctness is similar to executability in that it tests if the end of the protocol is reachable, but differently it also tests final conditions. In the PSV, this notion is provided in the finalisation element, . The syntactic structure introduced in Table 1 shows which XML tags correspond to the syntactic elements. The full syntactic description of PSV is accessible from its DTD. The DTD describing the structure of the PSV to specify a protocol is available here: http://metacp.eu/meta-cp.dtd?v=0.1. High-level description of the structure of the specification language A PSV file describes a model of a single protocol matching the syntax of Table 1. Figure 2 describes its general structure that includes the following sections: declarations and the protocol. Figure 2: High level description of the PSV data structure to specify protocols. Declarations. To allow for a type system over all the structures used within a protocol specification, e.g., typesets, variables, constants and functions, the declarations of the corresponding membership sets are mandatory beforehand. Each subsequent declaration needs to refer to an existing set identifier. Typesets. Declarations of typesets enforce strict typing rules when constructing function applications, messages and statements. PSV notation allows for the definition of custom sets that can be used for function declarations. For example, we use N to denote the set of natural numbers and Z to define the ring of integer modulo , used in modular arithmetic of the Diffie-Hellman exponentiation. To declare such typesets, we use the markup: <sets> <set id="N">Natural numbers</set> <set id="Zp">Integers modulo p</set> </sets> Variables. Declarations are used to preemptively specify the variables which will be used in the protocol. In a variable declaration, one must specify its related typeset and its scope. The scope of a variable is what entity manipulates it, assuming that the (implementation of) protocols will eventually run in different execution environments with separated memories. <declaration variable="x" entity="A" set="N"></declaration> Functions. Functions follow on from the set definitions, only existing sets may be used as an argument set (argset) of a function allowing for easy syntax checking and disallowing errors in the protocol declaration. Whilst the PSV automatically enforces the existence of an identifier by way of tags in the DTD, it cannot check the semantic correctness of their later usage. This problem is overcome by the consistency of identifiers enforced by the graphical interface. A function contains not only a set of arguments but also notations and hints, which allow for the plugins to interpret the function structure efficiently. <function id="exp" arity="2" hint="group-exp"> <argset set="Zp"></argset> <argset set="N"></argset> <argset set="Zp"></argset> </function> The hint="group-exp" attribute highlights that the function is to be interpreted as part of the group exponentiation theory, whose usage will depend on the target language. For example, Tamarin may want to include the diffie-hellman theories with Conversely, C++ may want to use a specific library. Protocol. A protocol is composed of entities, messages, a final elaboration step after the messages, and finally the desired (security) properties. The entities are the participants of the protocol that exchange messages whose directives affect their knowledge. 
The final elaboration step can include statements; for example, at the end of a key exchange protocol, the parties may reconstruct the key at that stage. Security properties that can be specified are correctness, authentication and secrecy. In this paper, we only focus on modelling correctness (and executability for Tamarin); we reserve the study of additional security properties to future works. The messages are structured in four parts: the knowledge, the sender, the receiver and a communication channel in their between. The knowledge part is per entity and lists all the known variables and constants by the entity before either sending or receiving the message. The knowledge is beneficial to detect or restrict the designer not to use unknown structures. The sender part shows two sub-parts: the first can include statements required to construct the message to send, and the second is the message as it is pushed to the channel. Similarly, the receiver part shows two sub-parts but, in this case, they are inverted: the first is the incoming message, while the second are statements manipulating variables in the knowledge of the receiver, which has been just augmented with the received message. We remark that the received message may not be the same as originally sent by the sender. Any manipulation to the message can be done in the channel part. This structure has the benefit of allowing the designer to model different scenarios of interest. In particular, (i) systematically biased channels can be implemented with a function in the channel, (ii) a man-in-the-middle may be modelled by tampering with the received message, without creating additional parties and simplifying the design of attacks, and (iii) faults can be implemented either as empty received messages or probabilistic functions in the channel. The above listed scenarios are merely examples, and other scenarios can benefit from this particular structure of the message. Using the DHKE running example, we cherry-picked the first message sent in the protocol: where we replaced the details of its content with a brief summary. Graphical Design Editor XML is an intuitive language for describing a protocol in a specification, as its format is purposely easy to be manipulated by both humans and machines. Additionally, MetaCP is equipped with a Graphical Design Editor (GDE). The GDE aids the user with the design of the protocol rather than focusing on the formalisation part, i.e., the PSV. The GDE mimics the standard drawing process most familiar to any protocol designer, and it lets the user specify variables, functions and message flow. It does so through a smooth drag and drop design, making it easy to piece together the protocol. The GDE is intended to guide a user through the coherent definition of the PSV, automatically providing the following relationships in the data structure: first, the knowledge is automatically augmented as the protocol is constructed, and second, the GDE will enforce correct typing across the protocol and functions. The ability to store further relations and information about the protocol is a significant aid that modern frameworks for programming languages usually incorporate as the basics -but currently is not supported in any existing automated protocol verification tool. Once a desired protocol is drawn out, it can be saved as PSV. Figure 3 highlights how some parts of the design reflect to the structure in PSV format. 
MetaCP Protocol Designer -Any Web Browser File Edit Help MetaCP Load Save Export <event type="send"> <variable id="gy"></variable> </event> <channel></channel> <event type="receive"> <variable id="gy"></variable> </event> Alice's initial knowledge droppable area for variables <knowledge entity="Alice"> <variable id="g" type="constant"></variable> </knowledge> Exporting Plugins A plugin provides a fully automated protocol-agnostic interpreter from PSV code to the desired semantics of the target language. We remark that our plugins are examples of interpretation of the target semantics: additional plugins targeting the same language are allowed. The combination of the benefits of the GDE and the exporting plugins can sensibly improve the experience of protocol designers, even if they are an expert in a specific language. To the best of our knowledge, the languages used in formal verification for protocols do not enjoy frameworks for design or editing. In addition, many tools work (or can work) with untyped variables and constants. So in comparison to other languages, their source code is more prone to subtle and hard-to-spot bugs that can influence the consistency of the specification. Imagine simply asking for the confidentiality of a never-used variable -due to a typo -it can verify correctly as unknown to any attacker. Plugins can be called in the GDE directly, as well as natively, e.g., as scripts in a shell, once the PSV is available. The architecture of MetaCP is such that all the components, PSV (with DTD), GDE and exporting plugins, are independent. So when a plugin is called in the GDE, it automatically and transparently generates a PSV as input to the plugin. WORKFLOW The aim of MetaCP is to facilitate the design and modelling process up to the formal verification (excluded) of a cryptographic protocol. With this goal in mind, the ideal workflow of its user can be summarised by the following points: (1) Design the protocol with the aid of the graphical design editor of MetaCP. (2) Save the design to PSV format, that ideally specifies the protocol. (3) Export the PSV to any target language or format, e.g., picalculus. (4) Optionally run the formal verification tool, e.g., ProVerif, to formally verify the protocol model or execute the protocol code exported (discussed for completeness, although it sits outside MetaCP). Only the first and the last step are interactive, as both saving and exporting are automated. Nevertheless, the user can intervene and manually modify the result of any step of the process. After saving to PSV, the user might enrich the PSV with additional information that are not supported by the GDE, e.g., with additional properties. Similarly, after exporting to pi-calculus, the user might modify the model if required to verify later in ProVerif. Modifying the PSV is as easy as modifying an XML file. Differently, modifying the exported protocol requires expertise in the target format or language. The application of our exporting plugin (e.g., PSV-to-ProVerif) can be as good (or as wrong) as the manual modelling task (e.g., Englishto-ProVerif); while both need to show convincing arguments, only the former is mechanised and can be consistently reused. Graphical design editor To show the design process, we provide some details of the MetaCP graphical design editor, introduced in Section 3.3. The MetaCP GDE is composed by the following macro-blocks: a section with two parties, a toolbox, the exchanged messages and the final operations. Parties. 
Currently, the GDE supports two parties, Alice and Bob, each with their knowledge, as shown by Figure 3. The user can drop variables to the knowledge of either party, determining their initial knowledge, i.e., what they know before running the protocol. Toolbox. The toolbox is illustrated in Figure 4. The toolbox is split in sections for handing sets, functions, constants, variables and statements. They contain buttons to add new elements. Functions themselves cannot be directly dragged out of the toolbox, and they require the user to create an application, i.e., an instance that specifies arguments to the function. A type match helps the user to avoid incorrect function applications or statements. Once new objects are created, they can be dragged and dropped to target boxes external to the toolbox. Messages. The structure of the messages exchanged by Alice and Bob is illustrated in Figure 5. An arrow shows the direction Diffie-Hellman key exchange As a case study, we show some details of the workflow to successfully design and formally verify the Diffie-Hellman key exchange (DHKE) protocol, a traditional protocol of reference in cryptography. This case study saves to PSV and then exports to ProVerif as intended, i.e., no manual intervention is required. The workflow can be summarised by the following steps: • Declare the sets, constants and variables that will be used across the protocol. These can be found in the toolbox (see Figure 4) and are applied as follows: -TypeSets , N, for the exponents, and Zp, for the group. -Constants, g, the group generator is a constant as it is a pre-shared knowledge before engaging with the protocol. -Variables, x, y: N for fresh and secret exponents, X, Y: Zp for the messages, and kA, kB: Zp for the key they share at the end. • Create the function applications that will be assigned to variables. -Secret exponent sampled by Alice, x <$ N -Secret exponent sampled by Bob , y <$ N -Message from Alice, X <-exp(g,x) -Message from Bob , Y <-exp(g,y) -Shared key reconstructed by Alice, kA <-exp(Y,x) -Shared key reconstructed by Bob , kB <-exp(X,y) • Add two (empty) messages the first from Alice to Bob and the last from Bob to Alice, that will have the structure shown in Figure 5. • Drag statements to the pre boxes of the messages that calculate the messages to send, then drag the variables and constants to define the message exchanged. In particular, the first message will be: -Alice calculates x <$ N and X <-exp(g, x) -Alice sends out X Analogously, the second message will be: -Bob calculates y <$ N and Y <-exp(g, y) -Bob sends out Y • Fill the finalise boxes, as seen at the bottom of Figure 3. -Alice constructs the key kA <-exp(Y,x) -Bob constructs the key kB <-exp(X,y) • Finally, drag variables to the initial knowledge of the parties. All the following knowledge bubbles will be automatically populated. Once the design of the protocol is completed, we can save its specification to file that can be later interpreted by plugins. EXPORTING PLUGINS The PSV is refined into various target languages through the use of plugins -these will be described in the following sections. Since the target languages we discuss in this section are different, we provide a brief discussion of main high-level differences in Section 5.3. Exporting to ProVerif While the PSV is a structured container of the specification of the protocol, an interpreting plugin confers semantics to that specification from the point of view of their target language. 
Hence, a plugin can be seen as the effort of applying the semantics of the target language to the structure of the source PSV. To do that, the plugin translates the PSV into the target language grammar. For this paper we illustrate an example of exporting to ProVerif, for reference the syntax of ProVerif is illustrated in Figure 6. ::= Expressions term (variables, names, constructors) ( , , . . . , ) function application ::= Processes 0 no operations out ¯, write to channelī n ¯, : bind (of type ) reading from| parallel composition ! (infinite) replication : , P restriction (probabilistic assignment) let = in expression evaluation if then else conditional Figure 6: Extract of the core syntax of ProVerif [7]. If the PSV and target language were two languages with their own semantics, some sort of bisimulation certifying that semantics are preserved in the translation would be expected. Conversely in our case, the best we can do is to illustrate how the methodology of our ProVerif plugin does not introduce errors in the target code upon certain conditions. Arguably, the most convenient way to describe the validity of the interpretation of a plugin is by demonstrating how the PSV structure uniquely maps to the target syntax to model the protocol, similarly to a refinement. The interpretation methodology is illustrated in Figure 7 and can be summarised in the following points: (i) in a first step it handles declarations and descriptions of types, entities, functions (including constants) and channels, (ii) then it creates the processes by entity, extracting from the messages the relevant parts, and (iii) it creates a process describing the protocol run for infinite repetitions of the two entity-related processes. The semantics that the DTD confers to the PSV is very intuitive and is meant to be interpreted directly. The plugin rules complement them by applying the target language semantics to the syntax of PSV. We only describe the rules in charge of interpreting a by-entity process in pi-calculus, as they where the least obvious and probably the most interesting, in Figure 8 and Figure 9. We note that our plugin converts the protocol model through a generic interpretation of the rules, agnostic of security properties to verify. Whilst this is enough for our case study, it may not be the case for other properties in different protocols. In such cases, additional plugins may convert the model in a way that is specific for the security properties being verified. yThe key to read those rule is as follows: at the conclusion (bottom or bottom-right) we have the grammar of the applied picalculus, which is inferred by the parts above its line or at the left of the corresponding inline symbol ⊢, while the lines above are the interpretation reading from the PSV whose notation uses xPath directives. For a comprehensive explanation of the applied pi-calculus grammar, syntax and semantics, we refer to Abadi et al. [1]; similarly for the xPath directives, we refer to Clark et al. [9]. [variable|argument] m : ∈ ⟨ | ⟩ = ⊥ ← [@ ] ⊢ [variable|argument] m : ∈ ⟨ | ⟩ = "typed" ← ⟨// [@ = [@ ]]/[@ ]⟩ ← [@ ] : [application]: Additionally, we use the notation explained as follows. We refer to the (ordered) sets of elements generated by an application of an xPath directive within angle brackets, i.e. ⟨ ⟩. If the result set is a singleton, we also refer to the single element with the same notation. 
Some rules, names inside square brackets, are parametric, the parameter passed to them is superscripted after their name, so the notation [r] is for the rule named "r" with parameter . Parameters do not have corresponding attributes in PSV, but are different interpretations of the same tags from the plugin. ∈ ⟨ ⟩ ← [@ ] ← [@ ] 1 , 2 , . . . , ← [application|argument] "typed" ∀ ( 1 , 2 , . . . , ) To read attributes from tags, we use square brackets notation traditional in xPath, so we denote the attribute type from the tag in as [@ ]. Unlikely other common rules, they have to be explicitly called. The notation we use to apply a rule to all elements of a set of elements is a vertical bar with the application domain as subscript, e.g. to apply the rule [r] to all elements in ⟨ ⟩, we write [r] | ∀ ∈ ⟨ ⟩ . As a short notation, if the rule to apply has the same name as the xPath directive of the set, we omit it leaving only the ∀ symbol, e.g. [r] | ∀ is short for [r] | ∀ ∈ ⟨ ⟩ . Finally, we shorten the call of two rules applying to set of diverse elements, e.g. el1 el2 with the vertical bar |, e.g. [el1|el2] will apply to elements whose tag is either el1 or el2 in the order they appear in the application domain. So for example, the PSV assignment <assignment variable="x" type="probabilistic"></assignment> is transformed by the rule [assignment] in Figure 9 to -calculus as : N (see Figure 6), where N is the typeset specified in the declaration of , ultimately written as new n:N; in ProVerif. ∈ ⟨ ⟩ [@ ] = "deterministic" ← [@ ] ← [application] | ∀ .({ / } .•) [event]: ∈ ⟨ ⟩ [@ ] = "send" ← [variable] ⊥ ∀ ⊢¯⟨ ⟩ [event]: ∈ ⟨ ⟩ [@ ] = "receive" ← [variable] "typed" ∀ ⊢ ( ) As the reader may already have noticed, the rules in Figure 8 and Figure 9 rely on some assumed relationships between the elements in the PSV. These relationships cannot be enforced by the DTD: for example, the rule [variable] "typed" assumes that the type will be actually found in the declarations. The DTD can only guarantee that the variable specified appears as an identifier in the past, but it cannot guarantee that the identifier was actually defined for the desired element. By designing the protocol with the GDE, MetaCP is able to generate a PSV where these relationships are always valid. A further area of exploration would be to enforce such relationships in the PSV itself: we reserve this investigation to future extensions, perhaps upgrading the DTD to the more powerful XML Schema Definition [19]. As introduced before, to overcome the limitations of the DTD, the GDE currently confers a type system to the functions, variables and statements to be respected across the whole protocol, and manages the knowledge automatically at each step of the protocol. In particular, the extra properties provided by using the GDE are that all messages are exchanged between intended parties, and all statements can refer only to corresponding pre-declared sets, variables and functions, according to the knowledge of the party at that specific point of the protocol. Exporting to C++ To provide a broader evaluation of the expressivity of the PSV in its current development (see the DTD for details), we created a plugin that targets C++ and allows for two parties to truly run the protocol over the Internet. Even if the plugin is agnostic of the particular protocol under design, we tested it only for our use case, the Diffie-Hellman key exchange. Currently, the PSV does not contain as many implementation details as a programming language, i.e. 
C++, so it is necessary for the plugin to make some assumptions: the protocol is a two-party protocol, and two sets N and Z are present. The plugin looks for group exponentiation operations, in particular: • The group exponentiation constant of type Z is the generator, and the constant of type N is considered as the modulo . We remark that such values are globally declared, so that entity-scoped values will not be treated the same way. • If a function for modular exponentiation (hint) with signature exp : Z → N → Z is found, its implementation gets filled with modular exponentiation with modulo , where references the global value . The implementation of cryptographic functions is borrowed from the library Crypto++ [11], the implementation of the network operations is borrowed from the library Asio [18]. Those are examples of interpretation details that are allowed by the structured nature of the PSV; although in the future, such details may be very well part of its definition -to strictly specify that use of a particular routine is mandated by the specification. We do not go in further details of the C++ plugin, as they are analogous to the process explained in Section 5.1 for the ProVerif plugin. Rather, we go into the details of how to compile and run the code, to appreciate the actual usability of the automatically generated implementation. Additionally to the above mentioned libraries, the plugin relies on an external open-source class (available at: https://github.com/nitrogl/snippets/); one needs to download and compile C++/net/channel.cpp and C++/net/channel.h to easily map send and receive operations. Once all dependencies are installed, assuming that the automatically generated C++ code is saved as dh.cpp, compilation is straightforward, see Figure 10. We Figure 10: Compiling the source code automatically generated by the C++ plugin of MetaCP. For simplicity, we assume we're using a *nix machine. Terminal -Compile and Run DHKE notice that the generator and the prime number defining the modulus of the group set Z are publicly known by the entities before they run the protocol. Hence, they are asked as arguments. In the example in Figure 11 we used the following: = 3 = 9692442802821327950508911771308328052666887550900435 6828952073475684064958438492246724161309678845542592 11675299291454161197981395799145169370398324975923 where is a 512-bit prime number and "512" is the security parameter specified in the PSV. Alice Shared key! Figure 11: An actual run of the Diffie-Hellman key exchange protocol, directly from the design. As we see the two hosts are able to exchange a secret key correctly. To enact the algorithm as either Alice or Bob, an additional argument is required that matches the identifier of the entity used in the PSV. Finally, the remote host to send messages to can be specified as optional argumentlocalhost is used if omitted. The execution runs between machines over the Internet 2 , as shown in Figure 11. We remark that the Diffie-Hellman key exchange protocol is known to suffer from man-in-the-middle attacks; so it is not of particular interest in the real-world. Traditionally though, the DHKE is used as a reference protocol comparing with related work. Its attacks can be easily shown by ProVerif and Tamarin; a few manual tweaks are required with the MetaCP auto-generated scripts. The C++ plugin can be used to generate instances of the other two available examples, the Needham-Schroeder and the Needham-Schroeder-Lowe protocols. 
Protocol refinement using plugins for ProVerif and C++ In the following, we provided a quick comparison between the specification in PSV format and the output of plugins to appreciate structure similarities and dissimilarities. In particular, Figure 12 shows different PSV elements and roughly the corresponding code output by two plugins, ProVerif and C++. The applied pi-calculus Figure 12: The plugins refining the PSV to C++ and ProVerif. <event type="send"> <variable id="gx"></variable> </event> out(c, (gx)); c->send( message.str(), 16010 + 'B', host); C++ ProVerif <entity id="A" name="A" desc="Alice"></entity> <entity id="B" name="B" desc="Bob"></entity> C++ ProVerif (ProVerif) is a symbolic description of protocols, so for the most its corresponding PSV code is straightforward. Though, it sometimes needs to navigate to other tags, i.e. the channel name c in out(c, (gx)); in the bottom conversion is taken from outside of the tag. Conversely, C++ is not symbolic and needs to know what exact algorithm to run. This can be seen in the conversion of the probabilistic assignment, where a specific groupExp.rndexp function is called; while in ProVerif the sampling distribution is abstract and (uniformly) draws elements from the corresponding declaration typeset. In the same conversion, the parameter 512 is a security parameter that is actually specified in the PSV, but outside of the assignment. SECURITY PROPERTIES Our first attempt to model security properties models correctness in ProVerif and executability in the case of Tamarin. They are similar properties: the former checks if a final condition is met after a honest execution of the protocol (e.g. for a key agreement protocol, all parties must share the same key), the latter simply checks whether all the rules (Tamarin uses multiset rewriting rules to specify the protocol) describing the protocol can run in the desired order. Thus, correctness may fail even if executability holds. While correctness is a traditional security property, executability can be considered an extremely weak property [6] that only shows that the model can run to completion; so, it is mostly used as a self-check that the model in Tamarin is not affected by (major) errors. Executability Executability is a security property that does not require changes in the structure of the protocol described as PSV, as the definition of the protocol already contains enough information to formalise the property. In Tamarin, executability is generated by the plugin in the following way. • Per message, two rules (sub-procedures) are generated, one to send and one to receive. • Each rule contains a unique action representing each message being sent or received, e.g. Send_m1 (A,B,m1) for the first message m1 sent from A to B; the order of when such actions happen are recorded in the execution trace of the reasoning core of Tamarin. • We write a lemma requiring that the above actions happen in temporal order: lemma executable_protocol : exists -trace " Ex A B m1 # a # b m2 # c # d . Send_m1 (A , B , m1 ) @ # a & Receive_m1 (B , A , m1 ) @ # b & Send_m2 (B , A , m2 ) @ # c & Receive_m2 (A , B , m2 ) @ # d & # a < # b & # b < # c & # c < # d " where A,B are parties, m1,m2 are variables, #a, #b, #c, #d are temporal (ordered) values and Send_m1, Receive_m1, Send_m2, Receive_m2 are actions in the rules. The above lemma translates to checking whether a trace exists in which A has sent B a message m1 at time a and B has received the same message at a different time b. 
Executability is captured by a trace where all rules are in the right order, i.e., #a < #b < #c < #d. Correctness of key-exchange protocols We now describe how we modelled correctness in the specification language. Correctness is modelled by mean of the tag ⟨ ⟩. Inside of it, one can specify a relation that evaluates to boolean. Twoparty key-exchange protocols aim to share a secret key between the parties, so correctness can informally be thought as the equality = , where and are the keys reconstructed by and respectively, as the main property after running the protocol. <correctness> <application function="eq"> <argument id="kA"></argument> <argument id="kB"></argument> </application> </correctness> This can be extended to multi-party settings. Considering the ProVerif plugin introduced in Section 5.1, one way to model correctness is as property over execution traces when attackers are passive and do not tamper with the messages. A trace is a list that includes: sent and received messages, insertions to (private) tables, and events that must be explicitly recorded by the processes. Events annotate processes marking important stages reached by the protocol but do not otherwise affect their behaviour. Traces are analysed by the reasoning core of ProVerif and are determined by each possible ramification of concurrently executing processes. A security property over execution traces is defined as a predicate over elements in a subset of the space of all traces . We base our definition of correctness on the event ∈ Z 2 → , as a pair of keys mapped to the space of events . Each argument of the event is stored by a different entity; so if the execution of the protocol records the event ( , ) with = into a trace among the (infinite) traces , then the two parties must have successfully exchanged the same key. The existence of ∈ is sufficient to prove correctness, as the processes are run only once and the adversary cannot inject messages (so cannot happen by chance). More formally, we model correctness as ∀ , ∈ Z .∃ ∈ Z 2 , ∈ . = ( , ) ∧ = ∧ ∈ . Our plugin reflects it in programming code by • setting the adversary to passive; attacker = passive; • creating two tables finalA(Zp) and finalB(Zp); • injecting into the parties' processes a command to fill the tables with the shared key: insert finalA(exp(gy, x)) for Alice and analogously for Bob; • appending a process agreement, that can generate the desired event, to the protocol process: let agreement = get finalA ( kA ) in get finalB ( kB ) in event correctness ( kA , kB ). • and finally, creating the event event correctness(Zp, Zp) and querying for it to be triggered by any execution where the keys are the same: query k : Zp ; event ( correctness (k , k )). Briefly, ProVerif analyses the execution traces and returns whether or not the required event can be found in a trace. Diffie-Hellman key exchange To have a complete picture of MetaCP, we also briefly illustrate the formal verification of the Diffie-Hellman key exchange to complete the steps described in Section 4.2. The output from ProVerif running the automatically exported protocol, see Figure 13b, shows the formal verification of correctness. When the same specification is interpreted by the Tamarin plugins it, verifies the executability of the protocol, see Figure 13a. We stress that this part is delegated to external tools that can parse the language exported through related MetaCP plugins, in our example ProVerif. 
CONCLUSION AND FUTURE WORK Motivated from the current limitations in manual interpretations to single tools, we explore the possibility of automatically generating multiple interpretations of cryptographic protocols from a single structured specification. In particular, we demonstrate that the the expressiveness offered by this data-centric approach is enough to interpret a traditional key-exchange protocol into multiple (very different) formal verification languages and a popular object-oriented programming language. In detail, this paper presented a new plugin for ProVerif, over the only plugin for Tamarin that existed before [4], and a new plugin to generate C++ code, that compiles and can run the protocol between devices over the Internet. To do that, we delegated almost all semantic aspects to interpreting plugins; further research is required to understand how much semantics can be included in the specification language. This is an important aspect toward the goals of formal verification, as, ultimately, the trustworthiness of verification results will either depend on the correctness of the interpretation, or, as at the current stage of development, require an expert in the target verification language. Documentation and a demo of the tool have been released 3 . The graphical interface does not (currently) support all the features provided by the specification, e.g., it only allows to design twoparty protocols and no security properties can be specified yet. After its promising entrance, MetaCP is far from a mature solution. Future extensions envision the inclusions of more security properties (executability and correctness are not enough) along with the protocol model to further accelerate the verification process. Furthermore, both ProVerif and Tamarin reason in the symbolic model -we have yet to research into the translation targeting tools that reason in the computational model, e.g., CryptoVerif or EasyCrypt. Having showcased the flexibility of the approach through these first plugins, further plugins are an obvious next step to incorporate further verification and programming languages alike. Finally, our tool can be used to bring the benefits of formal verification to domains where security flaws can affect critical infrastructures, e.g., the electric vehicle charging infrastructure [17] or smart building with distributed energy generation [20]. This necessity has already been appreciated by Lauser et al. [15] in the automotive industry. We can see how a future extension of our approach could alleviate the security burden in the current design process for practitioners in a myriad of further security-critical domains. want to explicitly implement the commutativity equation forall b : Zp , x : N , y : N ; exp ( exp (b , x ) , y ) = exp ( exp (b , y ) , x ). Figure 3 : 3The graphical design of MetaCP is saved as the PSV format (boxes). Figure 4 : 4Toolbox in the GDE. Figure 5 : 5Message structure in the GDE. the sender/receiver at this point of the flow of the message from the sender to the receiver. The user can drop statements from the toolbox to the pre and post boxes that are computations made by the parties before and after the message has been exchanged. A box with knowledge is automatically populated according to the initial knowledge and the computations made by the parties in the message. Additionally, the user can drop variables and constants to the event box above the arrow, representing what is being sent by the sender.Final operations. 
Final operations are statements computed (offline) by either of the parties after all message have been exchanged. For example in the case of a key exchange protocol, they may contain the final elaboration of the exchanged key done by each party. Figure 7 : 7The interpretation process automatised in the ProVerif plugin of MetaCP. Figure 8 : 8Rules applied by the ProVerif plugin in MetaCP for variables, arguments and function applications. Figure 9 : 9Rules applied by the ProVerif plugin in MetaCP for messages in the protocol along with depending rules. Generic insecure channel denoted as . Figure 13 : 13Security properties as formally verified in Tamarin (above) and in ProVerif (below). Table 1 : 1Syntactic structure of the PSV.Set symbol Set description XML/xPath L × • <message [...] from="Alice" to="Bob"> <knowledge entity="Alice">[...knows g]</knowledge> <knowledge entity="Bob">[...knows g]</knowledge> <pre>[...samples x, elaborates gx]</pre> <event type="send">[...sends gx]</event> <channel security="insecure"></channel> <event type="receive">[...receives gx]</event> <post></post> </message> Terminal -Tamarin Prover Output(a) Final part of Tamarin's output showing executability. Terminal -ProVerif Prover Output (b) Final part of ProVerif's output showing correctness.File Edit Help end summary of summaries: analyzed: dh.spthy executable_protocol (exists-trace): verified (6 steps) =============================================================== [email protected]$ File Edit Help ... The event correctness(idm80,idm80) is executed. A trace has been found. RESULT not event(correctness(m,m)) is false. [email protected]$ The tool is available at http://metacp.eu. As a network detail, for two (non-isolated) machines in the same network, no further operations are usually required. Conversely in the Internet, commonly hosts are behind a router. In this case, server ports need to be opened either manually (port forwarding or triggering) or automatically (UPnP, DMZ). Documentation and demo at http://demo.metacp.eu. To reproduce our results, we share our implementation with instructions at https://doi.org/10.5281/zenodo.6394688. ACKNOWLEDGMENTSThis research was partly funded by The Alan Turing Institute through the Lloyd's Register Foundation (G0095), an Innovate UK e4Future grant (104227), and EPSRC grants EP/S016627/1, Active Building Centre, and EP/T027037/1 AISEC. The applied pi calculus: Mobile values, new names, and secure communication. Martín Abadi, Bruno Blanchet, Cédric Fournet, Journal of the ACM (JACM). 65Martín Abadi, Bruno Blanchet, and Cédric Fournet. 2017. The applied pi calculus: Mobile values, new names, and secure communication. Journal of the ACM (JACM) 65, 1 (2017), 1-41. The AVANTSSAR platform for the automated validation of trust and security of service-oriented architectures. Alessandro Armando, Wihem Arsac, Tigran Avanesov, Michele Barletta, Alberto Calvi, Alessandro Cappai, Roberto Carbone, Yannick Chevalier, Luca Compagna, Jorge Cuéllar, International Conference on Tools and Algorithms for the Construction and Analysis of Systems. SpringerAlessandro Armando, Wihem Arsac, Tigran Avanesov, Michele Barletta, Alberto Calvi, Alessandro Cappai, Roberto Carbone, Yannick Chevalier, Luca Compagna, Jorge Cuéllar, et al. 2012. The AVANTSSAR platform for the automated validation of trust and security of service-oriented architectures. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Springer, 267-282. 
The AVISPA tool for the Automated Validation of Internet Security Protocols and Applications. Alessandro Armando, David Basin, International conference on Computer Aided Verification. SpringerAlessandro Armando, David Basin, et al. 2005. The AVISPA tool for the Auto- mated Validation of Internet Security Protocols and Applications. In International conference on Computer Aided Verification. Springer, 281-285. Poster: Towards a Data Centric Approach for the Design and Verification of Cryptographic Protocols. Luca Arnaboldi, Roberto Metere, Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. the 2019 ACM SIGSAC Conference on Computer and Communications SecurityLuca Arnaboldi and Roberto Metere. 2019. Poster: Towards a Data Centric Ap- proach for the Design and Verification of Cryptographic Protocols. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 2585-2587. Computer-aided security proofs for the working cryptographer. Gilles Barthe, Benjamin Grégoire, Sylvain Heraud, Santiago Zanella Béguelin, Annual Cryptology Conference. SpringerGilles Barthe, Benjamin Grégoire, Sylvain Heraud, and Santiago Zanella Béguelin. 2011. Computer-aided security proofs for the working cryptographer. In Annual Cryptology Conference. Springer, 71-90. A formal analysis of 5G authentication. David Basin, Jannik Dreier, Lucca Hirschi, Saša Radomirovic, Ralf Sasse, Vincent Stettler, Proceedings of the 2018 ACM SIGSAC conference on computer and communications security. the 2018 ACM SIGSAC conference on computer and communications securityDavid Basin, Jannik Dreier, Lucca Hirschi, Saša Radomirovic, Ralf Sasse, and Vincent Stettler. 2018. A formal analysis of 5G authentication. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security. 1383-1396. Modeling and verifying security protocols with the applied pi calculus and ProVerif. Bruno Blanchet, Foundations and Trends® in Privacy and Security. 1Bruno Blanchet. 2016. Modeling and verifying security protocols with the applied pi calculus and ProVerif. Foundations and Trends® in Privacy and Security 1, 1-2 (2016), 1-135. An Efficient Cryptographic Protocol Verifier Based on Prolog Rules. Bruno Blanchet, csfw. 1Bruno Blanchet et al. 2001. An Efficient Cryptographic Protocol Verifier Based on Prolog Rules.. In csfw, Vol. 1. 82-96. XML path language (XPath) version 1.0. James Clark, Steve Derose, James Clark, Steve DeRose, et al. 1999. XML path language (XPath) version 1.0. Improving the ISO/IEC 11770 standard for key management techniques. Cas Cremers, Marko Horvat, International Journal of Information Security. 15Cas Cremers and Marko Horvat. 2016. Improving the ISO/IEC 11770 standard for key management techniques. International Journal of Information Security 15, 6 (2016), 659-673. Wei Dai, Crypto++® library. n.d.Wei Dai. [n.d.]. Crypto++® library. https://cryptopp.com/ CAPSL integrated protocol environment. Grit Denker, Jonathan Millen, Proceedings DARPA Information Survivability Conference and Exposition. DISCEX'00. DARPA Information Survivability Conference and Exposition. DISCEX'00IEEE1Grit Denker and Jonathan Millen. 2000. CAPSL integrated protocol environ- ment. In Proceedings DARPA Information Survivability Conference and Exposition. DISCEX'00, Vol. 1. IEEE, 207-221. Analyzing and patching SPEKE in ISO/IEC. Feng Hao, Roberto Metere, Siamak F Shahandashti, Changyu Dong, IEEE Transactions on Information Forensics and Security. 
13Hao, Feng and Metere, Roberto and Shahandashti, Siamak F and Dong, Changyu. 2018. Analyzing and patching SPEKE in ISO/IEC. IEEE Transactions on Information Forensics and Security 13, 11 (2018), 2844-2855. Automated verification for secure messaging protocols and their implementations: A symbolic and computational approach. Nadim Kobeissi, Karthikeyan Bhargavan, Bruno Blanchet, IEEE EuroS&P. Kobeissi, Nadim and Bhargavan, Karthikeyan and Blanchet, Bruno. 2017. Auto- mated verification for secure messaging protocols and their implementations: A symbolic and computational approach. In IEEE EuroS&P. Security Analysis of Automotive Protocols. Timm Lauser, Daniel Zelle, Christoph Krauß, Computer Science in Cars Symposium. Timm Lauser, Daniel Zelle, and Christoph Krauß. 2020. Security Analysis of Automotive Protocols. In Computer Science in Cars Symposium. 1-12. Cas Cremers, and David Basin. 2013. The TAMARIN prover for the symbolic analysis of security protocols. Simon Meier, Benedikt Schmidt, International Conference on Computer Aided Verification. SpringerSimon Meier, Benedikt Schmidt, Cas Cremers, and David Basin. 2013. The TAMARIN prover for the symbolic analysis of security protocols. In International Conference on Computer Aided Verification. Springer, 696-701. Roberto Metere, Myriam Neaimeh, Charles Morisset, Carsten Maple, Xavier Bellekens, Ricardo M Czekster, arXiv:2105.02905Securing the Electric Vehicle Charging Infrastructure. arXiv preprintRoberto Metere, Myriam Neaimeh, Charles Morisset, Carsten Maple, Xavier Bellekens, and Ricardo M Czekster. 2021. Securing the Electric Vehicle Charging Infrastructure. arXiv preprint arXiv:2105.02905 (2021). . Thinkasynch, n.d.ThinkAsynch. [n.d.]. . C++ Asio, Library, Asio C++ Library. https://think-async.com/Asio/ W3C XML schema definition language (XSD) 1.1 part 1: Structures. The World Wide Web Consortium (W3C). Noah Henry S Thompson, D Mendelsohn, M Beech, Maloney, W3C Working Draft Dec. 3Henry S Thompson, Noah Mendelsohn, D Beech, and M Maloney. 2009. W3C XML schema definition language (XSD) 1.1 part 1: Structures. The World Wide Web Consortium (W3C), W3C Working Draft Dec 3 (2009). Active building as an energy system: concept, challenges, and outlook. Vahid Vahidinasab, Chenour Ardalan, Behnam Mohammadi-Ivatloo, Damian Giaouris, Sara L Walker, IEEE Access. 9Vahid Vahidinasab, Chenour Ardalan, Behnam Mohammadi-Ivatloo, Damian Giaouris, and Sara L Walker. 2021. Active building as an energy system: concept, challenges, and outlook. IEEE Access 9 (2021), 58009-58024. Automated security protocol analysis with the AVISPA tool. Luca Viganò, Electronic Notes in Theoretical Computer Science. 155Luca Viganò. 2006. Automated security protocol analysis with the AVISPA tool. Electronic Notes in Theoretical Computer Science 155 (2006), 61-86.
[ "https://github.com/nitrogl/snippets/);" ]
[ "A Bayesian spatio-temporal nowcasting model for public health decision-making and surveillance", "A Bayesian spatio-temporal nowcasting model for public health decision-making and surveillance", "A Bayesian spatio-temporal nowcasting model for public health decision-making and surveillance", "A Bayesian spatio-temporal nowcasting model for public health decision-making and surveillance" ]
[ "David Kline ", "Ayaz Hyder ", "Enhao Liu ", "Michael Rayo ", "Samuel Malloy ", "Elisabeth Root ", "David Kline ", "Ayaz Hyder ", "Enhao Liu ", "Michael Rayo ", "Samuel Malloy ", "Elisabeth Root " ]
[]
[]
As COVID-19 spread through the United States in 2020, states began to set up alert systems to inform policy decisions and serve as risk communication tools for the general public. Many of these systems, like in Ohio, included indicators based on an assessment of trends in reported cases. However, when cases are indexed by date of disease onset, reporting delays complicate the interpretation of trends. Despite a foundation of statistical literature to address this problem, these methods have not been widely applied in practice. In this paper, we develop a Bayesian spatio-temporal nowcasting model for assessing trends in county-level COVID-19 cases in Ohio. We compare the performance of our model to the current approach used in Ohio and the approach that was recommended by the Centers for Disease Control and Prevention. We demonstrate gains in performance while still retaining interpretability using our model. In addition, we are able to fully account for uncertainty in both the time series of cases and in the reporting process. While we cannot eliminate all of the uncertainty in public health surveillance and subsequent decision-making, we must use approaches that embrace these challenges and deliver more accurate and honest assessments to policymakers.
10.1093/aje/kwac034
[ "https://export.arxiv.org/pdf/2102.04544v1.pdf" ]
231,855,751
2102.04544
0c57aa6481bcc5e71e45d6dcd8aff8cd5958e262
A Bayesian spatio-temporal nowcasting model for public health decision-making and surveillance

David Kline, Ayaz Hyder, Enhao Liu, Michael Rayo, Samuel Malloy, Elisabeth Root

8 Feb 2021

Keywords: Bayesian hierarchical modeling; COVID-19; reporting lag; spatial analysis

Abbreviations: COVID-19, Coronavirus Disease 2019; OPHAS, Ohio Public Health Alert System

INTRODUCTION

The first cases of SARS-CoV-2 in the United States were reported in early March [1], though recent phylogenetic evidence suggests the first introductions occurred in January 2020 [2,3,4]. As COVID-19 spread throughout the country, states began to set up risk alert systems to support data-driven decision-making, improve government accountability, and communicate health risks to the public [5]. The goal of such systems is to provide clear and consistent messaging around the current state of the COVID-19 pandemic and help people adopt protective behaviors while policymakers implement appropriate structural changes to mitigate spread. Risk or public health alert systems typically develop a series of indicators which use various sources of surveillance data [5]. In some states, these systems were linked to specific policy actions [6], while in others they serve more as a risk communication tool to inform local health departments and the general public [7]. In most systems, several key indicators are tied to the reporting of confirmed COVID-19 cases and their onset date of illness (i.e., the date an individual first began to have symptoms) [8]. However, chronic delays in outbreak investigation and case reporting have led to challenges in estimating case-based indicators and communicating the situation in a location in near real-time.

Issues related to reporting lag or reporting delay are not a new challenge in public health surveillance [9,10,11]. It is quite common for reporting in infectious disease and vital statistics systems to not occur instantaneously with the onset or occurrence of the event of interest.
For infectious diseases, this delay can be due to: 1) a prolonged interval between the time an individual recognizes symptoms and is able to seek care and receive confirmatory testing, 2) administrative backlogs and delays in the acquisition, processing, and ultimate reporting of information, and 3) the length of time necessary to conduct a full case investigation. However, particularly when facing a fast-moving epidemic, important decisions need to be made in real-time despite the fact that the most recent information is likely incomplete. This added uncertainty can reduce the confidence of both policymakers and the public in the public health decision-making process. Methodology is needed to help provide a clearer picture to decision-makers in the face of the uncertainty from delays in reporting. To address this issue and build on the foundational methodology [9,10,11], a relatively recent literature around "nowcasting" has emerged for delayed reporting. In contrast to forecasting which focuses on estimating what could happen in the future, nowcasting focuses on estimating what has already happened but has not yet been reported. Nowcasting leverages historical patterns in reporting and trajectories of the disease outcome to estimate current counts given partially reported values. To enhance model flexibility and interpretability, recent work [12,13,14] has extended prior work for nowcasting time series [15] and aberration detection [16] within a Bayesian framework. This work has been applied to estimate COVID-19 deaths in regions of the United Kingdom [17] and to incorporate spatial dependence [18,19]. In addition, simulation modeling approaches have also been used for nowcasting [20,21,22]. In contrast to much of the current epidemiological work that relies on the specification of splines to capture trends, Bayesian structural time series can be specified as hierarchical autoregressive processes [23]. Given the link between autoregressive processes and infectious disease dynamics, we propose a spatial extension of the Bayesian structural time series model to nowcast county-level counts of confirmed COVID-19 cases in Ohio while accounting for reporting delay. Despite prior and current literature on methods for accounting for reporting delay, these methods have not been fully embraced in practice. The purpose of this paper is to highlight the critical need to account for reporting lag and other potential daily reporting patterns when assessing whether case rates are increasing. This is important because an increase in case rates is an indicator in many states' alert systems, including the Ohio Public Health Alert System (OPHAS) [7], and can also serve as an early warning signal of disease spread. We apply our method to OPHAS Indicator 2 which measures "an increasing trend of at least 5 consecutive days in overall cases by onset date over the last 3 weeks" [7]. Ohio adopted a 21 day "look-back" period in an attempt to manually curtail the effect of reporting delays. We develop an extension of a Bayesian structural time series model that incorporates spatial dependence across counties and flexibly captures temporal dynamics with an autoregressive structure. We use case data from earlier in the pandemic that is now fully reported so the true trends can be determined for each county in Ohio. We then compare indicators based on the method currently used in Ohio, the method suggested by the Centers for Disease Control and Prevention (CDC) [8], and our Bayesian approach. 
METHODS

Data

We used data on confirmed cases of COVID-19 in the state of Ohio, which are captured by the Ohio Disease Reporting System and reported publicly [24]. In Ohio, case investigation is done by local, typically county, health departments and entered into the state system. Confirmed cases are defined as individuals who have a positive result on a laboratory molecular amplification test [1] or other approved testing methods. For each individual case, the system records the county of residence and the onset date of illness as determined by case investigators. If onset date is unknown, the system records the earliest date associated with the record. Onset date currently provides the index date for all reporting and analysis at the state level in Ohio. The reporting date is defined as the first date at which a case appears in the system and is often several days or possibly weeks after the onset date. Thus, when examining case counts by onset date, counts for the most recent days are incomplete because of the delay between onset date and reporting date. The reporting delay can also be impacted by system strains due to case volume and daily variation in reporting that differs by local health department.

To explore the impact of reporting patterns on the calculation and subsequent interpretation of public health alert indicators, we retrospectively consider four points in time during the pandemic: June 15, 2020, July 15, 2020, August 15, 2020, and September 15, 2020. At the time of the analysis, at least one full month had passed since September 15, and we assume that case reporting was complete through this date. For each date, we examine cases reported by that date and compute indicators related to the trends in case counts. Since the data are completely reported, we can compare the estimates from the indicators to the true trend observed in the onset cases at that point in time. This allows us to examine the performance of each proposed approach for determining if a county is experiencing an increasing trend of cases.

Rolling Average Approach

We refer to the current approach for determining if case rates are increasing used by the Ohio Department of Health as the rolling average approach [7]. This approach computes a 7-day rolling average of case counts, indexed by onset date, for each of the last 21 days. The alert indicator for an increasing trend in cases is flagged if there are 5 consecutive days of increasing averages at any point in the 21-day window. That is, the indicator flags if for 5 consecutive days the average is greater than the average the day before. This approach crudely accounts for daily reporting variation by averaging across 7 days but makes no attempt to account for reporting lag or any other sources of variation.

Spline Approach

A slightly more sophisticated but still simple approach was recommended by the CDC for detecting rebounds [8] and will be referred to as the spline approach. This approach is similar to the rolling average approach described above but fits a spline to the time series of rolling averages. For consistency, we used 7-day rolling averages over a 21-day period to align with the temporal window of interest for the alert system. We fit a cubic spline [8] to each series with 4 knots. By using a spline, we are able to smooth daily and other systematic variation in reporting patterns.
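To make the rolling average indicator concrete, the following minimal Python sketch applies the 5-consecutive-day rule to a series of daily onset-date counts; the function name and keyword defaults are ours, not from the paper. The spline approach differs only in that the same consecutive-increase check is applied to fitted spline values rather than to the raw rolling averages.

```python
import numpy as np

def rolling_average_indicator(onset_counts, window=7, lookback=21, run=5):
    """Flag an increasing trend if the trailing `window`-day average of daily
    onset-date counts rises on `run` consecutive days within the last
    `lookback` days. Requires at least window + lookback - 1 days of counts."""
    counts = np.asarray(onset_counts, dtype=float)
    # Trailing window-day averages for each of the last `lookback` days
    avgs = np.convolve(counts, np.ones(window) / window, mode="valid")[-lookback:]
    streak = 0
    for increased in np.diff(avgs) > 0:   # day-over-day comparisons of the averages
        streak = streak + 1 if increased else 0
        if streak >= run:
            return True
    return False
```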
Aligned with the CDC [8], we determine if there is an increasing trend by looking at the fitted values from the spline and determining if there are any 5-consecutive-day periods where the fit for each day is greater than the previous day. Like the rolling average approach, uncertainty is not incorporated into the decision-making process. Splines were estimated using the mgcv package in R [25].

Model-based Approach

In contrast to the simpler approaches, we explicitly model both the process for new onset cases and the reporting delay process. We extend the general framework outlined by previous work [12,18] by using an autoregressive spatial Bayesian structural time series, rather than a spline based model. While the spline based model is flexible, it relies on reasonably specifying knots and is not ideal for estimating beyond the range of the observed data. In addition, it can be more challenging to incorporate hierarchical structure when temporal trends may be quite different across locations, which has been the case for COVID-19. Instead, an autoregressive structure retains the ability to flexibly capture spatio-temporal trends while also linking more closely to the dynamics of infectious disease [26]. It also allows for added flexibility in specifying a spatially varying reporting delay process.

We follow the general set up outlined in previous work [12,16]. In Ohio, COVID-19 cases are reported daily, so we use a daily time scale. To reduce computation time, we take a moving window approach [14] that considers the past 90 days (T = 90). From April through September 2020, 94% of cases were reported within 2 weeks of onset and 98% of cases were reported within 30 days. To be conservative, we set a maximum reporting delay time of 30 days following onset (D = 30).

Outcome Model. Let $Y_{it}$ be the count of reported cases in county $i = 1, \ldots, N$ with onset date $t = 1, \ldots, T$. Note that $Y_{it}$ is assumed to be the true total count, which is only partially observed for time $t$ such that $t + D > T$. We assume
$$Y_{it} \sim \mathrm{Poisson}(\lambda_{it}), \qquad \log(\lambda_{it}) = O_i + \alpha_{it} + X_t \eta_i,$$
where $O_i$ is an offset of the log population of county $i$, $\alpha_{it}$ is the latent state of the process, $X_t$ is a design vector indicating the day of the week, and $\eta_i$ is the day of the week effect. Note that $X_t$ is parameterized using sum-to-0 effect coding, so $\alpha_{it}$ reflects the average of the process across days of the week. By using this structure for the model, we are able to remove daily reporting variation from the latent state, $\alpha_{it}$, through $X_t \eta_i$.

After removing the daily "seasonal" variation, we focus on the model for the latent state or structural part of the model. We use a semi-local linear trend model [27] to allow for some degree of longer term structure while still facilitating a very flexible model. That is, for $t > 1$,
$$\alpha_{it} = \alpha_{i(t-1)} + \delta_{i(t-1)} + \epsilon_{\alpha,it},$$
where $\epsilon_{\alpha,it} \stackrel{iid}{\sim} N(0, \tau^2_\alpha)$ and the initial value at $t = 1$ is $\alpha_{i1} \sim N(0, 100)$. Then, for the model for the trend, we let
$$\delta_{i1} = \delta + d_i + \epsilon_{\delta,i1}, \qquad \delta_{it} = \delta + d_i + \rho_\delta \left( \delta_{i(t-1)} - \delta - d_i \right) + \epsilon_{\delta,it},$$
where $\delta$ is a common statewide trend, $d_i$ is a county-specific spatial trend, and $\rho_\delta$ is an autoregressive term. Let $\epsilon_{\delta,it} \stackrel{iid}{\sim} N(0, \tau^2_\delta)$. A benefit of this parameterization is that it allows us to separate changes that are due to white noise ($\epsilon_{\alpha,it}$) from those that are due to more consistent temporal trends ($\delta_{it}$). A short simulation of these state equations is sketched below.
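The sketch below forward-simulates one county's latent state and trend under the semi-local linear trend equations just given. All parameter values are illustrative assumptions, and the diffuse initialization $\alpha_{i1} \sim N(0, 100)$ is replaced by zero for readability.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_latent_state(T=90, delta_bar=0.0, d_i=0.02, rho_delta=0.6,
                          tau_alpha=0.05, tau_delta=0.02):
    """Forward-simulate one county's latent state alpha_t and trend delta_t
    under the semi-local linear trend equations above. Parameter values are
    illustrative, and alpha_1 is set to 0 instead of the diffuse N(0, 100)."""
    alpha, delta = np.zeros(T), np.zeros(T)
    mean_trend = delta_bar + d_i                      # statewide trend + county effect
    delta[0] = mean_trend + rng.normal(0.0, tau_delta)
    for t in range(1, T):
        alpha[t] = alpha[t - 1] + delta[t - 1] + rng.normal(0.0, tau_alpha)
        delta[t] = mean_trend + rho_delta * (delta[t - 1] - mean_trend) \
                   + rng.normal(0.0, tau_delta)
    return alpha, delta

alpha, delta = simulate_latent_state()
population = 100_000
lam = np.exp(np.log(population) + alpha)   # expected counts, ignoring day-of-week effects
counts = rng.poisson(lam)                  # simulated onset-date case counts
```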
By using a stationary model for $\delta$, we are able to provide some structure around a longer term trend while retaining flexibility for local deviations in space and time.

To account for spatial correlation, we assume the trends in neighboring counties are correlated and specify an intrinsic conditional autoregressive model. That is,
$$d_i \mid d_{-i} \sim N\left( \frac{1}{w_{i+}} \sum_j w_{ij} d_j, \; \frac{\tau^2_d}{w_{i+}} \right),$$
where $d_{-i}$ is the set of counties excluding county $i$, $w_{ij}$ is an indicator of whether counties $i$ and $j$ are adjacent, $w_{i+} = \sum_{j \neq i} w_{ij}$, and $\tau^2_d$ is a variance. To ensure a valid process model, we enforce a sum-to-0 constraint on the $d_i$ [28]. We chose to incorporate spatial dependence in the trend to reflect a belief that cases in a county are likely to change in a similar fashion as cases in neighboring counties. This choice explicitly aligns with our general surveillance and risk evaluation strategy for counties, where we have implicitly considered trends in neighboring counties when making our assessments. Another added benefit is that this helps to stabilize estimates for counties with small populations by borrowing strength from neighboring counties.

We also assume county-specific effects of the day of the week. We assume that while variability exists between counties, the daily patterns are similar across the state. We assume the following hierarchical model:
$$\eta_i \stackrel{iid}{\sim} N(\eta, \tau^2_\eta I_6),$$
where $\eta$ is a vector of state average day of the week effects, $\tau^2_\eta$ is a variance, and $I_6$ is a $6 \times 6$ identity matrix. This allows each county to have its own daily pattern while borrowing strength across all counties in the state as warranted.

Reporting Model. Since we know that $Y_{it}$ is observed with reporting lag, we must specify a model for the delay. Let $Z_{itd}$ be the count of cases observed in county $i$ with onset date $t$ that are observed $d = 0, \ldots, D$ days after $t$. Note that $Z_{itd}$ corresponds to cases reported $d$ days after onset date $t$ and so is unobserved when $t + d > T$. We assume
$$Z_{it} \mid p_{it}, Y_{it} \sim \mathrm{Multinomial}(p_{it}, Y_{it}), \qquad p_{it} \sim GD(\alpha_{it}, \beta_{it}),$$
where $Z_{it} = (Z_{it0}, \ldots, Z_{itD})$, $p_{it}$ is the vector of proportions of the total $Y_{it}$ reported on each of the $D$ days, and $GD$ is the generalized Dirichlet distribution. We use a generalized Dirichlet distribution to properly account for potential overdispersion of the $p_{it}$ [12]. This leads to the following conditional distribution:
$$Z_{itd} \mid Z_{it(-d)}, Y_{it} \sim \mathrm{Beta\text{-}Binomial}\left( \alpha_{itd}, \beta_{itd}, \; Y_{it} - \sum_{j<d} Z_{itj} \right),$$
where $Z_{it(-d)}$ is the set of counts reported with a delay that is not $d$ days. To model more intuitive quantities, we reparameterize the distribution [12] in terms of the mean $\nu_{itd}$ and dispersion $\phi_d$, such that $\alpha_{itd} = \nu_{itd} \phi_d$ and $\beta_{itd} = (1 - \nu_{itd}) \phi_d$. Then, similar to a hazard function, we let $\mathrm{logit}(\nu_{itd}) = \psi_{itd}$ and assume the following AR1 model:
$$\psi_{i1d} = \beta_d + V_{1d} \xi_i + \epsilon_{\psi,i1d}, \qquad \psi_{itd} = \beta_d + V_{td} \xi_i + \rho_\psi \left( \psi_{i(t-1)d} - \beta_d - V_{(t-1)d} \xi_i \right) + \epsilon_{\psi,itd},$$
where $\beta_d$ is the average log odds of remaining cases being reported by delay $d$, $V_{td}$ is a design matrix indicating the day of the week, $\xi_i$ is a day of the week effect, $\rho_\psi$ is an autoregressive parameter, and $\epsilon_{\psi,itd}$ is an error term. We assume $\epsilon_{\psi,itd} \stackrel{iid}{\sim} N(0, \tau^2_\psi)$. Note that $V_{td}$ is parameterized using sum-to-0 effect coding.

The parameterization of the delay model allows us to accommodate several important features of COVID-19 reporting and should, in general, be customized to reflect the actual reporting process; a sketch of this sequential reporting mechanism is given below, and the specific features our parameterization accommodates are discussed next.
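This sketch splits a true onset-date count across delays via the sequential beta-binomial construction of the generalized Dirichlet model. The $\psi$ and $\phi$ values are illustrative assumptions, and for simplicity all cases still unreported at the maximum delay are assigned to it.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_reporting(y_total, psi, phi):
    """Split a true onset-date count across delays d = 0..D: at each delay,
    a beta-distributed fraction of the still-unreported cases is reported.
    psi[d] is the logit-mean reported fraction, phi[d] the dispersion."""
    D = len(psi)
    z = np.zeros(D, dtype=int)
    remaining = int(y_total)
    for d in range(D - 1):
        nu = 1.0 / (1.0 + np.exp(-psi[d]))               # mean fraction at delay d
        p = rng.beta(nu * phi[d], (1.0 - nu) * phi[d])   # overdispersed fraction
        z[d] = rng.binomial(remaining, p)
        remaining -= z[d]
    z[-1] = remaining   # leftover cases reported at the maximum delay
    return z

# Illustrative values: reporting odds rise with delay, moderate overdispersion.
z = simulate_reporting(y_total=120, psi=np.linspace(-1.0, 2.0, 31), phi=np.full(31, 50.0))
```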
First, reporting in Ohio is done by county health departments, who may have varying capacity and resources for timely reporting. Thus, the delay model is county-specific. We account for day of the week effects, much like in the model for the case counts, because in many counties, reporting primarily aligns with the work week. We also assume autoregressive temporal dependence to capture the potential for administrative backlogs. For example, if a smaller portion of cases is reported today, we may also expect a smaller proportion the next day because of a backlog. We do not incorporate a term to account for spatial dependence in the delay model, as we assume neighboring health departments are independent agencies, and so we would not anticipate spatial structure.

As with the outcome model, we allow for county-specific variability in day of the week reporting effects. We again assume similar patterns across the state and specify the following hierarchical model:
$$\xi_i \stackrel{iid}{\sim} N(\xi, \tau^2_\xi I_6),$$
where $\xi$ is a vector of state average day of the week effects, $\tau^2_\xi$ is a variance, and $I_6$ is a $6 \times 6$ identity matrix.

Prior Model and Computation. Since we fit our model in the Bayesian paradigm, we must specify prior distributions on all unknown parameters. For each element of $\eta$ and $\xi$, we assign independent normal priors with 0 mean and variance 1. We also assign $\delta$ a normal prior with 0 mean and variance 1. We use a variance of 1 for these prior distributions as each parameter reflects a relative daily difference on the log scale, and so these priors reflect a reasonable range for those parameters. We assign $\beta_d$ independent normal priors with mean 0 and variance 4, which puts adequate probability on reasonable values on the logit scale. We also assign all variance parameters inverse gamma priors with shape and scale both set to 0.5. All autoregressive parameters are assigned uniform prior distributions over -1 to 1.

To compare across approaches, we fit the model for each of the four dates considered. We treat the last day in the series (i.e., the current date) as missing and forecast the expected case count, which reduces model instability due to the rarity of cases reported on the day of onset ($d = 0$). The model was fit using a Markov chain Monte Carlo algorithm implemented in R using nimble [29]. The algorithm was run for 30,000 iterations with the first 15,000 discarded as burn-in and then thinned by keeping every 10th iteration. Computation time was approximately 20 hours, which would enable a daily update in practice.

To determine whether the cases were increasing in the most recent 21-day period, we use the posterior distribution of $\delta_{it}$. Since $\delta_{it}$ reflects the trend in county $i$ at time $t$, there is a net increasing trend over the past 21 days if $\sum_{t=T-20}^{T} \delta_{it} > 0$. Using the posterior distribution, we can directly compute the posterior probability of an increasing trend for each county.

True Change

One major advantage of a model-based approach is the flexibility to address more complex questions of interest. However, the goal of this paper is to assess the method used to calculate the OPHAS indicator for when cases are increasing in a county. To most closely align with the question as currently posed by the state of Ohio, we define a true increase in cases as when the number of cases in the most recent 7-day period is greater than the number of cases two weeks prior (this corresponds to comparing the first week with the last week in the most recent 21-day period). A sketch of these computations is given below.
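Given posterior draws of $\delta_{it}$ and fully reported counts, the quantities used in the comparison can be computed directly; a minimal sketch follows, where array shapes and function names are ours.

```python
import numpy as np

def true_increase(complete_counts):
    """True-increase rule: cases in the most recent 7 days exceed cases in the
    7-day period two weeks prior (first vs. last week of the 21-day window)."""
    c = np.asarray(complete_counts)
    return c[-7:].sum() > c[-21:-14].sum()

def prob_increasing(delta_draws):
    """Posterior probability of a net increasing trend over the last 21 days,
    i.e., P(sum of delta_it over the window > 0), from MCMC draws of shape
    (n_draws, T) for one county."""
    return float(np.mean(delta_draws[:, -21:].sum(axis=1) > 0))

def sensitivity_specificity(flagged, truth):
    """Compare a binary indicator to the true-increase labels across counties."""
    flagged, truth = np.asarray(flagged, bool), np.asarray(truth, bool)
    sens = (flagged & truth).sum() / truth.sum()
    spec = (~flagged & ~truth).sum() / (~truth).sum()
    return sens, spec
```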
While there are other potential ways to define a true increase, this most closely reflects the current definition used by the state of Ohio.

RESULTS

The results from applying each of the three methods for calculating increasing case rates are shown in Figure 1. There are several general observations that can be made across the four time points. The rolling average indicator generally does a poor job at accurately capturing counties where the cases have increased, and in most counties, there were true increases that went undetected. The spline indicator tends to make errors in the other direction by incorrectly flagging counties that did not meet the definition of a true increase. For the model-based approach, we generate a posterior probability of an increasing trend and highlight counties in yellow with a probability greater than 0.7 and in red those with a probability greater than 0.9.

In addition to visually examining the results, we calculated sensitivity and specificity for each approach in Table 1. The rolling average approach currently in use has a very low sensitivity of 0.20 and so is not successfully identifying counties with increasing trends. The spline approach has a much higher sensitivity of 0.87, but at the cost of a specificity of 0.48. Three cut-points are shown for the model-based posterior probabilities. As expected, the higher thresholds exhibit excellent specificity but lower sensitivity since they reflect stronger evidence of an increase. Using a cut-point of 0.5, which reflects that the trend is more likely increasing than decreasing, we estimate a sensitivity of 0.83 and a specificity of 0.60, which seems to most reasonably balance false positives and false negatives among the approaches considered.

The model-based approach also provides a rich set of additional results that can provide useful insights. Typically, the main goal of these models is to nowcast case counts. In Figure 2, we show nowcast estimates with their 90% credible interval in black and the true counts in red for an urban and a rural county. Averaging across the 4 time points, the 90% credible interval coverage was 0.96 over the 30-day period with incomplete reporting. The coverage was 0.92 in the most recent 7 days, which have the most incomplete reporting. Thus, our model performs as expected for nowcasting cases. In Figure 2, we also show time series plots of the latent state, which removes the daily seasonality, and the trend. The trend can also be viewed as the derivative of the latent state curve, so when it is greater than 0, it indicates increasing case counts.

DISCUSSION

We applied three approaches for assessing increasing trends in cases to completely observed data at four time points during the COVID-19 pandemic. When assessments are linked to onset date, case reporting is subject to reporting lag or delay. We illustrate that the simple approach currently used in OPHAS does not perform well, as it fails to account for lag and other variation in reporting. The spline approach outlined by the CDC is more sensitive, as it smooths over daily reporting variation, but also fails to account for lag. In contrast, the model-based approach accommodates lag, daily variation, and spatio-temporal dependence. The model-based approach can also directly summarize observed evidence of increasing trends and the associated uncertainty through posterior distributions. This results in a better trade-off between sensitivity and specificity and can allow for prioritization of areas where the evidence of an increase is strongest.

We note several key advantages to the model-based approach.
First, the Bayesian approach allows us to use calculated posterior summaries to directly communicate uncertainty. Public health officials are constantly considering trade-offs between different policy options, e.g., stay-at-home orders vs. economic impacts. Specific policy responses may only be warranted when evidence for an increase in COVID-19 cases is very strong and models indicate a very high level of certainty. Since the posterior probability reflects the probability of an increasing trend given the observed data, this quantity can be used to directly address the policy question of interest and provides an indication of how strong the evidence is in each county. Unlike the spline or 7-day rolling average approaches that return a binary decision, the ability of the model-based approach to convey additional meaning through continuous estimates is a clear advantage that can improve decision-making [30,31]. Second, by accounting for reporting delays and fully exploiting partially reported counts, the Bayesian approach can be more responsive to changing trends and provide earlier warning of changes in trends. Finally, the output from the Bayesian models (shown in Figure 2) provides important additional information that can be used by surveillance teams to understand trends over time. These results do require a team of epidemiologists to review the data, but still provide more information than the spline or 7-day moving average methods.

When responding to a pandemic, it is important that the public health and policy response is guided by the best available information. Often even the best information can be incomplete and uncertain. However, statistical models have been developed to overcome these issues and aid in characterizing and quantifying uncertainty. These models are not as simple as the approach currently used in Ohio, and this is one limitation of this method. Risk alert systems should be transparent and easy to understand. Complex modeling approaches are difficult to explain to the general public and can lead to mistrust in the data and, by extension, the system as a whole. However, with proper preparation, the model output can be summarized to simply communicate the core messages, while leaving much of the complexity and technical details to the experts implementing the model. Additionally, the Bayesian models provide a wide range of information that can be used internally by epidemiologists and other public health data scientists to directly address important policy questions. Given the clear improvements our Bayesian models offer, it is imperative that we take advantage of these methodological advances to better serve the public and inform the distribution of limited resources.

In conclusion, we have illustrated shortcomings in using simple approaches for public health decision-making. We have also illustrated how more sophisticated statistical models can account for the real-world complexities associated with surveillance data. Despite the added complexity, the output from these models can be summarized in a relatively simple and concise form that still appropriately reflects uncertainty. While we cannot eliminate all of the uncertainty in public health surveillance and decision-making, we must use approaches that embrace these challenges and deliver more accurate and honest assessments to policymakers.

Figure 1: Comparison between the rolling average indicator, the spline indicator, the true observed indicator of an increase, and the model-based posterior probability across 4 time points during the pandemic. For the model-based probabilities, counties outlined in red have a probability greater than 0.9 and counties outlined in yellow have a probability greater than 0.7.
Figure 2: Case nowcast projections and time series model components for an urban and a rural county on September 15, 2020. The left panel shows the posterior mean number of cases in bold, the 90% credible interval in black, and the true number of cases in red. The green vertical line indicates the divide between complete and incomplete reporting. The center panel shows the posterior mean and 90% credible interval of the latent state, which is the mean process on the log scale with daily variation removed. The right panel shows the posterior mean and 90% credible interval for the daily change, with a red reference line at 0.

Table 1: Estimated sensitivity and specificity of the rolling average indicator, the spline indicator, and the model-based indicator at 3 different posterior probability cut-points across the 4 dates examined.

Method              Sensitivity  Specificity
Rolling Average     0.20         0.96
Spline              0.87         0.48
Model-based: >0.9   0.07         1.00
Model-based: >0.7   0.46         0.93
Model-based: >0.5   0.83         0.60

REFERENCES

[1] Centers for Disease Control and Prevention. Coronavirus Disease 2019 (COVID-19) 2020 Interim Case Definition, approved August 5, 2020. 2020. https://wwwn.cdc.gov/nndss/conditions/coronavirus-disease-2019-covid-19/case-definition/2020/08/05/
[2] Wu S.L., Mertens A.N., Crider Y.S., et al. Substantial underestimation of SARS-CoV-2 infection in the United States. Nature Communications. 2020;11:1-10.
[3] Worobey M., Pekar J., Larsen B.B., et al. The emergence of SARS-CoV-2 in Europe and North America. Science. 2020;370:564-570.
[4] Fauver J.R., Petrone M.E., Hodcroft E.B., et al. Coast-to-Coast Spread of SARS-CoV-2 during the Early Epidemic in the United States. Cell. 2020;181:990-996.e5.
[5] Resolve to Save Lives. Staying Alert: Navigating COVID-19 Risk Toward a New Normal. 2020. https://preventepidemics.org/wp-content/uploads/2020/05/STAYING-ALERT-Navigating-COVID-19-Risk-Toward-a-New-Normal_final.pdf
[6] Utah Department of Health. Phased Guidelines for the General Public and Businesses to Maximize Public Health and Economic Reactivation. 2020.
[7] Ohio Department of Health. Summary of Alert Indicators. 2020. https://coronavirus.ohio.gov/static/OPHASM/Summary-Alert-Indicators.pdf
[8] Centers for Disease Control and Prevention. CDC Activities and Initiatives Supporting the COVID-19 Response and the President's Plan for Opening America Up Again. 2020. https://www.cdc.gov/coronavirus/2019-ncov/downloads/php/CDC-Activities-Initiatives-for-COVID-19-Response.pdf
[9] Brookmeyer R., Damiano A. Statistical methods for short-term projections of AIDS incidence. Statistics in Medicine. 1989;8:23-34.
[10] Kalbfleisch J., Lawless J. Inference Based on Retrospective Ascertainment: An Analysis of the Data on Transfusion-Related AIDS. Journal of the American Statistical Association. 1989;84:360-372.
[11] Lawless J.F. Adjustments for reporting delays and the prediction of occurred but not reported events. Canadian Journal of Statistics. 1994;22:15-31.
[12] Stoner O., Economou T. Multivariate hierarchical frameworks for modeling delayed reporting in count data. Biometrics. 2020;76:789-798.
[13] Kassteele J., Eilers P.H., Wallinga J. Nowcasting the Number of New Symptomatic Cases During Infectious Disease Outbreaks Using Constrained P-spline Smoothing. Epidemiology. 2019;30:737-745.
[14] McGough S.F., Johansson M.A., Lipsitch M., Menzies N.A. Nowcasting by Bayesian Smoothing: A flexible, generalizable model for real-time epidemic tracking. PLOS Computational Biology. 2020;16:1-20.
[15] Hohle M., Heiden M. Bayesian nowcasting during the STEC O104:H4 outbreak in Germany, 2011. Biometrics. 2014;70:993-1002.
[16] Salmon M., Schumacher D., Stark K., Höhle M. Bayesian outbreak detection in the presence of reporting delays. Biometrical Journal. 2015;57:1051-1067.
[17] Seaman S., Samartsidis P., Kall M., De Angelis D. Nowcasting CoVID-19 Deaths in England by Age and Region. medRxiv. 2020.
[18] Stoner O., Economou T. A Hierarchical Modelling Framework for Correcting Delayed Reporting in Spatio-Temporal Disease Surveillance Data. arXiv. 2019.
[19] Rotejanaprasert C., Ekapirat N., Areechokchai D., Maude R.J. Bayesian spatiotemporal modeling with sliding windows to correct reporting delays for real-time dengue surveillance in Thailand. International Journal of Health Geographics. 2020;19.
[20] Shaman J., Karspeck A., Yang W., Tamerius J., Lipsitch M. Real-time influenza forecasts during the 2012-2013 season. Nature Communications. 2013;4.
[21] Santillana M., Nguyen A.T., Dredze M., Paul M.J., Nsoesie E.O., Brownstein J.S. Combining Search, Social Media, and Traditional Data Sources to Improve Influenza Surveillance. PLOS Computational Biology. 2015;11:1-15.
[22] Wu J.T., Leung K., Leung G.M. Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: a modelling study. The Lancet. 2020;395:689-697.
[23] Scott S., Varian H. Predicting the present with Bayesian structural time series. Int. J. Math. Model. Numer. Optimisation. 2014;5:4-23.
[24] Ohio Department of Health. COVID-19 Dashboard. 2020. https://coronavirus.ohio.gov/wps/portal/gov/covid-19/dashboards/overview
[25] Wood S.N. Generalized Additive Models: An Introduction with R. 2nd ed. Chapman and Hall/CRC. 2017.
[26] Viboud C., Bjørnstad O.N., Smith D.L., Simonsen L., Miller M.A., Grenfell B.T. Synchrony, Waves, and Spatial Hierarchies in the Spread of Influenza. Science. 2006;312:447-451.
[27] Brodersen K.H., Gallusser F., Koehler J., Remy N., Scott S.L. Inferring causal impact using Bayesian structural time-series models. Ann. Appl. Stat. 2015;9:247-274.
[28] Banerjee S., Carlin B.P., Gelfand A.E. Hierarchical modeling and analysis for spatial data. Boca Raton, FL: Chapman & Hall/CRC. 2004.
[29] de Valpine P., Turek D., Paciorek C.J., Anderson-Bergman C., Temple Lang D., Bodik R. Programming with models: writing statistical algorithms for general model structures with NIMBLE. Journal of Computational and Graphical Statistics. 2017;26:403-417.
[30] Rayo M.F., Kowalczyk N., Liston B.W., White S., Patterson E.S. Comparing the Effectiveness of Alerts and Dynamically Annotated Visualizations (DAVs) in Improving Clinical Decision Making. Human Factors: the Journal of the Human Factors and Ergonomics Society. 2015;57:1002-1014.
[31] Rayo M.F., Moffatt-Bruce S.D. Alarm system management: evidence-based guidance encouraging direct measurement of informativeness to improve alarm response. BMJ Quality & Safety. 2015;24.
[]
[ "Finite Variation Sensitivity Analysis for Discrete Topology Optimization of Continuum Structures", "Finite Variation Sensitivity Analysis for Discrete Topology Optimization of Continuum Structures" ]
[ "Daniel Candeloro ", "Cunha · Breno ", "Vincenzo De Almeida ", "· Heitor ", "Nigro Lopes ", "Renato Pavanello ", "Daniel Candeloro Cunha [email protected] ", "\nDepartment of Computational Mechanics\nSchool of Mechan-ical Engineering\nUniversity of Campinas -R\nMendeleyev 200\n", "\nCidade Universitária\n13083-860CampinasBrazil\n" ]
[ "Department of Computational Mechanics\nSchool of Mechan-ical Engineering\nUniversity of Campinas -R\nMendeleyev 200", "Cidade Universitária\n13083-860CampinasBrazil" ]
[]
This paper proposes two novel approaches to perform more suitable sensitivity analyses for discrete topology optimization methods. To properly support them, we introduce a more formal description of the Bi-directional Evolutionary Structural Optimization (BESO) method, in which the sensitivity analysis is based on finite variations of the objective function. The proposed approaches are compared to a naive strategy; to the conventional strategy, referred to as First-Order Continuous Interpolation (FOCI) approach; and to a strategy previously developed by other researchers, referred to as High-Order Continuous Interpolation (HOCI) approach. The novel Woodbury approach provides exact sensitivity values and is a better alternative to HOCI. Although HOCI and Woodbury approaches may be computationally prohibitive, they provide useful expressions for a better understanding of the problem. The novel Conjugate Gradient Method (CGM) approach provides sensitivity values with arbitrary precision and is computationally viable for a small number of steps. The CGM approach is a better alternative to FOCI since, for appropriate initial conditions, it is always more accurate than the conventional strategy. The standard compliance minimization problem with volume constraint is considered to illustrate the methodology. Numerical examples are presented together with a broad discussion about BESO-type methods.
10.1007/s00158-021-03066-z
[ "https://arxiv.org/pdf/2104.04571v2.pdf" ]
233,210,282
2104.04571
1ae359a59b685e956c556e8c92f2862085517091
Finite Variation Sensitivity Analysis for Discrete Topology Optimization of Continuum Structures

Daniel Candeloro Cunha, Breno Vincenzo de Almeida, Heitor Nigro Lopes, Renato Pavanello

Corresponding author: Daniel Candeloro Cunha, [email protected]

Department of Computational Mechanics, School of Mechanical Engineering, University of Campinas - R. Mendeleyev 200, Cidade Universitária, 13083-860, Campinas, Brazil

Received: date / Accepted: date. Grant numbers: 2013/08293-7, 2019/05393-7, 2019/19237-7 and 2020/07391-9.

Keywords: Topology optimization · Sensitivity analysis · Finite variation · Discrete optimization · BESO

1 Introduction

In topology optimization problems, the goal is to obtain the topology of a structure which minimizes (or maximizes) a given objective function. In density methods, the structural domain is discretized in a mesh of finite elements and a density value is assigned to each one of them. It is 0 for void elements and 1 for solid elements. Therefore, for a fixed mesh, the vector of density values completely defines the topology of a structure. Topology optimization of continuum structures is essentially a large-scale non-linear integer programming problem. A common way to solve this binary optimization problem is to perform a continuous relaxation of the discrete variables, then solve the continuous optimization problem with a gradient-based method. A well established approach is to use the Solid Isotropic Material with Penalization (SIMP) interpolation scheme (Bendsøe, 1989; Zhou and Rozvany, 1991), in which a penalization parameter is used to inhibit intermediary density values in the solution. For structural compliance minimization, it can be shown that, if the penalization parameter is sufficiently large, the continuous solution corresponds to a solution of the original binary problem (Rietz, 2001; Martinez, 2005).
Although this approach is robust and can provide nearly discrete solutions with a proper tuning of the penalization parameter, methods that only consider discrete structures throughout their search algorithm have their own advantages. In discrete methods, no post-processing is necessary to classify remaining elements with intermediary density values; the interfaces between domains are always well defined; and multiple solutions can be stored, since a valid candidate for the optimized solution is obtained in each iteration. The discrete method considered in this work is the Bi-directional Evolutionary Structural Optimization (BESO) (Querin et al., 1998; Yang et al., 1999), developed after its uni-directional version, Evolutionary Structural Optimization (ESO) (Xie and Steven, 1993). This heuristic method produces only discrete topologies; it has been improved over the past years and now constitutes a well established branch of topology optimization methods (Huang and Xie, 2007; Xia et al., 2018a). It has been extended to a wide range of problems, among them: optimization considering multiple materials (Huang and Xie, 2009); problems with displacement constraints (Huang and Xie, 2010); problems with topology-dependent fluid pressure loads (Picelli et al., 2015; Cunha and Pavanello, 2017); multi-objective and multi-scale optimization (Yan et al., 2015); problems with multi-scale non-linear structures (Xia and Breitkopf, 2017); maximum stress minimization (Xia et al., 2018b); frequency responses minimization (Vicente et al., 2016); frequency gaps maximization (Lopes et al., 2021); design of acoustic mufflers (Azevedo et al., 2018); and design of piezoelectric energy harvesters with topology-dependent constraints (de Almeida et al., 2019).

There are alternative strategies to BESO-type methods, e.g., algorithms based on discrete mathematical programming (Beckers, 1999). By linearizing the functions, integer Linear Programming (LP) can be used to solve the discrete optimization problems. Such an approach, together with a strategy to easily include constraints, has been referred to as Topology Optimization of Binary Structures (TOBS). Moreover, for specific settings, it has been shown that the BESO method can be equivalent to a sequential LP approach (Tanskanen, 2002).

All optimization methods mentioned thus far are based on sensitivity analysis, which determines how the objective function is affected by a minimal variation of each design variable. In continuous relaxation approaches, such as SIMP, the sensitivity analysis consists of computing the gradient vector of the objective function, while in discrete methods, such as BESO, it should ideally consist of computing the objective function after switching the state of each discrete design variable. Since this is usually an excessively expensive procedure, estimations are used instead. The sensitivity analysis for discrete methods, based on finite variations, is henceforth referred to as Finite Variation Sensitivity Analysis (FVSA).

Another branch of topology optimization methods uses the concept of topological derivative to perform the sensitivity analysis (Céa et al., 2000; Novotny et al., 2003). It is commonly used together with level set methods to obtain optimal designs (Norato et al., 2007; van Dijk et al., 2013). In level set methods, interpolation schemes are unnecessary and topologies with no intermediary density values are obtained.
Similarly to discrete density methods, they are advantageous in problems where interfaces must be well delineated, e.g., the design of fluid flow channels (Sá et al., 2016). Despite these similarities, the development of the present paper is not directly applicable to this class of optimization methods.

The paper focuses on BESO-type methods. In order to develop the FVSA procedures, a more formal description of this class of methods is presented, in which the sensitivity analysis is properly defined to consider finite variations. To narrow down the focus of the study, the specific case of structural compliance minimization subject to a volume constraint is taken. It should be noted, however, that the proposed procedures can be extended to different objective functions, and they can be adjusted to produce improved linearizations for LP approaches.

BESO-type methods, also referred to as Sequential Element Rejections and Admissions (SERA) (Rozvany and Querin, 2002a,b), have two main hypotheses. If they are reasonably satisfied, these methods should be able to provide efficient optimized structures. It is assumed that the variation in the objective function is approximately equal to the sum of the variations which would occur if only one element were switched at a time (i.e., the objective function can be well approximated by an additively separable function). The second assumption is that the objective function is approximately linear with respect to the relaxed continuum density values (i.e., the objective function can be well approximated by a sum of linear continuous functions). If the second hypothesis is satisfied for a given continuous interpolation function, the sensitivity analysis in these methods becomes the same as the one from continuous relaxation approaches: it is performed by computing the gradient vector of the objective function, which is usually a fairly simple operation.

In Rozvany (2009), SERA and SIMP approaches were compared and some shortcomings of SERA methods were discussed, e.g., the lack of rigorous proof of efficacy due to its heuristic nature. As shown in Zhou and Rozvany (2001), SERA methods may produce topologies far from the global optimum. This means that this class of methods cannot be blindly applied; it is necessary to discuss the conditions in which it should be used, and how reasonable the hypotheses are under such conditions.

The BESO algorithm cannot circumvent the requirement of the first hypothesis. Even if the combined effects of switching multiple elements simultaneously could be predicted, a new algorithm would have to be developed to take this information into account. However, the second hypothesis may be discarded if the variation of the objective function after switching the state of an element can be accurately predicted. Such an improvement may overcome some of the current limitations of this class of methods.

The problem of predicting the effects of finite variations has been studied by some researchers in the field of topology optimization. In Mróz and Bojczuk (2003), finite variations in topological parameters were considered to optimize truss, beam and frame structures. In Bojczuk and Mróz (2009), topological derivatives together with finite topology modifications were used in a heuristic algorithm to optimize bending plates. In algorithms based on topological derivatives, high-order terms of the topological asymptotic expansion can be considered to improve predictions (de Faria et al., 2007; Hassine and Khelifi, 2016). Likewise, quadratic approximations can be produced from second-order derivatives of the objective function (Groenwold and Etman, 2010), which can be used in integer Quadratic Programming (QP) approaches (Liang and Cheng, 2019). For well behaved functions, QP should perform better than LP when dealing with finite variations. For continuum structures, specifically for compliance minimization ESO, Ghabraie (2015) proposed an accurate sensitivity analysis, using high-order terms of the Taylor series expansion of the objective function.

In this paper, three existing FVSA procedures are shown and two novel ones are proposed and tested. All sensitivity formulations are developed for the structural compliance minimization problem, with volume constraint. The finite element method is used to solve the equilibrium equation, which corresponds to a static linear-elastic problem, with homogeneous isotropic material, under constant load, constrained by homogeneous Dirichlet boundary conditions.

In Section 2, the stiffness matrix, the displacements vector, the volume of material and the structural compliance are described as functions of the density vector. In Section 3, a variation-based representation for functions of discrete variables is presented. Using such a representation for the objective function, Section 4 presents the considered heuristic optimization method, which is a more formal description of the BESO method that considers finite variations of the objective function in the sensitivity analysis. In Section 5, the five sensitivity analyses are presented: a naive approach, used as reference; the First-Order Continuous Interpolation (FOCI) approach, which corresponds to the standard procedure in the literature; the High-Order Continuous Interpolation (HOCI) approach, which corresponds to the one presented in Ghabraie (2015); the Woodbury approach, proposed in this work; and the Conjugate Gradient Method (CGM) approach, also proposed in this work. Then, some considerations are made about the error of the developed sensitivity expressions. In Section 6, numerical examples are discussed. In Section 7, the main conclusions are summarized.

In Appendix A, there is a proof of convergence of the Taylor series used in the HOCI sensitivity analysis for solid elements. In Appendix B, there is a counterexample to prove that the corresponding series may be divergent for void elements. In Appendix C, there is a procedure to update the selective inverse of the system matrix, which can be useful for both HOCI and Woodbury approaches. In Appendix D, there are some explicit sensitivity formulations for the CGM approach, considering different initial conditions, for 1 and 2 CGM steps.

2 Density-based topology
Likewise, quadratic approximations can be produced from second-order derivatives of the objective function (Groenwold and Etman, 2010), which can be used in integer Quadratic Programming (QP) approaches (Liang and Cheng, 2019). For well behaved functions, QP should perform better than LP when dealing with finite variations. For continuum structures, specifically for compliance minimization ESO, Ghabraie (2015) proposed an accurate sensitivity analysis, using high-order terms of the Taylor series expansion of the objective function. In this paper, three existing FVSA procedures are shown and two novel ones are proposed and tested. All sensitivity formulations are developed for the structural compliance minimization problem, with volume constraint. The finite element method is used to solve the equilibrium equation, which corresponds to a static linear-elastic problem, with homogeneous isotropic material, under constant load, constrained by homogeneous Dirichlet boundary conditions. In Section 2, the stiffness matrix, the displacements vector, the volume of material and the structural compliance are described as functions of the density vector. In Section 3, a variation-based representation for functions of discrete variables is presented. Using such representation for the objective function, Section 4 presents the considered heuristic optimization method, that is just a more formal description of the BESO method that considers finite variations of the objective function in the sensitivity analysis. In Section 5, the five sensitivity analyses are presented: a naive approach, used as reference; the First-Order Continuous Interpolation (FOCI) approach, which corresponds to the standard procedure in literature; the High-Order Continuous Interpolation (HOCI) approach, which corresponds to the one presented in Ghabraie (2015); the Woodbury approach, proposed in this work; and the Conjugate Gradient Method (CGM) approach, also proposed in this work. Then, some considerations are made about the error of the developed sensitivity expressions. In Section 6, numerical examples are discussed. And, in Section 7, the main conclusions are summarized. In Appendix A, there is a proof of convergence of the Taylor series used in the HOCI sensitivity analysis for solid elements. In Appendix B, there is a counterexample to prove that the corresponding series may be divergent for void elements. In Appendix C, there is a procedure to update the selective inverse of the system matrix, which can be useful for both HOCI and Woodbury approaches. In Appendix D, there are some explicit sensitivity formulations for the CGM approach, considering different initial conditions, for 1 and 2 CGM steps. Density-based topology For a fixed finite element mesh of N elements, the topology of a structure can be described by a density vector x ∈ {0, 1} N . If x i = 0, the ith element is a void element with no stiffness; if x i = 1, the ith element is a solid element with the material stiffness. The global matrix is given by K(x) = N i=1 x i K [0] i ,(1) where K [0] i is the stiffness matrix of the ith element, when it is solid. Since only homogeneous Dirichlet boundary conditions are considered, the constraints can be applied by simply removing from the stiffness matrices the rows and columns corresponding to restricted degrees of freedom. The matrices K [0] i are already the constrained ones. 
Furthermore, they are all symmetric matrices of dimensions $G \times G$, where G is the number of unconstrained degrees of freedom of the discretized structure. Each one of them is positive semi-definite and assumes zero values everywhere outside a small submatrix of dimensions $g_i \times g_i$, where $g_i$ is the number of unconstrained degrees of freedom of the ith element. To prevent K from becoming singular as solid elements are removed (turned into voids), a small stiffness is assigned to void elements. For a given soft-kill parameter $\varepsilon_k$, the stiffness matrix assigned to the ith element, when it is void, is $\varepsilon_k K_i^{[0]}$. Thus, the following base stiffness is assigned to the whole structure:

K_0 = \varepsilon_k \sum_{i=1}^{N} K_i^{[0]} . (2)

The elemental variation matrix can be defined as

K_i = (1 - \varepsilon_k)\, K_i^{[0]} ; (3)

then, the global matrix can be redefined as

K(x) = K_0 + \sum_{i=1}^{N} x_i K_i . (4)

The matrices $K_i$ are all symmetric positive semi-definite; and the matrices $K_0$ and K are both symmetric positive definite. The relation between the stiffness matrix K, the displacements vector u and the load vector f is given by the equilibrium equation

K u = f . (5)

Since constant load is considered, the dependence of u on x is expressed as

u(x) = [K(x)]^{-1} f . (6)

The total volume of material in the topology x is given by

V(x) = \sum_{i=1}^{N} x_i V_i , (7)

where $V_i$ is the volume of the ith element. By only considering meshes with elements of the same volume, the number of solid elements can be used as a measure of volume, so V(x) can be redefined as

V(x) = \sum_{i=1}^{N} x_i . (8)

The structural compliance C is defined as

C = \frac{1}{2} u^T K u (9)

and its dependence on x is given by

C(x) = \frac{1}{2} f^T [K(x)]^{-1} f . (10)

3 Discrete function representation

The domain $\{0,1\}^N$, in which the design variables are defined, has $2^N$ elements. Given an ordering rule for the possible entries, any discrete scalar function h(x) of N binary variables can be defined by a finite ordered set of output values, as illustrated in Fig. 1. Another way of representing such a function is by variations around a point $\bar{x}$. Denoting by $N_v$ the number of 0-valued terms in $\bar{x}$, and by $N_s$ the number of 1-valued terms in it, a variation vector can be defined in $\{0,1\}^{N_v} \times \{0,-1\}^{N_s}$ as

y = x - \bar{x} . (11)

Based on $\bar{x}$, if $y_i = 0$, there is no variation in the state of the ith element; if $y_i = \pm 1$, the state is switched (from void to solid, or from solid to void). Thus, for a given $\bar{x}$, a new function can be defined as

\tilde{h}(y) = \tilde{h}(y(x)) = h(x) . (12)

It can be parameterized by a scalar $\alpha^0$ and N tensors of ascending order $\alpha^k$, as shown below:

\tilde{h}(y) = \alpha^0 + \sum_{k=1}^{N} \alpha^k (\cdot)^k y^k . (13)

The $(\cdot)^k$-product represents the operation given by

\alpha^k (\cdot)^k y^k = \sum_{i_1=1}^{N} \sum_{i_2=1}^{N} \ldots \sum_{i_k=1}^{N} \alpha^k_{i_1 i_2 \ldots i_k}\, y_{i_1} y_{i_2} \ldots y_{i_k} . (14)

Except for $\alpha^1$, which is a vector, each tensor $\alpha^k$ is strictly upper triangular, so $\alpha^k_{i_1 i_2 \ldots i_k}$ only assumes non-zero values when $i_1 < i_2 < \ldots < i_k$. This means that $\alpha^k$ is defined by $\binom{N}{k}$ parameters. Together with the scalar $\alpha^0$, they all sum up to $2^N$. The scalar $\alpha^0$ corresponds to the value of the function without variation, $\alpha^0 = \tilde{h}(0) = h(\bar{x})$. The ith term of the 1st-order tensor, $\alpha^1_i$, is related to the variation of h when only $x_i$ is switched. The $i_1 i_2$th term of the 2nd-order tensor, $\alpha^2_{i_1 i_2}$ with $i_1 < i_2$, takes into account the coupled effect of switching both $x_{i_1}$ and $x_{i_2}$. It somewhat predicts how $\alpha^1_{i_1}$ (or $\alpha^1_{i_2}$) would change after switching $x_{i_2}$ (or $x_{i_1}$) and rewriting $\tilde{h}$ around the new point.
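To make the variation-based representation concrete, the following sketch evaluates the scalar $\alpha^0$ and the vector $\alpha^1$ of an arbitrary discrete function by exhaustively visiting the 1-neighborhood of $\bar{x}$, in the spirit of Eqs. (11)-(13); it is an illustrative helper, not part of the described method.

```python
import numpy as np

def first_order_parameters(h, x_bar):
    """alpha^0 and alpha^1 of the representation in Eq. (13),
    obtained from the values of h on the 1-neighborhood of x_bar."""
    alpha0 = h(x_bar)
    alpha1 = np.empty(len(x_bar))
    for i in range(len(x_bar)):
        x_solid = x_bar.copy(); x_solid[i] = 1   # element i forced solid
        x_void = x_bar.copy(); x_void[i] = 0     # element i forced void
        alpha1[i] = h(x_solid) - h(x_void)       # difference between the two states
    return alpha0, alpha1
```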
As a general interpretation, the kth-order tensor $\alpha^k$ is related to the combined effect of switching k variables simultaneously. For a given non-negative integer $d \le N$, let the d-neighborhood of $\bar{x}$ be defined as the set $B_d(\bar{x})$ of vectors x that differ from $\bar{x}$ in at most d terms, which can be expressed as

B_d(\bar{x}) = \left\{ x \in \{0,1\}^N \;\middle|\; \sum_{i=1}^{N} |x_i - \bar{x}_i| \le d \right\} , (15)

so, for any $\bar{x}$, $\{\bar{x}\} = B_0(\bar{x}) \subset B_1(\bar{x}) \subset \ldots \subset B_N(\bar{x}) = \{0,1\}^N$. A reduced domain may be considered to ignore topologies that are too far away from the current one. For a specified parameter n, if only points in $B_n(\bar{x})$ are considered, y can assume at most n non-zero values, i.e., $y^T y \le n$. Under this restriction, the tensors $\alpha^k$ become irrelevant for $k > n$, so the function can be redefined with only $\sum_{k=0}^{n} \binom{N}{k}$ parameters as

\overset{\approx}{h}(y) = \alpha^0 + \sum_{k=1}^{n} \alpha^k (\cdot)^k y^k . (16)

Although Eq. (13) presents a very compact algebraic form for the function $\tilde{h}$, it may be challenging to readily understand it. In Fig. 2, for a particular case with N = 4, $\bar{x}$ is defined and all the possible d-neighborhood sets are presented. The topologies are linked to their N immediate neighbors, composing a graph. By assigning proper values to the edges, this graph provides an equivalent representation of $\tilde{h}$, a less compact but clearer one. The function value in the leftmost node of the graph is given by $h(\bar{x}) = \tilde{h}(0) = \alpha^0$. Any topological variation corresponds to a path in the graph and produces a variation $\Delta \tilde{h}$ in the function. Thus, the edge-values are defined so their sum over a path results in $\Delta \tilde{h}$. To use an undirected graph, it can be established that steps from left to right add up the values, and steps from right to left subtract them. There are $\binom{4}{1} = 4$ topologies in the second layer, so 4 parameters (related to $\alpha^1$) must be defined to assign the 4 edge-values between the first and second layers. There are $\binom{4}{2} = 6$ topologies in the third layer, so 6 parameters (related to $\alpha^2$) must be defined to assign the 12 edge-values between the second and third layers. There are $\binom{4}{3} = 4$ topologies in the fourth layer, so 4 parameters (related to $\alpha^3$) must be defined to assign the 12 edge-values between the third and fourth layers. Finally, since there is $\binom{4}{4} = 1$ topology in the last layer, 1 parameter (related to $\alpha^4$) must be defined to assign the 4 edge-values between the fourth and last layers. When considering $y^T y \le n$, the tensors $\alpha^k$ become irrelevant for $k > n$ because the information they carry is about unreachable topologies, i.e., topologies outside the n-neighborhood of $\bar{x}$. Furthermore, the reduced function $\overset{\approx}{h}$ can also be interpreted as a truncated approximation of $\tilde{h}$. It estimates $\tilde{h}$ values outside $B_n(\bar{x})$ by disregarding some of the combined effects of simultaneously switching more than n elements.

4 Heuristic optimization

It is desired to solve the discrete optimization problem

x^* = \arg\min_x C(x) \quad \text{subject to} \quad V(x) = V^* , (17)

where $V^*$ corresponds to a specified target volume for the structure. Considering a point $\bar{x}$ of volume $V(\bar{x}) = \bar{V}$, the optimization problem can be written in its variation-based form as

y^* = \arg\min_y \tilde{C}(y) \quad \text{subject to} \quad \sum_{i=1}^{N} y_i = V^* - \bar{V} . (18)

The strategy to solve this problem is to start from a point $\bar{x}^{(0)}$ and progress iteratively towards a local minimum, defined as follows: for a given non-negative integer $d \le N$, a d-local minimum of a function h(x) is any point $x^*$ such that $h(x^*) \le h(x)$, $\forall x \in B_d(x^*)$.
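The d-neighborhood test of Eq. (15) is straightforward to express in code; a minimal sketch with hypothetical helper names is given below.

```python
import numpy as np

def variation_vector(x, x_bar):
    """Variation y = x - x_bar, Eq. (11): entries in {-1, 0, +1}."""
    return x.astype(int) - x_bar.astype(int)

def in_d_neighborhood(x, x_bar, d):
    """Membership test x in B_d(x_bar), Eq. (15): at most d switched elements."""
    return int(np.abs(variation_vector(x, x_bar)).sum()) <= d
```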
For a specified parameter n, the procedure consists in performing, in each iteration, an optimal move in the n-neighborhood of the current point. Therefore, $\bar{x}^{(j+1)} \in B_n(\bar{x}^{(j)})$ is obtained by solving the subproblem

y^* = \arg\min_y \overset{\approx}{C}(y) \quad \text{subject to} \quad \sum_{i=1}^{N} y_i = \bar{V}^{(j+1)} - \bar{V}^{(j)} , \quad y^T y \le n , (19)

where the variation vector y and the function $\overset{\approx}{C}(y)$ are defined around $\bar{x}^{(j)}$. As in Eq. (16), $\overset{\approx}{C}$ corresponds to the reduced objective function given by

\overset{\approx}{C}(y) = \alpha^0 + \sum_{k=1}^{n} \alpha^k (\cdot)^k y^k . (20)

Since the volume change is limited by n, a volume value $\bar{V}^{(j)}$ must be assigned for each iteration, assuring a proper progression from $\bar{V}^{(0)}$ to $V^*$. If it can be performed, this optimization procedure is guaranteed to converge to an n-local minimum of C. However, the $\sum_{k=0}^{n} \binom{N}{k}$ parameters of $\overset{\approx}{C}$ are not easily obtained and the subproblem is not much simpler than the original one. Thus, a heuristic approach is adopted. Instead of optimizing $\overset{\approx}{C}$, the subproblem is redefined for the additively separable function

\hat{C}(y) = \alpha^0 + \alpha^1 \cdot y . (21)

This means that the combined effects, given by $\alpha^k$ for k from 2 to n, are disregarded. The scalar $\alpha^0$ corresponds to $C(\bar{x}^{(j)})$ and the vector $\alpha^1$ is given by

\alpha^1_i = C(\bar{x}^{(j)}, x_i = 1) - C(\bar{x}^{(j)}, x_i = 0) , (22)

where the arguments $(\bar{x}^{(j)}, x_i = 1)$ and $(\bar{x}^{(j)}, x_i = 0)$ denote vectors that are equal to $\bar{x}^{(j)}$ except at their ith term, which assumes the explicitly defined value. In order to obtain $\alpha^0$ and $\alpha^1$, C(x) must be known for all $x \in B_1(\bar{x}^{(j)})$. The vector $\alpha^1$ is the sensitivity of $\hat{C}$ to any change on x. The class of optimization algorithms considered in this work is based on the estimation of this sensitivity vector. Functions that measure the volume variation, VV(y), and the topological variation, TV(y), can be defined as

VV(y) = \sum_{i=1}^{N} y_i (23)

and

TV(y) = \sum_{i=1}^{N} y_i^2 . (24)

The constraints over these variations are expressed by specified values for the volume variation $\bar{V}^{(j+1)} - \bar{V}^{(j)}$, which will be denoted by $VV^{(j)}$, and for the maximal topological variation n, which will be denoted by $TV^{(j)}_{max}$. Both must be assigned for each iteration of the optimization procedure. It should be noted that $VV^{(j)}$ corresponds to the Evolutionary Rate (ER) and $TV^{(j)}_{max}$ is related to the maximal Addition Ratio ($AR_{max}$) from the BESO method, as shown below:

VV^{(j)} = N \cdot ER ; (25)

TV^{(j)}_{max} = N \cdot (ER + 2\, AR_{max}) . (26)

Finally, the simplified subproblem

y^* = \arg\min_y \hat{C}(y) \quad \text{subject to} \quad VV(y) = VV^{(j)} , \quad TV(y) \le TV^{(j)}_{max} (27)

can be solved as follows: to obtain $\bar{x}^{(j+1)}$, the elements are ordered by their sensitivity values, then the elements with higher values are turned into voids while the elements with lower values are turned into solids, ensuring that the volume variation and topological variation constraints are both satisfied (see the sketch after this paragraph). It is assumed that, starting from $\bar{x}^{(0)}$, successive solutions of the simplified subproblems move towards an n-local minimum of the original problem $x^*$. If such an assumption is realistic, after a finite number of iterations, the solutions of the simplified subproblems will oscillate around $x^*$, i.e., $\exists\, p, J \mid \bar{x}^{(j)} \in B_p(x^*),\ \forall j > J$. The smallest value for p and its corresponding J represent, respectively, the accuracy of the method and the number of iterations that characterizes its convergence. After reaching the target volume $V^*$, the topology vector should be stored whenever the objective function assumes a value lower than the smallest one obtained thus far.
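The ordering-based solution of the simplified subproblem (27) can be sketched as follows. The split between additions and removals shown here is one admissible choice satisfying the constraints (the largest admissible topological variation); variable names and the tie-breaking behavior of argsort are implementation details, not prescribed by the method.

```python
import numpy as np

def beso_update(x, alpha1, vv, tv_max):
    """Solve the simplified subproblem (27) by ordering the elements.

    Solid elements with the highest sensitivity values become voids and
    void elements with the lowest values become solids, so that
    VV(y) = vv and TV(y) <= tv_max (vv is negative when removing volume).
    """
    solids = np.flatnonzero(x == 1)
    voids = np.flatnonzero(x == 0)
    # additions a and removals r satisfy a - r = vv and a + r <= tv_max
    a = max(0, (tv_max + vv) // 2)
    r = a - vv
    x_new = x.copy()
    x_new[solids[np.argsort(alpha1[solids])[::-1][:r]]] = 0  # highest alpha1
    x_new[voids[np.argsort(alpha1[voids])[:a]]] = 1          # lowest alpha1
    return x_new
```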
A patience parameter defines the maximum number of iterations that the algorithm can perform with no improvements in the objective function. If this number is reached, the best topology obtained thus far is returned as the optimized result. This is a coherent stopping criterion for this algorithm; it is similar to the one used in Xia et al. (2018b). The main hypothesis here is that, by making small alterations in the topology, only local effects, related to $\alpha^1$ and $B_1$, are relevant. Most topology updates only change the shape of the existing solid boundaries; if a small number of elements is switched, it is reasonable to consider that the global behavior of the structure will be nearly the same between iterations. However, if all of the switched elements are clustered, the combined effects in that disturbed region would not be negligible anymore. Such a case should not spoil the method, though, since, if there is a critical region, many of the elements in it should indeed be switched. The most problematic scenario is when new boundaries are created or old ones are removed, which can produce unpredictable changes, and undesirable moves may be made. Nevertheless, practice shows that, when applied methodically, this heuristic approach consistently provides satisfactory results (Xia et al., 2018a). As in other density methods, sensitivity filters can be used to deal with the checkerboard problem and mesh dependency (Sigmund and Petersson, 1998; Sigmund and Maute, 2012). Also, to improve stability, momentum strategies can be included. A popular approach is to average the sensitivity vector with its previous values throughout the iterations, properly weighting each term (Huang and Xie, 2007). In this work, a simple conic filter was used to smoothen the sensitivity map and equal weights were used for the momentum: the filtered sensitivity number of the current iteration is added to the final sensitivity number of the last one (which contains previous momentum effects), then the sum is divided by 2.

5 Finite Variation Sensitivity Analysis (FVSA)

The linear system of Eq. (5) must be solved to obtain the displacements vector and compute the objective function. Other than that, the only potentially costly task in the optimization algorithm is to perform the sensitivity analysis. This section presents some ways to obtain or estimate the sensitivity vector $\alpha^1$, defined in Eq. (22). The usual approach is to define a continuous interpolation function to describe the material behavior for intermediate density values, then obtain the sensitivity vector from the truncated Taylor series of the objective function. In the proposed alternative approaches, the sensitivity vector is obtained without continuous interpolations, by considering finite variations of the elemental stiffness matrices. The proposed FVSA can also be interpreted as an improved linearization for integer LP approaches. The usual linearization takes into account only local information (function and gradient values in an extreme point); however, since the variables are binary, the local behavior is unimportant: the relevant information is only at the extreme points. Therefore, a more appropriate linearization can be produced through FVSA, which is a linear interpolation of extreme values. This is illustrated in Fig. 3, in which the continuous relaxation of $h(x_i)$ corresponds to a simple cubic polynomial.
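The difference between the two linearizations of Fig. 3 can be reproduced with a toy cubic relaxation; the polynomial below is an arbitrary stand-in, not the one used for the figure.

```python
def h(x):           # a simple cubic continuous relaxation of h(x_i)
    return x**3 - 1.5 * x**2 + x

def dh(x):          # its derivative
    return 3.0 * x**2 - 3.0 * x + 1.0

# local LP linearization: tangent line at the extreme point x = 1
tangent = lambda x: h(1.0) + dh(1.0) * (x - 1.0)
# FVSA linearization: interpolation of the extreme values h(0) and h(1)
secant = lambda x: h(0.0) + (h(1.0) - h(0.0)) * x

print(h(0.0), tangent(0.0), secant(0.0))  # 0.0, -0.5, 0.0
```

For a binary variable only the extreme values matter, so the secant reproduces them exactly while the tangent can be far off at the opposite extreme.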
Even if it is not possible to obtain the exact function variation between extreme points, estimated values can be enough to produce better results than local linearizations.

5.1 Naive approach

The most obvious way to perform the FVSA is through exhaustive effort. For a given topology, the displacements vector after switching the state of the ith element can be computed as

\bar{\bar{u}} = \begin{cases} [\bar{K} + K_i]^{-1} f , & \text{if } x_i = 0 , \\ [\bar{K} - K_i]^{-1} f , & \text{if } x_i = 1 , \end{cases} (28)

where $\bar{K}$ is the stiffness matrix for the current topology. Thus, each term of the sensitivity vector can be obtained by

\alpha^1_i = \begin{cases} -\frac{1}{2} f^T [\bar{u} - \bar{\bar{u}}] , & \text{if } x_i = 0 , \\ -\frac{1}{2} f^T [\bar{\bar{u}} - \bar{u}] , & \text{if } x_i = 1 , \end{cases} (29)

where $\bar{u}$ is the displacements vector for the current topology. Evidently, this is not a viable strategy, since N linear systems must be solved before each topology update. Yet, it can be used as a reference when evaluating other strategies.

5.2 First-Order Continuous Interpolation (FOCI) approach

By considering a continuous density vector, $x \in [0,1]^N$, an interpolation function $\gamma(x_i)$ can be defined with the following properties: $\gamma(0) = 0$; $\gamma(1) = 1$; $\gamma$ is differentiable and monotonic in [0, 1]. Thus, the global stiffness matrix can be redefined as a differentiable function of x, given by

K(x) = K_0 + \sum_{i=1}^{N} \gamma(x_i)\, K_i . (30)

By using such K(x) in Eqs. (6) and (10), u and C can also be described as differentiable functions of x. Then, a first-order approximation of $C(x_i)$ can be obtained by disregarding the high-order terms of its Taylor expansion centered at $\bar{x}$. From this approximation, the sensitivity vector can be calculated as

\alpha^1_i = -\frac{1}{2} \frac{\partial \gamma}{\partial x_i}\, \bar{u}^T K_i \bar{u} , (31)

which is easily obtained since $\bar{u}$ is the same for every element and all the matrices $K_i$ are known. Since the sensitivity values are only used to compare elements, they can all be multiplied by a positive value without affecting the optimization process. Therefore, the sensitivity vector can be redefined as

\alpha^1_i = \begin{cases} -\frac{\varepsilon_v}{2}\, \bar{u}^T K_i \bar{u} , & \text{if } x_i = 0 , \\ -\frac{1}{2}\, \bar{u}^T K_i \bar{u} , & \text{if } x_i = 1 , \end{cases} (32)

where a penalization factor is defined for void elements as $\varepsilon_v = \frac{\partial \gamma}{\partial x_i}(0) / \frac{\partial \gamma}{\partial x_i}(1)$. The interpolation function is only relevant to define the derivatives at the two extreme points, so there is no reason to define it differently from the linear interpolation while using a specified penalization parameter for void elements. As long as the same interpolation function is considered for every element, the presence of the parameter $\varepsilon_v$ covers all possible definitions for such a function.

5.3 High-Order Continuous Interpolation (HOCI) approach

If the whole Taylor expansion of $C(x_i)$ is considered, the sensitivity vector can be obtained as the series

\alpha^1_i = \begin{cases} \sum_{a=1}^{\infty} \frac{1}{a!} \frac{\partial^a C}{\partial x_i^a}(\bar{x}) , & \text{if } x_i = 0 , \\ \sum_{a=1}^{\infty} \frac{(-1)^{a+1}}{a!} \frac{\partial^a C}{\partial x_i^a}(\bar{x}) , & \text{if } x_i = 1 . \end{cases} (33)

For a general interpolation function $\gamma$, this series becomes a quite complex expression. In Ghabraie (2015), it was simplified by considering $\gamma(x_i) = x_i$. In such a case, the sensitivity vector is given by

\alpha^1_i = \begin{cases} \frac{1}{2} \sum_{a=1}^{\infty} f^T \left[-\bar{K}^{-1} K_i\right]^a \bar{K}^{-1} f , & \text{if } x_i = 0 , \\ \frac{1}{2} \sum_{a=1}^{\infty} -f^T \left[\bar{K}^{-1} K_i\right]^a \bar{K}^{-1} f , & \text{if } x_i = 1 . \end{cases} (34)

There is an alternative, more natural way to obtain this same expression. Since the displacements field u is a function of K(x), with no explicit dependence on x, its variation can be evaluated with respect to variations of K. Considering K an independent variable, Eq. (6) can be rewritten as the Taylor series

u(K + \Delta K) = \sum_{a=0}^{\infty} \left[-K^{-1} \Delta K\right]^a u(K) , (35)

from which Eq. (34) can be obtained as the ith sensitivity value.
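A direct transcription of the truncated series of Eq. (34) is sketched below, assuming dense NumPy matrices and repeated linear solves instead of an explicit inverse; the truncation order q is a user choice, and the function name is hypothetical.

```python
import numpy as np

def hoci_sensitivity(x_i, Ki, K_bar, f, q=8):
    """Truncated HOCI series of Eq. (34), up to order q.

    Convergence is guaranteed for solid elements (Appendix A); for void
    elements the series may diverge, so a small q must be used with care.
    """
    u = np.linalg.solve(K_bar, f)         # current displacements
    sign = -1.0 if x_i == 0 else 1.0      # (-K^-1 K_i)^a vs (K^-1 K_i)^a
    term = u.copy()
    total = 0.0
    for _ in range(q):
        term = sign * np.linalg.solve(K_bar, Ki @ term)
        total += f @ term
    return 0.5 * total if x_i == 0 else -0.5 * total
```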
There is a more convenient form for the sensitivity expression that will shorten the discussion about the series convergence. It can be written in terms of the symmetric positive semi-definite matrix

\bar{A}_i = \sqrt{K_i}\, \bar{K}^{-1} \sqrt{K_i} (36)

and the vector

\bar{v}_i = \sqrt{K_i}\, \bar{u} . (37)

The resultant sensitivity expression is given by

\alpha^1_i = \begin{cases} -\frac{1}{2} \sum_{a=1}^{\infty} \bar{v}_i^T \left[-\bar{A}_i\right]^{a-1} \bar{v}_i , & \text{if } x_i = 0 , \\ -\frac{1}{2} \sum_{a=1}^{\infty} \bar{v}_i^T \left[\bar{A}_i\right]^{a-1} \bar{v}_i , & \text{if } x_i = 1 . \end{cases} (38)

The convergence of such a series depends on the eigenvalues of $\bar{A}_i$. Let $\bar{\Lambda}_i$ be the diagonal eigenvalues matrix and $\bar{\Phi}_i$ be the orthogonal eigenvectors matrix, so that

\bar{A}_i = \bar{\Phi}_i\, \bar{\Lambda}_i\, \bar{\Phi}_i^T ; (39)

by defining the vector $\bar{w}_i$ as

\bar{w}_i = \bar{\Phi}_i^T \bar{v}_i , (40)

the sensitivity vector can be written as

\alpha^1_i = \begin{cases} -\frac{1}{2} \sum_{a=1}^{\infty} \bar{w}_i^T \left[-\bar{\Lambda}_i\right]^{a-1} \bar{w}_i , & \text{if } x_i = 0 , \\ -\frac{1}{2} \sum_{a=1}^{\infty} \bar{w}_i^T \left[\bar{\Lambda}_i\right]^{a-1} \bar{w}_i , & \text{if } x_i = 1 . \end{cases} (41)

For any vector $\bar{w}_i$, those series are convergent if and only if $\|\bar{A}_i\|_2 = \max(\bar{\Lambda}_i) < 1$. In Appendix A, it is shown that this condition is always satisfied when $x_i = 1$. And in Appendix B, it is shown that it may not be satisfied when $x_i = 0$. Therefore, through the truncated series from Eqs. (38) or (41), this approach can estimate, with arbitrary precision, the sensitivity values for solid elements. Since it may fail to provide sensitivity values for void elements, it requires further considerations. A possible strategy is to assign all void sensitivity numbers as 0. In such a case, the solid part of the topology would guide the optimization process, and void elements could only be turned into solid ones through the effects of a sensitivity filter.

5.4 Woodbury approach

This approach is named after Woodbury because the Woodbury matrix identity is used to obtain the sensitivity expressions. For a given perturbation term added to a matrix, the Woodbury formula provides an expression for the inverse of the perturbed matrix. Its general formulation and some applications can be seen in Hager (1989). Although it is named after Woodbury's report (Woodbury, 1950), published in 1950, it should be noted that it had already appeared in previous papers of other authors, e.g., Guttman (1946). Without the need of an interpolation function, after switching an element state, the inverse of the updated global matrix can be obtained by the Woodbury identity as

\left[\bar{K} \pm K_i\right]^{-1} = \bar{K}^{-1} \mp \bar{K}^{-1} \sqrt{K_i} \left[I \pm \bar{A}_i\right]^{-1} \sqrt{K_i}\, \bar{K}^{-1} . (42)

It can be used to compute $\bar{\bar{u}}$ from Eq. (28), resulting in the following expression for the sensitivity vector:

\alpha^1_i = \begin{cases} -\frac{1}{2}\, \bar{v}_i^T \left[I + \bar{A}_i\right]^{-1} \bar{v}_i , & \text{if } x_i = 0 , \\ -\frac{1}{2}\, \bar{v}_i^T \left[I - \bar{A}_i\right]^{-1} \bar{v}_i , & \text{if } x_i = 1 , \end{cases} (43)

which can be written in terms of $\bar{\Lambda}_i$ and $\bar{w}_i$ as

\alpha^1_i = \begin{cases} -\frac{1}{2}\, \bar{w}_i^T \left[I + \bar{\Lambda}_i\right]^{-1} \bar{w}_i , & \text{if } x_i = 0 , \\ -\frac{1}{2}\, \bar{w}_i^T \left[I - \bar{\Lambda}_i\right]^{-1} \bar{w}_i , & \text{if } x_i = 1 . \end{cases} (44)

It should be noted that, when $\|\bar{A}_i\|_2 < 1$, those expressions correspond to the sums of the power series from Eqs. (38) and (41). Moreover, this approach can provide sensitivity values even for void elements such that $\|\bar{A}_i\|_2 \ge 1$. From Eq. (44), it should be clear that the FOCI approach, with linear interpolation, always overestimates $\alpha^1_i$ for void elements and always underestimates it for solid elements. Thus, the void penalization factor $\varepsilon_v$, from Eq. (32), should always be defined in $[0, 1[$ for more accurate comparisons between elements in different states. In both HOCI and Woodbury approaches, each matrix $\bar{A}_i = \sqrt{K_i}\, \bar{K}^{-1} \sqrt{K_i}$ must be known to obtain the sensitivity vector.
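Eq. (43) translates directly into the sketch below. For illustration it takes a dense inverse of $\bar{K}$ and uses an eigendecomposition to form $\sqrt{K_i}$; in practice only a selective inverse over the support of $K_i$ is required, as discussed next.

```python
import numpy as np

def woodbury_sensitivity(x_i, Ki, K_inv, u_bar):
    """Exact sensitivity of one element via Eq. (43).

    Ki    : elemental variation matrix (symmetric PSD)
    K_inv : inverse of the current global stiffness (dense, for illustration)
    """
    # symmetric square root of K_i from its eigendecomposition
    w, V = np.linalg.eigh(Ki)
    sqrt_Ki = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T
    A_i = sqrt_Ki @ K_inv @ sqrt_Ki              # \bar{A}_i, Eq. (36)
    v_i = sqrt_Ki @ u_bar                        # \bar{v}_i, Eq. (37)
    I = np.eye(len(v_i))
    M = I + A_i if x_i == 0 else I - A_i
    return -0.5 * v_i @ np.linalg.solve(M, v_i)
```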
This means that it is necessary to know a selective inverse of $\bar{K}$, more specifically: all the terms $\bar{K}^{-1}_{i_1 i_2}$ with indexes $(i_1, i_2)$ corresponding to valued entries of the sparse matrix $\bar{K}$. The calculation of this selective inverse is fairly expensive and may take these expressions out of practical use. However, this statement should not be taken as conclusive: since it was out of the scope of this work, no specific algorithm for selective inversion was implemented to test how prohibitive this task would actually be (Lin et al., 2011; Jacquelin et al., 2016). Nonetheless, if the selective inverse is known in the first iteration, after each topology change it can be updated by the procedure presented in Appendix C. In this work, the initial selective inverses were computed through exhaustive effort.

5.5 Conjugate Gradient Method (CGM) approach

In this section, an iterative strategy that does not require interpolation functions or selective inversions is proposed. A variation $\Delta K$ in $\bar{K}$,

\bar{\bar{K}} = \bar{K} + \Delta K , (45)

results in a variation $\Delta u$ in $\bar{u}$,

\bar{\bar{u}} = \bar{u} + \Delta u . (46)

Since the load vector is constant, the following equation must be satisfied:

\bar{\bar{K}}\, \bar{\bar{u}} = \bar{K}\, \bar{u} = f . (47)

Thus, two expressions can be obtained for the corresponding variation of structural compliance:

\Delta C = -\frac{1}{2}\, \bar{u}^T \Delta K\, \bar{\bar{u}} = \frac{1}{2}\, f^T \Delta u . (48)

In order to obtain an approximated value for $\Delta C$, $\bar{\bar{u}}$ must be estimated. It can be done by using a preconditioned Conjugate Gradient Method (CGM) to solve the linear system presented in Eq. (47). For a given initial guess $u_0$, initial direction $d_0$ and preconditioner matrix M, the CGM provides a set of $\bar{\bar{K}}$-orthogonal directions $\{d_0, d_1, \ldots, d_{G-1}\}$ and a corresponding set of coefficients $\{\mu_0, \mu_1, \ldots, \mu_{G-1}\}$, from which $\bar{\bar{u}}$ can be obtained as

\bar{\bar{u}} = u_0 + \sum_{k=0}^{G-1} \mu_k d_k . (49)

If m directions and coefficients are calculated, with $m < G$, Eq. (49) can be rewritten as

\bar{\bar{u}} = u_0 + \delta^{(m)}_u + \varepsilon^{(m)}_u , (50)

where $\delta^{(m)}_u$ is a known term, given by

\delta^{(m)}_u = \sum_{k=0}^{m-1} \mu_k d_k , (51)

and $\varepsilon^{(m)}_u$ is the unknown error, given by

\varepsilon^{(m)}_u = \sum_{k=m}^{G-1} \mu_k d_k . (52)

Therefore, the m-steps estimation of $\bar{\bar{u}}$ can be expressed as

u_m = u_0 + \delta^{(m)}_u . (53)

From Eq. (48), the variation $\Delta C$ can be written as

\Delta C = D^{(m)}_u + E^{(m)}_u = D^{(m)}_f + E^{(m)}_f , (54)

where the known terms $D^{(m)}_u$ and $D^{(m)}_f$ are given by

D^{(m)}_u = -\frac{1}{2}\, \bar{u}^T \Delta K\, u_m (55)

and

D^{(m)}_f = \frac{1}{2}\, f^T [u_m - \bar{u}] ; (56)

and the unknown errors $E^{(m)}_u$ and $E^{(m)}_f$ are given by

E^{(m)}_u = \frac{1}{2} \left[u_0 - \bar{u} + \varepsilon^{(m)}_u\right]^T \bar{\bar{K}}\, \varepsilon^{(m)}_u = \frac{1}{2} \left\langle u_0 - \bar{u},\, \varepsilon^{(m)}_u \right\rangle_{\bar{\bar{K}}} + \frac{1}{2} \left\|\varepsilon^{(m)}_u\right\|^2_{\bar{\bar{K}}} (57)

and

E^{(m)}_f = \frac{1}{2} \left[u_0 + \varepsilon^{(m)}_u\right]^T \bar{\bar{K}}\, \varepsilon^{(m)}_u = \frac{1}{2} \left\langle u_0,\, \varepsilon^{(m)}_u \right\rangle_{\bar{\bar{K}}} + \frac{1}{2} \left\|\varepsilon^{(m)}_u\right\|^2_{\bar{\bar{K}}} . (58)

The preconditioned CGM procedure is presented in Algorithm 1. For a given problem (defined by the pair $\bar{\bar{K}}$ and f), an initial guess $u_0$, an initial direction $d_0$, a preconditioner matrix M, the maximum number of iterations m and an early stop criterion $\tau$ must be provided. This procedure returns $u_m$, used to compute the estimations of $\Delta C$: $D^{(m)}_u$ and $D^{(m)}_f$. The direction $d_m$ is also returned so that, in case the m iterations were not enough to obtain the desired precision, more iterations can be performed, using $u_m$ and $d_m$ as input.

Algorithm 1 Preconditioned Conjugate Gradient Method.
Given problem: $\bar{\bar{K}}$, f
Input: $u_0$, $d_0$, M, m, $\tau$
$g_0 \leftarrow \bar{\bar{K}} u_0 - f$
for $k \in \{0, 1, \ldots, m-1\}$ do
    if $\|g_k\| < \tau$ then return $u_k$, $d_k$, k end if
    $e_k \leftarrow \bar{\bar{K}} d_k$
    $\mu_k \leftarrow -\dfrac{d_k^T g_k}{d_k^T e_k}$
    $u_{k+1} \leftarrow u_k + \mu_k d_k$
    $g_{k+1} \leftarrow g_k + \mu_k e_k$
    $q_k \leftarrow M^{-1} g_{k+1}$
    $\beta_k \leftarrow \dfrac{e_k^T q_k}{d_k^T e_k}$
    $d_{k+1} \leftarrow -q_k + \beta_k d_k$
end for
return $u_m$, $d_m$, m
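A line-by-line Python transcription of Algorithm 1 is given below. As a small deviation from the matrix notation of the pseudocode, the preconditioner is passed as a callable applying $M^{-1}$ (for Jacobi preconditioning, division by the diagonal of $\bar{\bar{K}}$).

```python
import numpy as np

def preconditioned_cgm(K, f, u0, d0, M_inv, m, tau):
    """Algorithm 1: preconditioned CGM, returning (u_k, d_k, k)."""
    u, d = u0.copy(), d0.copy()
    g = K @ u - f                       # residual g_0
    for k in range(m):
        if np.linalg.norm(g) < tau:     # early stop criterion
            return u, d, k
        e = K @ d
        mu = -(d @ g) / (d @ e)
        u = u + mu * d
        g = g + mu * e
        q = M_inv(g)
        beta = (e @ q) / (d @ e)
        d = -q + beta * d
    return u, d, m
```

For the initial conditions used in Eq. (60) below, one would call it with `u0` equal to the current displacements and `d0 = -M_inv(dK @ u0)`, where `dK` is the elemental stiffness variation.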
The stiffness variation can be defined so that the variation of structural compliance corresponds to the sensitivity number of an element: if $x_i = 0$ and $\Delta K = K_i$, $\alpha^1_i = \Delta C$; if $x_i = 1$ and $\Delta K = -K_i$, $\alpha^1_i = -\Delta C$. Considering $u_0 = 0$ and $d_0 = M^{-1} f$, $E^{(m)}_f$ decreases monotonically as more CGM iterations are performed. Thus, the sensitivity values can be estimated, with arbitrary precision, as

\alpha^1_i = \begin{cases} -\frac{1}{2}\, f^T [\bar{u} - u_m] , & \text{if } x_i = 0 , \\ -\frac{1}{2}\, f^T [u_m - \bar{u}] , & \text{if } x_i = 1 . \end{cases} (59)

Considering $u_0 = \bar{u}$ and $d_0 = -M^{-1} \Delta K\, \bar{u}$, $E^{(m)}_u$ decreases monotonically as more CGM iterations are performed. Thus, the sensitivity values can be estimated, with arbitrary precision, as

\alpha^1_i = -\frac{1}{2}\, \bar{u}^T K_i\, u_m , \quad x_i \in \{0, 1\} . (60)

A third possibility is to consider $u_0 = 0$ and $d_0 = \bar{u}$, for which both error terms $E^{(m)}_u$ and $E^{(m)}_f$ must be taken into account. In Appendix D, sensitivity expressions are presented for each considered initial condition $(u_0, d_0)$, for 1 and 2 CGM steps. For $u_0 = \bar{u}$ and $d_0 = -M^{-1} \Delta K\, \bar{u}$, Eq. (60) always provides a better result than the FOCI approach with linear interpolation. For this case, considering no preconditioning, i.e., M = I, the following sensitivity values are obtained after performing 1 CGM step:

\alpha^1_i = \begin{cases} -\frac{1}{2} \left[ \bar{u}^T K_i \bar{u} - \dfrac{\left[\bar{u}^T K_i^2\, \bar{u}\right]^2}{\bar{u}^T K_i \bar{K} K_i\, \bar{u} + \bar{u}^T K_i^3\, \bar{u}} \right] , & \text{if } x_i = 0 , \\ -\frac{1}{2} \left[ \bar{u}^T K_i \bar{u} + \dfrac{\left[\bar{u}^T K_i^2\, \bar{u}\right]^2}{\bar{u}^T K_i \bar{K} K_i\, \bar{u} - \bar{u}^T K_i^3\, \bar{u}} \right] , & \text{if } x_i = 1 . \end{cases} (61)

For void elements, only the first-order approximation should be considered, since the series can diverge. For the ith element, such that $x_i = 0$, the error $\varepsilon_\alpha$ of the FOCI sensitivity expression is bounded, as given below:

\varepsilon_\alpha \le \left[ \frac{1}{2}\, \bar{u}^T K_i \bar{u} \right] \left[ \frac{\|\bar{A}_i\|_2}{1 + \|\bar{A}_i\|_2} \right] = [C_i]\, [B_r] . (62)

The term on the left, denoted by $C_i$, is the first-order sensitivity number itself. The term on the right, denoted by $B_r$, is an upper bound for the elemental sensitivity relative error. Its behavior with respect to $\|\bar{A}_i\|_2$ is presented in Fig. 4. For a sufficiently small $\|\bar{A}_i\|_2$, the linear approximation is accurate. As it increases, the relative error bound goes towards its maximal value of 100%. The series will be considered up to its qth order term for solid elements. For the ith element, such that $x_i = 1$, the error $\varepsilon_\alpha$ of the HOCI sensitivity expression is bounded, as given below:

\varepsilon_\alpha \le \left[ \frac{1}{2}\, \bar{u}^T K_i \bar{u} \right] \left[ \frac{\|\bar{A}_i\|_2^q}{1 - \|\bar{A}_i\|_2} \right] = [C_i]\, [B_r] . (63)

Again, the term on the left is the first-order sensitivity number; and the term on the right is an upper bound for the elemental sensitivity relative error. Its behavior with respect to $\|\bar{A}_i\|_2$ is presented in Fig. 5 for different values of q. For a sufficiently small $\|\bar{A}_i\|_2$, the linear approximation is accurate. As it increases, the relative error bound goes towards indefinitely large values. It should be noted that, even for very high values of q, the error bound rapidly increases as $\|\bar{A}_i\|_2$ gets near to 1. Therefore, when considering linear continuous interpolations for the density vector, the maximal eigenvalue of $\bar{A}_i$ can be used as a measure of potential non-linearity of the objective function with respect to $x_i$. When using $u_0 = \bar{u}$, for any number of steps, the error for the CGM sensitivity values is always smaller than the one for the FOCI expression, so the FOCI error bound is also a bound for the error of the CGM approach.
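The relative-error bounds $B_r$ of Eqs. (62) and (63) are simple scalar functions of $\|\bar{A}_i\|_2$, sketched below; they reproduce the curves of Figs. 4 and 5.

```python
def relative_error_bound(norm_Ai, q=1, solid=True):
    """B_r of Eq. (63) for solid elements (series truncated at order q)
    or of Eq. (62) for void elements (first-order expression)."""
    if solid:
        assert norm_Ai < 1.0, "series diverges for ||A_i||_2 >= 1"
        return norm_Ai**q / (1.0 - norm_Ai)
    return norm_Ai / (1.0 + norm_Ai)
```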
An upper bound for $\|\bar{A}_i\|_2$ can be obtained as

\|\bar{A}_i\|_2 \le \|K_i\|_2\, \|\bar{K}^{-1}\|_2 = \frac{\|K_i\|_2}{\|\bar{K}\|_2}\, \kappa(\bar{K}) , (64)

where $\kappa(\bar{K})$ denotes the condition number of $\bar{K}$. It can be shown that $\|\bar{K}\|_2 \ge \|K_i\|_2$ (Fried, 1973). This means that $\kappa(\bar{K})$ is itself an upper bound for $\|\bar{A}_i\|_2$:

\|\bar{A}_i\|_2 \le \kappa(\bar{K}) . (65)

This is uninformative for solid elements, since a lower upper bound has already been obtained in Appendix A; however, no bound had been presented for void elements until now. This expression shows that $\|\bar{A}_i\|_2$ cannot be arbitrarily high for a fixed non-singular system. Although expressions for void elements were presented, the relevance of assigning sensitivity numbers to void elements should be discussed. In the compliance minimization problem, any element with no solid elements in its immediate neighborhood must have a sensitivity value of 0, since any change on this disconnected element would have no effect on the structural compliance. This means that, for a refined mesh, the sensitivity values of most of the void elements will be 0 and the optimization process will be guided only by the solid part of the topology, through the effects of the sensitivity filter. So it would hardly be beneficial to compute the sensitivity numbers of void elements in a refined mesh. It is a reasonable action to simply assign all void sensitivity numbers as 0 before the filtering procedure. Besides disconnected elements, there is another configuration of an element in which the sensitivity number is known beforehand. An element which is the sole solid element connecting an imposed load to a constrained part of the structure will have an arbitrarily high sensitivity absolute value, according to the soft-kill parameter $\varepsilon_k$. In practical applications with refined meshes, this should only happen in elements that are directly loaded by external forces. Since any element within the sensitivity filter radius from these connective solid elements would be artificially removed from the design domain, it is a reasonable action to simply disregard their sensitivity values in the filtering procedure.

6 Numerical examples

In the following numerical examples, the structural compliance minimization was performed through successive solutions of the optimization subproblem from Eq. (27), which corresponds to the BESO method. In all problems, a linear-elastic, homogeneous and isotropic material was considered; four-node bilinear square elements in plane stress state were used; and the soft-kill parameter was $\varepsilon_k = 10^{-9}$. The exact sensitivity analyses were performed through the Woodbury approach. Such analyses were duplicated through the naive approach in order to validate the implementation. Although the implemented algorithm works with the presented volume measure V(x), the information commonly used to define constraints and compare results is the fraction of solid material in the design domain, defined as

V_f(x) = \frac{V(x)}{N} . (66)

Therefore, in the following numerical examples, $V_f(x)$ and $V^*_f$ were used instead of V(x) and $V^*$.

6.1 Cantilever tie-beam

In Zhou and Rozvany (2001), this problem was presented as a simple example in which the standard ESO method results in a highly non-optimal design. For this mesh, the FOCI approach is not enough to correctly predict the variation of the objective function when the state of an element in the vertical tie is switched. Such imprecision results in the removal of all the elements from the vertical tie, which are critical elements.
Their removal qualitatively changes the behavior of the structure, since a mechanical restriction is artificially removed from the problem. It was stated that this problem would still happen for fine meshes when certain values of rejection ratio are used (Zhou and Rozvany, 2001). However, it has already been shown that, by using fine meshes with appropriate parameters, the ESO method is able to obtain satisfactory optimized solutions (Edwards et al., 2007; Huang and Xie, 2008). To discourage misleading discussions over this problem, some complementary remarks should be made. Unlike problems in which there are trivial solutions that correspond to disconnected structures (Wang, 2009), this problem does not require any further considerations; it is solved simply by proper usage of the method. When density methods are used to optimize continuum structures, the mesh should be fine enough to produce complex (or at least non-trivial) topologies for the structures. Moreover, when discrete methods are used, the mesh should be fine enough so that any disconnection between structural components happens gradually, which may avoid highly non-optimal topology alterations. Lastly, although a 1% rejection ratio would usually be a reasonable value, because the mesh is too coarse, it forces the method to remove all of the material from the same spot, which should be enough to highlight how poorly defined the setting of the presented problem is. Nonetheless, an example with extreme properties can be useful to evaluate potential improvements on this kind of method. In Ghabraie (2015), it was shown that the removal of the vertical tie can be prevented by using accurate sensitivity values. In this section, this problem was further explored considering the proposed approaches. By considering $VV^{(j)} = -1$ and $TV^{(j)}_{max} = 1$ in all iterations, the proposed optimization procedure becomes the ESO method with a rejection ratio of 1%. Firstly, a fully solid initial topology was considered. It has a volume fraction of $V_f(\bar{x}^{(0)}) = 100\%$ and a structural compliance of $C(\bar{x}^{(0)}) = 194.4$. Fig. 7 shows the solutions for $V^*_f = 99\%$ when the FOCI approach is adopted and when the exact sensitivity vector is used. The solid elements are represented in black; the void elements are represented in gray; the index of the void element from Fig. 7(a) is denoted by a; and the index of the void element from Fig. 7(b) is denoted by b. It can be seen that the FOCI prediction, based only on small density variations, results in a very inefficient structure. In Figs. 8 and 9, the FVSA linearizations produced from the HOCI and CGM approaches are presented for the elements a and b. The behavior of the relaxed continuum function for a linear interpolation is also presented. On the legend, the number next to HOCI indicates its order (HOCI-1 is referred to as FOCI); the number next to CGM indicates how many steps were performed; and the letter "J" after CGM indicates that Jacobi preconditioning was used. For the CGM approach, $u_0 = \bar{u}$ and $d_0 = \pm M^{-1} K_i \bar{u}$ were considered. In Fig. 8, the linearizations are not straight lines, since the horizontal axis is in a logarithmic scale. The continuous relaxation of $C(x_b)$ is fairly well behaved. The FOCI approach produces a reasonable result and, for slightly higher orders, the HOCI approach can provide accurate sensitivity values. The CGM approach can also provide accurate sensitivity values in a small number of steps. On the other hand, $C(x_a)$ has very distinct local and global behaviors.
When $x_a$ is near 1, it has little influence on the structural compliance. When $x_a$ goes below 0.01, the effect of disconnecting the vertical tie appears as an abrupt compliance variation, which cannot be properly predicted by a local sensitivity analysis. (Fig. 9 compares the exact sensitivity linearization, the FOCI linearization, the HOCI linearizations of orders 500, 1000, 2000, 4000 and 8000, and the continuous relaxation.) The HOCI approach provides reasonable values only for orders higher than 1000. And, to produce reasonable values, the CGM approach needs nearly half of the maximal number of CGM steps, which would provide the exact result. By performing the optimization with exact sensitivity values, for a target volume fraction of 40%, the result shown in Fig. 10 is obtained (initial topology: $V_f = 100\%$, C = 194.4; final topology: $V_f = 40\%$, C = 1024.8). Since exact predictions are used, each iteration produces a topology with minimal compliance gain. It successfully prevented the removal of the vertical tie. It should be noted that some non-physical effects are present due to the type of finite element used: elements connected only by their nodes do not negatively affect the structural stiffness; the three rightmost elements have a distributed load on their edges, even so, the one in the middle can be removed without producing a singularity. After removing all but one of the middle elements of the horizontal beam, the method is forced to remove a critical element. It chooses to break the bottom horizontal beam, because it would result in a smaller compliance gain than breaking the top horizontal beam or the vertical tie. The breaking point can be easily identified in the plot of the objective function over the iterations: it is the point where there is an abrupt growth of the structural compliance. The obtained result is worse than the solution presented in Zhou and Rozvany (2001). In most iterative optimization methods, to obtain the best possible solution, the initial guess must be within the basins of attraction of the global optimum. In a sense, this is also true for this discrete method: a path of successive topologies is generated and its final point will always depend on the first one. It was shown that, when the optimization process starts from the fully solid topology, the succession of topological variations with minimal compliance gain does not lead to the desired optimum. However, when the optimization with exact sensitivity values starts from the initial topology of Fig. 11, with a volume fraction of 91%, the reference solution is obtained.

Fig. 11: Cantilever tie-beam optimization with $V_f(\bar{x}^{(0)}) = 91\%$ (C = 328.9) and $V^*_f = 40\%$ (final compliance C = 558.6).

As discussed, to approach this problem systematically, a finer mesh should be used with appropriate optimization parameters. A mesh of 25600 elements was considered, obtained by remeshing each coarse element as 16 × 16 fine elements of dimensions 0.0625 × 0.0625. Starting from a fully solid topology and from the initial topology of Fig. 11, the optimization was performed for a target volume fraction of 40%. A sensitivity filter with a radius of 0.2 was used, as well as the presented momentum strategy. In iterations with changing volume, the constraints were given by $VV^{(j)} = -128$ (0.5%) and $TV^{(j)}_{max} = 1152$ (4.5%); in iterations with constant volume, they were given by $VV^{(j)} = 0$ (0%) and $TV^{(j)}_{max} = 1024$ (4%).
This corresponds to the BESO method with ER = 0.5% and $AR_{max}$ = 2%. The FOCI approach was used, with a void sensitivity penalization of $\varepsilon_v = 10^{-6}$. The results are presented in Fig. 12 (a: optimization with $V_f(\bar{x}^{(0)}) = 100\%$, C = 195.0, and final topology $V_f = 40\%$, C = 505.4; b: optimization with $V_f(\bar{x}^{(0)}) = 91\%$, C = 377.2, and final topology $V_f = 40\%$, C = 505.4). Two different solutions with the same structural compliance were obtained; both are more efficient than the solution presented in Zhou and Rozvany (2001), whose structural compliance is 562.9 when calculated with this mesh. Therefore, even using the simplest sensitivity estimation, when a proper setting is defined, the method successfully produces optimized structures for this problem. The mesh refinement not only allows the method to distribute the topological variations over the domain, but it also improves the general behavior of the objective function with respect to each design variable. A way to observe this effect is through maps of $\|\bar{A}_i\|_2$ for different meshes, as shown in Fig. 13 for fully solid topologies. As presented in Eqs. (62) and (63), these norms define upper bounds for the error of the sensitivity estimations. The average of all $\|\bar{A}_i\|_2$ goes from 0.83, in the 100 elements mesh, to 0.64, in the 25600 elements mesh. Elements in corners have norms near 1, which means the sensitivity error is practically unbounded; elements in free edges have norms below 1, but still relatively high; elements inside the structure have norms below 0.7; and elements in the clamped edge have the lowest norms, around 0.5. This is related to the potential each element has to disconnect a load from the constrained part of the structure. A corner element can always produce a singularity: if there is a load in the corner and the element is removed, the displacement of such a node will go towards infinity. When the mesh is refined, the ratio between elements in the interfaces and elements inside the structure is reduced, so the overall behavior of the objective function is improved. It should be noted that, even though every interface element has higher upper bounds for the sensitivity error, this does not mean the error will necessarily be high. In Fig. 14, sensitivity maps are presented for the fully solid topology in the 100 elements mesh and in the 25600 elements mesh, obtained through the FOCI, CGM-2J (2 steps with Jacobi preconditioning) and Woodbury approaches. The CGM initial conditions were the same: $u_0 = \bar{u}$ and $d_0 = \pm M^{-1} K_i \bar{u}$. Since the exact sensitivity absolute values of the loaded corner elements go towards infinity, these elements were not considered in this sensitivity analysis. It can be seen that the highly non-linear behavior for elements in the vertical tie vanished in the refined mesh. For this mesh, the FOCI approach was able to produce coherent sensitivity values and the CGM approach produced very satisfactory results. Evidently, the sensitivity numbers assume smaller absolute values for finer meshes, since the influence of each element alone is reduced when the elements are smaller. For fully solid topologies, Fig. 15 presents the progression of the relative $l_2$-error, given by the $l_2$-norm of the error vector divided by the $l_2$-norm of the exact sensitivity vector, with respect to the number of elements in the mesh. The CGM with 1 and 2 steps were considered, with and without Jacobi preconditioning.
The FOCI results were also presented for comparison. In the coarse mesh, the relative $l_2$-error is around 100% for all cases, because of the very high error for the elements in the vertical tie. From the second mesh (400 elements) to the last one (25600 elements), it seems that the sensitivity errors are reduced following fixed power laws, since their plots in logarithmic scale result in nearly straight lines. In all cases, including the FOCI approach, the overall sensitivity error was reduced with the mesh refinement. This means that, as the mesh is refined, the objective function becomes "more linear" (its linear approximation becomes more accurate) with respect to each elemental density. Moreover, by using the proposed CGM sensitivity analysis, there was a substantial improvement in the error values and in how fast they drop with mesh refinement. For the most refined mesh, even when using a single CGM step without preconditioning, this measure of error was reduced from more than 50% to around 30%, when compared with the FOCI result.

6.2 Cantilever beam

Fig. 16 presents the design domain and the initial topology considered for the optimization of a cantilever beam. A mesh with 32 × 20 elements of dimensions 2.5 × 2.5 mm was considered. In Fig. 17, the FVSA linearizations produced with the FOCI and CGM approaches are presented for three elements in the clamped extremity: the first solid element from the top (element c); the void element right above it (element b); and the next void element above (element a). The relaxed function behavior for a linear interpolation is also presented. As before, $u_0 = \bar{u}$ and $d_0 = \pm M^{-1} K_i \bar{u}$, with Jacobi preconditioning. It can be noted that the three relaxed functions are well behaved and a small number of CGM steps is needed to achieve accurate sensitivity values. The FOCI sensitivity analysis results in very inaccurate values for the disconnected void elements. This reinforces the argument that it may be better to always disregard the sensitivity values (set them to 0) of void elements, which can be done without any changes to the implementation, by using $\varepsilon_v = 0$. Fig. 18 presents the topology optimization of the presented cantilever beam, using exact sensitivity values. It was performed with a fixed volume fraction of 50%; the constraints were $VV^{(j)} = 0$ (0.0%) and $TV^{(j)}_{max} = 28$ (≈ 4.4%), which corresponds to the BESO method with ER = 0.0% and $AR_{max}$ = 2.2%. The structure of minimal compliance was obtained at the 29th iteration; after that, the topologies oscillate until the patience criterion is achieved. No filter was used, so the sensitivity numbers truly correspond to objective function variations. The influence of the initial condition used in the CGM approach was evaluated for the sequence of topologies produced in this optimization process. Three cases were considered: in the first case, $u_0 = 0$ and $d_0 = M^{-1} f$; in the second case, $u_0 = \bar{u}$ and $d_0 = \pm M^{-1} K_i \bar{u}$; in the third case, $u_0 = 0$ and $d_0 = \bar{u}$. For the second and third cases, with and without Jacobi preconditioning, Fig. 19 presents the minimal number of steps to achieve different criteria for the topology of each iteration: the number of steps so that the relative $l_2$-error of the sensitivity vector is below 10%; the number of steps so that the relative $l_2$-error is below 50%; and the number of steps so that the solid element with the lowest sensitivity absolute value is correctly classified. Only solid elements were considered in this analysis.
It can be seen that the preconditioning consistently reduced the number of steps needed to achieve the criteria in both cases. For a small number of steps, the performance was better for the second case. This was expected, since its initial guess is closer to the solution. For a large number of steps, the performance was better for the third case. This result indicates that there may be better initializations than the most intuitive one, given in the second case, which starts at the last equilibrium point and moves in the direction of steepest descent of the preconditioned problem. In the second case without preconditioning, when the error was below 50%, at least 1.3% of the solid elements were correctly classified; when the error was below 10%, at least 30.8% of them were correctly classified. With Jacobi preconditioning, those rates of correct classification were 4.4% and 32.1%. For the third case, the results were similar for 50% of error, but the error below 10% resulted only in minimal rates of correct classification around 22%. The relative $l_2$-error is a good measure to evaluate whether the overall behavior of the sensitivity vector is being well predicted.

Fig. 18: Initial topology $V_f = 50\%$, C = 835.4 nJ; final topology $V_f = 50\%$, C = 231.0 nJ.

However, it should be noted that, even if such an error is small, elements with similar sensitivity values may still be incorrectly classified, when compared to the exact sensitivity classification. The results for the first case were substantially worse. For the case with preconditioning, the smallest number of steps to achieve one of the criteria was 82. The poor performance for the first case is because it does not take advantage of the knowledge that was already obtained about the structural behavior, stored in $\bar{u}$. In this section, a topology optimization without filtering was considered and non-filtered sensitivity values were compared. Evidently, to obtain proper optimized structures, without the checkerboard problem, the filtering procedure is essential. When a filter with a radius of 3 mm was included in the optimization algorithm, the topology shown in Fig. 20 was obtained.

6.3 MBB beam

The optimization was performed for six different sensitivity analyses: FOCI with $\varepsilon_v = 10^{-6}$ (simply denoted by FOCI); FOCI with $\varepsilon_v = 0$ (denoted by FOCI-s); CGM with 1 step, without preconditioning, for void and solid elements (denoted by CGM-1); CGM with 1 step, without preconditioning, for solid elements and void sensitivity values assigned as 0 (denoted by CGM-1s); CGM with 2 steps, with Jacobi preconditioning, for void and solid elements (denoted by CGM-2J); and CGM with 2 steps, with Jacobi preconditioning, for solid elements and void sensitivity values assigned as 0 (denoted by CGM-2Js). The considered initial condition for the CGM was $u_0 = \bar{u}$ and $d_0 = \pm M^{-1} K_i \bar{u}$. Firstly, a fully solid initial topology was used, so $V_f(\bar{x}^{(0)}) = 100\%$.
For a target volume fraction of $V^*_f = 50\%$, four sets of constraints were considered:

- $VV^{(j)} = -300$ (1%) and $TV^{(j)}_{max} = 1500$ (5%) until the target volume fraction is achieved, then $VV^{(j)} = 0$ (0%) and $TV^{(j)}_{max} = 1200$ (4%), which corresponds to the BESO method with ER = 1% and $AR_{max}$ = 2%;
- no constraint over TV(y), $VV^{(j)} = -300$ (1%) until the target volume fraction is achieved, then $VV^{(j)} = 0$ (0%), which corresponds to the BESO method with ER = 1% and $AR_{max}$ = 100%;
- $VV^{(j)} = -3000$ (10%) and $TV^{(j)}_{max} = 4200$ (14%) until the target volume fraction is achieved, then $VV^{(j)} = 0$ (0%) and $TV^{(j)}_{max} = 1200$ (4%), which corresponds to the BESO method with ER = 10% and $AR_{max}$ = 2%;
- no constraint over TV(y), $VV^{(j)} = -3000$ (10%) until the target volume fraction is achieved, then $VV^{(j)} = 0$ (0%), which corresponds to the BESO method with ER = 10% and $AR_{max}$ = 100%.

Table 1 presents, for each case, the minimized compliance and the number of iterations needed to achieve it. In some cases, an early convergence occurred, resulting in a structure with thin components. These early results were presented within parentheses. In such cases, the actual results (without thin components) were obtained after performing a new optimization process starting from the solutions with thin components. The structural compliances of the whole MBB beams are twice the presented values, since these were computed for the design domain (half of the symmetric structure). The notation $\cdot \mid \cdot$ was used to express the constraints until the target volume was achieved (number on the left) and the constraints for the iterations at a constant volume fraction (number on the right). When using CGM-1, the process was unstable and resulted in degenerated structures. This was due to the inaccurate sensitivity values assigned to disconnected void elements. However, except for the case with $VV^{(j)} = -3000 \mid 0$ and unconstrained TV(y), CGM-2J was able to stabilize the process. FOCI was stable because a small value was used to penalize the sensitivity numbers of void elements. As can be seen, FOCI and FOCI-s produced very similar results, so there was no negative effect when the sensitivity values of void elements were simply assigned as 0. Counterintuitively, when greater topological variations were performed in each iteration, most cases needed more iterations to converge. This may happen because more unstable procedures are more susceptible to producing oscillations between iterations, delaying the convergence. Fig. 22 presents the results for the most reasonable constraints, $VV^{(j)} = -300 \mid 0$ and $TV^{(j)}_{max} = 4200 \mid 1200$. The whole MBB beams are presented, in meshes with 600 × 100 elements. When using CGM-2J or CGM-2Js, more efficient topologies were obtained. Next, optimizations were performed for a fixed volume fraction of $V_f(\bar{x}^{(j)}) = V^*_f = 50\%$, from the initial topology presented in Fig. 23. Two sets of constraints were considered:

- $VV^{(j)} = 0$ (0%) and $TV^{(j)}_{max} = 1200$ (4%), which corresponds to the BESO method with ER = 0% and $AR_{max}$ = 2%;
- no constraint over TV(y) and $VV^{(j)} = 0$ (0%), which corresponds to the BESO method with ER = 0% and $AR_{max}$ = 100%.

Table 2 presents, for each case, the minimized compliance and the number of iterations needed to achieve it. Again, the structural compliances of the whole MBB beams are twice the presented values.

7 Conclusions

In this work, a more formal description of the BESO method was presented, in which the sensitivity analysis consists in estimating finite variations of the objective function.
In addition to the naive approach, four ways to perform the sensitivity analysis were presented: the FOCI, HOCI, Woodbury and CGM approaches. For the problem of structural compliance minimization with volume constraint, these analyses were developed, compared through numerical examples and discussed. The standard FOCI approach was reformulated to have simpler parameters. Instead of allowing the interpolation function to assume an arbitrary form, it is fixed as a linear function and a parameter $\varepsilon_v$ is used to penalize the sensitivity values of void elements. In the compliance minimization problem, it was shown that $\varepsilon_v$ should assume non-negative values below 1 to improve the accuracy of the sensitivity analysis. Next, the convergence conditions for the Taylor series considered in the HOCI approach were discussed. In Appendix A and Appendix B, it was shown that the series is always convergent for solid elements and that it may be divergent for void elements. The Woodbury approach was proposed as an alternative to the HOCI approach, providing a closed-form expression for the exact sensitivity values, for both solid and void elements. Although it is an improvement over the HOCI approach, it may still be a computationally prohibitive strategy, because a selective inverse of the global stiffness matrix would have to be known in each iteration to perform this sensitivity analysis. Nonetheless, if the selective inverse is known at the first iteration, it would not be necessary to compute it again from scratch, since it can be updated after each topological variation instead, as shown in Appendix C. For a conclusive viability evaluation, specific algorithms for selective inversion should be properly implemented. The CGM approach was then derived. In contrast to the previous approach, it is computationally viable, providing an estimated sensitivity vector that is guaranteed to be more accurate than the one obtained through the FOCI approach, when $u_0 = \bar{u}$ and $d_0 = \pm M^{-1} K_i \bar{u}$. Different CGM initial conditions were also explored and explicit formulations can be found in Appendix D. With the FVSA approaches formulated, the bounds of the errors for certain approaches were investigated. Upper bounds, independent of the applied load, were presented for the element-wise sensitivity error. For void elements, the bounds were presented for the error of the FOCI expression; for solid elements, they were presented for the error of the HOCI expression. A coarsely meshed cantilever tie-beam problem was then considered. It was shown that the use of accurate sensitivity values may prevent highly non-optimal topological variations. General properties of the optimization method were discussed, such as the need of a minimally refined mesh and the dependence on the initial topology. It was shown that the upper bounds for the sensitivity errors are reduced with mesh refinement. Moreover, for finer meshes, the CGM approach produced substantially more accurate sensitivity values than the FOCI approach. Next, a coarsely meshed cantilever beam was optimized with exact sensitivity numbers and different CGM sensitivity expressions were compared for each topology of the optimization procedure. The results were better when the information of the displacements vector was used to define the initial conditions of the CGM. Jacobi preconditioning consistently reduced the number of steps needed to achieve a chosen precision.
Lastly, a finely meshed MBB beam was optimized with the FOCI and CGM approaches, using different parameters. The best results were obtained by the CGM approach, with two steps and Jacobi preconditioning. It was shown that unpenalized sensitivity values for void elements may produce unstable behavior. The results suggest that it is advantageous to perform the sensitivity analysis only for solid elements, and to assign the void sensitivity numbers as 0 before the filtering procedure. From the results, it can be seen that the FOCI sensitivity analysis performs well when the mesh is sufficiently refined and reasonable parameters are used for the optimization method. The HOCI and Woodbury analyses provide useful expressions to understand the problem. For appropriate initial conditions, the CGM analysis provides a more accurate expression than the one obtained through the FOCI approach, which may result in more stable processes and in more efficient optimized structures. Although the FOCI approach seems to be enough for the compliance minimization problem, this may not be true when different objective functions and constraints are considered. Therefore, the use of such an approach in BESO-type methods must be justified, and it must be shown that there are conditions in which it can reasonably estimate the finite variations of the objective function. To summarize, this paper presented new methods to perform the sensitivity analysis for discrete topology optimization problems. Before defining the sensitivity expression to be used in a new problem, the behavior of the objective function with respect to finite variations of the design variables should be carefully evaluated. Special conditions which may inform beforehand the exact sensitivity values of some elements should be identified, e.g., disconnected or externally loaded elements. Closed-form expressions should be developed for the sensitivity vector. The Taylor series of the relaxed objective function should be constructed and its convergence conditions should be evaluated. CGM estimated sensitivity expressions should be developed for appropriate initial conditions. Moreover, the viability and accuracy of the different approaches should be compared for different mesh refinements. The presented contributions can also be used to produce improved linearizations for integer LP approaches. Furthermore, the HOCI, Woodbury and CGM approaches can all be used to predict the effects of switching the state of multiple elements simultaneously, which may lead to the development of new discrete optimization methods.

Appendix A Upper bound of $\|\bar{A}_i\|_2$ for solid elements

Considering that the ith element is solid ($x_i = 1$), $\bar{K}$ can be separated as

\bar{K} = \bar{R}_i + K_i , (A.1)

in which $\bar{R}_i$ is a positive definite matrix. Thus, $\bar{A}_i$ can be written as

\bar{A}_i = \sqrt{K_i} \left[\bar{R}_i + K_i\right]^{-1} \sqrt{K_i} . (A.2)

Through the Woodbury identity, it becomes

\bar{A}_i = \sqrt{K_i} \left[ \bar{R}_i^{-1} - \bar{R}_i^{-1} \sqrt{K_i} \left[ I + \sqrt{K_i}\, \bar{R}_i^{-1} \sqrt{K_i} \right]^{-1} \sqrt{K_i}\, \bar{R}_i^{-1} \right] \sqrt{K_i} . (A.3)

Let $\bar{B}_i$ be defined by

\bar{B}_i = \sqrt{K_i}\, \bar{R}_i^{-1} \sqrt{K_i} ; (A.4)

since it is symmetric and positive semi-definite, it has an orthogonal eigenvectors matrix F and a diagonal eigenvalues matrix L, composed of non-negative values. Its diagonalization is given by

\bar{B}_i = F\, L\, F^T . (A.5)

So Eq. (A.3) can be rewritten as

\bar{A}_i = F \left[ L - L \left[I + L\right]^{-1} L \right] F^T = F\, L \left[ I - \left[I + L\right]^{-1} L \right] F^T . (A.6)

By simplifying it, $\bar{A}_i$ is obtained as

\bar{A}_i = F\, L \left[I + L\right]^{-1} F^T , (A.7)

which means that F is also an eigenvectors matrix of $\bar{A}_i$ and $L [I + L]^{-1}$ is the corresponding eigenvalues matrix.
Therefore, each eigenvalue $\lambda_k$ of $\bar{A}_i$ can be written with respect to its corresponding term of $L$:
$$\lambda_k = \frac{L_k}{1 + L_k} \in [0, 1[ \, . \quad \text{(A.8)}$$
This means that, when $x_i = 1$, all eigenvalues of $\bar{A}_i$ must be non-negative and strictly less than 1. Thus,
$$x_i = 1 \;\Rightarrow\; \|\bar{A}_i\|_2 < 1 . \quad \text{(A.9)}$$

Appendix B Uncertainty about $\|\bar{A}_i\|_2$ for void elements

Considering that the $i$th element is void ($x_i = 0$), $\bar{K}$ can be separated as
$$\bar{K} = \bar{R}_i - K_i , \quad \text{(B.1)}$$
in which $\bar{R}_i$ is a positive definite matrix. Thus, $\bar{A}_i$ can be written as
$$\bar{A}_i = K_i \left[ \bar{R}_i - K_i \right]^{-1} K_i . \quad \text{(B.2)}$$
Through the Woodbury identity, it becomes
$$\bar{A}_i = K_i \left[ \bar{R}_i^{-1} + \bar{R}_i^{-1} K_i \left[ I - K_i \bar{R}_i^{-1} K_i \right]^{-1} K_i \bar{R}_i^{-1} \right] K_i . \quad \text{(B.3)}$$
Let $\bar{B}_i$ be defined by
$$\bar{B}_i = K_i \bar{R}_i^{-1} K_i ; \quad \text{(B.4)}$$
since it is symmetric and positive semi-definite, it has an orthogonal eigenvector matrix $F$ and a diagonal eigenvalue matrix $L$, composed of non-negative values. Its diagonalization is given by
$$\bar{B}_i = F L F^T . \quad \text{(B.5)}$$
So Eq. (B.3) can be rewritten as
$$\bar{A}_i = F \left[ L + L \left[ I - L \right]^{-1} L \right] F^T = F L \left[ I + \left[ I - L \right]^{-1} L \right] F^T . \quad \text{(B.6)}$$
By simplifying it, $\bar{A}_i$ is obtained as
$$\bar{A}_i = F L \left[ I - L \right]^{-1} F^T , \quad \text{(B.7)}$$
which means that $F$ is also an eigenvector matrix of $\bar{A}_i$ and $L \left[ I - L \right]^{-1}$ is the corresponding eigenvalue matrix. Therefore, each eigenvalue $\lambda_k$ of $\bar{A}_i$ can be written with respect to its corresponding term of $L$:
$$\lambda_k = \frac{L_k}{1 - L_k} . \quad \text{(B.8)}$$
Since $\bar{A}_i$ is a positive semi-definite matrix with finite eigenvalues, Eq. (B.8) establishes an upper bound for $L_k$: $L_k \in [0, 1[$. As for $\lambda_k$, it will only be less than 1 when $L_k$ is less than $1/2$:
$$L_k \in \left[ 0, \tfrac{1}{2} \right[ \;\Rightarrow\; \lambda_k \in [0, 1[ \; ; \qquad L_k \in \left[ \tfrac{1}{2}, 1 \right[ \;\Rightarrow\; \lambda_k \in [1, \infty[ \, . \quad \text{(B.9)}$$
Thus,
$$x_i = 0 \text{ and } \|\bar{B}_i\|_2 < \tfrac{1}{2} \;\Rightarrow\; \|\bar{A}_i\|_2 < 1 ; \qquad x_i = 0 \text{ and } \|\bar{B}_i\|_2 \geq \tfrac{1}{2} \;\Rightarrow\; \|\bar{A}_i\|_2 \geq 1 . \quad \text{(B.10)}$$
In the counterexample below, it is shown that $\|\bar{A}_i\|_2$ may indeed assume values greater than 1 when $x_i = 0$. The value of $\|\bar{A}_i\|_2$ highly depends on the current topology and on the soft-kill parameter $\varepsilon_k$. The presence of nodes connected only to void elements, especially for a small $\varepsilon_k$, favors higher values of $\|\bar{A}_i\|_2$. In Fig. B.1, four topologies are presented for a cantilever beam of dimensions 80 × 50 mm, clamped on its left end. Solid elements are represented in black and void elements are represented in gray; they are numbered from top to bottom, starting at the leftmost column. Four-node bilinear quadrilateral elements of dimensions 20.0 × 12.5 mm were considered, in plane stress state. The material is homogeneous and isotropic, with a Young's modulus of 210 GPa and a Poisson's ratio of 0.3. For $\varepsilon_k = 0.1$, Table B.1 presents the values of $\|\bar{A}_i\|_2$ and $\|\bar{B}_i\|_2$ for each void element in these topologies. It can be seen that Eq. (B.8) is satisfied and that $\|\bar{A}_i\|_2$ is less than 1 when $\|\bar{B}_i\|_2$ is less than $1/2$. Even for such an unusually high $\varepsilon_k$, all values of $\|\bar{A}_i\|_2$ were greater than 1, except for the void element in Topology 1. This suggests that, in a practical case, $\|\bar{A}_i\|_2$ will hardly be less than 1 for any void element.
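To make the two regimes above concrete, here is a minimal numerical sketch (my own illustration, not code from the paper): it uses random positive semi-definite matrices as stand-ins for $\bar{B}_i$ and works directly from the representations (A.7) and (B.7). The solid-case norm stays below 1 for any scaling, while the void-case norm crosses 1 once $\|\bar{B}_i\|_2 \geq 1/2$, and the series behind (B.7) diverges when an eigenvalue of $\bar{B}_i$ reaches 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

def random_B(scale):
    # Symmetric positive semi-definite stand-in for B_bar = F L F^T.
    M = rng.standard_normal((n, n))
    return scale * (M @ M.T) / n

def norm2(M):
    # For symmetric matrices the 2-norm is the largest eigenvalue magnitude.
    return np.max(np.abs(np.linalg.eigvalsh(M)))

I = np.eye(n)
for scale in (0.05, 0.2, 5.0):
    B = random_B(scale)
    L = np.linalg.eigvalsh(B)
    # Solid case, Eq. (A.7): eigenvalues L/(1+L) always lie in [0, 1).
    A_solid = B @ np.linalg.inv(I + B)        # equals F L (I+L)^{-1} F^T
    assert norm2(A_solid) < 1.0               # Eq. (A.9)
    if L.max() < 1.0:
        # Void case, Eq. (B.7): eigenvalues L/(1-L), valid only while L < 1.
        A_void = B @ np.linalg.inv(I - B)
        print(f"||B||={norm2(B):6.3f}  ||A_solid||={norm2(A_solid):6.3f}"
              f"  ||A_void||={norm2(A_void):8.3f}")
    else:
        print(f"||B||={norm2(B):6.3f}  ||A_solid||={norm2(A_solid):6.3f}"
              f"  (an eigenvalue of B reaches 1: the void-case series diverges)")
```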
Appendix C Selective inverse update

Considering that the selective inverse of $\bar{K}$ is known, it is desired to obtain its new value after a variation $\Delta \bar{K}$. The matrix $I_\Delta$ matches the identity matrix for entries corresponding to valued terms of $\Delta \bar{K}$, and it is zero-valued elsewhere. Through the Woodbury identity, the updated complete inverse can be written as
$$\left[ \bar{K} + \Delta \bar{K} \right]^{-1} = \bar{K}^{-1} - T^T Q \, T . \quad \text{(C.4)}$$
Let $g_\Delta$ be the dimension of the valued part of $\Delta \bar{K}$, and let $G$ be the dimension of the whole matrix. After each alteration, $g_\Delta$ linear systems of dimension $G$ must be solved to compute $T$, and a matrix of dimension $g_\Delta$ must be inverted to compute $Q$. All columns $t_i$ of $T$ have only $g_\Delta$ valued terms. The dimension of the valued submatrix of $Q$ is also $g_\Delta$. Therefore, after obtaining $T$ and $Q$, each updated entry, given by
$$\left[ \bar{K} + \Delta \bar{K} \right]^{-1}_{i_1 i_2} = \bar{K}^{-1}_{i_1 i_2} - t_{i_1}^{\,T} Q \, t_{i_2} , \quad \text{(C.5)}$$
can be obtained by computing a bilinear expression of dimension $g_\Delta$. Even for a small topological variation, which means a small $g_\Delta$, this update procedure would still be costly, since it involves solving linear systems of dimension $G$. Nevertheless, it should be advantageous over obtaining the selective inverse from scratch in each iteration of the topology optimization algorithm.
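The update formula (C.4)-(C.5) can be illustrated with a dense toy example (my own sketch, not the authors' implementation; it assumes an explicit low-rank factorization $\Delta\bar{K} = V S V^T$ supported on $g_\Delta$ degrees of freedom, so the standard Woodbury identity applies with $T = V^T \bar{K}^{-1}$ and $Q = (S^{-1} + V^T \bar{K}^{-1} V)^{-1}$):

```python
import numpy as np

rng = np.random.default_rng(1)
G, g_delta = 12, 3                      # global size and size of the valued block

# Dense stand-ins: Kbar SPD, and a symmetric variation on g_delta dofs.
A = rng.standard_normal((G, G))
Kbar = A @ A.T + G * np.eye(G)
idx = rng.choice(G, size=g_delta, replace=False)
V = np.zeros((G, g_delta))
V[idx, np.arange(g_delta)] = 1.0        # selects the valued degrees of freedom
Bs = rng.standard_normal((g_delta, g_delta))
S = 0.5 * (Bs + Bs.T)                   # symmetric block of the variation
dK = V @ S @ V.T                        # Delta K_bar, zero outside the block

Kinv = np.linalg.inv(Kbar)

# Woodbury: (Kbar + V S V^T)^{-1} = Kinv - Kinv V (S^{-1} + V^T Kinv V)^{-1} V^T Kinv.
T = V.T @ Kinv                          # g_delta solves of dimension G
Q = np.linalg.inv(np.linalg.inv(S) + T @ V)   # small g_delta x g_delta inverse
updated = Kinv - T.T @ Q @ T            # cf. Eq. (C.4)
assert np.allclose(updated, np.linalg.inv(Kbar + dK))

# A single selectively updated entry, cf. Eq. (C.5):
i1, i2 = 0, 5
entry = Kinv[i1, i2] - T[:, i1] @ Q @ T[:, i2]
assert np.isclose(entry, np.linalg.inv(Kbar + dK)[i1, i2])
```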
Conflicts of interest The authors declare that they have no conflict of interest. Funding This work was supported by the São Paulo Research Foundation (FAPESP), grant num- Availability of data and material Not applicable. Code availability Not applicable. Ethics approval Not applicable. Consent to participate Not applicable. Consent for publication Not applicable. Replication of results All necessary information was provided in the numerical examples. Any researcher with an implemented BESO program for structural compliance minimization can easily adapt their sensitivity analysis with the presented expressions and replicate the results.

Figure captions recovered from the extraction:
Fig. 1: Discrete function of binary variables for a 2 × 2 mesh (N = 4).
Fig. 2: Neighborhood sets for a 2 × 2 mesh (N = 4).
Fig. 3: Different linearizations of a function.
Fig. 4: Upper bound for the sensitivity relative error of a void element.
Fig. 5: Upper bound for the sensitivity relative error of a solid element.
Fig. 6: Cantilever tie-beam. (The accompanying text presents a 100 elements mesh for a cantilever tie-beam. The material has a Young's modulus of 1.0 and a Poisson's ratio of 0.0; the dimensions of the elements are 1.0 × 1.0; over the rightmost edge, the intensity of the horizontal load per unit length is 2.0; on the bottom edge, below the vertical tie, the intensity of the vertical load per unit length is 1.0.)
Fig. 7: Cantilever tie-beams optimized for V*_f = 99%. (The panels compare the FOCI approach with the exact sensitivity analysis. The solid elements are represented in black; the void elements are represented in gray; the index of the void element from Fig. 7(a) is denoted by a, and that from Fig. 7(b) by b. The FOCI prediction, based only on small density variations, results in a very inefficient structure.)
Fig. 8: FVSA linearizations of the structural compliance with respect to x_a.
Fig. 9: FVSA linearizations of the structural compliance with respect to x_b.
Fig. 10: Cantilever tie-beam optimization with V_f(x^(0)) = 100% and V*_f = 40%.
Fig. 12: Cantilever tie-beam optimizations with refined meshes.
Fig. 13: Maps of ‖Ā_i‖_2 for the fully solid topology in different meshes.
Fig. 14: FOCI, CGM and exact sensitivity maps for fully solid topologies (panels: sensitivity maps for the 100 elements mesh and for the 25600 elements mesh).
Fig. 15: Sensitivity relative l_2-error with respect to mesh refinement.
Fig. 16: Cantilever beam. (The constraints for all iterations are V_f(x^(j)) = V*_f = 50%.)
Fig. 17: FVSA linearizations for the cantilever beam.
Fig. 18: Cantilever beam optimization with a fixed V_f(x^(j)) = 50%.
Fig. 19: Number of CGM steps to achieve different criteria.
Fig. 20: Optimized [caption truncated in extraction]. (The accompanying text presents the design domain considered for the optimization of an MBB beam: a mesh with 300 × 100 elements of dimensions 4 × 4 mm, a sensitivity filter with radius of 40 mm, and the presented momentum strategy.)
Fig. 21: MBB beam.
Fig. 22: Optimized MBB beams for VV^(j) = −300|0 and TV^(j)_max = 4200|1200.
Fig. 23: Initial topology for MBB beam with V_f(x^(0)) = 50%.
Fig. 24: Optimized MBB beams for VV^(j) = 0 and TV^(j)_max = 1200. (The results are for the most reasonable constraints, VV^(j) = 0 and TV^(j)_max = 1200. The whole MBB beams are presented, in meshes with 600 × 100 elements. All strategies resulted in similar topologies.)
Fig. B.1: Different topologies for a 4 × 4 mesh.
Appendix D Explicit expressions for the CGM approach

Appendix D.1 Definitions and notation

For the $i$th element, let the matrix $\tilde{K}$ be defined as
$$\tilde{K} = \bar{K} + \Delta \bar{K} , \quad \text{(D.1)}$$
where $\Delta \bar{K}$ is given by
$$\Delta \bar{K} = \begin{cases} K_i , & \text{if } x_i = 0 , \\ -K_i , & \text{if } x_i = 1 . \end{cases} \quad \text{(D.2)}$$
Let $\bar{C}$, $C_i$, $C_\Delta$ and $C_T$ be the scalars defined as
$$\bar{C} = \tfrac{1}{2} \, \bar{u}^T f , \quad \text{(D.3)}$$
$$C_i = \tfrac{1}{2} \, \bar{u}^T K_i \, \bar{u} , \quad \text{(D.4)}$$
$$C_T = \bar{C} + C_\Delta . \quad \text{(D.6)}$$
For a given preconditioner matrix $M$, let the matrices $W_j$ be defined as
$$W_j = M^{-1} \left[ \tilde{K} M^{-1} \right]^j . \quad \text{(D.7)}$$
The following notation will be used in this appendix to represent inner products in a more compact way:
$$\langle a_1 , a_2 \rangle_j = \langle a_1 , a_2 \rangle_{W_j} = a_1^T W_j \, a_2 ; \qquad \|a\|_j = \|a\|_{W_j} , \quad \|a\|^2_{W_j} = a^T W_j \, a . \quad \text{(D.8)}$$
Three initial conditions were considered: in the first case, $u_0 = 0$ and $d_0 = M^{-1} f$ (direction of steepest descent); in the second case, $u_0 = \bar{u}$ and $d_0 = -M^{-1} \Delta \bar{K} \bar{u}$ (direction of steepest descent); and in the third case, $u_0 = 0$ and $d_0 = \bar{u}$.

Appendix D.2 Explicit expressions for the 1st case

Considering $u_0 = 0$ and $d_0 = M^{-1} f$, the first step of the CGM results in the displacements vector
$$u_1 = \frac{f_0}{f_1} \, W_0 f .$$
The coefficients $f_0$ and $f_1$ are given by
$$f_0 = f^T v_M \quad \text{(D.11)} \qquad \text{and} \qquad f_1 = v_K^T v_M , \quad \text{(D.12)}$$
where the vectors $v_M$ and $v_K$ correspond to
$$v_M = M^{-1} f \quad \text{(D.13)} \qquad \text{and} \qquad v_K = \bar{K} v_M + \Delta \bar{K} v_M . \quad \text{(D.14)}$$
From Eq. (59), the sensitivity expression is obtained as
$$\alpha^1_i = \begin{cases} -\left[ \bar{C} - \dfrac{f_0^{\,2}}{2 f_1} \right] , & \text{if } x_i = 0 , \\[2mm] -\left[ \dfrac{f_0^{\,2}}{2 f_1} - \bar{C} \right] , & \text{if } x_i = 1 , \end{cases}$$
$$\ldots \, C_i , \quad x_i \in \{0, 1\} . \quad \text{(D.16)}$$
The second step of the CGM results in the displacements vector
$$u_2 = \frac{f_0 f_3 - f_1 f_2}{f_1 f_3 - f_2^{\,2}} \, W_0 f + \frac{f_1^{\,2} - f_0 f_2}{f_1 f_3 - f_2^{\,2}} \, W_1 f . \quad \text{(D.17)}$$
The coefficients $f_2$ and $f_3$ are given by
$$f_2 = v_K^T v_L \quad \text{(D.18)} \qquad \text{and} \qquad f_3 = v_R^T v_L , \quad \text{(D.19)}$$
where the vectors $v_L$ and $v_R$ correspond to
$$v_L = M^{-1} v_K \quad \text{(D.20)} \qquad \text{and} \qquad v_R = \bar{K} v_L + \Delta \bar{K} v_L . \quad \text{(D.21)}$$
From Eq. (59), the sensitivity expression is obtained as
$$\alpha^1_i = \begin{cases} -\left[ \bar{C} - \dfrac{f_0^{\,2} f_3 - 2 f_0 f_1 f_2 + f_1^{\,3}}{2 \left[ f_1 f_3 - f_2^{\,2} \right]} \right] , & \text{if } x_i = 0 , \\[2mm] -\left[ \dfrac{f_0^{\,2} f_3 - 2 f_0 f_1 f_2 + f_1^{\,3}}{2 \left[ f_1 f_3 - f_2^{\,2} \right]} - \bar{C} \right] , & \text{if } x_i = 1 . \end{cases} \quad \text{(D.22)}$$
For this and all the following sensitivity expressions, the selective inverse of $\bar{K}$ would be needed to use $M = \bar{K}$. To reduce computational costs, a diagonal $M$ is recommended (Jacobi preconditioning).

Appendix D.3 Explicit expressions for the 2nd case

Considering $u_0 = \bar{u}$ and $d_0 = -M^{-1} \Delta \bar{K} \bar{u}$, the first step of the CGM results in the displacements vector
$$u_1 = \bar{u} + \frac{b_0}{b_1} \, W_0 b , \quad \text{(D.23)}$$
where the vector $b$ corresponds to
$$b = -\Delta \bar{K} \bar{u} = \begin{cases} -K_i \bar{u} , & \text{if } x_i = 0 , \\ K_i \bar{u} , & \text{if } x_i = 1 . \end{cases} \quad \text{(D.24)}$$
The coefficients $b_0$ and $b_1$ are given by
$$b_0 = b^T v_M \quad \text{(D.25)} \qquad \text{and} \qquad b_1 = v_K^T v_M , \quad \text{(D.26)}$$
where the vectors $v_M$ and $v_K$ were redefined as
$$v_M = M^{-1} b \quad \text{(D.27)} \qquad \text{and} \qquad v_K = \bar{K} v_M + \Delta \bar{K} v_M . \quad \text{(D.28)}$$
From Eq. (60), the sensitivity expression is obtained as
$$\alpha^1_i = \begin{cases} -\left[ C_i - \dfrac{b_0^{\,2}}{2 b_1} \right] , & \text{if } x_i = 0 , \\[2mm] -\left[ C_i + \dfrac{b_0^{\,2}}{2 b_1} \right] , & \text{if } x_i = 1 . \end{cases} \quad \text{(D.29)}$$
The second step of the CGM results in the displacements vector
$$u_2 = \bar{u} + \frac{b_0 b_3 - b_1 b_2}{b_1 b_3 - b_2^{\,2}} \, W_0 b + \frac{b_1^{\,2} - b_0 b_2}{b_1 b_3 - b_2^{\,2}} \, W_1 b . \quad \text{(D.30)}$$
The coefficients $b_2$ and $b_3$ are given by
$$b_2 = v_K^T v_L \quad \text{(D.31)} \qquad \text{and} \qquad b_3 = v_R^T v_L , \quad \text{(D.32)}$$
where the vectors $v_L$ and $v_R$ were redefined as
$$v_L = M^{-1} v_K \quad \text{(D.33)} \qquad \text{and} \qquad v_R = \bar{K} v_L + \Delta \bar{K} v_L . \quad \text{(D.34)}$$
From Eq. (60), the sensitivity expression is obtained as
$$\alpha^1_i = \begin{cases} -\left[ C_i - \dfrac{b_0^{\,2} b_3 - 2 b_0 b_1 b_2 + b_1^{\,3}}{2 \left[ b_1 b_3 - b_2^{\,2} \right]} \right] , & \text{if } x_i = 0 , \\[2mm] -\left[ C_i + \dfrac{b_0^{\,2} b_3 - 2 b_0 b_1 b_2 + b_1^{\,3}}{2 \left[ b_1 b_3 - b_2^{\,2} \right]} \right] , & \text{if } x_i = 1 . \end{cases}$$

For the third case, considering $u_0 = 0$ and $d_0 = \bar{u}$, the first step of the CGM results in the displacements vector
$$u_1 = \frac{\bar{C}}{C_T} \, \bar{u} . \quad \text{(D.36)}$$
Both Eqs. (59) and (60) produce the same sensitivity expression,
$$\ldots \, C_i , \quad x_i \in \{0, 1\} , \quad \text{(D.37)}$$
which is the same as Eq. (D.16). The second step of the CGM results in the displacements vector
$$u_2 = \frac{\bar{C} \langle z , g \rangle_0^{\,2} - 2 \bar{C} C_T \|g\|_1^{\,2} - C_T \|g\|_0^{\,2} \langle z , g \rangle_0}{C_T \langle z , g \rangle_0^{\,2} - 2 C_T^{\,2} \|g\|_1^{\,2}} \, \bar{u} + \frac{2 C_T^{\,2} \|g\|_0^{\,2}}{C_T \langle z , g \rangle_0^{\,2} - 2 C_T^{\,2} \|g\|_1^{\,2}} \, W_0 g ,$$
where the vectors $z$ and $g$ correspond to
$$z = f - b \quad \text{(D.39)} \qquad \text{and} \qquad g = \frac{\bar{C}}{C_T} \, z - f . \quad \text{(D.40)}$$
The coefficients $\langle z , g \rangle_0$, $\|g\|_0$ and $\|g\|_1$ are given by
$$\langle z , g \rangle_0 = v_M^T z , \; \ldots$$
where the vectors $v_M$ and $v_K$ were redefined as
$$v_M = M^{-1} g \quad \text{(D.44)} \qquad \text{and} \qquad v_K = \bar{K} v_M + \Delta \bar{K} v_M . \quad \text{(D.45)}$$
The corresponding sensitivity expression is
$$\alpha^1_i = \begin{cases} -\left[ \dfrac{\bar{C}}{C_T} C_i + \dfrac{C_T \|g\|_0^{\,4}}{2 C_T \|g\|_1^{\,2} - \langle z , g \rangle_0^{\,2}} \right] , & \text{if } x_i = 0 , \\[2mm] -\left[ \dfrac{\bar{C}}{C_T} C_i - \dfrac{C_T \|g\|_0^{\,4}}{2 C_T \|g\|_1^{\,2} - \langle z , g \rangle_0^{\,2}} \right] , & \text{if } x_i = 1 . \end{cases}$$
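To make the role of the two CGM steps concrete, here is a minimal sketch (my own illustration, not code from the paper): it runs a standard preconditioned conjugate gradient on a perturbed system $\tilde{K} u = f$, warm-started at the known solution $\bar{u}$ of $\bar{K} u = f$ with a Jacobi preconditioner, so that the first residual equals $b = -\Delta\bar{K}\bar{u}$ as in the second initial condition above. It reports how the compliance estimate $\tfrac{1}{2} u_k^T f$ approaches the exact perturbed compliance after one and two steps:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40

A = rng.standard_normal((n, n))
Kbar = A @ A.T + n * np.eye(n)          # baseline stiffness (SPD stand-in)
idx = rng.choice(n, size=4, replace=False)
Ki = np.zeros((n, n))
Ki[np.ix_(idx, idx)] = 5.0 * np.eye(4)  # element stiffness on a few dofs
Ktil = Kbar + Ki                        # x_i = 0 case: Delta K_bar = K_i
f = rng.standard_normal(n)

u_bar = np.linalg.solve(Kbar, f)        # known pre-variation displacements
u_exact = np.linalg.solve(Ktil, f)
M_inv = 1.0 / np.diag(Ktil)             # Jacobi preconditioner

u = u_bar.copy()
r = f - Ktil @ u                        # equals b = -Ki @ u_bar here
z = M_inv * r
d = z.copy()
print(f"exact compliance: {0.5 * u_exact @ f:.6f}")
for step in (1, 2):
    Kd = Ktil @ d
    alpha = (r @ z) / (d @ Kd)
    u = u + alpha * d
    r_new = r - alpha * Kd
    z_new = M_inv * r_new
    beta = (r_new @ z_new) / (r @ z)
    d = z_new + beta * d
    r, z = r_new, z_new
    print(f"compliance estimate after step {step}: {0.5 * u @ f:.6f}")
```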
In Fig. 1, this is illustrated for a particular case, with N = 4. The 16 possible entries are presented as topologies in a 2 × 2 mesh; solid elements are represented in black and void elements are represented in gray. Each entry is paired with a corresponding output value, so the set {h_1, h_2, . . . , h_16} completely defines the discrete scalar function h(x).

Table 1: Results for different MBB optimizations with V_f(x^(0)) = 100% and V*_f = 50%.

Sensitivity Analysis   VV^(j)       TV^(j)_max        Number of Iterations   Compliance [mJ]
FOCI                   -300 | 0     1500 | 1200       131 (52)               45.74 (45.12)
FOCI                   -300 | 0     30000 | 30000     147 (64)               45.71 (45.10)
FOCI                   -3000 | 0    4200 | 1200       189 (100)              45.62 (45.54)
FOCI                   -3000 | 0    30000 | 30000     285                    45.08
FOCI-s                 -300 | 0     1500 | 1200       131 (52)               45.74 (45.12)
FOCI-s                 -300 | 0     30000 | 30000     131 (52)               45.74 (45.11)
FOCI-s                 -3000 | 0    4200 | 1200       186                    45.59
FOCI-s                 -3000 | 0    30000 | 30000     294                    45.08
CGM-1                  -300 | 0     1500 | 1200       --                     degenerated
CGM-1                  -300 | 0     30000 | 30000     --                     degenerated
CGM-1                  -3000 | 0    4200 | 1200       --                     degenerated
CGM-1                  -3000 | 0    30000 | 30000     --                     degenerated
CGM-1s                 -300 | 0     1500 | 1200       152 (53)               45.65 (45.29)
CGM-1s                 -300 | 0     30000 | 30000     164 (53)               45.61 (45.29)
CGM-1s                 -3000 | 0    4200 | 1200       243                    44.94
CGM-1s                 -3000 | 0    30000 | 30000     291                    44.92
CGM-2J                 -300 | 0     1500 | 1200       163                    44.78
CGM-2J                 -300 | 0     30000 | 30000     142                    44.78
CGM-2J                 -3000 | 0    4200 | 1200       109                    45.32
CGM-2J                 -3000 | 0    30000 | 30000     --                     degenerated
CGM-2Js                -300 | 0     1500 | 1200       122                    44.87
CGM-2Js                -300 | 0     30000 | 30000     131                    44.87
CGM-2Js                -3000 | 0    4200 | 1200       168                    45.76
CGM-2Js                -3000 | 0    30000 | 30000     284                    44.88

Table 2: Results for different MBB optimizations with a fixed V_f(x^(j)) = 50%.

Sensitivity Analysis   VV^(j)   TV^(j)_max   Number of Iterations   Compliance [mJ]
FOCI                   0        1200         89                     45.74
FOCI                   0        30000        120                    45.65
FOCI-s                 0        1200         92                     45.74
FOCI-s                 0        30000        114                    45.66
CGM-1                  0        1200         --                     degenerated
CGM-1                  0        30000        --                     degenerated
CGM-1s                 0        1200         112                    45.60
CGM-1s                 0        30000        179                    45.36
CGM-2J                 0        1200         86                     45.84
CGM-2J                 0        30000        67                     45.85
CGM-2Js                0        1200         90                     45.66
CGM-2Js                0        30000        161                    45.34

Table B.1: Values of ‖Ā_i‖_2 and ‖B̄_i‖_2 for void elements.

              ‖Ā_3‖_2   ‖Ā_7‖_2   ‖Ā_2‖_2   ‖Ā_6‖_2   ‖B̄_3‖_2   ‖B̄_7‖_2   ‖B̄_2‖_2   ‖B̄_6‖_2
Topology 1    0.900     --        --        --        0.474     --        --        --
Topology 2    1.794     2.007     --        --        0.642     0.667     --        --
Topology 3    2.046     2.598     1.663     --        0.672     0.722     0.624     --
Topology 4    3.427     3.632     3.427     3.632     0.774     0.784     0.774     0.784
[]
[ "MULTIPLIER SPECTRA AND THE MODULI SPACE OF DEGREE 3 MORPHISMS ON P 1", "MULTIPLIER SPECTRA AND THE MODULI SPACE OF DEGREE 3 MORPHISMS ON P 1" ]
[ "Benjamin Hutz ", "Michael Tepper " ]
[]
[]
The moduli space of degree d morphisms on P^1 has received much study. McMullen showed that, except for certain families of Lattès maps, there is a finite-to-one correspondence (over C) between classes of morphisms in the moduli space and the multipliers of the periodic points. For degree 2 morphisms Milnor (over C) and Silverman (over Z) showed that the correspondence is an isomorphism [7, 8]. In this article we address two cases: polynomial maps of any degree and rational maps of degree 3. Definition 1. Let φ ∈ Hom_d and Per_n(φ) = {P ∈ P^1 : φ^n(P) = P} be the set of periodic points of period n for φ. For P ∈ Per_n(φ), φ^n induces a map from the cotangent space of P^1 at P to itself. The cotangent space has dimension 1 in this case, so the induced map is an element of GL_1 (a scalar); we call it the multiplier at P and denote it λ_P(φ). Define the n-multiplier spectrum Λ_n = {λ_P(φ) : P ∈ Per_n(φ)}, where the multipliers are taken with appropriate multiplicity. Define σ_{n,i} for 1 ≤ i ≤ d^n + 1 as the i-th symmetric function on the n-multiplier spectrum. We denote the (d^n + 1)-tuple σ_n = (σ_{n,1}, . . . , σ_{n,d^n+1}). It is an easy chain rule exercise to show that, as an unordered set, Λ_n is invariant under conjugation. The multipliers depend algebraically, but not rationally, on the coefficients of φ; however, the σ_{n,i} are actually rational functions in the coefficients of φ [7, 8]. Furthermore, the σ_{n,i} are in fact regular functions on M_d [8]. Definition 2. Define the map τ_{d,n} : M_d → A^k, [φ] ↦ (σ_1, σ_2, . . . , σ_n).
null
[ "https://arxiv.org/pdf/1110.5082v2.pdf" ]
117,006,413
1110.5082
a20bb73bdfe9f345155b895cbf68ab97b83be30f
MULTIPLIER SPECTRA AND THE MODULI SPACE OF DEGREE 3 MORPHISMS ON P^1

23 Oct 2011

Benjamin Hutz, Michael Tepper

The moduli space of degree d morphisms on P^1 has received much study. McMullen showed that, except for certain families of Lattès maps, there is a finite-to-one correspondence (over C) between classes of morphisms in the moduli space and the multipliers of the periodic points. For degree 2 morphisms Milnor (over C) and Silverman (over Z) showed that the correspondence is an isomorphism [7, 8]. In this article we address two cases: polynomial maps of any degree and rational maps of degree 3.

Definition 1. Let φ ∈ Hom_d and Per_n(φ) = {P ∈ P^1 : φ^n(P) = P} be the set of periodic points of period n for φ. For P ∈ Per_n(φ), φ^n induces a map from the cotangent space of P^1 at P to itself. The cotangent space has dimension 1 in this case, so the induced map is an element of GL_1 (a scalar); we call it the multiplier at P and denote it λ_P(φ). Define the n-multiplier spectrum Λ_n = {λ_P(φ) : P ∈ Per_n(φ)}, where the multipliers are taken with appropriate multiplicity. Define σ_{n,i} for 1 ≤ i ≤ d^n + 1 as the i-th symmetric function on the n-multiplier spectrum. We denote the (d^n + 1)-tuple σ_n = (σ_{n,1}, . . . , σ_{n,d^n+1}). It is an easy chain rule exercise to show that, as an unordered set, Λ_n is invariant under conjugation. The multipliers depend algebraically, but not rationally, on the coefficients of φ; however, the σ_{n,i} are actually rational functions in the coefficients of φ [7, 8]. Furthermore, the σ_{n,i} are in fact regular functions on M_d [8].

Definition 2. Define the map τ_{d,n} : M_d → A^k, [φ] ↦ (σ_1, σ_2, . . . , σ_n).

1. Introduction

Let Hom_d be the space of degree d endomorphisms of P^1. Let φ ∈ Hom_d; then after choosing coordinates for P^1 we may represent the coordinates of φ as two degree d homogeneous polynomials with no common zeros. We may consider Hom_d ⊂ P^{2d+1} by identifying a morphism φ with its set of coefficients. There is a natural action of PGL_2 by conjugation on Hom_d, which extends to P^{2d+1}, and we get a moduli space M_d = Hom_d / PGL_2 [4, 8]. The moduli space M_d, and its generalization to P^N, has received much study, see for instance [2, 4, 5, 6, 7, 8]. Milnor [7] gave an isomorphism M_2 ≅ A^2 over C, which Silverman [8] extended to Z, in addition to showing that M_d is an affine integral scheme over Z. However, for d > 2 less is known about the structure of M_d. McMullen [6] showed that there is a finite-to-one correspondence (over C) between classes of morphisms in the moduli space and certain conjugation invariants called multipliers. It is this correspondence of McMullen that we study in this article. We next supply the necessary definitions to state the correspondence precisely. We define the degree of τ_{d,n} as the number of points in τ_{d,n}^{-1}(P) for a generic point P in τ_{d,n}(M_d). From [9, Theorem 4.54] the degree of τ_{d,n} stabilizes as n → ∞, and we write deg(τ_d) for this value. Specifically, McMullen [6, Corollary 2.3] showed that the conjugacy class of φ ∈ Hom_d is determined up to finitely many choices by its multiplier spectra if φ is not a flexible Lattès map. This is typically stated as the following theorem.

Theorem 3 ([6, Corollary 2.3]). Fix d ≥ 2. For n sufficiently large, τ_{d,n} is finite-to-one on M_d(C) except for certain families of Lattès maps.

Two maps which have the same set of multipliers are called isospectral. All flexible Lattès maps (integer multiplication on elliptic curves) are isospectral since they all have the same multiplier spectra. In other words, the regular maps σ_{n,i} are constant on the family of flexible Lattès maps.
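As a quick illustration of Definitions 1 and 2 (my own sketch, not from the paper), the following SymPy computation finds Λ_1 and σ_1 for the quadratic family φ_c(z) = z² + c; together with the multiplier 0 of the fixed point at infinity, it reproduces the values σ_1 = (2, 4c, 0) quoted later in the paper for this family:

```python
import sympy as sp
from itertools import combinations

z, c = sp.symbols('z c')
phi = z**2 + c

# Affine fixed points: roots of phi(z) - z; infinity is also fixed, with multiplier 0.
fixed = sp.solve(sp.Eq(phi, z), z)
multipliers = [sp.diff(phi, z).subs(z, p) for p in fixed]
multipliers.append(sp.Integer(0))      # multiplier at the fixed point at infinity

# sigma_{1,i}: i-th elementary symmetric function of the 1-multiplier spectrum.
sigma = [sp.simplify(sum(sp.Mul(*comb) for comb in combinations(multipliers, i)))
         for i in range(1, len(multipliers) + 1)]
print(sigma)  # expected: [2, 4*c, 0]
```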
Additionally, one can use rigid Lattès maps to show that deg(τ_d) ≥ C_ε d^{1/2−ε} for some constant C_ε. In particular, for squarefree d, deg(τ_d) ≥ h, where h is the class number of Q(√−d) [9, §6.6]. In this article we address two cases on P^1: polynomial maps of any degree and rational maps of degree 3.

2. Results

In Section 3 we examine the locus of polynomial maps in M_d. A polynomial map is a map with a totally ramified fixed point. We denote by τ̄_{d,n} the restriction of τ_{d,n} to the space of polynomial maps of degree d. We show

Theorem 4. Except for possibly finitely many fibers, the map τ̄_{d,1} is finite-to-one.

For d = 2, 3, 4, and 5 we compute that there are no exceptional fibers; for d = 2 and 3 the correspondence is 1-to-1, for d = 4 the correspondence is 2-to-1, and for d = 5 the correspondence is 6-to-1. Furthermore, we show that τ̄_{d,n} is one-to-one for d = 2, 3, 4, and 5 when considering the 2-multiplier spectra. That is, deg(τ̄_d) = 1 for d = 2, 3, 4, and 5.

Theorem 5. The maps τ̄_{4,2} and τ̄_{5,2} are one-to-one.

In Section 4 we consider rational maps and examine M_3. Our main result is computing the degree of the correspondence τ_{3,2} (the number of isospectral maps up to conjugation equivalence) using Groebner bases.

Theorem 6. The value deg(τ_{3,2}) = 12.

While this value represents an upper bound on deg(τ_3), it is not necessarily equal to deg(τ_3). See Example 14 for an example where additional multiplier information causes the degree to decrease. While our methods theoretically would allow us to compute deg(τ_3), the computations with additional multiplier information did not finish. An interesting open question is to determine what information uniquely specifies a class in M_d. Towards this end, in Section 5 we restrict to the case of a generic map, where generic means the open dense subset of M_d of maps with distinct fixed points.

Theorem 7. If φ has distinct fixed points, then the following correspondence is one-to-one:
τ⁺_{d,1} : M_d → A^{2d+1}, [φ] ↦ (σ_1, Per_1(φ)).

3. Polynomials

We first examine polynomial maps, i.e., maps φ with a totally ramified fixed point.

Definition 8. We denote by P_d ⊂ M_d the moduli space of degree d polynomial maps. Define the restriction of τ_{d,n} to the space of polynomial maps as τ̄_{d,n} : P_d → A^k.

Any given polynomial map may be conjugated to the form
(1)  z^d + a_2 z^{d−2} + · · · + a_d.
Note however that this is not a normal form, since two such polynomials may be conjugate. We are interested in the correspondence coming from McMullen's theorem,
τ̄_{d,n} : P_d → A^k, [φ] ↦ (σ_1, . . . , σ_n).

3.1. Multipliers of fixed points. We first extend the relation σ_{1,1} = σ_{1,3} + 2 in M_2 to M_d.

Theorem 9. Let [φ] ∈ M_d. The symmetric functions σ_1 satisfy
(−1)^{d+2} σ_{1,d+1} + (−1)^{d−1} σ_{1,d−1} + (−1)^{d−2} 2σ_{1,d−2} + · · · − (d − 1)σ_{1,1} + d = 0.

Proof. Label the multipliers of the fixed points as {λ_0, . . . , λ_d}. Assume first that the fixed points are distinct. Distinct fixed points implies the multipliers are all different from 1, and we may apply the relation [9, Theorem 1.14]
$$\sum_{i=0}^{d} \frac{1}{1 - \lambda_i} = 1 .$$
Clearing denominators we have
$$\sum_{j=0}^{d} \prod_{i \neq j} (1 - \lambda_i) = \prod_{i=0}^{d} (1 - \lambda_i) .$$
Expanding the right-hand side we have
$$\prod_{i=0}^{d} (1 - \lambda_i) = 1 - \sigma_{1,1} + \sigma_{1,2} + \cdots + (-1)^d \sigma_{1,d} + (-1)^{d+1} \sigma_{1,d+1} .$$
Now we expand the left-hand side.
The term of degree n ≥ 0 in the λ_i's is (−1)^n (d + 1 − n) σ_{1,n}, since each monomial of σ_{1,n} appears in exactly d + 1 − n of the products in the sum (namely, for the indices j not occurring in it); for notational convenience we define σ_{1,0} = 1. We rewrite this as
$$\sum_{i=0}^{d} (-1)^i (d + 1 - i) \sigma_{1,i} = \sum_{i=0}^{d+1} (-1)^i \sigma_{1,i} .$$
Combining with the right-hand side, we have the desired result when φ has distinct fixed points. The set of φ with 1 ∈ Λ_1 is a Zariski closed set, so the φ with distinct fixed points are dense in Hom_d. Thus, the function
$$\sum_{i=0}^{d} (-1)^i (d + 1 - i) \sigma_{1,i} - \sum_{i=0}^{d+1} (-1)^i \sigma_{1,i}$$
is identically zero.

Corollary 10. For [φ] ∈ P_d we have
(−1)^{d−1} σ_{1,d−1} + (−1)^{d−2} 2σ_{1,d−2} + · · · − (d − 1)σ_{1,1} + d = 0.

Proof. The fixed point at ∞ has λ = 0 and hence σ_{1,d+1} = ∏_i λ_i = 0.

We wish to show that specifying σ_1 ∈ τ̄_{d,1}(A^{d+1}) determines a polynomial map in P_d ⊂ M_d up to finitely many choices.

Theorem 11. Let φ ∈ P_d with affine fixed points {z_1, . . . , z_d}. Each λ_i ∈ Λ_1 such that λ_i ≠ 1 determines a homogeneous equation of degree d − 1 of the form F_i(z_1, . . . , z_d) = λ_i − 1.

Proof. For a polynomial map φ(z) we may write
$$\varphi(z) - z = \prod_{i=1}^{d} (z - z_i) .$$
If λ_i = 1, then z_i is a multiple root of the above equation, and upon taking derivatives we obtain only a tautology. For λ_i ≠ 1 we compute
$$\varphi'(z_i) - 1 = \lambda_i - 1 = \prod_{j=1, j \neq i}^{d} (z_i - z_j) .$$
Thus, we get F_i(z_1, . . . , z_d) = λ_i − 1 for a homogeneous equation F_i(z_1, . . . , z_d) of degree d − 1.

Theorem 12. Except for possibly finitely many fibers, the map
τ̄_{d,1} : P_d → A^{d+1}, [φ] ↦ σ_1
is finite-to-one.

Proof. We take as our starting point the system of homogeneous equations of degree d − 1,
F_i(z_1, . . . , z_d) = λ_i − 1,
from Theorem 11. Label the hypersurfaces as H_i = V(F_i − λ_i + 1). We proceed one λ_i at a time. Except for finitely many λ_2, the hypersurfaces H_1 and H_2 intersect properly (codimension 2), since H_1 has finitely many components. Similarly, except for finitely many λ_3, the varieties (H_1 ∩ H_2) and H_3 intersect properly (codimension 3). We proceed similarly for the remaining H_i, avoiding at most finitely many λ_i. Thus, we have a codimension d set in a dimension d space, so there are only finitely many solutions. The solutions are the possible sets of fixed points of the polynomial map φ, and the set of fixed points determines a unique polynomial map.
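A quick sanity check of the relation in Theorem 11 (my own sketch, with arbitrarily chosen fixed points, not from the paper): for a monic polynomial map built from prescribed distinct fixed points, the multiplier at each fixed point satisfies λ_i − 1 = ∏_{j≠i}(z_i − z_j).

```python
import sympy as sp

z = sp.symbols('z')
zs = [0, 1, 3, -2]                    # chosen distinct affine fixed points (d = 4)

# Monic polynomial map with these fixed points: phi(z) - z = prod (z - z_i).
phi = z + sp.Mul(*[z - zi for zi in zs])
dphi = sp.diff(sp.expand(phi), z)

for i, zi in enumerate(zs):
    lam = dphi.subs(z, zi)            # multiplier at the fixed point z_i
    rhs = sp.Mul(*[sp.Integer(zi - zj) for j, zj in enumerate(zs) if j != i])
    assert sp.simplify(lam - 1 - rhs) == 0
    print(f"z_{i} = {zi}: lambda_i - 1 = {rhs}")
```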
We compute the degree of the correspondence τ̄_{d,1} for d = 2, 3, 4, and 5 using Groebner bases for the system of equations from Theorem 11, for a polynomial with distinct fixed points.

Theorem 13. The value deg(τ̄_{d,1}) is 1 for d = 2, 1 for d = 3, 2 for d = 4, and 6 for d = 5.

Proof. The case d = 2 was done by Milnor [7]. The cases d = 3, 4, and 5 are Groebner basis calculations and were performed in Singular [3]. For d = 3 and 4 we are able to compute the Groebner basis of the system of equations from Theorem 11 with the λ_i as indeterminates. This produces (d−1)! configurations of fixed points. However, these are not all distinct up to conjugation, because if {z_i} is a set of fixed points, then so is {ζ_{d−1} z_i}, where ζ_{d−1} is a primitive (d − 1)st root of unity. This is readily apparent from the form of the Groebner basis when using the lexicographic ordering for elimination. Thus, for d = 3 we have (3−1)!/2 = 1 and for d = 4 we have (4−1)!/3 = 2 distinct conjugacy classes in the moduli space. For d = 5, the Groebner basis calculation with the λ_i as indeterminates did not finish, so we employed an alternative method. The method is to pick a specialization of the λ_i and compute the degree, and then show that this is in fact a generic value by showing that it is constant under perturbation of the λ_i. Singular was able to compute the Groebner basis fixing any three of the four λ_i for Λ_1 = {−2, −3, −4, 8, 0}. In all four cases, there were 24 solutions of fixed point arrangements, some of which differ by a 4th root of unity. Thus, there are 24/4 = 6 distinct conjugacy classes in the moduli space.

Remark. For d = 6 and fixed Λ_1, the computation would finish modulo primes, but not in any general situation. It appears that there are at least 1900 solutions of fixed point arrangements for d = 6, so we expect a large jump in degree from d = 5 to d = 6.

Theorem 13 says that Λ_1 specifies a polynomial up to finitely many choices. The next example shows that Λ_2 can further distinguish between polynomials.

Example 14. Consider the map τ̄_{4,1}(P_4) → A^5 and the fiber τ̄_{4,1}^{-1}(−1724, −1163982, 74470803, 4530821869, 0). In other words, Λ_1 = {−2243, −59, 0, 67, 511}. There are two polynomials (up to conjugation) in this inverse image:
f(z) = z^4 − 77z^2 + 217z − 140,
g(z) = z^4 − 721/8 z^2 + 217z + 165025/256.
However, τ̄_{4,2}(f) ≠ τ̄_{4,2}(g).

Theorem 15. deg(τ̄_4) = deg(τ̄_{4,2}) = 1.

Proof. We use a Groebner basis calculation in Magma [1]. To the fixed point equations, we add equations for a single 2-periodic point, φ(φ(β)) = β and (φ^2)′(β) = λ_β. From just the fixed point equations there were 6 distinct choices of the set of fixed points (only 2 up to conjugation). For each set of fixed points there are 16 possible 2-periodic points, and, hence, there are at least 96 points on this variety. Specializing to Λ_1 = {−5, 5, 4, 7/5, 0} and using Magma, we have a zero-dimensional scheme in the coordinates (β, λ_β) which is reduced with 96 distinct points. Working modulo 13 (where the scheme is still reduced) we determine the 96 points over F_{13^60} and that there are in fact 2 distinct possibilities for Λ_2. Thus, there is one polynomial map associated to {Λ_1, Λ_2}. Since these points are all multiplicity one, there will remain 2 distinct maps under perturbation of Λ_1.

Theorem 16. deg(τ̄_5) = deg(τ̄_{5,2}) = 1.

Proof. We again use a Groebner basis calculation in Magma [1]. To the fixed point equations, we add equations for a single 2-periodic point, φ(φ(β)) = β and (φ^2)′(β) = λ_β. From just the fixed point equations there were 24 distinct choices of the set of fixed points (6 up to conjugation). For each set of fixed points there are 25 possible 2-periodic points, and, hence, there are at least 600 points on this variety. Specializing to Λ_1 = {−5, 5, −4, −2, 29/9, 0} and using Magma, we have a zero-dimensional scheme in the coordinates (β, λ_β). Working modulo 29 the scheme is reduced with 600 distinct points over F_{29^240}. There are in fact 6 distinct possibilities for Λ_2. Thus, there is one polynomial map associated to {Λ_1, Λ_2}. Since these points are all multiplicity one, there will remain 6 distinct maps under perturbation of Λ_1.

Remark. It would be interesting to determine if τ̄_{d,2} (or τ̄_{d,n}) is generically one-to-one on polynomials or if the degree will eventually grow.

3.2. Explicit P_3. Milnor [7] gave an explicit normal form for classes in M_2 in terms of Λ_1. We give a similar description for P_3. In particular, we give an explicit description of the fiber τ̄_{3,1}^{-1}(σ_1) ⊂ P_3.

Theorem 17. Let [φ] ∈ τ̄_{d,1}^{-1}(σ_1) and write [φ] in the form φ(z) = z^3 + az + b.
If φ has 3 distinct affine fixed points with multipliers λ_1, λ_2, and λ_3, then
a = −(λ_1^2 + (λ_2 − 6)λ_1 + (λ_2^2 − 6λ_2 + 9)) / (3λ_1 + (3λ_2 − 6)),
27b^2 = σ_{1,3} − σ_{1,2} a + σ_{1,1} a^2 − a^3.
If φ has 2 distinct affine fixed points, with λ the multiplier that is not 1, then
a = 1 − (λ − 1)/3,   b = ±2 ((λ − 1)/9)^{3/2}.
If φ has a single affine fixed point, then a = b = 0.

Proof. Specifying σ_1 in fact specifies the 1-multiplier spectrum Λ_1 = {λ_1, λ_2, λ_3, 0}.

Case 1 (3 distinct affine fixed points). Then we know for the three affine fixed points that
z_i = ± ((λ_i − a)/3)^{1/2}.
Additionally, we have
z^3 + (a − 1)z + b = ∏_{i=1}^{3} (z − z_i),
and hence
b = −z_1 z_2 z_3,   a − 1 = z_1 z_2 + z_1 z_3 + z_2 z_3,   0 = z_1 + z_2 + z_3 = ∑_{i=1}^{3} ± ((λ_i − a)/3)^{1/2}.
Solving, we arrive at
a = −(λ_1^2 + (λ_2 − 6)λ_1 + (λ_2^2 − 6λ_2 + 9)) / (3λ_1 + (3λ_2 − 6)),
27b^2 = σ_3 − σ_2 a + σ_1 a^2 − a^3.

Case 2 (2 distinct affine fixed points). Let λ_3 = λ be the multiplier that is not equal to 1. Since z_1 + z_2 + z_3 = 0 and z_1 = z_2, we know that z_3 = −2z_1 and that
φ(z) − z = z^3 + (a − 1)z + b = (z − z_1)^2 (z + 2z_1).
Thus, we have
a = 1 − 3z_1^2,   b = 2z_1^3,   λ = 3z_3^2 + a = 12z_1^2 + a = 9z_1^2 + 1.
Thus, we solve z_1 = ± ((λ − 1)/9)^{1/2} and
a = 1 − (λ − 1)/3,   b = ±2 ((λ − 1)/9)^{3/2}.

Case 3 (1 distinct affine fixed point). If λ_i = 1 for i = 1, 2, 3, then we must have z_1 = z_2 = z_3 for the three affine fixed points, since they must all have multiplicity at least 2. Since also z_1 + z_2 + z_3 = 0, we must have z_i = 0 and, hence, φ(z) = z^3.

Similar to the computation that φ_c(z) = z^2 + c is the family σ_1 = (2, 4c, 0) in M_2, we can compute the image of degree 3 polynomials.

Corollary 18. We may write [φ] ∈ P_3 as φ_{a,b}(z) = z^3 + az + b, up to the sign of b. In particular, [φ_{a,b}] = [φ_{a,−b}].

Theorem 19.
The image of φ_{a,b} under τ̄_{3,1} is given by
τ̄_{3,1}(φ_{a,b}) = σ_1 = (6 − 3a, 9 − 6a, 9a − 12a^2 + 4a^3 + 27b^2, 0).

Proof. Direct computation (performed with Mathematica [11]).
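The image formula above is easy to confirm symbolically; the following sketch (my own check, not the paper's Mathematica computation) recomputes σ_1 for φ_{a,b}(z) = z³ + az + b via a resultant, which packages the multipliers of the affine fixed points without solving the cubic, and then includes the multiplier 0 at infinity:

```python
import sympy as sp

z, t, a, b = sp.symbols('z t a b')
phi = z**3 + a*z + b
p = sp.expand(phi - z)            # vanishes at the affine fixed points
lam = sp.diff(phi, z)             # multiplier function 3z^2 + a

# prod_i (t - lambda_i) over affine fixed points, via a resultant in z
# (p is monic, so Res_z(p, t - lam) = prod over roots z_i of (t - lam(z_i))):
char = sp.expand(sp.resultant(p, t - lam, z))
char_full = sp.expand(t * char)   # the fixed point at infinity contributes t - 0

# sigma_{1,i} are, up to sign, the coefficients of this degree-4 polynomial in t.
coeffs = sp.Poly(char_full, t).all_coeffs()    # [1, -s1, s2, -s3, s4]
sigma = [sp.expand((-1)**i * coeffs[i]) for i in range(1, 5)]
print(sigma)  # expected: [6 - 3*a, 9 - 6*a, 4*a**3 - 12*a**2 + 9*a + 27*b**2, 0]
```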
4. Rational Maps

Theorem 20. The value deg(τ_{3,2}) = 12.

Proof. The proof proceeds in two steps: first we determine a zero-dimensional variety whose points give the coefficients of the map; then we determine the number of points on this variety. Recall that deg(τ_3) = #(τ_3^{-1}(P)) for a generic point P. Generically, there are 4 distinct fixed points (λ_i = 1 is a closed condition) and at least one 2-periodic point which is not also a fixed point (λ_i = −1). We dehomogenize and write φ as a rational map denoted φ(z). We label the coefficients of the general such map as
φ(z) = (a_1 z^3 + a_2 z^2 + a_3 z + a_4) / (b_1 z^3 + b_2 z^2 + b_3 z + b_4).
Under a PGL_2 transformation we may move two of the fixed points to 0 and ∞ to have
φ(z) = (a_1 z^3 + a_2 z^2 + a_3 z) / (b_2 z^2 + b_3 z + b_4).
Computing the multipliers we also have
λ_0 b_4 = a_3,   λ_∞ a_1 = b_2.
So we are left to determine the coefficients {a_1, a_2, b_3, b_4}. A PGL_2 transformation allows us to move a third fixed point to 1. Then we have
a_2 = (b_2 + b_3 + b_4) − a_1 − a_3.
Looking at the multiplier at z = 1 we can solve (since λ_1 ≠ 1) for
b_3 = ((1 − λ_1 λ_∞) a_1 + (2 − λ_0 − λ_1) b_4) / (λ_1 − 1).
Letting α be the fourth (and last) fixed point, we can solve for
b_4 = α a_1 (λ_∞ − 1) / (1 − λ_0).
This has provided all of the coefficients of φ(z) in terms of {λ_0, λ_1, λ_∞, α} except for a_1. To account for a_1 we may either consider φ(z) as a map on P^1 and set a_1 = 1 or, equivalently, notice that a_1 is a factor of each of the coefficients and cancels in φ(z). Also, note that λ_α is uniquely determined by the relation [9, Theorem 1.14]
1/(1 − λ_1) + 1/(1 − λ_0) + 1/(1 − λ_∞) + 1/(1 − λ_α) = 1.
Let β ∉ {0, 1, ∞, α} be an exact 2-periodic point. We have new equations
(2)  φ^2(β) = β,   λ_β = (φ^2)′(β) = φ′(β) φ′(φ(β)).
Given λ_β, the system (2) has 2 variables {α, β} and 2 equations. Except for possibly finitely many choices of λ_β, this system defines a 0-dimensional variety in coordinates (α, β). Thus, for a generic point P, the fiber τ_{3,2}^{-1}(P) is finite-to-one. We now determine deg(τ_{3,2}). Computations were done in Magma [1]. A generic Groebner basis computation does not finish. Choosing a particular specialization (values of the multipliers) and computing the degree of the reduced subscheme, we find that it has 18 distinct points (in P^2). We will show there remain 18 distinct points under perturbation of the multipliers. We find 6 points where the map is not generic: α equals 0 or 1, so a fixed point has multiplicity greater than one. In coordinates (α, β, z), where z is the homogenizing variable, the 6 points are
P_1 = (1, 0, 0),   P_2 = (0, 0, 1),   P_3 = (1, 1, 1),
P_4 = (0, (−λ_1 − λ_∞ + 2)/(1 − λ_1), 1),
P_5 = (1, (λ_∞ − 1)/(1 − λ_0), 1),
P_6 = (1, −(λ_0 λ_1 λ_∞ − λ_0 λ_1 − λ_∞ + 1)/(λ_0 λ_1 − λ_0 − λ_1 + 1), 0).
By computing the determinant of the Jacobian matrix for P_4, P_5, and P_6, we see that they have generic multiplicity at least 2. We next compute that P_1, P_2, and P_3 each have generic multiplicity 42. The method is to pick a specialization where this occurs, and then show that the multiplicities remain constant under perturbation of the multipliers. Magma is able to compute the multiplicities with 3 of the 4 multipliers fixed, demonstrating that the generic multiplicities are each 42. Since the total number of points of intersection (with multiplicities) by Bézout's theorem is 144, we see that the other twelve points must have multiplicity 1 (and P_4, P_5, and P_6 have multiplicity exactly 2). Thus, these twelve points also remain distinct under perturbation of the multipliers. The points P_1, . . . , P_6 do satisfy the necessary equations, but they correspond to non-generic maps, α = 0 or 1, causing the fixed points to not be distinct. It is easy to check that these six points are all of the possibilities for α, β ∈ {0, 1}. Similarly, if (α, β) and (α, β′) corresponded to the same map, then we would have λ_β = λ_{β′}, which is a non-generic situation. Thus, the remaining twelve points are inverse images under τ_{3,2} and the degree of τ_{3,2} is 12.

However, τ_{3,2} is not necessarily finite-to-one on fibers with higher multiplicity fixed points.

Example 21. Consider a map φ ∈ M_3 with two multiplicity 2 fixed points. Move them to 0 and ∞ to get
φ(z) = (a_1 z^3 + a_2 z^2 + a_3 z) / (b_2 z^2 + b_3 z + b_4).
Since they have multiplicity 2, we know λ_1 = λ_∞ = 1. Now, consider φ(φ(z)) − z. Solving φ(φ(z_1)) = z_1, we can produce the only exact 2-periodic point, z_1 = −a_1(a_2/3 + b_3/6), and move it to 1 with a PGL_2 transformation. Since it has multiplicity greater than 1, its multiplier is 1. So we have two equations in the 4 (up to scaling) remaining coefficients {a_1, a_2, a_3, b_3}, which is not enough to determine φ up to finitely many choices.

Remark. The set {Λ_1, Λ_2, Λ_3, Λ_4} should be enough for finite-to-one for all φ ∈ M_3, and is necessary in the case of a map with 1 fixed point, 1 exact 2-periodic point, and 1 exact 3-periodic point.

5. Multipliers and Fixed Points

McMullen's theorem tells us that there are only finitely many maps with a given set of multiplier spectra. An interesting question would be to determine what information is necessary to specify a map uniquely in M_d. In the case of distinct fixed points, the following theorem shows that specifying the fixed points is enough.

Theorem 22. If φ has distinct fixed points, then the following correspondence is 1-to-1:
τ⁺_{d,1} : Hom_d → A^{2d+1}, [φ] ↦ (σ_1, Per_1(φ)).

Proof. We write φ(z) = z − p(z)/q(z). The fixed points are the roots of the polynomial p(z), so it can be specified (up to constant). We have
φ′(z) = 1 − (p′(z) q(z) − p(z) q′(z)) / q(z)^2.
Evaluated at a fixed point, this is
λ_i = φ′(z_i) = 1 − p′(z_i)/q(z_i).
Thus we get the linear equation
(1 − λ_i) q(z_i) − p′(z_i) = 0.
Note that deg(q(z)) = d, so q has d + 1 coefficients to go along with the d + 1 fixed points. Finding the coefficients of q(z) when λ_i ≠ 1 is then a question of whether or not the matrix of coefficients of the linear system is invertible. This matrix is a Vandermonde with each row scaled by the nonzero constant (1 − λ_i). Thus, since the fixed points are distinct, the matrix is invertible and there is a unique solution. If there is a fixed point at infinity, then the argument is the same except that deg(p(z)) is one less.
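The proof of Theorem 22 is constructive, and the following sketch (my own illustration, with arbitrarily chosen fixed points and multipliers) carries it out for d = 3: it solves the row-scaled Vandermonde system (1 − λ_i) q(z_i) = p′(z_i) for the coefficients of q, then checks that φ = z − p/q has the prescribed multipliers at the prescribed fixed points:

```python
import numpy as np

d = 3
z_fix = np.array([0.0, 1.0, -1.0, 2.0])   # d + 1 distinct affine fixed points
lam = np.array([0.5, -2.0, 3.0, -0.5])    # target multipliers, all != 1

# p(z) = prod(z - z_i), so phi(z) = z - p(z)/q(z) fixes every z_i.
p = np.poly(z_fix)                        # coefficients, highest degree first
dp = np.polyder(p)

# Linear system for q of degree d: (1 - lam_i) * q(z_i) = p'(z_i).
# Row i is (1 - lam_i) * (z_i^d, ..., z_i, 1): a row-scaled Vandermonde matrix.
V = np.vander(z_fix, d + 1)
A = (1.0 - lam)[:, None] * V
q = np.linalg.solve(A, np.polyval(dp, z_fix))

# Check: since p(z_i) = 0, the multiplier of phi at z_i is 1 - p'(z_i)/q(z_i).
check = 1.0 - np.polyval(dp, z_fix) / np.polyval(q, z_fix)
assert np.allclose(check, lam)
print("recovered q coefficients:", q)
```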
5.1. Normal Form for Degree 3. We propose the following normal form for a rational map of degree 3 with distinct fixed points.

Theorem 23.
φ(z) = [ ((−λ_1 + 1)λ_0 + (λ_1 − 1)) z^3
  + (((−αλ_1 + 1)λ_0 + (α − 1))λ_∞ + (((α + 1)λ_1 − 2)λ_0 + (−λ_1 + (−α + 2)))) z^2
  + ((αλ_1 − α)λ_0 λ_∞ + (−αλ_1 + α)λ_0) z ]
/ [ ((−λ_1 + 1)λ_0 + (λ_1 − 1))λ_∞ z^2
  + (((λ_1 − α)λ_0 + ((−α − 1)λ_1 + 2α))λ_∞ + ((α − 1)λ_0 + (αλ_1 + (−2α + 1)))) z
  + ((αλ_1 − α)λ_∞ + (−αλ_1 + α)) ],
where the fixed points are {0, 1, ∞, α} with corresponding multipliers {λ_0, λ_1, λ_∞, λ_α} and
λ_α = 1 / (1/(1 − λ_0) + 1/(1 − λ_∞) + 1/(1 − λ_1) − 1) + 1.

Proof. The details are identical to the beginning of the proof of Theorem 20, so we merely state the outline. The explicit calculation was carried out in Pari [10]. We conjugate φ so that {0, 1, ∞} are fixed points with multipliers {λ_0, λ_1, λ_∞}. Labeling the last fixed point as α, we have the equation φ(α) = α, allowing us to determine the form of φ depending only on the choices of {λ_0, λ_1, λ_∞, α}.

E-mail address: [email protected]

References

[1] W. Bosma, J. Cannon, and C. Playoust. The Magma algebra system. I. The user language. J. Symb. Comp., 24(3-4):235-265, 1997.
[2] L. DeMarco. The moduli space of quadratic rational maps. J. Amer. Math. Soc., 20:321-355, 2007.
[3] G.-M. Greuel, G. Pfister, and H. Schönemann. Singular 2.0. A Computer Algebra System for Polynomial Computations, Centre for Computer Algebra, University of Kaiserslautern, 2001. http://www.singular.uni-kl.de.
[4] A. Levy. The space of morphisms on projective space. arXiv:0903.1318, 2009.
[5] M. Manes. Moduli spaces for families of rational maps on P1. J. Number Theory, 127(7):1623-1663, 2009.
[6] C. McMullen. Families of rational maps and iterative root-finding algorithms. Ann. of Math., 125(3):467-493, 1987.
[7] J. Milnor. Geometry and dynamics of quadratic rational maps. Experiment. Math., 2(1):37-83, 1993.
[8] J. H. Silverman. The space of rational maps on P1. Duke Math. J., 94:41-118, 1998.
[9] J. H. Silverman. The Arithmetic of Dynamical Systems, volume 241 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2007.
[10] The PARI Group, Bordeaux. PARI/gp, version 2.3.2, 2007. Available from http://pari.math.u-bordeaux.fr/.
[11] Wolfram Research, Inc. Mathematica Version 7.0. Wolfram Research, Inc., 2008.

Ph.D. Program in Mathematics, Graduate Center of CUNY, 365 Fifth Avenue, New York, NY 10016-4309
E-mail address: [email protected]
[]
[ "The non-Abelian T-dual of Klebanov-Witten Background and its Penrose Limits", "The non-Abelian T-dual of Klebanov-Witten Background and its Penrose Limits" ]
[ "Sourav Roychowdhury [email protected] \nSIPCOT IT Park\nChennai Mathematical Institute\n603 103SiruseriIndia\n", "Prasanta K Tripathy [email protected] \nDepartment of Physics\nIndian Institute of Technology Madras\n600 036ChennaiIndia\n" ]
[ "SIPCOT IT Park\nChennai Mathematical Institute\n603 103SiruseriIndia", "Department of Physics\nIndian Institute of Technology Madras\n600 036ChennaiIndia" ]
[]
In this paper we consider both Abelian as well as non-Abelian T-duals of the Klebanov-Witten background and inspect their various Penrose limits. We show that these backgrounds admit pp-wave solutions in the neighbourhood of appropriate null geodesics. We study the quantization of closed string propagating on some of the resulting pp-wave backgrounds and comment on the probable field theory duals.
10.1007/jhep11(2019)125
null
195791550
1907.01904
a00177008c90256dd715caba49e6663932e7f6de
The non-Abelian T-dual of Klebanov-Witten Background and its Penrose Limits arXiv:1907.01904v3 [hep-th] 18 Nov 2019 Sourav Roychowdhury [email protected] SIPCOT IT Park Chennai Mathematical Institute 603 103SiruseriIndia Prasanta K Tripathy [email protected] Department of Physics Indian Institute of Technology Madras 600 036ChennaiIndia The non-Abelian T-dual of Klebanov-Witten Background and its Penrose Limits arXiv:1907.01904v3 [hep-th] 18 Nov 2019 In this paper we consider both Abelian as well as non-Abelian T-duals of the Klebanov-Witten background and inspect their various Penrose limits. We show that these backgrounds admit pp-wave solutions in the neighbourhood of appropriate null geodesics. We study the quantization of closed string propagating on some of the resulting pp-wave backgrounds and comment on the probable field theory duals. Introduction Duality plays a significant role in understanding various aspects of string theory. T-duality is one such example which relates low energy effective actions of various string theories among each other [1,2]. One of the most familiar description of T-duality widely used in the literature concerns with U(1) isometry. In this case the duality is not merely relating the low energy theories among each other but manifests as a symmetry of the full string theory [3]. A non-trivial generalization of this duality exists for isometries admitting the structure of a non-Abelian group [4]. However, unlike their Abelian counterparts, these non-Abelian T-dualities are not extended to full string theory [5]. Instead, they are used to relate the low energy theories among each other. Several aspects of the non-Abelian T-duality have been investigated in recent years. An important development in this area was to generalize this formalism in the presence of RR fields [6]. This in turn was used as a solution generating technique in supergravity to obtain new backgrounds. To demonstrate the applicability of this formalism, a SU (2) sub-group of isometry in the near horizon limit of coincident D3 branes as well as D1 − D5 system has been used to generate a new background in massive type IIA supergravity [6]. An immediate generalization of this construction for non-Abelian T-duals in coset geometries has been carried out [7]. Non-Abelian T-duality for the Plich-Warner background has been carried out in [8] where the transformation rules for the background RR fields were derived from Fourier-Mukai transform. These techniques have been applied over and again to generate several new backgrounds, as well as to relate known backgrounds among each other [9]. Moreover, the role of non-Abelian T-duality in the context of AdS/CFT correspondence has been explored [10][11][12][13][14][15]. Interesting connections of these dual geometries with the Penrose limit [16] of some well known supergravity backgrounds has been established in this context [17][18][19]. For a large class of supergravity backgrounds the Penrose limit gives rise to pp-wave geometry. There has been immense study of the field theory duals for various pp-wave backgrounds during the last two decades [20]. It has been proven that the pp-wave solutions provide exact backgrounds to all orders in α ′ and g s in string theory [21,22]. Thus they have become instrumental in the context of AdS/CFT correspondence to construct interacting string states from the perturbative gauge theory [23]. More recently, the Penrose limits of non-Abelian T-dual for the orbifolds of AdS 5 × S 5 have been studied in detail [24]. 
Plane wave geometry has been obtained by considering the Penrose limit along appropriate null geodesic and quantization of string theory in the background of this plane wave geometry has been carried out and the corresponding field theory dual has been constructed. Blowing up the singularities of orbifolds of S 5 gives rise to smooth geometries. One such geometry which has played an interesting role in understanding the AdS/CFT duality is T 1,1 . This geometry arises as the near horizon limit of coincident D3 branes placed on a conifold singularity [25]. The field theory dual for AdS 5 × T 1,1 background was first constructed by Klebanov and Witten to obtain N = 1 SYM theory [26]. Penrose limit for this background and its field theory dual were also analysed in detail [27][28][29]. In the present work, we extend the aforementioned results about the non-Abelian T-duality, for the Klebanov-Witten background. We consider both Abelian as well as non-Abelian T-dual geometries and analyze Penrose limits for various null geodesics. We show that these backgrounds give rise to pp-wave geometries for suitably chosen geodesics. We discuss quantization of closed strings propagating in some of these pp-wave backgrounds and comment on the resulting field theory duals. In the following we will first summarise the important results discussing Penrose limits of the dual backgrounds obtained from AdS 5 × S 5 . We subsequently consider the generalisation of these results to the Klebanov-Witten background. Finally, we comment on the probable field theory duals for some of the resulting pp-wave backgrounds. 2 Dual Backgrounds From AdS 5 × S 5 We will first consider the Penrose limits from T-duals of AdS 5 × S 5 background. The background metric is given as ds 2 = 4L 2 − cosh 2 rdt 2 + dr 2 + sinh 2 rdΩ 2 3 + dα 2 + sin 2 αdβ 2 + L 2 cos 2 α dθ 2 + dφ 2 + dψ 2 + 2 cos θdφdψ . (2.1) Here L is the AdS radius and dΩ 3 is the round metric on S 3 . This background is supported by a self-dual five form field strength F 5 . The Penrose limit of this background has been considered in the seminal paper [23]. The field theory dual of the resulting pp-wave background corresponds to the BMN sector of N = 4 Super-Yang-Mills theory. The Abelian T-duality along the ψ direction [17], after appropriate coordinate redefinitions, gives rise to the metric ds 2 = 4L 2 ds 2 (AdS 5 ) + 4L 2 dΩ 2 2 (α, β) + L 2 dψ 2 cos 2 α + L 2 cos 2 α dΩ 2 2 (χ, ξ) , (2.2) along with a dilaton φ, B 2 and a three form field C 3 . Here ds 2 (AdS 5 ) is the metric on AdS 5 and dΩ 2 2 (θ, φ) = dθ 2 + sin 2 θdφ 2 is the metric on S 2 . In this background, for motion along the ξ direction the null geodesics are {α = 0, χ = π/2} and {α = π, χ = π/2}. Focusing in the vicinity of both these geodesics one gets a pp-wave geometry [24]. In addition, pp-waves are also obtained by considering the geodesic {α = 0, χ = π/2} for motion along ψ and ξ directions. The authors of [24] considered quantization of closed string propagating in this background. However their main focus of discussion was the pp-wave solutions originating from the non-Abelian T-duals. After T-dualizing along an SU(2) direction [17], the geometry becomes ds 2 = 4L 2 ds 2 (AdS 5 ) + 4L 2 dΩ 2 2 (α, β) + α ′2 dρ 2 L 2 cos 2 α + α ′2 L 2ρ2 cos 2 α α ′2ρ2 + L 4 cos 4 α dΩ 2 2 (χ, ξ) . (2.3) In this case, however, the motion along the ξ direction does not admit any pp-wave solution in the vicinity of any of the null geodesics. 
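For orientation, the "standard Brinkmann form" invoked around (2.4) (and for every pp-wave below) is the usual plane-wave normal form; the block below is a reminder of that standard convention, not a formula taken from this excerpt:

```latex
% pp-wave metric in Brinkmann coordinates: all background data sit in A_{ij}(u)
ds^2 \;=\; 2\,du\,dv \;+\; A_{ij}(u)\,x^i x^j\,du^2 \;+\; \delta_{ij}\,dx^i\,dx^j .
```

Equation (2.4) is of this type, with a diagonal, u-independent A_{ij} read off from the coefficient of du².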
To obtain pp-wave solution we need to focus on motion along ρ(≡ α ′ρ /L 2 ) and ξ direction [24]. In this case, the null geodesic is located at {χ = π/2, α = 0}. The Lagrangian for a massless particle moving on a null geodesic admits two cyclic coordinates giving rise to the conservation of energy and angular momentum. Expanding around the null geodesic and making appropriate coordinate redefinition, one can bring the resulting pp-wave metric to the standard Brinkmann form [24]: ds 2 = 2 du dv + dr 2 +r 2 dΩ 2 3 + dx 2 + x 2 dβ 2 + dz 2 + dw 2 − r 2 16 + x 2 16 (8J 2 − 1) + (ρ 2 + 1) 2 ρ 4 J 2 z 2 − F z z 2 − F w w 2 du 2 , (2.4) where F z = 4 J 2 4 ρ 2 + 1 + 3 4 J 2 − 1 ρ 4 4 ρ 4 ρ 2 + 1 2 , F w = − 3 4 ρ 2 + 1 2 . (2.5) The NS-NS three form H 3 and RR four form F 4 hold the following expressions upon taking the Penrose limit: H 3 = 1 2 ρ 2 + 3 ρ 2 + 1 du ∧ dz ∧ dw , F 4 = 2 J x ρ 2 + 1 g s du ∧ dx ∧ dz ∧ dβ . (2.6) The authors of [24] studied propagation of closed strings in this background. Solutions to the equation of motion are constructed. Further, they have proposed a field theory dual for this background. Abelian T-dual of Klebanov-Witten Background Our goal in the present work is to generalize these results for the background AdS 5 × T 1,1 . This geometry corresponds to the near horizon limit of parallel D3 branes at conical singularities, and provides one of the earliest examples of the AdS/CFT correspondence. The metric corresponding to the geometry has the form ds 2 = L 2 ds 2 AdS 5 + L 2 ds 2 T 1,1 , (3.1) ds 2 AdS 5 = − cosh 2 r dt 2 + dr 2 + sinh 2 r dΩ 2 3 , (3.2) ds 2 T 1,1 = λ 2 1 dΩ 2 2 θ 1 , φ 1 + λ 2 2 dΩ 2 2 θ 2 , φ 2 + λ 2 dψ + cos θ 1 dφ 1 + cos θ 2 dφ 2 2 . (3.3) Here L is the AdS 5 radius, and the parameters λ, λ 1 , λ 2 in the T 1,1 metric have the numerical values λ 2 1 = λ 2 2 = 1 6 , λ 2 = 1 9 . In addition, the supergravity background contains a constant dilation Φ, and a self-dual RR five form field strength F 5 = 4 g s L Vol(AdS 5 ) − L 5 Vol(T 1,1 ) . (3.4) We will study both Abelian as well as non-Abelian T-duals of this background. We will first focus on the Abelian T-duality. The brane constructions for the corresponding dual geometry was studied in detail [30,31]. They correspond to various intersecting branes in type IIA string theory. In the following we will consider the Penrose limits for the Abelian T-duality of this background about some of the U(1) isometry directions. The background has a manifest U(1) invariance along φ 1 , φ 2 and ψ directions. First focus on the azimuthal directions (φ 1 , φ 2 ). There is a symmetry under the exchange of (θ 1 , φ 1 ) with (θ 2 , φ 2 ). Thus, it would be sufficient to consider the duality along one of these two directions. Here we will consider the Abelian T-duality along φ 2 isometry. It is straightforward to obtain the dual geometry using the standard rules of T-duality [32]. The duality preserves all the supersymmetries of the Klebanov-Witten background.The field theory dual corresponding to this background has been analysed [33]. The metric corresponding to the dual background is given by L −2 dŝ 2 = ds 2 AdS 5 + λ 2 1 dΩ 2 2 θ 1 , φ 1 + dθ 2 2 + λ 2 sin 2 θ 2 P (θ 2 ) dψ + cos θ 1 dφ 1 2 + dφ 2 2 λ 2 1 P (θ 2 ) . (3.5) Here we have used the notation P (θ 2 ) = λ 2 cos 2 θ 2 + λ 2 2 sin 2 θ 2 . 
The dilaton and the NS-NS two form fields are given respectively by e −2Φ = L 2 g 2 s P (θ 2 ) , (3.6) andB 2 = − L 2 λ 2 cos θ 2 P (θ 2 ) dφ 2 ∧ dψ + cos θ 1 dφ 2 ∧ dφ 1 ,(3.7) The RR two form F 2 vanishes, whereas the RR four form F 4 has the expression F 4 = 4L 4 λλ 4 1 g s sin θ 1 sin θ 2 dθ 1 ∧ dφ 1 ∧ dθ 2 ∧ dψ . (3.8) We will now focus on obtaining Penrose limits for this background. To this end, consider the geodesic equation d 2 x µ du 2 + Γ µ νρ dx ν du dx ρ du = 0 . (3.9) Here {x µ } are the space-time coordinates and u denotes the affine parameter along the geodesic. We will consider the motion along some isometry direction. If x µ 0 is such an isometry direction, then the velocity as well as acceleration along any x µ , µ = µ 0 vanish: dx µ du = 0 = d 2 x µ du 2 , µ = µ 0 . (3.10) Substituting the above in (3.9), we find ∂ µ g µ 0 µ 0 = 0 . (3.11) To obtain the Penrose limit, we need to focus in the vicinity of null geodesics. Thus, in addition to the above condition, we must require ds 2 = 0. We will now analyse the motion along various isometry directions of the T-dual geometry (3.5) and obtain the corresponding Penrose limits. Consider first the φ 1 isometry. The geodesic equation along this direction is ∂ µ g φ 1 φ 1 = 0. The relevant component of the metric is g φ 1 φ 1 = L 2 λ 2 1 sin 2 θ 1 + λ 2 1 λ 2 sin 2 θ 2 λ 2 cos 2 θ 2 + λ 2 2 sin 2 θ 2 cos 2 θ 1 . (3.12) The geodesic condition for µ = θ 1 as well as for µ = θ 2 can be solved to obtain θ 1 = (0, π 2 , π) and θ 2 = (0, π 2 , π) respectively. This gives us four geodesics: {θ 1 = 0, θ 2 = π/2}, {θ 1 = π, θ 2 = π/2}, {θ 1 = π/2, θ 2 = 0} and {θ 1 = π/2, θ 2 = π}. We first consider the following large L expansion around the geodesic {θ 1 = 0, θ 2 = π 2 }: r =r L , θ 1 = z L , θ 2 = π 2 + x L , t = ax + , φ 1 = bx + + x − L 2 , φ 2 = φ 2 L , (3.13) while keeping ψ unchanged. Here a and b are unknown parameters. Ignoring the subleading terms in L → ∞ limit, we find the T-dual metric to have the following expression ds 2 = dr 2 +r 2 dΩ 2 3 + λ 2 1 dz 2 + λ 2 1 dx 2 − r 2 a 2 + b 2 z 2 λ 2 − λ 2 1 (dx + ) 2 − λ 2 bz 2 dψdx + − 2dψdx − − 2bdx + dx − − λ 4 λ 2 2 x 2 dψ + bdx + 2 + 1 λ 2 2 dφ 2 2 − L 2 a 2 (dx + ) 2 − λ 2 dψ + bdx + 2 . (3.14) Note that the metric diverges in the limit L → ∞ due to the presence of O(L 2 ) terms. This divergence occurs because, in this case we have not been able to impose the geodesic to be null. This amounts to setting a 2 (dx + ) 2 − λ 2 dψ + bdx + 2 = 0 . Clearly, this can't be satisfied for any choice of the parameters a and b due to the presence of the dψ term. A similar analysis can be carried out for the geodesic {θ 1 = π, θ 2 = π/2}, leading to a divergent metric in the large L expansion. In contrast, expanding the T-dual metric around the remaining two geodesics gives rise to pp-wave geometry as we will show currently. Consider first the following expansion about the geodesic {θ 1 = π/2, θ 2 = 0}: r =r L , θ 1 = π 2 + z L , θ 2 = x L , t = ax + , φ 1 = bx + + x − L 2 , φ 2 = φ 2 L , (3.15) keeping the ψ-coordinate unchanged. Here, as before a and b are unknown parameters to be chosen suitable 1 in order to obtain ds 2 pp = 2dx + dx − + dr 2 +r 2 dΩ 2 3 + dz 2 + dx 2 + x 2 dψ 2 + dφ 2 2 − 6 r 2 + 6z 2 − 6x 2 (dx + ) 2 . (3.16) 1 To get the metric in standard from, we set a = 1/λ 1 , b = 1/λ 2 1 and, in addition, we rescale some of the coordinates as x → √ 6x, z → √ 6z, φ 2 → 1 3 φ 2 . Clearly, the background geometry is a pp-wave solution in the standard Brinkmann form. 
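The geodesic bookkeeping around (3.11)-(3.12) can be double-checked mechanically. A minimal sketch (assuming numpy/sympy; the metric component is (3.12) with λ1² = λ2² = 1/6, λ² = 1/9 and the overall L² stripped):

```python
import numpy as np
import sympy as sp

t1, t2 = sp.symbols('theta1 theta2')
l1s = l2s = sp.Rational(1, 6)          # lambda_1^2 = lambda_2^2 = 1/6
ls = sp.Rational(1, 9)                 # lambda^2 = 1/9

# g_{phi1 phi1} of the Abelian T-dual metric, eq. (3.12), with L^2 stripped
g = l1s*sp.sin(t1)**2 + l1s*ls*sp.sin(t2)**2*sp.cos(t1)**2 / (
        ls*sp.cos(t2)**2 + l2s*sp.sin(t2)**2)

# Geodesic condition (3.11): stationarity of g in both polar angles
d1 = sp.lambdify((t1, t2), sp.diff(g, t1))
d2 = sp.lambdify((t1, t2), sp.diff(g, t2))

cand = (0.0, np.pi/2, np.pi)
stationary = [(a, b) for a in cand for b in cand
              if abs(d1(a, b)) < 1e-12 and abs(d2(a, b)) < 1e-12]
print(stationary)  # every pair from {0, pi/2, pi}^2 is stationary; the text
                   # keeps the non-singular ones, e.g. (pi/2, 0) and (pi/2, pi)
```

The null-geodesic choice in footnote 1 checks out the same way: on {θ1 = π/2, θ2 = 0} one has g_{φ1φ1}/L² = λ1², so the null condition −a² + λ1²b² = 0 gives a = λ1 b, satisfied by a = 1/λ1, b = 1/λ1².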
The background dilaton has the expression e −2Φ = 1 g 2 s λ 2 ,(3.17) and NS-NS two-form fieldB 2 = 2 √ 6z dφ 2 ∧ dx + ,(3.18) with corresponding field strengtĥ H 3 = 2 √ 6 dz ∧ dφ 2 ∧ dx + . (3.19) The RR fields in this limit has the expression F 2 = 0 ,F 4 = 4 √ 6 3g s x dz ∧ dx + ∧ dx ∧ dψ . (3.20) Taking Penrose limit for the geodesic {θ 1 = π/2, θ 2 = π} also leads to a pp wave geometry with the same metric as (3.16). The expressions for the background fields are also quite similar. We omit the details because the analysis is identical to the aforementioned discussion. Now consider motion along the φ 2 direction. To obtain the geodesics along this isometry, consider the metric component g φ 2 φ 2 = L 2 λ 2 cos 2 θ 2 + λ 2 2 sin 2 θ 2 . (3.21) From the geodesic condition, ∂ θ 2 g φ 2 φ 2 = 0 we find θ 2 = 0, π/2, π . Consider the following expansion around the geodesic {θ 1 = 0, θ 2 = 0}: r =r L , θ 1 = z L , θ 2 = x L , t = ax + , φ 2 = bx + + x − L 2 ,(3.22) keeping φ 1 and ψ unchanged. To remove the O(L 2 ) divergent piece in the metric, we need to choose a = λ, b = λ 2 . This choice leads to a null geodesic. With appropriate redefinition of the x and z coordinates, we find the T-dual metric as ds 2 = 2dx + dx − +dr 2 +r 2 dΩ 2 3 +dz 2 +z 2 dφ 2 1 +dx 2 +x 2 (dψ+dφ 1 ) 2 − 1 9 r 2 +3x 2 (dx + ) 2 . (3.23) Though the metric is now finite, the scalar curvature for this solution is non-zero and hence it does not correspond to a pp-wave geometry. This is due to the fact that the geodesic is placed on a singular location. The metric component g φ 1 φ 1 vanishes for the values {θ 1 = 0, θ 2 = 0}. This is a generic feature and hence we will no longer consider such singular geodesics from now on. Finally, consider the ψ-isometry direction. The null geodesics can be obtained by considering the g ψψ component of the metric, which is given by g ψψ = L 2 λ 2 1 λ 2 sin 2 θ 2 λ 2 cos 2 θ 2 + λ 2 2 sin 2 θ 2 . (3.24) Solving the geodesic condition one obtains θ 2 = 0, π/2, π . For the values θ 2 = (0, π) the above metric component vanishes and hence, we do not consider these values here. Consider the following expansion around the geodesic θ 1 = 0 and θ 2 = π 2 : r =r L , θ 1 = x L , θ 2 = π 2 + z L , t = ax + , ψ = bx + + x − L 2 , φ 2 = φ 2 L , (3.25) while keeping the φ 1 coordinate unchanged. The leading terms of T-dual metric in the limit L → ∞ are given by ds 2 = dr 2 +r 2 dΩ 2 3 + λ 2 1 dx 2 + λ 2 1 dz 2 + λ 2 1 − λ 2 x 2 − λ 4 λ 2 2 z 2 dφ 2 1 + 1 λ 2 2 dφ 2 2 − r 2 a 2 + λ 4 λ 2 2 b 2 z 2 (dx + ) 2 + λ 2 2bdx + dx − − b x 2 + λ 2 λ 2 2 2z 2 dx + dφ 1 + 2dx − dφ 1 − L 2 a 2 (dx + ) 2 − λ 2 bdx + + dφ 1 2 . (3.26) This contains a divergent term which cant be removed by any choice of the parameters a and b. This is because, in this case too, we do not have a null geodesic for any choice of the parameters a and b. Hence motion along the isometry direction ψ does not lead to any pp-wave geometry. In the above, we have considered the Abelian T-duality along φ 2 direction and analysed all the geodesics admitted by this geometry. Some of these geodesics are singular and taking Penrose limit does not lead to any interesting solution in such cases. Only two of these geodesics are null. Taking Penrose limit in the vicinity of these two null geodesics leads to pp wave geometries. An identical result will hold for T-duality along φ 1 direction. We will now focus on the remaining isometry direction ψ. 
Using the standard rules of T-duality [32], we obtain dŝ 2 = L 2 ds 2 AdS 5 + L 2 λ 2 1 dΩ 2 2 θ 1 , φ 1 + λ 2 2 dΩ 2 2 θ 2 , φ 2 + 1 λ 2 dψ 2 . (3.27) Here we have rescaled ψ → L 2 α ′ ψ in order to get L 2 as a common factor in the metric and set α ′ = 1 for convenience. The resulting T-dual geometry has the well known product form AdS 5 × S 2 × S 2 × S 1 . Unlike the previous case, the background is non-supersymmetric in this case. The NS-NS sector also contains a constant dilaton e −2Φ = λ 2 L 2 g 2 s ,(3.28) and a two-form fieldB 2 = −L 2 cos θ 1 dφ 1 + cos θ 2 dφ 2 ∧ dψ ,(3.29) with field strengthĤ 3 = L 2 sin θ 1 dθ 1 ∧ dφ 1 + sin θ 2 dθ 2 ∧ dφ 2 ∧ dψ . (3.30) The RR sector of the resulting background consists of a non-vanishing four-form flux F 4 = 4L 4 λλ 2 1 λ 2 2 g s sin θ 1 sin θ 2 dφ 1 ∧ dθ 1 ∧ dφ 2 ∧ dθ 2 . (3.31) We will now focus on the metric (3.27). Clearly φ 1 , φ 2 and ψ are the isometry directions. Motion along the ψ direction does not give any non-trivial constraint. Since the analysis is identical for both φ 1 and φ 2 , it will be sufficient to consider geodesics along one of these directions. For the φ 1 isometry direction the condition (3.11) gives θ 1 = (0, π 2 , π). However, the singular values θ 1 = 0 and π correspond to points and not curves and hence we do not have any corresponding geodesics. This leaves behind the choice θ 1 = π/2. To get the Penrose limit, we consider the following large L expansion of the dual metric (3.27) retaining the leading terms: r =r L , θ 1 = π 2 + z L , θ 2 = x L , t = ax + , φ 1 = bx + + x − L 2 , ψ = y L , φ 2 = β,(3.32) and redefine the string coupling as g s = Lg s to ensure that the dilaton remains finite. To obtain a null geodesic we must impose the condition a = λ 1 b. In addition, we set λ 2 1 b = 1 and make the co-ordinate redifinitions x + = u, x − = v, z → √ 6z, x → √ 6x, y → 1 3 y to bring the resulting pp-wave metric to the standard form: ds 2 = 2dudv + dr 2 +r 2 dΩ 2 3 + dz 2 + dx 2 + x 2 dβ 2 + dy 2 − 6(r 2 + 6z 2 )du 2 . (3.33) The expressions for the dilaton and NS-NS three-form flux in this limit are given as: e −2Φ = λ 2 g 2 s ,(3.34)andĤ 3 = 2 √ 6 dz ∧ du ∧ dy . (3.35) Field strengths for the RR fluxes have the expression: F 2 = 0,F 4 = 4 √ 6 3g s x du ∧ dz ∧ dβ ∧ dx . (3.36) The null geodesics can also carry angular momentum. To obtain such a geodesic, we consider motion along φ 1 and ψ directions. The geodesic equation now implies that θ 1 = π/2, θ 2 = 0. Consider the Lagrangian for a massless particle moving along this geodesic: L = 1 2 g µνẊ µẊ ν . (3.37) Here we choose u to be the affine parameter and the dots denote derivative with respect to it. Substituting the explicit expression for the background metric (3.27) in the above Lagrangian we find L = L 2 2 −ṫ 2 + 1 6φ 2 1 + 9ψ 2 . (3.38) Clearly, the conjugate momenta corresponding to the generalized coordinates t, φ 1 and ψ are conserved. Suitably choosing the affine parameter u we set ∂L ∂ṫ = −L 2ṫ = −L 2 . Denoting J to be the conserved quantity associated with the variable φ 1 , we have ∂L ∂φ 1 = 1 6φ 1 L 2 = −JL 2 . The conserved momentum with respect to the variable ψ however can no longer be arbitrary. It has to be determined by requiring the geodesic to be null, i.e., we set L = 0. We findψ 2 = 1 9 1 − 6J 2 ,(3.39) which upon integration gives ψ = 1 3 √ 1 − 6J 2 u . Here we set the constant of integration to zero. From the above expression we find that in order to get a real value for ψ the angular momentum J must be bounded by 0 ≤ J ≤ 1 √ 6 . 
(3.40) To obtain the Penrose limit for a null geodesic carrying angular momentum J on the (ψ, φ 1 ) plane around r = 0, θ 1 = π 2 , θ 2 = 0, we redefine the coordinates r =r L , θ 1 = π 2 + z L , θ 2 = x L . (3.41) and consider the following expansion in the limit L → ∞: dt = c 1 du, dφ 1 = c 2 du + c 3 dw L , dψ = c 4 du + c 5 dw L + c 6 dv L 2 ,(3.42) Requiring the geodesic to be null sets the constant coefficients c 1 , c 2 and c 4 the values c 1 = 1, c 2 = −6J, c 4 = 1 3 √ 1 − 6J 2 . (3.43) The metric, in addition contains a O(L) which can be removed upon requiring λ 2 1 c 2 c 3 + 1 λ 2 c 4 c 5 = 0. Normalizing the coefficient of dw 2 to unity gives the condition λ 2 1 c 2 3 + 1 λ 2 c 2 5 = 1. Similarly, appropriate normalization of the cross term 2dudv gives c 4 c 6 λ 2 = 1. These condition can be solved uniquely to obtain the remaining coefficients c 3 , c 5 and c 6 . We find c 3 = 6(1 − 6J 2 ), c 5 = J 2 3 , c 6 = 1 3 1 √ 1 − 6J 2 . (3.44) The resulting pp-wave metric after a rescaling x → √ 6x, z → √ 6z has the expression ds 2 pp = 2dudv + dr 2 +r 2 dΩ 2 3 + dz 2 + dx 2 + x 2 dβ 2 + dw 2 − r 2 + 36J 2 z 2 du 2 . (3.45) The background dilaton and B 2 field are found to be e −2Φ = λ 2 g 2 s ,B 2 = 2z dw ∧ du + x 2 √ 1 − 6J 2 dβ ∧ du ,(3.46) with the corresponding three form flux H 3 = 2 dz ∧ dw ∧ du + 2x √ 1 − 6J 2 dx ∧ dβ ∧ du . (3.47) In addition, the RR fluxes have the limit F 2 = 0,F 4 = 4 √ 6 3g s Jx du ∧ dz ∧ dx ∧ dβ . (3.48) Before closing this section, we note that all the pp-wave backgrounds we have obtained in this paper do indeed satisfy the supergravity equations. In the following, we demonstrate this for the background specified by eqs.(3.45)-(3.48). For type-IIA supergravity, the Bianchi identity and gauge field equation are given as dH 3 = 0 , dF 2 = 0 , dF 4 = H 3 ∧ F 2 , d e −2Φ * H − F 2 ∧ * F 4 − 1 2 F 4 ∧ F 4 = 0 , d * F 2 + H 3 ∧ * F 4 = 0 , d * F 4 + H 3 ∧ F 4 = 0 . (3.49) A quick inspection of the background shows that the Bianchi identities are indeed satisfied. The equation of motion for B 2 is satisfied for our background, because the dilaton is constant, F 2 = 0 and F 4 ∧ F 4 = 0. Further, the hodge dual of H 3 , given by * H 3 = 2 dx ∧ dβ + √ 1 − 6J 2 dz ∧ dω ∧ du ∧ dΩ 4 is closed. To verify the F 2 equation of motion, note that F 2 is zero and * F 4 = 4 √ 6 3g s J du ∧ dΩ 4 , and hence H 3 ∧ * F 4 = 0. The last equation holds because * F 4 is closed and H 3 ∧ F 4 = 0. The equations of motion for metric and dilaton are given as R µν + 2D µ D νΦ = 1 4 H 2 µν + e 2Φ 1 2 (F 2 2 ) µν + 1 12 (F 2 4 ) µν − 1 4 g µν 1 2 F 2 2 + 1 4! F 2 4 , R + 4D 2Φ − 4(∂Φ) 2 − 1 12 H 2 = 0 . (3.50) To verify these equations, note thatΦ = const, H 2 = F 2 4 = 0 = R. A straightforward computation shows that only the uu-components of R µν , H 2 µν and (F 2 4 ) µν are non-vanishing. They are given by H 2 uu = 16 − 48J 2 , (F 2 4 ) uu = 64J 2 /g 2 s , and R uu = 4 + 36J 2 . (3.51) Substitution the above we can see that the corresponding equations of motion are indeed satisfied. Quantization of Closed Strings Propagating in the pp-wave Geometry In this section we will study the quantization of closed strings propagating in the pp-wave background. We will focus on the pp-wave solution (3.45) carrying an angular momentum which has been obtained from the dual geometry by performing an Abelian T-duality along ψ-isometry. 
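Stepping back to the check in (3.50)-(3.51) for a moment: since only the uu-components survive and the dilaton is constant, the graviton equation collapses to the one-line identity R_uu = (1/4)H²_uu + (e^{2Φ}/12)(F₄²)_uu, which is easy to verify symbolically. A minimal sketch (assuming sympy; the values are those quoted in (3.51), with e^{−2Φ} = λ²/g̃_s² from (3.46) and λ² = 1/9):

```python
import sympy as sp

J, gs = sp.symbols('J g_s', positive=True)
lam2 = sp.Rational(1, 9)                 # lambda^2 = 1/9 for T^{1,1}

R_uu   = 4 + 36*J**2                     # eq. (3.51)
H2_uu  = 16 - 48*J**2                    # eq. (3.51)
F42_uu = 64*J**2 / gs**2                 # eq. (3.51)
e2Phi  = gs**2 / lam2                    # from e^{-2 Phi} = lambda^2 / g_s^2

rhs = sp.Rational(1, 4)*H2_uu + e2Phi*sp.Rational(1, 12)*F42_uu
print(sp.simplify(R_uu - rhs))           # -> 0: the uu Einstein equation holds
```

The dilaton equation is trivially satisfied since R, H², and (∂Φ)² all vanish for this background, as noted in the text.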
The string world sheet action is given by S = − 1 4πα ′ dτ dσ √ gg αβ G µν ∂ α X µ ∂ β X ν + ǫ αβ B µν ∂ α X µ ∂ β X ν + α ′ √ g R (2) Φ , (4.1) Here {α, β} denote the worldsheet coordinates (τ, σ) and {µ, ν} denote the spacetime coordinates, G µν is the background metric, B µν and Φ are the background NS-NS twoform and dilaton respectively. We choose the convention ǫ τ σ = −ǫ στ = 1 and gauge fix the worldsheet metric g αβ such that √ gg αβ = η αβ , with −η τ τ = η σσ = 1. Further, we designate the string coordinates in the manner U = u, V = v, X 1 , X 2 , X 3 , X 4 ∈r, Ω 3 , X 5 , X 6 ∈ x, β, X 7 , X 8 ∈ z, w , and consider the light cone gauge U = τ with p + = 1 in order to fix the residual diffeomorphism invariance. The worldsheet action for the pp wave background (3.45) then becomes S = − 1 4πα ′ dτ dσ 8 i=1 ∂X i . ∂X i + 4 i=1 (X i ) 2 + X 8 ∂ σ X 7 − X 7 ∂ σ X 8 −(X 5 ) 2 √ 1 − 6J 2 ∂ σ X 6 + 36J 2 (X 7 ) 2 , (4.3) In the above the inner product is defined with η αβ . The corresponding Euler-Lagrange equations are given as X i − X i = 0 , i = 1, 2, 3, 4, (4.4) X 5 + √ 1 − 6J 2 X 5 ∂ σ X 6 = 0 , (4.5) X 6 − √ 1 − 6J 2 X 5 ∂ σ X 5 = 0 , (4.6) X 7 − 36J 2 X 7 + 1 2 ∂ σ X 8 = 0 , (4.7) X 8 − 1 2 ∂ σ X 7 = 0 . (4.8) The first of the above equations, eq.(4.4) is a linear equation involving the uncoupled fields X i , i = 1, . . . , 4. Considering an ansatz of the form X ∼ e −iωt+inσ , it is straightforward to obtain the frequencies of the respective modes The last two equations involving X 7 and X 8 can be decoupled giving rise to two fourth order linear partial differential equations with corresponding mode frequencies ω 2 n,i = n 2 + 1 2 36J 2 ± (36J 2 ) 2 + n 2 , i = 7, 8. (4.10) These modes can be related to the fourth order Pais-Uhlenbeck oscillator as in the case of the Pilch-Warner background [34]. The equations involving X 5 and X 6 can be combined into a single complex differential equation. Defining Z = X 6 + iX 5 , we find Z + 1 2 √ 1 − 6J 2 Z −Z ∂ σ Z = 0 (4.11) This corresponds to a non-linear complex harmonic oscillator for which the exact analytic solutions can't be obtained. However, for small value of √ 1 − 6J 2 we can use perturbation theory to obtain the frequencies of the oscillating modes. The Non-Abelian T-dual of the Klebanov-Witten Background In this section, we will discuss the non-Abelian T-duality of AdS 5 × T 1,1 background. This background arises upon placing D3-branes at the tip of a conifold. The field theory dual has been constructed by Klebanov and Witten [25], [26]. The non-Abelian T-duality on a subgroup of the symmetry group of the internal manifold T 1,1 has been carried out in [11][12][13]. 2 An extensive study of this T-dual background was carried out [14]. Unlike the AdS 5 × S 5 case, here the non-Abelian T-duality preserves all the supersymmetries of the original background [36][37][38][39]. 3 In the following we will briefly review the dual background. Subsequently, we will discuss the Penrose limits along various null geodesics of the resulting geometry. The T-dual solution that we consider here has been studied in detail in [10][11][12][13][14][15]. The background geometry is specified by the metric dŝ 2 = L 2 ds 2 AdS 5 + L 2 dŝ 2 T 1,1 , (5.1) with dŝ 2 T 1,1 = λ 2 1 dΩ 2 2 θ 1 , φ 1 + λ 2 2 λ 2 ∆ x 2 1 σ 2 3 + 1 ∆ x 2 1 + λ 2 λ 2 2 dx 2 1 + x 2 2 + λ 4 2 dx 2 2 + 2x 1 x 2 dx 1 dx 2 , (5.2) and ∆ = λ 2 2 x 2 1 + λ 2 (x 2 2 + λ 4 2 ) , σ3 = dψ + cos θ 1 dφ 1 . 
(5.3) Here we have done appropriate rescaling of the coordinates x 1 and x 2 in order to get an overall factor of L 2 in the metric. The NS-NS two-form of the dual background is given by the expressionB 2 = − λ 2 L 2 ∆ x 1 x 2 dx 1 + x 2 2 + λ 4 2 dx 2 ∧ σ3 , (5.4) along with the dilaton e −2Φ = 8L 6 g 2 s ∆ . (5.5) The corresponding NS-NS three form flux is given bŷ H 3 = λ 2 L 2 ∆ 2 λ 2 2 x 3 1 + λ 2 x 1 x 2 2 + λ 4 2 − 2λ 2 x 1 x 2 2 + 2λ 2 2 x 1 x 2 2 + λ 4 2 dx 1 ∧ dx 2 ∧ σ3 − λ 2 L 2 ∆ x 1 x 2 dx 1 + x 2 2 + λ 4 2 dx 2 sin θ 1 dθ 1 ∧ dφ 1 . (5.6) The RR sector of the background is described by the field strengthŝ F 2 = 8 √ 2 g s λλ 4 1 L 4 sin θ 1 dφ 1 ∧ dθ 1 , (5.7) andF 4 = − 8 √ 2 g s L 6 λλ 4 1 x 1 ∆ sin θ 1 dφ 1 ∧ dθ 1 ∧ σ3 ∧ λ 2 1 x 1 dx 2 − λ 2 x 2 dx 1 . (5.8) It has been shown that [10,11] this background solves the type IIA supergravity equations preserving N = 1 supersymmetry. We will now focus on the Penrose limits around various null geodesics of the above dual geometry. We consider motion along the isometry directions φ 1 and ψ. Let us first focus on φ 1 -isometry. The relevant metric component is g φ 1 φ 1 = L 2 λ 2 1 sin 2 θ 1 + λ 2 2 λ 2 ∆ x 2 1 cos 2 θ 1 . (5.9) This component has non-trivial dependence on x 1 , x 2 and θ 1 . The geodesic condition ∂ µ g φ 1 φ 1 = 0 gives x 1 = 0, θ 1 = π/2 for µ = x 1 , and x 1 = 0 = x 2 , θ 1 = π/2 for µ = x 2 . For the choice µ = θ 1 this gives rise to the values θ 1 = (0, π/2, π). Clearly the only non-singular choice for a geodesic is x 1 = 0, x 2 = 0 and θ 1 = π/2. We will make the following large L expansion around this geodesic: r =r L , x 1 = y 1 L , x 2 = y 2 L , θ 1 = π 2 + z L , t = ax + , φ 1 = bx + + x − L 2 ,(5.10) while keeping the ψ-coordinate unchanged. The parameters a and b are chosen to be 1/λ 1 and 1/λ 2 1 respectively. Further, we redefine the coordinates as x + = u, x − = v and rescale z → √ 6z, y 1 → y 1 / √ 6, y 2 → y 2 /3. The leading order terms of the metric in the limit L → ∞ gives ds 2 = 2dudv + dr 2 +r 2 dΩ 2 3 + dz 2 + dy 2 1 + y 2 1 dψ 2 + dy 2 2 − 6 r 2 + 6z 2 du 2 . (5.11) This is indeed a pp-wave solution in the standard Brinkmann form. Interestingly, the pp-wave metric in the above is identical to the metric (3.33), we have obtained from the Abelian T-dual background. We will now focus on other background fields. In order to keep the dilaton finite, we redefine the string coupling as g s = L 3g s . (5.12) With this redefinition, the dilaton takes the form e −2Φ = 8 g 2 s λ 2 λ 4 2 ,(5.13) In this limit, the NS-NS two-form field on the other hand becomeŝ B 2 = 2 √ 6z dy 2 ∧ du ,(5.14) with the corresponding three-form flux H 3 = 2 √ 6 du ∧ dz ∧ dy 2 . (5.15) The RR fields at Penrose limit are given aŝ F 2 = 8 3 √ 3g s du ∧ dz,F 4 = 0 . (5.16) The motion on along the ψ-isometry however does not give pp-wave geometry as we will see in the following. The relevant component of the metric is g ψψ = L 2 λ 2 2 λ 2 ∆ x 2 1 . (5.17) From the above we obtain the geodesic x 2 = 0, θ 1 = 0. Consider the following expansion r =r L , x 2 = y 2 L , θ 1 = z L , t = ax + , ψ = bx + + x − L 2 ,(5.18) while keeping x 1 and φ 1 coordinates unchanged. 
The leading terms of the dual metric in L → ∞ becomes ds 2 = −r 2 a 2 (dx + ) 2 + dr 2 +r 2 dΩ 2 3 + λ 2 1 dz 2 + λ 2 1 z 2 dφ 2 1 + λ 2 2 λ 2 2bx 2 1 dx + dx − + 2x 2 1 dx − dφ 1 − bx 2 1 z 2 dx + dφ 1 − z 2 x 2 1 dφ 2 1 − λ 2 y 2 2 x 2 1 bdx + + dφ 1 2 − 1 1 λ 2 y 2 2 x 2 1 + λ 2 2 λ 2 dx 2 1 − λ 4 2 dy 2 2 − 2x 1 y 2 dx 1 dy 2 − L 2 a 2 (dx + ) 2 + L 2 λ 2 2 λ 2 x 2 1 bdx + + dφ 1 2 + 2bdx + dφ 1 + x 2 1 + λ 2 2 λ 2 dx 2 1 (5.19) where, = λ 2 2 x 2 1 + λ 2 λ 4 2 . (5.20) In this case too the geodesic is not null for any choice of the parameters a and b. This is reflected by the appearance of the divergent term in the metric. Hence the motion along ψ-isometry does not give pp-wave geometry. Closed String Quantization on the PP Wave We will now study the quantization of a closed string propagating in the pp-wave background (5.11), derived in the last section. The worldsheet action is given by S = − 1 4πα ′ dτ dσ √ gg αβ G µν ∂ α X µ ∂ β X ν + ǫ αβ B µν ∂ α X µ ∂ β X ν + α ′ √ g R (2) Φ , (6.1) As before, we will use the notation ǫ τ σ = −ǫ στ = 1 and gauge fix the metric as √ gg αβ = η αβ with the convention −η τ τ = η σσ = 1. We assign string coordinates as U = u, V = v, X 1 , X 2 , X 3 , X 4 ∈r, Ω 3 , X 5 , X 6 ∈ y 1 , ψ, X 7 , X 8 ∈ z, y 2 . (6.2) Further, we fix the residual diffeomorphism invariance considering the light cone gauge U = τ with p + = 1. The worldsheet action for the pp-wave background (5.11) becomes S = − 1 4πα ′ dτ dσ ∂X i . ∂X i + 6 4 i=1 (X i ) 2 + 6(X 7 ) 2 − √ 6X 7 ∂ σ X 8 + √ 6X 8 ∂ σ X 7 .(6. 3) The equations of motion for the scalar fields in the above action are given by X i − 6X i = 0, i = 1, 2, 3, 4,(6.4) X i = 0, i = 5, 6, (6.5) X 7 − 36X 7 + 1 2 √ 6 ∂ σ X 8 = 0, (6.6) X 8 − 1 2 √ 6 ∂ σ X 7 = 0. (6.7) To obtain the oscillator frequencies we consider an ansatz of the form X i ∼ e −iωt+inσ . We find ω 2 n,i = n 2 + 6, i = 1, 2, 3, 4, (6.8) ω 2 n,i = n 2 , i = 5, 6,(6. Field Theory Duals In the previous discussion we have seen that taking the Penrose limit gives rise to pp wave geometries for smooth null geodesics, both in the case of Abelian as well as non-Abelian T-dual backgrounds from AdS 5 × T 1,1 . Here we will discuss the underlying field theory duals. We will first consider the Abelian T-duals. The field theory duals for these backgrounds has been constructed in [30] and [31]. They correspond to a system of intersecting D4 − NS5 − NS5 ′ branes where the NS5 branes are rotated appropriately and the D4 branes are stretched in between. Here we will study the field theory dual of the corresponding pp-wave geometries. In the limit of large F 5 flux, the string coupling becomes weak and hence the pp-wave background can be treated semi-classically. To show this, note that the type IIA and type IIB string couplings g A s and g B s are related among each other as g B s = g A s L. In order to get a finite dilaton in the Penrose limit we have rescaled the string coupling of the T-dual geometry asg s = g A s /L. Hence, the IIB string coupling is related tog A s as g B s = L 2g s . For the Klebanov-Witten background the size L of AdS 5 space is quantized in terms of the F 5 flux N 3 as [33]: L 4 = 27 4 πg B s 2 N 3 . (7.1) Thus we findg s ∼ 1 √ N 3 . (7.2) This shows that, in the limit of large N 3 the string couplingg s becomes negligible. Thus, it seems plausible to use semi-classical analysis to compute the spectrum for our purpose. The construction of the field theory dual is as follows [30,31]. 
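Before turning to that construction, the weak-coupling estimate in (7.1)-(7.2) can be made explicit in two lines. A small sketch (assuming sympy, with α′ set to 1; only the proportionality L⁴ ∝ g_s^B N₃ from (7.1) is needed for the 1/√N₃ conclusion):

```python
import sympy as sp

gB, N3 = sp.symbols('gB N3', positive=True)   # g_s^B and the F_5 flux N_3

L = (sp.Rational(27, 4)*sp.pi*gB*N3)**sp.Rational(1, 4)   # eq. (7.1), alpha'=1
g_tilde = gB / L**2                                       # from g_s^B = L^2 g~_s

# The N3-dependence drops out of g~_s * sqrt(N3), i.e. g~_s ~ N3^(-1/2), eq. (7.2)
print(sp.simplify(g_tilde * sp.sqrt(N3)))
```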
The dual theory describes the dynamics of massless strings arising from N D4-branes stretched across two orthogonal NS5-branes located on a circle. The spectrum consists of two chiral multiplets A_1, A_2 in the (N, N̄) representation and two more chiral multiplets B_1, B_2 in the (N̄, N) representation of the SU(N) × SU(N) gauge group, with superpotential

$$W = \frac{1}{2}\,\epsilon^{ij}\epsilon^{kl}\,\mathrm{Tr}\,A_i B_k A_j B_l\,, \qquad i, j, k, l = 1, 2\,.$$

There is an underlying SU(2)_A × SU(2)_B × U(1)_R global symmetry preserved by the theory. The fields (A_1, A_2) form a doublet under the SU(2)_A subgroup of the global symmetry and similarly (B_1, B_2) form a doublet under SU(2)_B. The R-symmetry U(1)_R originates from a shift along the circle coordinate, and all the fields A_1, A_2, B_1, B_2 transform by the same phase under this symmetry. Let us denote J_1 and J_2 to be the Cartan generators of SU(2)_A and SU(2)_B respectively and let J_3 be the generator of U(1)_R. This field theory system is dual to the T-dual geometry specified by the metric (3.27):

$$d\hat s^2 = L^2\, ds^2(AdS_5) + L^2\Big[\lambda_1^2\, d\Omega_2^2(\theta_1, \phi_1) + \lambda_2^2\, d\Omega_2^2(\theta_2, \phi_2) + \frac{1}{\lambda^2}\, d\psi^2\Big].$$

The SU(2)_A and SU(2)_B are identified with the symmetries of the two spheres parametrized by (θ_1, φ_1) and (θ_2, φ_2), and the R-symmetry U(1)_R is identified with the shift along the ψ-direction. The generators J_1 and J_2 correspond to shifts in the azimuthal coordinates φ_1 and φ_2 respectively, and J_3 corresponds to the shift in ψ. The BMN sector for the field theory dual has been constructed [29]. The state-operator correspondence is naturally described in terms of the conifold coordinates Z_1 = A_1 B_1, Z_2 = A_2 B_2, Z_3 = A_1 B_2, Z_4 = A_2 B_1. Setting the light cone Hamiltonian H = Δ − (J_1 + J_2 + J_3), it can be shown that the operator Z_1 has H = 0 and corresponds to the ground state. The first excited state, with H = 1, is described in terms of the operators Z_3, Z_4 and the covariant derivatives D_i Z_1. This is in contrast to what we observe in the closed string quantization of the pp-wave geometry (3.45), corresponding to the Abelian T-dual background. From (4.10) we find the frequencies corresponding to the n = 0 modes as⁴ ω_{0,i} = 1 (i = 1, 2, 3, 4), ω_{0,(7,8)+} = 6J, ω_{0,(7,8)−} = 0. This mismatch, however, is not surprising: large effective interactions cause the energies of the states to change. A similar phenomenon has been observed for the pp-wave background corresponding to the Abelian T-dual of the AdS_5 × S^5 geometry [24]. For the non-Abelian T-dual background of AdS_5 × T^{1,1}, the field theory dual was first proposed in [11] and subsequently, with suitable modification, analysed extensively in [33]. The dual theory is conjectured to arise from an intersecting D4 − NS5 − NS5′ brane configuration. Due to the Myers effect, the D4-branes are blown up into a stack of D6-branes on a sphere in the presence of the B_2 field. The NS5 and NS5′ branes are located at various points along the radial direction and are transverse to two different S²s. One NS5′ brane is placed between two consecutive NS5 branes. Due to large gauge transformations, the D4-brane charge changes by a fixed amount each time an NS5 brane is crossed. The dual theory consists of a two-tailed linear quiver with gauge groups of increasing rank at each node and matter fields in the bifundamental of each pair of nodes, with a suitably added flavour group at the middle. The holographic dual corresponding to the pp-wave geometry resulting from the non-Abelian T-dual background will correspond to a class of operators in this quiver theory. Note that the construction of the field theory dual in [33] was mainly based on the brane charges arising from the supergravity background and the scaling of the central charge. The central charge corresponding to the quiver theory was computed using a-maximisation and was shown to agree with the holographic entanglement entropy computed from the background associated with the supergravity dual [33]. Naively one might expect that a similar analysis can also be carried out for the corresponding pp-wave background.
However, care must be taken in the present case because the pp-wave geometry is obtained by zooming into a particular region and hence it is globally not complete [33]. In this case, the holographic entanglement entropy can be computed, as in [15], by imposing a hard cutoff on the non-compact directions. However, the field theory interpretation of this quantity is not clear. It might correspond to the entanglement entropy in some excited state of the dual field theory.

Conclusion

In this paper we have studied the Abelian as well as non-Abelian T-dual geometries arising from the Klebanov-Witten background. Though the Abelian T-duality is an exact symmetry of the string theory, the dual description is sometimes more convenient for studying the Penrose limits. The non-Abelian T-duality provides new supergravity solutions. We considered various null geodesics of the resulting dual theories and obtained the Penrose limits. Some of these geodesics are singular, while the remaining ones admit pp-wave geometries. We quantized closed strings propagating in these pp-wave backgrounds. We have briefly analysed the corresponding field theory duals. For the non-Abelian case the holographic dual of the pp-wave geometry corresponds to a sector of operators of a quiver theory with gauge groups of increasing rank. Further investigation is required to identify this BMN sector and to establish a precise mapping between holographically computed quantities and field theory observables. It would also be interesting to explore the possibility of obtaining pp-wave geometries for non-Abelian duals of string theory compactified on T^{p,q} as well as backgrounds with AdS_3 factors. We hope to report on some of these issues in the future.

Acknowledgement

The present work is partially supported by the DST project grant no. EMR/2016/001997.

A. Bulk Modes

The bulk modes play an important role in obtaining the spectrum of the dual field theory. In order to understand the holographic dual of the pp-wave geometry obtained from the non-Abelian T-dual of AdS_5 × T^{1,1}, we will consider a non-interacting, massless scalar field in this background and obtain the corresponding bulk modes. The pp-wave metric we are interested in is

$$ds^2_{pp} = 2\,du\,dv + d\bar r^2 + \bar r^2 d\Omega_3^2 + dz^2 + dy_1^2 + y_1^2 d\psi^2 + dy_2^2 - 6\big(\bar r^2 + 6z^2\big)du^2. \quad (8.1)$$

In order to obtain the bulk modes we will rewrite the above metric in a convenient form. We assign coordinates X^i, i = 1, . . . , 4 to parametrize the R^4 part {r̄, Ω_3}, relabel the R^2 factor parametrized by {y_1, ψ} as {X^5, X^6} and the R^2 factor in {z, y_2} as {X^7, X^8}, while leaving the light cone coordinates {u, v} intact. The metric now takes the form

$$ds^2_{pp} = 2\,du\,dv - 6\Big[\sum_{i=1}^{4}(X^i)^2 + 6(X^7)^2\Big]du^2 + \sum_{i=1}^{8}(dX^i)^2. \quad (8.2)$$

The NS-NS and RR fields are rewritten in this coordinate system accordingly. This geometry preserves SO(4) × SO(2) × U(1) symmetry. Rotations in X^1, X^2, X^3, X^4 give rise to the SO(4) factor and rotations in X^5, X^6 give rise to the SO(2). There is an additional translational symmetry giving rise to the U(1) symmetry. Note that, in contrast, the background AdS_5 × T^{1,1} has SO(2, 4) × SU(2) × SU(2) × U(1) symmetry whereas the non-Abelian T-dual geometry possesses SO(2, 4) × SU(2) × U(1). Upon taking the Penrose limit this symmetry reduces to SO(4) × SO(2) × U(1). We consider a massless scalar field Φ in this background, for which the equation of motion is given by □Φ = 0,
with the Laplacian

$$\Box = 2\,\partial_u\partial_v + 6\Big[\sum_{i=1}^{4}(X^i)^2 + 6(X^7)^2\Big]\partial_v^2 + \sum_{i=1}^{8}\partial_{X^i}^2. \quad (8.6)$$

To solve this wave equation, we use the method of separation of variables. Set Φ(u, v, X^i) = f(u, v) g(X^i). The wave equation gives

$$\frac{2\,\partial_u\partial_v f}{f(u,v)} + 6\Big[\sum_{i=1}^{4}(X^i)^2 + 6(X^7)^2\Big]\frac{\partial_v^2 f}{f(u,v)} + \frac{\nabla^2_{(8)}\, g(X^i)}{g(X^i)} = 0\,, \quad (8.7)$$

where ∇²_(8) = Σ_{i=1}^{8} ∂²_{X^i}. Setting the ansatz f(u, v) ∼ e^{i(p_v v − p_u u)}, we obtain

$$\nabla^2_{(8)}\, g(X^i) - 6p_v^2\Big[\sum_{i=1}^{4}(X^i)^2 + 6(X^7)^2\Big] g(X^i) + 2\,p_u p_v\, g(X^i) = 0\,. \quad (8.8)$$

This equation now has the familiar harmonic oscillator form, whose solutions are given in terms of the well-known Hermite polynomials. We find

$$\Phi(u, v, X^i) = e^{i\,(p_v v - p_u u + c_5 X^5 + c_6 X^6 + c_8 X^8)}\; e^{-\beta (X^7)^2/2}\, H_{n_7}\!\big(\sqrt{\beta}\,X^7\big) \prod_{j=1}^{4} e^{-\alpha (X^j)^2/2}\, H_{n_j}\!\big(\sqrt{\alpha}\,X^j\big). \quad (8.9)$$

Here p_u, p_v are the conserved canonical momenta along the u and v directions respectively. For convenience we have used the notation α² = 6p_v², β² = 36p_v², Σ_{i=1}^{8} c_i² = 2p_u p_v, and n_i = (c_i²/α − 1)/2 in the above expression.

$$\omega^2_{n,i} = n^2 + 1\,, \qquad i = 1, \ldots, 4. \quad (4.9)$$

Footnotes:

2. See also [35] for a detailed discussion of some classical string solutions of the non-Abelian T-dual background.
3. We are grateful to N. T. Macpherson for explaining the SUSY conditions to us and for providing appropriate references.
4. We need not worry about the modes corresponding to the non-linear oscillators here. For small values of √(1 − 6J²) it can be shown using perturbation theory that the lowest mode will correspond to n = 1 and will have a higher frequency than the above modes.

References

[1] T. H. Buscher, "A Symmetry of the String Background Field Equations," Phys. Lett. B 194 (1987) 59. doi:10.1016/0370-2693(87)90769-6
[2] T. H. Buscher, "Path Integral Derivation of Quantum Duality in Nonlinear Sigma Models," Phys. Lett. B 201 (1988) 466. doi:10.1016/0370-2693(88)90602-8
[3] M. Rocek and E. P. Verlinde, "Duality, quotients, and currents," Nucl. Phys. B 373, 630 (1992) doi:10.1016/0550-3213(92)90269-H [hep-th/9110053].
[4] X. C. de la Ossa and F. Quevedo, "Duality symmetries from nonAbelian isometries in string theory," Nucl. Phys. B 403, 377 (1993) doi:10.1016/0550-3213(93)90041-M [hep-th/9210021].
[5] A. Giveon and M. Rocek, "On nonAbelian duality," Nucl. Phys. B 421, 173 (1994) doi:10.1016/0550-3213(94)90230-5 [hep-th/9308154].
[6] K. Sfetsos and D. C. Thompson, "On non-abelian T-dual geometries with Ramond fluxes," Nucl. Phys. B 846, 21 (2011) doi:10.1016/j.nuclphysb.2010.12.013 [arXiv:1012.1320 [hep-th]].
[7] Y. Lozano, E. O Colgain, K. Sfetsos and D. C. Thompson, "Non-abelian T-duality, Ramond Fields and Coset Geometries," JHEP 1106, 106 (2011) doi:10.1007/JHEP06(2011)106 [arXiv:1104.5196 [hep-th]].
[8] H. Dimov, S. Mladenov, R. C. Rashkov and T. Vetsov, "Non-abelian T-duality of Pilch-Warner background," Fortsch. Phys. 64, 657 (2016) doi:10.1002/prop.201600032 [arXiv:1511.00269 [hep-th]].
[9] C. A. Whiting, "Duality symmetries in string-inspired supergravity: T-dualities and the gauge/gravity correspondence," Ph.D. Thesis, University of Iowa, 2015. https://doi.org/10.17077/etd.qc6n8jma
[10] G. Itsios, C. Nunez, K. Sfetsos and D. C. Thompson, "On Non-Abelian T-Duality and new N=1 backgrounds," Phys. Lett. B 721, 342 (2013) doi:10.1016/j.physletb.2013.03.033 [arXiv:1212.4840 [hep-th]].
[11] G. Itsios, C. Nunez, K. Sfetsos and D. C. Thompson, "Non-Abelian T-duality and the AdS/CFT correspondence: new N=1 backgrounds," Nucl. Phys. B 873, 1 (2013) doi:10.1016/j.nuclphysb.2013.04.004 [arXiv:1301.6755 [hep-th]].
[12] A. Barranco, J. Gaillard, N. T. Macpherson, C. Nunez and D. C. Thompson, "G-structures and Flavouring non-Abelian T-duality," JHEP 1308, 018 (2013) doi:10.1007/JHEP08(2013)018 [arXiv:1305.7229 [hep-th]].
[13] K. S. Kooner and S. Zacarias, "Non-Abelian T-Dualizing the Resolved Conifold with Regular and Fractional D3-Branes," JHEP 1508, 143 (2015) doi:10.1007/JHEP08(2015)143 [arXiv:1411.7433 [hep-th]].
[14] T. R. Araujo and H. Nastase, "N = 1 SUSY backgrounds with an AdS factor from non-Abelian T duality," Phys. Rev. D 91, no. 12, 126015 (2015) doi:10.1103/PhysRevD.91.126015 [arXiv:1503.00553 [hep-th]].
[15] N. T. Macpherson, C. Nunez, L. A. Pando Zayas, V. G. J. Rodgers and C. A. Whiting, "Type IIB supergravity solutions with AdS 5 from Abelian and non-Abelian T dualities," JHEP 1502, 040 (2015) doi:10.1007/JHEP02(2015)040 [arXiv:1410.2650 [hep-th]].
[16] R. Penrose, "Any Space-Time has a Plane Wave as a Limit," in Differential Geometry and Relativity, pp. 271-275, Dordrecht, 1976.
[17] Y. Lozano and C. Nunez, "Field theory aspects of non-Abelian T-duality and N = 2 linear quivers," JHEP 1605, 107 (2016) doi:10.1007/JHEP05(2016)107 [arXiv:1603.04440 [hep-th]].
[18] Y. Lozano, N. T. Macpherson, J. Montero and C. Nunez, "Three-dimensional N = 4 linear quivers and non-Abelian T-duals," JHEP 1611, 133 (2016) doi:10.1007/JHEP11(2016)133 [arXiv:1609.09061 [hep-th]].
[19] H. Dimov, R. C. Rashkov, S. Mladenov and T. Vetsov, "Non-Abelian T-Duality from Penrose Limit of the Pilch-Warner Solution," Bulg. J. Phys. 43, no. 4, 251 (2016).
[20] D. Sadri and M. M. Sheikh-Jabbari, "The Plane wave / superYang-Mills duality," Rev. Mod. Phys. 76, 853 (2004) doi:10.1103/RevModPhys.76.853 [hep-th/0310119].
[21] D. Amati and C. Klimcik, "Nonperturbative Computation of the Weyl Anomaly for a Class of Nontrivial Backgrounds," Phys. Lett. B 219, 443 (1989). doi:10.1016/0370-2693(89)91092-7
[22] G. T. Horowitz and A. R. Steif, "Space-Time Singularities in String Theory," Phys. Rev. Lett. 64, 260 (1990). doi:10.1103/PhysRevLett.64.260
[23] D. E. Berenstein, J. M. Maldacena and H. S. Nastase, "Strings in flat space and pp waves from N = 4 superYang-Mills," JHEP 0204, 013 (2002) doi:10.1088/1126-6708/2002/04/013 [hep-th/0202021].
[24] G. Itsios, H. Nastase, C. Nunez, K. Sfetsos and S. Zacarias, "Penrose limits of Abelian and non-Abelian T-duals of AdS 5 × S 5 and their field theory duals," JHEP 1801, 071 (2018) doi:10.1007/JHEP01(2018)071 [arXiv:1711.09911 [hep-th]].
[25] I. R. Klebanov and E. Witten, "Superconformal field theory on three-branes at a Calabi-Yau singularity," Nucl. Phys. B 536, 199 (1998) doi:10.1016/S0550-3213(98)00654-3 [hep-th/9807080].
[26] I. R. Klebanov and E. Witten, "AdS / CFT correspondence and symmetry breaking," Nucl. Phys. B 556, 89 (1999) doi:10.1016/S0550-3213(99)00387-9 [hep-th/9905104].
[27] N. Itzhaki, I. R. Klebanov and S. Mukhi, "PP wave limit and enhanced supersymmetry in gauge theories," JHEP 0203 (2002) 048 doi:10.1088/1126-6708/2002/03/048 [hep-th/0202153].
[28] J. Gomis and H. Ooguri, "Penrose limit of N = 1 gauge theories," Nucl. Phys. B 635, 106 (2002) doi:10.1016/S0550-3213(02)00396-6 [hep-th/0202157].
[29] L. A. Pando Zayas and J. Sonnenschein, "On Penrose limits and gauge theories," JHEP 0205, 010 (2002) doi:10.1088/1126-6708/2002/05/010 [hep-th/0202186].
[30] A. M. Uranga, "Brane configurations for branes at conifolds," JHEP 9901, 022 (1999) doi:10.1088/1126-6708/1999/01/022 [hep-th/9811004].
[31] K. Dasgupta and S. Mukhi, "Brane constructions, conifolds and M theory," Nucl. Phys. B 551, 204 (1999) doi:10.1016/S0550-3213(99)00206-0 [hep-th/9811139].
[32] E. Bergshoeff, C. M. Hull and T. Ortin, "Duality in the type II superstring effective action," Nucl. Phys. B 451, 547 (1995) doi:10.1016/0550-3213(95)00367-2 [hep-th/9504081].
[33] G. Itsios, Y. Lozano, J. Montero and C. Nunez, "The AdS 5 non-Abelian T-dual of Klebanov-Witten as a N = 1 linear quiver from M5-branes," JHEP 1709, 038 (2017) doi:10.1007/JHEP09(2017)038 [arXiv:1705.09661 [hep-th]].
[34] H. Dimov, S. Mladenov, R. C. Rashkov and T. Vetsov, "Entanglement entropy and Fisher information metric for closed bosonic strings in homogeneous plane wave background," Phys. Rev. D 96, no. 12, 126004 (2017) doi:10.1103/PhysRevD.96.126004 [arXiv:1705.01873 [hep-th]].
[35] S. Zacarias, "Semiclassical strings and Non-Abelian T-duality," Phys. Lett. B 737, 90 (2014) doi:10.1016/j.physletb.2014.08.016 [arXiv:1401.7618 [hep-th]].
[36] S. F. Hassan, "T duality, space-time spinors and RR fields in curved backgrounds," Nucl. Phys. B 568, 145 (2000) doi:10.1016/S0550-3213(99)00684-7 [hep-th/9907152].
[37] G. Itsios, Y. Lozano, E. O Colgain and K. Sfetsos, "Non-Abelian T-duality and consistent truncations in type-II supergravity," JHEP 1208, 132 (2012) doi:10.1007/JHEP08(2012)132 [arXiv:1205.2274 [hep-th]].
[38] J. Jeong, O. Kelekci and E. O Colgain, "An alternative IIB embedding of F(4) gauged supergravity," JHEP 1305, 079 (2013) doi:10.1007/JHEP05(2013)079 [arXiv:1302.2105 [hep-th]].
[39] O. Kelekci, Y. Lozano, N. T. Macpherson and E. O. Colgain, "Supersymmetry and non-Abelian T-duality in type II supergravity," Class. Quant. Grav. 32, no. 3, 035014 (2015) doi:10.1088/0264-9381/32/3/035014 [arXiv:1409.7406 [hep-th]].
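Looking back at the bulk-mode solution (8.9) of the appendix: the transverse profile is a product of Gaussian-damped Hermite polynomials with widths α = √6 p_v and β = 6 p_v (from α² = 6p_v², β² = 36p_v²). A minimal numerical sketch (assuming numpy/scipy; the occupation numbers and p_v below are illustrative choices, not values from the paper):

```python
import numpy as np
from scipy.special import eval_hermite

def transverse_profile(X, n, p_v=1.0):
    """Transverse part of the bulk mode (8.9).

    X : array of shape (8,); n : occupation numbers for the five oscillator
    directions (X^1..X^4 and X^7). The free directions X^5, X^6, X^8 only
    carry plane-wave phases and are omitted here.
    """
    alpha, beta = np.sqrt(6.0)*p_v, 6.0*p_v   # alpha^2 = 6 p_v^2, beta^2 = 36 p_v^2
    prof = np.exp(-beta*X[6]**2/2) * eval_hermite(n[4], np.sqrt(beta)*X[6])
    for j in range(4):                        # the SO(4) directions X^1..X^4
        prof *= np.exp(-alpha*X[j]**2/2) * eval_hermite(n[j], np.sqrt(alpha)*X[j])
    return prof

X = np.zeros(8)
print(transverse_profile(X, n=[0, 0, 0, 0, 1]))   # odd H_1 vanishes at the origin
```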
[]
[ "Non-Iterative Solution for Coordinated Optimal Dispatch via Equivalent Projection-Part II: Method and Applications", "Non-Iterative Solution for Coordinated Optimal Dispatch via Equivalent Projection-Part II: Method and Applications" ]
[ "Member, IEEEZhenfei Tan ", "Senior Member, IEEEZheng Yan ", "Senior Member, IEEEHaiwang Zhong ", "Senior Member, IEEEQing Xia " ]
[]
[]
This two-part paper develops a non-iterative coordinated optimal dispatch framework, i.e., one free of iterative information exchange, via the innovation of the equivalent projection (EP) theory. The EP eliminates internal variables from the technical and economic operation constraints of a subsystem and obtains an equivalent model of reduced scale, which is the key to the non-iterative coordinated optimization. In Part II of this paper, a novel projection algorithm with an explicit error guarantee measured by the Hausdorff distance is proposed, which characterizes the EP model by the convex hull of its vertices. This algorithm is proven to yield a conservative approximation within the pre-specified error tolerance and obtains the exact EP model if the error tolerance is set to zero, which provides flexibility to balance computation accuracy and effort. Applications of the EP-based coordinated dispatch are demonstrated for multi-area coordination and transmission-distribution coordination. Case studies with a wide range of system scales verify the superiority of the proposed projection algorithm in terms of computational efficiency and scalability, and validate the effectiveness of the EP-based coordinated dispatch in comparison with the joint optimization.
10.48550/arxiv.2302.13280
[ "https://export.arxiv.org/pdf/2302.13280v1.pdf" ]
257,219,459
2302.13280
82434dac9b608a19255b6b8bce6f5939351e2b52
Non-Iterative Solution for Coordinated Optimal Dispatch via Equivalent Projection - Part II: Method and Applications

Zhenfei Tan, Member, IEEE, Zheng Yan, Senior Member, IEEE, Haiwang Zhong, Senior Member, IEEE, and Qing Xia, Senior Member, IEEE

Index Terms - Coordinated optimization, non-iterative, multi-area dispatch, distribution network, projection.

I. INTRODUCTION

A. Background and Literature Review

The coordinated optimal dispatch (COD) of power systems in different regions, voltage levels, and communities is vital for improving operational economy and security. Conventional coordinated optimization methods rely on iterative information exchange among subsystems, which brings drawbacks including convergence issues, communication burden, poor scalability, and incompatibility with the serial coordination scheme used in practice. In this regard, realizing the COD in a non-iterative fashion, i.e., without iterative information exchange among subsystems, is of great interest to both academia and industry. This two-part paper addresses this issue and proposes the equivalent projection (EP)-based solution. The basic idea is to construct an external equivalent of the technical and economic features of each subsystem for the upper-level coordinated optimization, which requires only a single-round exchange of boundary information. Following the theory and framework introduced in Part I, the present paper discusses the calculation method and typical applications of the EP.

The EP is mathematically a geometric projection problem: it eliminates internal decision variables from the secure and economic operation constraints of the subsystem and yields a low-dimensional region in the coordination variables that represents the system. Projection calculation is challenging even for linear systems because of its worst-case exponential complexity. The most classic projection algorithm is Fourier-Motzkin elimination (FME). The FME eliminates variables one by one through linear combinations of pairs of inequalities in which the coefficients of the variable to be eliminated have opposite signs [1].
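To make the elimination step concrete, here is a minimal sketch of one FME step in Python (illustrative only, not from the paper; `fme_eliminate_first` is a hypothetical helper and no redundancy removal is performed):

```python
import numpy as np

def fme_eliminate_first(A, c):
    """Eliminate variable x_0 from A x <= c by one Fourier-Motzkin step.

    Returns (A', c') describing the projection onto the remaining variables.
    Illustrative sketch only; redundant rows are not removed.
    """
    pos = [i for i in range(len(c)) if A[i, 0] > 0]    # upper bounds on x_0
    neg = [i for i in range(len(c)) if A[i, 0] < 0]    # lower bounds on x_0
    zero = [i for i in range(len(c)) if A[i, 0] == 0]  # independent of x_0
    rows, rhs = [], []
    for i in zero:
        rows.append(A[i, 1:]); rhs.append(c[i])
    # Pair every upper bound with every lower bound; the combination is
    # free of x_0. This pairing is the source of the combinatorial blow-up.
    for i in pos:
        for j in neg:
            rows.append(A[i, 1:] / A[i, 0] - A[j, 1:] / A[j, 0])
            rhs.append(c[i] / A[i, 0] - c[j] / A[j, 0])
    return np.array(rows), np.array(rhs)
```

Each pass can multiply the inequality count, which is exactly the blow-up that the redundancy-identification literature discussed next tries to contain.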
However, numerous redundant constraints are generated after eliminating each variable, which not only complicates the projection result but also increases the number of constraints to be processed when eliminating the next variable, dramatically aggravating the computational burden [2]. To overcome these drawbacks, existing studies propose redundancy-identification techniques to accelerate the FME calculation and obtain the minimal representation of the projection result [3], [4]. Another basic algorithm for polyhedral projection is block elimination [5], which identifies facets of the projected polytope based on the projection lemma. Though one-by-one elimination of variables is avoided, block elimination may also generate redundant constraints. The polyhedral projection problem is theoretically proven to be equivalent to the multiparametric programming (MPP) problem [6], which enables the solution of the projection problem via MPP algorithms; this method also leverages the projection lemma to identify irredundant inequalities of the projection.

The application of the above projection methods to the EP calculation in power systems is limited by features of the coordinated dispatch problem. First, these methods have exponential complexity in the number of variables to be eliminated and are therefore inefficient for power system dispatch problems, where the number of internal variables is large. Second, a conservative approximation of the exact EP model is usually desired in engineering applications to reduce the computational effort. However, the aforementioned methods cannot yield an inner approximation: they characterize the projection by generating inequalities and will overestimate the projection result if terminated prematurely, before all the inequalities are found. In this regard, existing studies in the power system literature propose different models to approximate the projection region, e.g., the box model [7], the zonotope model [8], the ellipse model [9], and the robust optimization-based model [10]. These methods approximate the projection region with simple shapes at the cost of accuracy. As the approximation shape is fixed in advance, they are only applicable to projection regions with specific geometric structures, and the approximation accuracy cannot be adjusted flexibly.

Another critical issue in the EP calculation is the error metric used to control the calculation accuracy of the algorithm. The error metric for the EP model measures the difference between the approximated region and the exact one. Reference [8] uses the distances between parallel facets of the region to measure the approximation quality; however, this metric applies only to a particular class of polytopes, namely zonotopes. The volume is a more general metric for comparing two regions. Reference [11] measures the calculation accuracy of the active and reactive flexibility region of the distribution network by comparing the areas of regions, and reference [11] uses the same metric to measure the flexibility-region enlargement brought by power flow routers. Based on the volume metric, the Jaccard similarity can be used to measure the relative error of the approximation [12]. However, evaluating volume-based metrics is time-consuming, especially for high-dimensional problems, and it relies on knowledge of the exact projected region. Hence, such a metric can only be used to test the approximation quality when the ground truth of the projection is known; it can hardly be used to control the approximation accuracy within the projection algorithm.
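Before moving on, the redundancy-identification step cited above ([3], [4]) admits a compact sketch: a constraint is redundant iff maximizing its left-hand side over the remaining constraints cannot exceed its right-hand side. A minimal Python version (`is_redundant` is a hypothetical helper; boundedness of the remaining system in the tested direction is assumed):

```python
import numpy as np
from scipy.optimize import linprog

def is_redundant(A, b, k, tol=1e-9):
    """Test whether row k of A x <= b is redundant (standard one-LP test).

    Maximize a_k @ x subject to all other rows; if the optimum never
    exceeds b_k, row k can be dropped without changing the feasible set.
    """
    mask = np.arange(len(b)) != k
    # linprog minimizes, so maximize a_k @ x by minimizing -a_k @ x.
    res = linprog(-A[k], A_ub=A[mask], b_ub=b[mask], bounds=(None, None))
    return res.status == 0 and -res.fun <= b[k] + tol
```

Running this test once per row before each elimination pass keeps the FME system small at the cost of one LP per constraint.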
B. Contributions and Paper Organization

To meet the practical requirements of the EP calculation in power system applications, Part II of this paper proposes a novel projection algorithm with an explicit accuracy guarantee. With this algorithm, the EP theory and the non-iterative COD method are instantiated for multi-area coordination and transmission-distribution coordination. The contributions of Part II are threefold:

1) The progressive vertex enumeration (PVE) algorithm is proposed for the EP calculation in power system applications. Compared with existing methods that solve the projection problem by eliminating internal variables, the proposed method directly identifies the projection region in the lower-dimensional coordination space and thus has lower computational complexity. Compared with existing approximation methods, the proposed algorithm is proven to be exact for the polyhedral projection problem. The proposed method can also be used to obtain an inner approximation with an explicit and adjustable error tolerance according to practical requirements.

2) The Hausdorff distance is employed to measure the approximation error of the PVE algorithm. Compared with the volume metric and the Jaccard similarity, the proposed error metric is a byproduct of the vertex identification in each step of the algorithm; it is therefore computationally efficient and can be used to balance the accuracy and computational effort of the algorithm.

3) The non-iterative COD of multi-area systems and transmission-distribution systems is realized based on the EP calculation, which thoroughly overcomes the iteration-related disadvantages of conventional coordinated optimization methods. The EP-based COD is verified to yield solutions identical to those of the joint optimization while consuming less computation time in scenarios with numerous subsystems.

The remainder of this paper is organized as follows. Section II introduces the PVE algorithm and its properties. Sections III and IV apply the EP-based COD method to the multi-area coordinated dispatch problem and the transmission-distribution coordination problem, respectively. Section V concludes the paper and discusses future research.

II. CALCULATION METHOD FOR EP

A. Problem Setup

In power system applications, linear models are mostly used for optimal dispatch for the sake of computational efficiency, reliability, and clear economic interpretation. A broad spectrum of literature has investigated linearized modeling of the transmission network [13], [14], the distribution network [15], [16], and multi-energy systems [17]. Nonlinear models, in contrast, are computationally intractable and may lead to incentive issues when non-convexity exists [18]; the application of nonlinear optimization is difficult even for centralized dispatch, let alone coordinated dispatch. Part II of this paper focuses on practice-oriented coordinated dispatch and therefore considers the widely used linear dispatch model.

As introduced in Part I, the EP eliminates internal variables from the operation feasible region of the subsystem and constructs an external equivalent of the subsystem model. The operation feasible region of the subsystem is enforced by both the technical constraints and the epigraph of the objective function.
Without loss of generality, represent the operation feasible region of the subsystem in the following compact form:

$\Omega := \{(x, y) \in \mathbb{R}^{N_x} \times \mathbb{R}^{N_y} : Ax + By \le c\}.$  (1)

In this model, x and y denote the coordination variable and the internal variable, respectively, and the constants $N_x$ and $N_y$ are their dimensions. The partition of variables is introduced in Part I of this paper. The matrices A and B and the vector c are the coefficients defining the operation constraints of the subsystem. Note that all variables are bounded in power system dispatch problems due to the operating limits of devices; hence, Ω formulated in (1) is a polytope in $\mathbb{R}^{N_x} \times \mathbb{R}^{N_y}$. As per Definition 2 in Part I of this paper, the EP model of the subsystem is

$\Phi := \{x \in \mathbb{R}^{N_x} : \exists y \ \mathrm{s.t.} \ (x, y) \in \Omega\}.$  (2)

The EP model depicts the technical and economic operation characteristics in the subspace of the coordination variable. It can replace the original model of the subsystem in the coordinated optimization and ensures equivalent optimality in a non-iterative and privacy-protected manner, as proven in Part I.

B. Representations of the EP Model

As defined in (2), the EP model Φ is the geometric projection of Ω onto $\mathbb{R}^{N_x}$. Since Ω is a polytope, its projection Φ is also a polytope [19]. According to the Minkowski-Weyl theorem [20], a polytope has the following two equivalent representations, known as the double description.

• Hyperplane representation (H-rep):

$\Phi = \{x \in \mathbb{R}^{N_x} : \tilde{A}x \le \tilde{c}\}.$  (3)

• Vertex representation (V-rep):

$\Phi = \mathrm{conv}(V) := \{\sum_{i=1}^{N_v} \lambda_i \hat{x}_i : \hat{x}_i \in V, \ \lambda_i \ge 0, \ \sum_{i=1}^{N_v} \lambda_i = 1\}.$  (4)

In equation (3), $\tilde{A}$ and $\tilde{c}$ denote the coefficients and the right-hand side of the constraints that represent the hyperplanes of the polytope. In equation (4), conv(·) denotes the convex hull of a set of vectors, the set V contains all the vertices of Φ, $N_v$ is the number of vertices, and $\hat{x}_i$ is the ith vertex of Φ. The H-rep and V-rep of a polytope are mutually convertible [20].

The EP calculation is to determine either the H-rep or the V-rep of Φ. Though the two representations are equivalent, they lead to different philosophies for designing the projection algorithm. In power system optimization problems, the H-rep is mostly used to model the operation feasible region. When calculating the projection, however, the V-rep turns out to be more suitable owing to two features of power systems. First, the dimension of the coordination variable is low compared with that of the internal variables, since the networks among subsystems are weakly connected compared with the networks inside subsystems. Hyperplane-oriented projection methods, e.g., the FME and block elimination, calculate the projection by eliminating internal variables and thus have exponential complexity in the dimension of the internal variable. By identifying vertices, in contrast, the projected polytope is characterized directly in the lower-dimensional coordination space, which lowers the computational difficulty. Second, from the perspective of practical implementation, the projection algorithm should terminate within a given time and yield a conservative approximation of the EP model. If a hyperplane-oriented method is terminated prematurely, some critical hyperplanes enclosing the EP model are missed, and the result may contain infeasible operating points. On the contrary, if the EP is calculated by identifying vertices, the intermediate output of the algorithm is always a subset of the exact result, by convexity.
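To illustrate the double description in (3)-(4), the sketch below converts a toy V-rep to an H-rep using Qhull via SciPy (toy data, not from the paper; Qhull stores each facet as [normal | offset] with normal · x + offset ≤ 0 for interior points):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Toy V-rep -> H-rep conversion for the double description (3)-(4).
# Qhull returns facet equations [normal | offset] with normal @ x + offset <= 0
# inside the hull, so A~ = normals and c~ = -offsets.
vertices = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.5]])  # made-up V
hull = ConvexHull(vertices)
A_tilde = hull.equations[:, :-1]
c_tilde = -hull.equations[:, -1]
assert np.all(vertices @ A_tilde.T <= c_tilde + 1e-9)  # H-rep holds at all vertices
```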
In this regard, this paper develops a vertex-oriented projection method, namely the PVE, to calculate the EP efficiently and practically.

C. PVE Algorithm

The PVE algorithm calculates the projected polytope by identifying its vertices. The key of the algorithm is twofold. First, the error metric has to be carefully designed to provide a clear interpretation of the calculation results and to allow pre-specification of the error tolerance. Second, the vertices have to be identified in a proper sequence to improve computational efficiency. To this end, the Hausdorff distance is employed to measure the approximation error, and a double-loop framework is developed to identify vertices of the projection along the path with the steepest descent of the approximation error. The basic idea of the algorithm is to first find the vertices that are critical to the overall shape of the projection, and then to expand the convex hull of the existing vertices to find new vertices outside the current approximation. The flowchart is summarized in Algorithm 1.

Algorithm 1: Progressive Vertex Enumeration (PVE)
1: Input: A, B, c, error tolerance ε
2: Initialization: obtain the initial vertex set V = V^(0) by solving (5) with α = ±e_i, i = 1, ..., N_x
3: repeat (outer loop, indexed by k)
4: Construct the convex hull of existing vertices: Φ̂^(k) = conv(V) = {x : ã_j^(k) x ≤ d̃_j^(k), j ∈ [J^(k)]}
5: Initialize the set of IRs: H^(k) = ∅
6: for j ∈ [J^(k)] do
7: Vertex identification: solve (5) with α = ã_j^(k), obtaining vertex x̂^(k,j) and its IR Δh^(k,j)
8: if Δh^(k,j) > 0 and x̂^(k,j) ∉ V then
9: Save the new vertex: V = {V, x̂^(k,j)}
10: Save the IR: H^(k) = {H^(k), Δh^(k,j)}
11: end if
12: end for
13: Evaluate the error metric: D^(k) = max H^(k)
14: until D^(k) ≤ ε
15: Output: Φ̂ = conv(V)

The PVE algorithm contains two layers of iterative loops; however, the algorithm is run locally by each subsystem to calculate its own EP model, so these loops do not conflict with the fact that the proposed coordinated optimization method requires no iterative information exchange among subsystems. Details of the PVE algorithm are introduced below along with a numerical example illustrated in Fig. 1.

1) Vertex identification problem: Each vertex of the projected polytope Φ is an extreme point, which can be identified by solving the following problem:

$\max_{x,y} \ h = \alpha^{\top} x \quad \mathrm{s.t.} \ Ax + By \le c.$  (5)

The vector α in the objective function represents the direction along which a vertex is identified.

2) Initialization: At least $N_x + 1$ initial vertices are required to ensure that the convex hull of the vertices is non-degenerate. These vertices can be searched along each axis, i.e., by solving problem (5) with α = ±e_i, where e_i is the ith standard basis vector, whose ith component equals 1 while the others equal 0. The set of initial vertices is denoted V^(0). In the illustrative example in Fig. 1(a), the blue polygon is the exact projection of a randomly generated polytope; four initial vertices of the projection, x̂^(0,1), x̂^(0,2), x̂^(0,3), and x̂^(0,4), are identified for initialization.

3) Inner loop: The inner loop of the PVE algorithm constructs the convex hull of the existing vertices and identifies new vertices outside the convex hull. The convex hull of the existing vertices in the kth outer loop is denoted Φ̂^(k), which is an intermediate approximation of the projection. Assume Φ̂^(k) is enclosed by J^(k) facets and the affine hull of the jth facet is ã_j^(k) x = d̃_j^(k). New vertices are searched along the outer normal directions of the facets of Φ̂^(k). For the jth facet, the outer normal vector is ã_j^(k); thereby, the jth inner loop identifies a new vertex by solving problem (5) with α = ã_j^(k).
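Problem (5) is a plain LP; a minimal sketch of the vertex-identification step (hypothetical helper `identify_vertex`, assuming the bounded polytope of (1)):

```python
import numpy as np
from scipy.optimize import linprog

def identify_vertex(alpha, A, B, c):
    """Solve problem (5): max alpha^T x  s.t.  A x + B y <= c.

    Returns the x-part of a maximizer (a boundary point of the projection
    Phi in direction alpha) and the optimal value h = alpha^T x.
    """
    n_x, n_y = A.shape[1], B.shape[1]
    cost = np.concatenate([-alpha, np.zeros(n_y)])   # linprog minimizes
    res = linprog(cost, A_ub=np.hstack([A, B]), b_ub=c, bounds=(None, None))
    return res.x[:n_x], -res.fun
```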
The newly identified vertex is the optimal solution of problem (5), denoted x̂^(k,j). The improvement ratio (IR) is calculated as in (6) to measure the contribution of vertex x̂^(k,j) to improving the current approximation:

$\Delta h^{(k,j)} = \dfrac{(\tilde{a}_j^{(k)})^{\top} \hat{x}^{(k,j)} - \tilde{d}_j^{(k)}}{\| \tilde{a}_j^{(k)} \|}.$  (6)

The IR measures the distance from vertex x̂^(k,j) to the jth facet of Φ̂^(k) and is non-negative. If Δh^(k,j) > 0, indicating that x̂^(k,j) is outside Φ̂^(k) and will improve the current approximation, then x̂^(k,j) is appended to the vertex set V and Δh^(k,j) is recorded in the set H^(k). If Δh^(k,j) = 0, indicating that x̂^(k,j) lies on a facet of Φ̂^(k) and will not improve the current approximation, the vertex is omitted. The example in Fig. 1(a) exhibits the vertex identification in the first outer loop (k = 1). The convex hull of the existing vertices is Φ̂^(1), shown as the red polygon. Vertices x̂^(1,1), x̂^(1,2), x̂^(1,3), and x̂^(1,4) are identified along the outer normal vectors of the four facets of Φ̂^(1), respectively. The distance from each vertex to the corresponding facet of Φ̂^(1) is the IR, marked by the dotted lines in Fig. 1(a).

4) Outer loop and termination criterion: The outer loop evaluates the error of the current approximation and compares it with the pre-specified error tolerance to decide whether to terminate the algorithm. The Hausdorff distance between the real projection Φ and the current polytope Φ̂^(k) is employed as the error metric, defined as follows [21]:

$D^{(k)} := \max_{x_1 \in \Phi} \ \min_{x_2 \in \hat{\Phi}^{(k)}} \| x_2 - x_1 \|_2.$  (7)

The interpretation of D^(k) is the maximum distance from points in Φ to the polytope Φ̂^(k). Note that Φ̂^(k) is the convex hull of existing vertices of Φ, so Φ̂^(k) ⊂ Φ. Hence, D^(k) is attained as the maximum distance from vertices of Φ to the polytope Φ̂^(k). According to the definition in (6), the distance from each identified vertex of Φ to Φ̂^(k) is exactly the corresponding IR. Thereby, the error metric can be evaluated as

$D^{(k)} = \max_{x_1 \in V} \ \min_{x_2 \in \hat{\Phi}^{(k)}} \| x_2 - x_1 \|_2 = \max_j \Delta h^{(k,j)} = \max H^{(k)}.$  (8)

From the above derivation, the Hausdorff distance can be evaluated by selecting the largest value from a set of scalars, which consumes much less computational effort than error metrics based on the region volume. If D^(k) is no larger than the pre-specified error tolerance ε, the algorithm terminates and outputs the convex hull of the existing vertices as the approximation of the real projection. The Hausdorff-distance-based error metric ensures that the maximum deviation of the approximation is within ε. If the termination criterion is not met, the algorithm moves into the next outer loop to identify more vertices outside the current approximation. In Fig. 1(a), the error metric D^(1) for outer loop k = 1 equals the IR of vertex x̂^(1,3), which has the farthest distance to Φ̂^(1). Fig. 1(b) exhibits the number of new vertices and the error metric in each outer loop of the PVE algorithm; the algorithm terminates at the 4th outer loop when D^(4) = 0, with a total of 22 vertices of the projection identified. As shown in the figure, the error metric keeps decreasing along the steepest path during the algorithm, indicating that vertices with significant influence on the projection are identified with higher priority.

With the Hausdorff-distance-based error metric, the process of the PVE algorithm can be interpreted as searching for the vertices that make the approximation error decline fastest. This error metric also has two further advantages. First, the Hausdorff distance can be evaluated conveniently according to (8), which brings no additional computational complexity. Second, the evaluation of the Hausdorff distance does not rely on the ground truth of the projection result, so the proposed error metric can be used to control the accuracy during the projection algorithm. In contrast, other error metrics, e.g., the volume difference metric and the Jaccard similarity, rely on knowledge of the exact projection result; consequently, they are hard to calculate and cannot be used to control the accuracy of the projection algorithm.
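Putting steps 1)-4) together, here is a compact end-to-end sketch of Algorithm 1 (toy code under the bounded-polytope assumption of (1), not the authors' implementation; it exploits the fact that Qhull returns unit facet normals, so the denominator of (6) equals 1):

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial import ConvexHull

def identify_vertex(alpha, A, B, c):
    # Problem (5): max alpha^T x  s.t.  A x + B y <= c  (linprog minimizes).
    cost = np.concatenate([-alpha, np.zeros(B.shape[1])])
    res = linprog(cost, A_ub=np.hstack([A, B]), b_ub=c, bounds=(None, None))
    return res.x[:A.shape[1]], -res.fun

def pve(A, B, c, eps=0.0, max_outer=50, tol=1e-9):
    """Sketch of Algorithm 1; conv(V) of the returned vertices approximates Phi."""
    n_x = A.shape[1]
    V = []                                    # line 2: initialization along +/- axes
    for i in range(n_x):
        e = np.zeros(n_x); e[i] = 1.0
        V.append(identify_vertex(e, A, B, c)[0])
        V.append(identify_vertex(-e, A, B, c)[0])
    D = np.inf
    for _ in range(max_outer):                # outer loop
        hull = ConvexHull(np.array(V))        # line 4: hull of existing vertices
        irs = []                              # line 5: set of IRs H^(k)
        for eq in hull.equations:             # inner loop over facets
            a, d = eq[:-1], -eq[-1]           # facet: a @ x <= d, with |a| = 1
            x, h = identify_vertex(a, A, B, c)
            ir = h - d                        # improvement ratio (6)
            if ir > tol:                      # lines 8-11: keep improving vertices
                V.append(x); irs.append(ir)
        D = max(irs, default=0.0)             # Hausdorff error metric (8)
        if D <= eps:
            break                             # line 14: termination criterion
    return np.array(V), D
```

Setting eps = 0 recovers the exact projection, per Theorem 1 below; eps > 0 trades vertices for speed.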
D. Discussions

1) Properties of the PVE algorithm: The convergence of the PVE algorithm and the conservatism of the approximation are guaranteed by the following theorem.

Theorem 1. If the error tolerance ε > 0, the output of the PVE algorithm is a conservative approximation of the exact projection; if ε = 0, the exact projection is obtained by the PVE algorithm within finitely many calculations.

Proof. In each round of the outer loop of the PVE algorithm, the convex hull of the existing vertices is an intermediate approximation of the projection. Since the projected region is a polytope, the convex hull of its vertices is a subset of the exact projection, i.e., Φ̂^(k) ⊂ Φ for any k. Hence, if ε > 0, a conservative approximation of the exact projection is obtained, with Hausdorff distance no larger than ε. If ε is set to 0, then no point of Φ is outside Φ̂^(k) when the algorithm terminates, so Φ ⊂ Φ̂^(k); since also Φ̂^(k) ⊂ Φ, we have Φ̂^(k) = Φ. Since the number of vertices of a polytope is finite, the algorithm terminates within finitely many steps.

2) Accelerating strategies: Three strategies are proposed to further accelerate the PVE algorithm. The first is to accelerate the solution of the vertex identification problem (5). Note that only the objective vector varies between vertex identifications; the structure and parameters of the constraints are invariant. Thereby, the redundant-constraint elimination technique and the warm simplex-basis technique can be used to accelerate the solution of (5). Redundant-constraint elimination removes constraints that do not affect the feasible region, reducing the problem scale; typical methods can be found in the literature [22], [23]. A warm simplex basis for problem (5) can be specified using the solution of an adjacent vertex lying on the facet whose outer normal vector is used as the search direction; specifying the starting basis reduces the iteration count of the simplex algorithm and thus accelerates the vertex identification. The second strategy is to accelerate the inner loop of the PVE algorithm by parallel computing: in line 7 of the algorithm, the identification of each vertex is independent, so the vertices can be computed in parallel to save time. The third is to update the convex hull dynamically when adding new vertices in each outer loop (line 4 of Algorithm 1), instead of building a new convex hull from all vertices; the updating can be realized by the quickhull method [24], which is not only computationally efficient but also capable of constructing high-dimensional convex hulls.

3) Analysis of adaptability: First, for problems where the projection region is a general convex set, the proposed PVE algorithm can still be applied: perimeter points of the projection region are identified by the algorithm, and the convex hull of these perimeter points is an inner approximation of the projection whose error, measured by the Hausdorff distance, is less than ε. Second, if the projection region is non-convex, the PVE algorithm cannot be applied directly; a potential remedy is to decompose the non-convex projection region into a union of convex sub-regions and use the PVE algorithm to identify each sub-region.
Third, for problems involving multiple time intervals, the coordination variable may be high-dimensional, and direct application of the PVE algorithm would be inefficient. There are strategies to decompose the time-coupled projection problem into a series of lower-dimensional subproblems involving each single time interval and adjacent intervals [25]; after decomposition, each low-dimensional projection can be calculated efficiently by the PVE algorithm.

III. APPLICATION TO MULTI-AREA COORDINATED OPTIMAL DISPATCH

A. Problem Formulation

Power systems in different areas are interconnected physically through inter-area transmission lines, but different areas may be operated by different system operators. The multi-area coordinated optimal dispatch (MACOD) is thus indispensable for the economic and secure operation of large-scale power systems. The MACOD minimizes the total operation cost of the multi-area system in a decomposed manner. Use P, π, and θ to represent variables of active power, operation cost, and voltage phase angle, respectively. Use superscripts G, D, B, RN, MA, and TL to label generation, load, power exchange at a boundary node, regional network, multi-area system, and tie-line, respectively. Let r ∈ R^RN and s ∈ S index areas and tie-lines, and let n ∈ N^RN_r and l ∈ L^RN_r index the network nodes and internal branches of the rth area. The objective function of a basic MACOD is

$\min \ \pi^{MA} = \sum_{r \in R^{RN}} \pi^{RN}_r,$  (9)

where π^RN_r is the operation cost of area r and π^MA is the total cost of the multi-area system. The following constraints, collectively referred to as (10), need to be satisfied for area r. Equation (10a) models the operation cost of area r in epigraph form, where π^RN,G_{r,n} is the cost of generator n and π̄^RN_r is a constant large enough to bound π^RN_r. Equation (10b) is the epigraph of the piecewise linear cost function of generator n, where a^RN,G_{r,n,i} and b^RN,G_{r,n,i} are the coefficients of each bidding segment. Equations (10c), (10d), and (10e) enforce the power balance, transmission limits, and generation limits of area r; in these equations, T_{r,l,n} is the power transfer distribution factor of node n to branch l, F^RN_{r,l} is the capacity of branch l, and the lower and upper bounds of generation output bound each P^RN,G_{r,n}, respectively.

The operating constraints of the tie-lines are as follows:

$P^{TL}_s = \dfrac{1}{x_s} \left( \theta_{r^F_s} - \theta_{r^T_s} \right), \quad \forall s,$  (11a)

$\underline{F}^{TL}_s \le P^{TL}_s \le \overline{F}^{TL}_s, \quad \forall s.$  (11c)

Equation (11a) is the direct-current power flow model for tie-lines, where x_s is the reactance of tie-line s, and r^F_s and r^T_s denote the from-region and to-region of the tie-line, respectively. In this work, each area is aggregated as a single node when evaluating tie-line power flows, which is widely accepted in the literature [26] and is also implemented in the flow-based market integration in Europe [27]. Equation (11b) maps tie-line power flows to the boundary power injections of each area, where n^F_s and n^T_s denote the from-node and to-node of tie-line s, respectively. Equation (11c) enforces the transmission limits of the tie-lines.
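As a concrete reading of (11a) and (11c), the sketch below assembles the aggregated tie-line model as a single matrix inequality (hypothetical helper and data layout, not from the paper; the node-level mapping (11b) is omitted, as in the excerpt above):

```python
import numpy as np

def tieline_constraints(x_s, r_from, r_to, F_min, F_max):
    """Assemble the DC tie-line model (11a) with limits (11c).

    For tie-line s: P_s = (theta[r_from[s]] - theta[r_to[s]]) / x_s,
    with F_min[s] <= P_s <= F_max[s]. Each area is aggregated as one
    node, so theta holds one phase angle per area. Returns (M, lb, ub)
    such that lb <= M @ theta <= ub.
    """
    n_area = max(max(r_from), max(r_to)) + 1
    M = np.zeros((len(x_s), n_area))
    for s, (rf, rt) in enumerate(zip(r_from, r_to)):
        M[s, rf] = 1.0 / x_s[s]
        M[s, rt] = -1.0 / x_s[s]
    return M, np.asarray(F_min), np.asarray(F_max)
```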
B. EP-Based Solution for MACOD

In the MACOD problem, the variable P^RN,B_{r,n} appears in the constraints of both the regional system and the tie-line system, which prevents the independent optimization of each area. Based on the EP theory proposed in Part I of this paper, the MACOD problem can be solved in a decomposed manner without iterative information exchange. From the perspective of primal decomposition, (P^RN,B_r, π^RN_r) is the coordination variable and the remaining regional quantities form the internal variable. According to (1), the operation feasible region of each area is

$\Omega^{RN}_r := \{ (x^{RN}_r, y^{RN}_r) : \mathrm{Eq.}\ (10) \},$  (12)

and according to (2), the EP model of the area is the projection of Ω^RN_r onto the subspace of the coordination variable, i.e., Φ^RN_r (13).

The EP model of the regional system contains all values of (P^RN,B_r, π^RN_r) that can be met by at least one generation schedule subject to the regional operation constraints with generation cost no larger than π^RN_r. Note that the objective of the MACOD is to minimize the sum of π^RN_r. Using the EP model as a substitute for the regional system model (10) in the MACOD, the cost variable π^RN_r is minimized and the decision result for P^RN,B_r is guaranteed to be feasible for the regional system, which leads to the coordinated optimality of the multiple areas. A theoretical proof of the optimality is given in Theorem 3 in Part I of this paper. The EP-based MACOD is then realized in three steps.

1) Equivalent projection. Each area calculates the EP model Φ^RN_r of the local system according to the definition in (13), which can be realized by the PVE algorithm introduced in Section II-C. Whereafter, Φ^RN_r is submitted to the multi-area coordinator.

2) Coordinated optimization. The coordinator solves the EP-based MACOD problem min{Σ_r π^RN_r : Eq. (11), (P^RN,B_r, π^RN_r) ∈ Φ^RN_r, ∀r ∈ R^RN}. The optimal value (P̄^RN,B_r, π̄^RN_r) is then published to each area.

3) Regional system operation. Each area fixes P̄^RN,B_r as its boundary condition and solves the local optimal dispatch problem min{π^RN_r : Eq. (10), P^RN,B_r = P̄^RN,B_r}.

The above process is executed sequentially, which naturally overcomes the drawbacks caused by the repeated iterations of conventional coordinated optimization methods.

C. Case Study

The EP of the regional transmission system is visualized, and the computational performance of the EP calculation and the EP-based MACOD is tested. The case studies in this paper are simulated on a Lenovo ThinkPad X13 laptop with an Intel Core i7-10510U 1.80-GHz CPU; the algorithms are programmed in MATLAB R2020a with GUROBI V9.0.0.

1) EP of the regional system: We employ the IEEE-24 system to visualize the EP model of the regional transmission system. Two tie-lines are connected to node 1 and node 3 of the system, each with a capacity of 510 MW (15% of the total generation capacity). The results are shown in Fig. 2, where subplots (a) and (b) exhibit the EP models at the valley-load hour (77% of the peak load) and the peak-load hour, respectively. The red regions are the EP models of the system, which are 3-dimensional polytopes. As can be seen, the volume of the EP model at the peak hour is smaller than that at the valley hour; this is because the increased load occupies export capacity and raises the power supply cost. The projection of the EP model onto the subspace of (P^RN,B_{r,1}, P^RN,B_{r,2}), denoted Ξ^RN_r, is also plotted; it characterizes the admissible set of tie-line flows that can be executed by the regional system.

2) EP calculation: We test the computational performance of the proposed PVE algorithm for the EP calculation on the IEEE-24 test system, the 200-node synthetic grid (SG-200), and the 500-node synthetic grid (SG-500) [28]. Cases with 3 and 6 tie-lines are considered for each test system, with the FME employed as the benchmark method. As summarized in TABLE I, the FME fails to obtain the EP model within 1200 s even for the small-scale IEEE-24 system. The proposed PVE algorithm, in contrast, yields the EP model within 4 s for all six cases, which is acceptable for practical application. Measuring the scale of the regional system model (10) by the product of the numbers of variables and constraints, the EP reduces the model scale of the regional system by 92.7%-99.9%, which greatly alleviates the communication and computation burden of the coordinated dispatch of multiple areas while protecting the private information of the regional systems.

3) EP-based MACOD: Four test systems composed of 20 and 40 synthetic grids of different scales are constructed according to the tie-line topology in [29]. The MACOD is solved by the joint optimization and by the EP-based decomposed solution, respectively. The optimization results from the two methods are verified to be identical, which validates the accuracy of the proposed coordination method. Following the process introduced in Section III-B, the time for system reduction is determined by the region that takes the longest computation time, since the EP models of different regions are calculated in parallel; the total time to obtain the optimal tie-line schedule is the sum of the system reduction time and the coordinated optimization time. The results are summarized in TABLE II. As shown, the total computation time of the EP-based MACOD is comparable with that of the joint optimization. With the EP, the time consumed by the coordinated optimization itself is reduced by more than 73.8% compared with the joint optimization, because the scale of the coordination problem is significantly reduced by the EP of each region. Beyond computational efficiency, the primary advantages of the proposed coordination method are that it avoids disclosing private data, unlike the joint optimization, and avoids iterative information exchange, unlike conventional coordinated optimization methods.
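To make step 2) of the coordination concrete: once Φ_r arrives as a vertex list (the V-rep produced by the PVE), the coordinator's subproblem for one area is a small LP in the convex-combination weights of (4). A toy sketch with made-up vertices (hypothetical helper `dispatch_over_vrep`; a real coordinator would couple all areas through (11)):

```python
import numpy as np
from scipy.optimize import linprog

def dispatch_over_vrep(V_r, p_target):
    """Minimize cost over Phi_r = conv(V_r) at a fixed boundary exchange.

    V_r has one row per vertex, columns (P_boundary, pi). Any point of
    Phi_r equals lam @ V_r with lam >= 0 and sum(lam) = 1, so the
    coordinator needs no internal network data of the area.
    """
    n_v = V_r.shape[0]
    res = linprog(V_r[:, 1],                          # minimize the cost coordinate
                  A_eq=np.vstack([np.ones(n_v), V_r[:, 0]]),
                  b_eq=[1.0, p_target],               # convexity + boundary target
                  bounds=(0, None))
    return res.x @ V_r                                # optimal (P_boundary, pi)

verts = np.array([[0.0, 5.0], [100.0, 2.0], [100.0, 9.0], [0.0, 12.0]])
print(dispatch_over_vrep(verts, 50.0))                # -> [50.0, 3.5]
```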
IV. APPLICATION TO TRANSMISSION-DISTRIBUTION COORDINATED OPTIMAL DISPATCH

A. Problem Formulation

Distribution networks installed with DERs can provide flexibility to the transmission system by adjusting the power exchange at the substation, which calls for the transmission-distribution coordinated optimal dispatch (TDCOD). In this study, optimal active power dispatch is considered for the transmission system, while the co-optimization of active and reactive power is considered for the distribution system with voltage limits. Let P, Q, V, and π denote variables of active power, reactive power, voltage magnitude, and operation cost, respectively. Similar to the notation in Section III, use superscripts G, D, and F to label generation, load, and power flow, and superscripts TN and DN to label variables of the transmission and distribution systems, respectively. Let r ∈ R^DN index the distribution networks participating in the coordinated dispatch, let n ∈ N^DN_r and l ∈ L^DN_r index the nodes and branches of distribution network r, and let n ∈ N^TN and l ∈ L^TN index the nodes and branches of the transmission network. The objective of the TDCOD is to minimize the total operation cost of the transmission and distribution systems:

$\min \ \pi^{TD} = \sum_{n \in N^{TN}} C_n(P^{TN,G}_n) + \sum_{r \in R^{DN}} \pi^{DN}_r.$  (14)

The power balance constraint, transmission limits, and generation limits, collectively denoted (15), are considered for the transmission network. The operation constraints of the rth distribution network, collectively denoted (16), are as follows. Equations (16a) and (16b) model the operation costs of the distribution system and of DER n in epigraph form, e.g.,

$\overline{\pi}^{DN}_r \ge \pi^{DN}_r \ge \sum_{n \in N^{DN}_r} \pi^{DN,G}_{r,n},$  (16a)

where π̄^DN_r is a large enough constant and a^DN,G_{r,n,i} and b^DN,G_{r,n,i} are the coefficients of the ith segment of the piecewise linear cost function of DER n. Equation (16c) enforces nodal power balance, where P^DN,F_{r,mn} and Q^DN,F_{r,mn} respectively denote the active and reactive power flow on the feeder from node m to node n, and A_n is the set of nodes adjacent to node n; P^DN_{r,0} and Q^DN_{r,0} are the net active and reactive power that the distribution network injects into the transmission system. Equation (16d) is the simplified DistFlow model of the distribution network [30], which incorporates reactive power and can be applied to distribution networks with large R/X ratios. Equation (16e) enforces the transmission limit of branch mn with a piecewise inner approximation, where F^DN_{r,mn} is the capacity of the branch. Equations (16f) and (16g) enforce limits on the nodal voltage magnitudes and the power output of DERs, respectively, with the voltage limits of the form

$\underline{V}^2_{r,n} \le V^2_{r,n} \le \overline{V}^2_{r,n}.$  (16f)

Taking V^2_{r,n} as the independent variable, the above constraints are linear.

B. EP-Based Solution for TDCOD

In problem (14)-(16), the optimization of the transmission and distribution systems is coupled by P^DN_{r,0}. Taking (P^DN_{r,0}, π^DN_r) as the coordination variable and the remaining quantities as the internal variable, the operation feasible region of the distribution network, denoted Ω^DN_r, is the set of (x^DN_r, y^DN_r) subject to the constraints in (16). Then the EP model of the distribution network can be formulated according to the definition in (2), denoted Φ^DN_r; it is the projection of Ω^DN_r onto the subspace of x^DN_r. Note that the constraints in (16) are linear; thus both Ω^DN_r and its projection Φ^DN_r are polytopes, and Φ^DN_r can be calculated by the proposed PVE algorithm. The EP model Φ^DN_r contains all possible combinations of P^DN_{r,0} and π^DN_r that can be executed by the distribution network with at least one internal generation schedule satisfying the constraints in (16). Using the EP models of the distribution networks to replace their original models in the TDCOD problem, the operation cost π^DN_r of each distribution network is minimized as in (14), and the active power exchange P^DN_{r,0} determined by the transmission system operator is guaranteed to be feasible for the distribution network. Hence, the EP-based coordinated optimization of the transmission and distribution systems is identical to the joint optimization.
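As a pointer to why (16d) stays linear, here is the LinDistFlow voltage-drop relation written out for one feeder segment (a standard simplification of the Baran-Wu DistFlow model [30], shown as a sketch; U = V² is the independent variable and losses are neglected):

```python
def lindistflow_voltage(U_m, P_mn, Q_mn, r_mn, x_mn):
    """LinDistFlow voltage equation: U_n = U_m - 2 (r P + x Q), with U := V^2.

    Linear in (U, P, Q), so it fits directly into the polytope form (1).
    The loss terms of the full DistFlow model are dropped.
    """
    return U_m - 2.0 * (r_mn * P_mn + x_mn * Q_mn)

# e.g. a 1.0 p.u. source feeding 0.5 + j0.2 p.u. through r = 0.01, x = 0.02 p.u.
U_n = lindistflow_voltage(1.0, 0.5, 0.2, 0.01, 0.02)  # -> 0.982, i.e. V ~ 0.991 p.u.
```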
The process of the EP-based TDCOD consists of three successive steps.

1) Equivalent projection. Each distribution system calculates the EP model Φ^DN_r and reports it to the transmission system operator.

2) Coordinated optimization. The transmission system operator solves the EP-based TDCOD problem, i.e., minimizes the objective function in (14) subject to constraint (15) and (P^DN_{r,0}, π^DN_r) ∈ Φ^DN_r for all r ∈ R^DN. The optimal value (P̄^DN_{r,0}, π̄^DN_r) is then published to each distribution system.

3) Distribution system operation. Each DSO fixes P̄^DN_{r,0} as its boundary condition and optimally dispatches the local system by solving min{π^DN_r : Eq. (16), P^DN_{r,0} = P̄^DN_{r,0}}.

The above coordination process requires no iterations between the transmission and distribution systems. It is also compatible with the existing transmission system dispatch, which schedules the optimal output of resources based on their submitted information.

C. Case Study

1) EP of the distribution network: The modified IEEE-13 test feeder with 6 DERs from reference [31] is used to demonstrate the EP of the distribution network. The red polygons in Fig. 3 exhibit the EP models of the test system at the valley and peak hours, which are 2-dimensional regions in the net power exchange P^DN_{r,0} and the operation cost π^DN_r. The projection of the EP model onto the subspace of P^DN_{r,0} is denoted Ξ^DN_r; it characterizes the range of flexibility that the distribution network can provide. The green curves with triangular markers are the cumulative cost functions of the DERs. As can be seen, the cumulative cost curve lies beneath the EP model and has a wider range of net power exchange, because network limits are omitted when generating the cumulative cost curve. Hence, merely aggregating the cost curves of DERs for transmission-level dispatch is not enough; the EP of the distribution system, incorporating both cost functions and network constraints, is needed for optimal coordination between the transmission and distribution systems.

2) EP calculation: Multiple IEEE-13 feeders are connected at the root node to create larger-scale test systems with 25, 49, 241, and 2401 nodes to examine the computational performance of the EP calculation. As shown in TABLE III, the proposed PVE algorithm successfully obtains the EP results of all five test systems within 3 seconds. In contrast, the conventional FME method takes more than 10 and 129 times the computational effort to calculate the EP for the 13-node and 25-node test systems, respectively, and fails to yield the EP results within 1200 s for test systems with more than 49 nodes. This shows the superiority of the PVE-based EP calculation in terms of computational efficiency and scalability. As for the reduction of the model scale, the EP model of the distribution network is described by a group of 2-dimensional linear constraints, which reduces the model scale of the distribution network by more than 98.4%.

3) EP-based TDCOD: The IEEE-24 system is connected with different numbers of IEEE-13 distribution feeders to simulate the coordinated dispatch between the transmission and distribution systems. The coordinated dispatch is solved by the joint optimization and by the EP-based coordination method, respectively; the optimization results from the two methods are verified to be identical for all test cases. The computation times are summarized in TABLE IV. The computation time of the EP-based coordination is dominated by the system reduction process. With the EP, the time of the coordinated optimization is reduced by more than 69.2%, which may help to relieve the computation burden of the transmission system.
The total computation time of the EP-based coordination is also lower than that of the joint optimization in cases with more than 100 distribution networks, because the EP distributes the computation burden of solving a large-scale centralized optimization to the subsystems in parallel and is thus more efficient for the coordinated management of numerous distribution networks.

V. CONCLUSION

This two-part paper proposes the EP theory to construct external equivalents of subsystems and realizes the COD of power systems in a non-iterative fashion. To calculate the EP efficiently and accurately, Part II of this paper proposes a novel polyhedral projection algorithm termed the PVE, which characterizes the EP model by identifying its vertices and building their convex hull. The vertex identification process in the PVE algorithm is designed to give higher priority to vertices that are critical to the overall shape of the projection, which makes the approximation error decrease along the steepest path. The Hausdorff distance is employed to measure and control the calculation accuracy of the PVE algorithm, providing flexibility to balance the accuracy and the computational effort of the EP calculation in practice.

The EP theory and the PVE algorithm are applied to the non-iterative COD of multi-area systems and transmission-distribution systems. Case studies of different scales confirm the superiority of the proposed PVE algorithm in terms of computational efficiency and scalability compared with the conventional FME algorithm. The EP is verified to reduce the subsystem model scale by more than 92.7% for regional transmission systems and by more than 98.4% for distribution networks, which alleviates the computation and communication burden in coordinated dispatch. The effectiveness of the EP-based non-iterative coordinated dispatch is also validated in comparison with the joint optimization.

The proposed EP theory and non-iterative coordinated optimization framework are general and may find a broad spectrum of applications beyond the instances in this paper, e.g., the coordinated dispatch of multi-energy networks, the coupling of power and traffic networks, and the integration of user-side flexible resources. Applications of the PVE algorithm to other projection problems, e.g., flexibility region aggregation, the loadability set [2], and projection-based robust optimization [32], are also worthy of future investigation. Incorporating robust feasibility and chance constraints to deal with uncertainty, and incorporating intertemporal constraints in the projection calculation, are natural extensions of the proposed method.

Fig. 1. Illustrative example of the PVE algorithm. (a) Vertex identification in outer loop 1. (b) Number of new vertices and the error metric of each outer loop.
Fig. 2. EP models of the IEEE-24 system at (a) valley hour and (b) peak hour.

Fig. 3. EP models of the IEEE-13 distribution network at (a) valley load and (b) peak load.
TABLE I
COMPUTATIONAL EFFICIENCY OF EP CALCULATION
System  | |N^B_r| | Time of FME (s) | Time of PVE (s) | Rate of model reduction
IEEE-24 | 3 | >1200 | 0.34 | 95.4%
IEEE-24 | 6 | >1200 | 3.54 | 92.7%
SG-200  | 3 | >1200 | 0.46 | 99.8%
SG-200  | 6 | >1200 | 2.69 | 99.6%
SG-500  | 3 | >1200 | 0.47 | 99.9%
SG-500  | 6 | >1200 | 2.99 | 99.6%

TABLE II
COMPUTATIONAL EFFICIENCY OF EP-BASED MACOD
System      | Joint optimization (s) | EP-based: system reduction (s) | coordinated optimization (s) | total (s)
20 x SG-200 | 0.42 | 0.38 | 0.11 | 0.49
40 x SG-200 | 0.61 | 0.32 | 0.10 | 0.42
20 x SG-500 | 0.51 | 0.49 | 0.09 | 0.58
40 x SG-500 | 0.78 | 0.51 | 0.10 | 0.61

TABLE III
COMPUTATIONAL EFFICIENCY OF EP CALCULATION
System  | Number of DERs | Time of FME (s) | Time of PVE (s) | Rate of model reduction
DN-13   | 6    | 3.75  | 0.34 | 98.4%
DN-25   | 12   | 41.29 | 0.32 | 99.5%
DN-49   | 24   | >1200 | 0.37 | >99.9%
DN-241  | 120  | >1200 | 0.62 | >99.9%
DN-2401 | 1200 | >1200 | 2.37 | >99.9%

TABLE IV
COMPUTATIONAL EFFICIENCY OF EP-BASED TDCOD
Num of DNs | Joint optimization (s) | EP-based: system reduction (s) | coordinated optimization (s) | total (s)
50  | 0.26 | 0.31 | 0.08 | 0.39
100 | 0.47 | 0.30 | 0.09 | 0.39
200 | 0.68 | 0.31 | 0.12 | 0.43
400 | 0.97 | 0.31 | 0.20 | 0.51

REFERENCES
[1] D. Bertsimas and J. N. Tsitsiklis, Introduction to Linear Optimization. Belmont, MA: Athena Scientific, 1997, vol. 6.
[2] A. Abiri-Jahromi and F. Bouffard, "On the loadability sets of power systems - Part I: Characterization," IEEE Trans. Power Systems, vol. 32, no. 1, pp. 137-145, 2016.
[3] J.-L. Imbert, "Fourier's elimination: Which to choose?" in PPCP, vol. 1, Citeseer, 1993, pp. 117-129.
[4] A. Abiri-Jahromi and F. Bouffard, "On the loadability sets of power systems - Part II: Minimal representations," IEEE Trans. Power Systems, vol. 32, no. 1, pp. 146-156, 2016.
[5] V. Chandru, "Variable elimination in linear constraints," The Computer Journal, vol. 36, no. 5, pp. 463-472, 1993.
[6] C. N. Jones, E. C. Kerrigan, and J. M. Maciejowski, "On polyhedral projection and parametric programming," Journal of Optimization Theory and Applications, vol. 138, no. 2, pp. 207-220, 2008.
[7] X. Chen, E. Dall'Anese, C. Zhao, and N. Li, "Aggregate power flexibility in unbalanced distribution systems," IEEE Trans. Smart Grid, vol. 11, no. 1, pp. 258-269, 2020.
[8] F. L. Müller, J. Szabó, O. Sundström, and J. Lygeros, "Aggregation and disaggregation of energetic flexibility from distributed energy resources," IEEE Trans. Smart Grid, vol. 10, no. 2, pp. 1205-1214, 2019.
Lygeros, "Aggregation and disaggregation of energetic flexibility from distributed energy resources," IEEE Trans. Smart Grid, vol. 10, no. 2, pp. 1205-1214, 2019. Aggregate modeling of distribution systems for multi-period OPF. E Polymeneas, S Meliopoulos, 2016 Power Systems Computation Conference (PSCC). IEEEE. Polymeneas and S. Meliopoulos, "Aggregate modeling of distribution systems for multi-period OPF," in 2016 Power Systems Computation Conference (PSCC). IEEE, 2016, pp. 1-8. Leveraging two-stage adaptive robust optimization for power flexibility aggregation. X Chen, N Li, IEEE Trans. Smart Grid. 125X. Chen and N. Li, "Leveraging two-stage adaptive robust optimization for power flexibility aggregation," IEEE Trans. Smart Grid, vol. 12, no. 5, pp. 3954-3965, 2021. Estimating the active and reactive power flexibility area at the TSO-DSO interface. J Silva, J Sumaili, R J Bessa, L Seca, M A Matos, V Miranda, M Caujolle, B Goncer, M Sebastian-Viana, IEEE Trans. Power Systems. 335J. Silva, J. Sumaili, R. J. Bessa, L. Seca, M. A. Matos, V. Miranda, M. Caujolle, B. Goncer, and M. Sebastian-Viana, "Estimating the active and reactive power flexibility area at the TSO-DSO interface," IEEE Trans. Power Systems, vol. 33, no. 5, pp. 4741-4750, 2018. Distance between sets. M Levandowsky, D Winter, Nature. 2345323M. Levandowsky and D. Winter, "Distance between sets," Nature, vol. 234, no. 5323, pp. 34-35, 1971. DC power flow revisited. B Stott, J Jardim, O Alsaç, IEEE Trans. Power Systems. 243B. Stott, J. Jardim, and O. Alsaç, "DC power flow revisited," IEEE Trans. Power Systems, vol. 24, no. 3, pp. 1290-1300, 2009. An extended dc power flow model considering voltage magnitude. D Liu, L Liu, H Cheng, S Zhang, J Xin, Journal of Modern Power Systems and Clean Energy. 93D. Liu, L. Liu, H. Cheng, S. Zhang, and J. Xin, "An extended dc power flow model considering voltage magnitude," Journal of Modern Power Systems and Clean Energy, vol. 9, no. 3, pp. 679-683, 2021. Lossy DistFlow formulation for single and multiphase radial feeders. E Schweitzer, S Saha, A Scaglione, N G Johnson, D Arnold, IEEE Trans. Power Systems. 353E. Schweitzer, S. Saha, A. Scaglione, N. G. Johnson, and D. Arnold, "Lossy DistFlow formulation for single and multiphase radial feeders," IEEE Trans. Power Systems, vol. 35, no. 3, pp. 1758-1768, 2020. Linear three-phase power flow for unbalanced active distribution networks with PV nodes. Y Wang, N Zhang, H Li, J Yang, C Kang, CSEE Journal of Power and Energy Systems. 33Y. Wang, N. Zhang, H. Li, J. Yang, and C. Kang, "Linear three-phase power flow for unbalanced active distribution networks with PV nodes," CSEE Journal of Power and Energy Systems, vol. 3, no. 3, pp. 321-324, 2017. Standardized matrix modeling of multiple energy systems. Y Wang, N Zhang, C Kang, D S Kirschen, J Yang, Q Xia, IEEE Trans. Smart Grid. 101Y. Wang, N. Zhang, C. Kang, D. S. Kirschen, J. Yang, and Q. Xia, "Standardized matrix modeling of multiple energy systems," IEEE Trans. Smart Grid, vol. 10, no. 1, pp. 257-270, 2017. Generalized convex hull pricing for the AC optimal power flow problem. M Garcia, H Nagarajan, R Baldick, IEEE Trans. Control of Network Systems. 73M. Garcia, H. Nagarajan, and R. Baldick, "Generalized convex hull pricing for the AC optimal power flow problem," IEEE Trans. Control of Network Systems, vol. 7, no. 3, pp. 1500-1510, 2020. Equality set projection: A new algorithm for the projection of polytopes in halfspace representation. 
[19] C. Jones, E. C. Kerrigan, and J. Maciejowski, "Equality set projection: A new algorithm for the projection of polytopes in halfspace representation," Cambridge University Engineering Dept., Tech. Rep., 2004.
[20] V. Delos and D. Teissandier, "Minkowski sum of polytopes defined by their vertices," Journal of Applied Mathematics and Physics, vol. 3, pp. 62-67, 2015.
[21] D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge, "Comparing images using the Hausdorff distance," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 9, pp. 850-863, 1993.
[22] A. J. Ardakani and F. Bouffard, "Identification of umbrella constraints in DC-based security-constrained optimal power flow," IEEE Trans. Power Systems, vol. 28, no. 4, pp. 3924-3934, 2013.
[23] R. Madani, J. Lavaei, and R. Baldick, "Constraint screening for security analysis of power networks," IEEE Trans. Power Systems, vol. 32, no. 3, pp. 1828-1838, 2017.
[24] C. B. Barber, D. P. Dobkin, and H. Huhdanpaa, "The quickhull algorithm for convex hulls," ACM Trans. Mathematical Software, vol. 22, no. 4, pp. 469-483, 1996.
[25] W. Lin, Z. Yang, J. Yu, K. Xie, X. Wang, and W. Li, "Tie-line security region considering time coupling," IEEE Trans. Power Systems, vol. 36, no. 2, pp. 1274-1284, 2021.
[26] P. N. Biskas, D. I. Chatzigiannis, and A. G. Bakirtzis, "European electricity market integration with mixed market designs - Part I: Formulation," IEEE Trans. Power Systems, vol. 29, no. 1, pp. 458-465, 2013.
[27] N. Committee, "EUPHEMIA public description," April 2019. [Online].
[28] A. B. Birchfield, T. Xu, K. M. Gegner, K. S. Shetye, and T. J. Overbye, "Grid structural characteristics as validation criteria for synthetic networks," IEEE Trans. Power Systems, vol. 32, no. 4, pp. 3258-3265, 2017.
[29] Z. Tan, H. Zhong, Q. Xia, and C. Kang, "Non-iterative multi-area coordinated dispatch via condensed system representation," IEEE Trans. Power Systems, vol. 36, no. 2, pp. 1594-1604, 2021.
Wu, "Network reconfiguration in distribution systems for loss reduction and load balancing," IEEE Trans. Power Delivery, vol. 4, no. 2, pp. 1401-1407, 1989. Estimating the robust PQ capability of a technical virtual power plant under uncertainties. Z Tan, H Zhong, Q Xia, C Kang, X S Wang, H Tang, IEEE Trans. Power Systems. 356Z. Tan, H. Zhong, Q. Xia, C. Kang, X. S. Wang, and H. Tang, "Estimating the robust PQ capability of a technical virtual power plant under uncertainties," IEEE Trans. Power Systems, vol. 35, no. 6, pp. 4285-4296, 2020. Adjustable robust optimization via Fourier-Motzkin elimination. J Zhen, D Den Hertog, M Sim, Operations Research. 664J. Zhen, D. Den Hertog, and M. Sim, "Adjustable robust optimization via Fourier-Motzkin elimination," Operations Research, vol. 66, no. 4, pp. 1086-1100, 2018.
[]
[ "Operation of graphene quantum Hall resistance standard in a cryogen-free table-top system", "Operation of graphene quantum Hall resistance standard in a cryogen-free table-top system" ]
[ "T J B M Janssen [email protected] \nNational Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK\n", "S Rozhko \nNational Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK\n", "I Antonov \nRoyal Holloway\nUniversity of London\nTW20 0EXEghamUK\n", "A Tzalenchuk \nNational Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK\n\nRoyal Holloway\nUniversity of London\nTW20 0EXEghamUK\n", "J M Williams \nNational Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK\n", "Z Melhem \nOxford Instruments Nanoscience\nTubney Woods\nOX13 5QXAbingdonUK\n\nDepartment of Microtechnology and Nanoscience\nChalmers University of Technology\nS-41296GöteborgSweden\n", "H He ", "S Lara-Avila \nDepartment of Microtechnology and Nanoscience\nChalmers University of Technology\nS-41296GöteborgSweden\n", "S Kubatkin \nDepartment of Microtechnology and Nanoscience\nChalmers University of Technology\nS-41296GöteborgSweden\n", "R Yakimova \nDepartment of Physics, Chemistry and Biology (IFM)\nLinköping University\nS-58183LinköpingSweden\n" ]
[ "National Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK", "National Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK", "Royal Holloway\nUniversity of London\nTW20 0EXEghamUK", "National Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK", "Royal Holloway\nUniversity of London\nTW20 0EXEghamUK", "National Physical Laboratory\nHampton RoadTW11 0LWTeddingtonUK", "Oxford Instruments Nanoscience\nTubney Woods\nOX13 5QXAbingdonUK", "Department of Microtechnology and Nanoscience\nChalmers University of Technology\nS-41296GöteborgSweden", "Department of Microtechnology and Nanoscience\nChalmers University of Technology\nS-41296GöteborgSweden", "Department of Microtechnology and Nanoscience\nChalmers University of Technology\nS-41296GöteborgSweden", "Department of Physics, Chemistry and Biology (IFM)\nLinköping University\nS-58183LinköpingSweden" ]
[]
We demonstrate quantum Hall resistance measurements with metrological accuracy in a small cryogen-free system operating at a temperature of around 3.8 K and magnetic fields below 5 T. Operating this system requires little experimental knowledge or laboratory infrastructure, thereby greatly advancing the proliferation of primary quantum standards for precision electrical metrology. This significant advance in technology has come about as a result of the unique properties of epitaxial graphene on SiC.
10.1088/2053-1583/2/3/035015
[ "https://arxiv.org/pdf/1507.04601v1.pdf" ]
32,427,676
1507.04601
933f67ba3f6d39e05b6998a5c77ee97693bea433
Operation of graphene quantum Hall resistance standard in a cryogen-free table-top system

Introduction

One of the goals of modern-day metrology is to provide quantum standards at the fingertips of end-users, shortening the calibration chain from primary standards to the final product. A shorter calibration chain results in higher accuracy for end-users, which can be exploited to develop more advanced test and measurement equipment and subsequently lead to societal benefits wherever measurement is an issue. Resistance metrology is one of the cornerstones of electrical metrology, with most national measurement laboratories around the world providing an extensive range of calibration services across many decades of resistance value [1]. The primary standard for resistance is based on the quantum Hall effect (QHE) [2], which at present is realised by far fewer laboratories [3]. This is because the infrastructure needed to create the QHE in conventional semiconductor systems is elaborate and expensive, requiring temperatures of 1 K or below and magnetic fields around 10 T. Another important barrier is the expertise needed to run a quantum Hall system and verify the correct operation and quantisation parameters. Finally, liquid helium is becoming a scarce resource, increasing significantly in price year on year, and is not readily available in every country. A simpler, cryogen-free system is needed if more laboratories are to realise the primary standard directly, and this has recently become possible with the advent of graphene. One of the first properties observed in graphene was the QHE, and it was immediately realised that graphene is ideal for metrology by virtue of its unique band structure [4, 5, 6, 7]. The Landau level quantisation in graphene is much stronger than in traditional semiconductor systems, which implies both that a lower magnetic field can be used and that the low-temperature constraint is more relaxed [6].
Following the original demonstration of high-accuracy quantum Hall resistance measurements in epitaxial graphene grown on SiC [8] and proof of the universality of the QHE between graphene and GaAs [9], these results have recently been reproduced very nicely by a number of different research groups [10, 11, 12]. In particular, a recent publication by the LNE group has demonstrated that ppb accuracy can be achieved over a large experimental parameter range [12]. These results also demonstrate that devices which show extraordinarily good quantum Hall effect at high magnetic field and low temperature are not necessarily optimum for low magnetic field and high temperature measurements. Measurements of the QHE at low magnetic field are complicated by the fact that the carrier density needs to be reduced to a level well below the as-grown density of epitaxial graphene on SiC [13] (SiC/G). Unlike exfoliated graphene on SiO2, gating of graphene on SiC is not straightforward [14, 15, 16]. Recently, a novel technique was demonstrated which creates a static top-gate by depositing ions via corona discharge [17]. This technique allows for systematic control of the carrier density, and both n- and p-type densities can be achieved on both sides of the Dirac point. Importantly, this method is fully reversible and can be applied repeatably. Another issue with low carrier density graphene is the homogeneity. Under these conditions it is well known that electron-hole puddles form [18], induced by charged impurities; however, in epitaxial graphene the disorder strength can be of order 10 meV, comparable to flakes on boron nitride [19]. Here we demonstrate for the first time measurements of the QHE with part-per-billion (ppb) accuracy in a small table-top cryogen-free pulse-tube system. Both the longitudinal resistivity R_xx and the contact resistance R_c were well within the limits set by the QHR guidelines [20]. Using corona gating, the carrier density was controlled such that the maximum breakdown current occurred just below the maximum magnetic field of our system. The noise sources in the system were reduced to a level such that the overall standard deviation of the measurements was comparable to those achieved for a conventional liquid 4He/3He system. The system is extremely easy to operate (it has only one button) and can run unattended for months on end, providing a stable and primary resistance reference whenever and wherever it is needed.

Device design and fabrication

Graphene was grown on the Si-face of SiC at T = 2000 °C and P = 1 atm Ar (GraphenSIC AB) [21]. In total 20 Hall bars of different dimensions (30 and 100 µm wide channels) and voltage probe types were patterned on the SiC/G using standard electron-beam lithography, lift-off, and oxygen plasma etching, as reported elsewhere [22]. The Hall bars are oriented parallel or perpendicular with respect to the predominant step edge direction of the SiC substrate. The sample was spin-coated with a thin, 100 nm, layer of poly(methyl methacrylate-co-methacrylate acid), henceforth P(MMA-MAA) (MicroChem Corp., PMMA copolymer resist solids 6% in ethyl lactate). All results presented in this paper are measured on a Hall bar with a 30 µm wide and 180 µm long channel. A comprehensive study of all devices on this chip will be presented at a later date.

The measurement system

The measurement system for primary resistance consists of two parts, the quantum Hall system and the measurement bridge.
Table-top cryogen-free QHR cryostat

Today, cryogen-free superconducting magnet systems have become omnipresent in low-temperature physics laboratories because of their ease of use and reduced operational cost. In particular, for low magnetic fields, ≤5 T, these systems can be very small and simple. The 5 T superconducting magnet in our system is only 7.5 cm tall with an outer diameter of 6 cm. The inductance is 0.5 H, and the magnet is small enough to be cooled by a 0.25 W pulse-tube cooler (see Fig. 2). The bore of this magnet is 3 cm in diameter, which is large enough to take a standard TO8 header used in QHR metrology. The system has high-T_C current leads for the magnet, which requires ~60 A for full field. After evacuation the system cools down in approximately 5 hours from room temperature to ~3.8 K. In a cryogen-free system there are a number of noise sources not normally present in a traditional wet system: the compressor which produces the high-pressure helium gas, and the rotating valve and stepper motor on top of the cryostat. These sources of noise need to be controlled and reduced as much as possible so as not to compromise the sensitivity of the measurement system. The noise of the compressor can simply be reduced by either placing an acoustic box around it or locating it in an adjoining space on the other side of a separating wall. Recently, a new type of high-pressure hose was developed which significantly reduces the high-pitched hiss. These so-called quiet hoses have two beneficial effects: firstly, they significantly reduce the vibrations in the cryostat system, and secondly, they are much more pleasant for the operator. Another improvement has been to replace the standard pulsed drive unit for the stepper motor with a low-noise linear drive system. Other modifications include plastic isolators on the high-pressure lines to galvanically isolate the compressor from the cryostat, and filters on the magnet current leads. Inside the cryostat, care has to be taken that the experimental wiring is fixed as tightly as possible to reduce the effect of vibrations. The measurement wiring also requires good heat sinking, because the wires are relatively short compared to traditional wet systems. The effect of these improvements can be seen in the noise traces in Fig. 3. The traces are measured on the sample wires with a spectrum analyser before and after the modifications. A reference trace with the compressor switched off is also shown. We can see that the low-frequency noise peaks are reduced by more than two orders of magnitude and the noise floor is equal to that measured with the compressor switched off. The higher-frequency noise is largely unaffected by the modifications, but this noise is outside the CCC measurement bandwidth and is not critical.

The cryogenic current comparator bridge

High-accuracy measurements of resistance ratios are generally made using a so-called cryogenic current comparator (CCC) bridge. The fully automated CCC bridge used in our experiments has been described in great detail before [23, 24]. In a CCC bridge, currents are locked in the inverse ratio of the resistances being compared. A CCC establishes the current ratio by passing the currents along wires through a superconducting tube and measuring the residual screening current on the outside of the tube with a superconducting quantum interference device (SQUID).
The difference between the voltages developed across the resistors is measured using a sensitive voltmeter and allows one resistor to be determined with respect to the other. In primary resistance metrology one of the resistors is a quantum Hall device with a resistance value exactly equal to R_H = R_K/i, where R_K = h/e², e is the elementary charge, h is the Planck constant and i is an integer; generally i = 2 or 4 is used for semiconductor devices. In graphene, owing to the band structure, only i = 2 is available. The maximum achievable sensitivity of the bridge depends for a large part on the signal-to-noise ratio in the voltmeter and therefore on the maximum current used to drive the resistors (the Johnson noise in the resistors is the other limiting factor) [23]. Under optimum conditions measurement accuracies in excess of 1 part in 10^10 can be achieved [9, 25]. However, for routine resistance metrology a few parts in 10^9 in a reasonable measurement time (~15 min) is perfectly adequate. In the present system the cryogenic environment needed for the superconducting tube and SQUID is provided by a traditional liquid helium cryostat.
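As a numerical companion to the bridge principle just described, the sketch below (not the authors' software; the 100 Ω value anticipates the standard resistor used in the measurements reported later) computes R_K = h/e², the graphene plateau value R_H = R_K/2, and the current ratio at which a CCC bridge comparing R_H against a 100 Ω standard would be balanced:

# Minimal numerical sketch of the CCC working point, assuming a nominally
# 100 ohm standard resistor on the other arm of the bridge.
h = 6.62607015e-34   # Planck constant, J s (exact SI value)
e = 1.602176634e-19  # elementary charge, C (exact SI value)

R_K = h / e**2       # von Klitzing constant, ~25812.807 ohm
R_H = R_K / 2        # i = 2, the only plateau available in graphene
R_std = 100.0        # nominal value of the standard resistor, ohm

# At balance the currents are locked in the inverse ratio of the
# resistances being compared: I_std / I_H = R_H / R_std.
ratio = R_H / R_std
print(f"R_K = {R_K:.3f} ohm, R_H = {R_H:.4f} ohm, I_std/I_H = {ratio:.4f}")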
Characterisation

Fig. 4 shows an example measurement of R_xx and R_xy made at the base temperature of 3.8 K in the cryogen-free system described in the previous section. The curves display the familiar shape characteristic of epitaxial graphene on SiC, which has been observed many times before [26, 27, 14, 8, 28, 10, 12]. The carrier density was reduced to 5.4 × 10^10 cm^-2 by corona gating from the as-grown density of n ≈ 10^13 cm^-2. A wide plateau in R_xy is observed whilst R_xx is zero. The width of the plateau is much larger than would be expected from the low-field carrier density. This behaviour is explained in terms of a magnetic-field-driven charge transfer from the interface layer to the graphene layer, which results in an increase in carrier density as the magnetic field increases and effectively pins the Fermi level at exact filling of ν = 2 [13, 28].

When attempting to make accurate quantum Hall resistance measurements, the first step is to properly characterise the sample according to the guidelines set out for primary resistance metrology [20]. Key parameters are the longitudinal resistance (R_xx) and the contact resistance (R_c) at the desired measurement current. The longitudinal resistance needs to be as low as possible, preferably below a few tens of µΩ, and checked on both sides of the device. Often these measurements are limited by the resolution of the nanovoltmeter, and other methods can be employed to verify accurate quantisation [20]. The contact resistance can be accurately determined using a three-terminal measurement technique in the quantised Hall state. This method determines R_c + R_l, where R_c is the contact resistance and R_l = 6.4 Ω is the lead resistance in the cryostat in our system. For our device we find R_c between 0.1 and 1 Ω for all contacts, measured with a current of 10 µA.

The optimum conditions for QHR measurements are easiest to obtain when the breakdown current is maximum and significantly larger than the source-drain measurement current, I_sd. Here the breakdown current is defined as the maximum source-drain current the device can sustain before a measurable longitudinal resistance appears (we typically use a limit of 10 nV for V_xx, which for I_sd = 10 µA would imply R_xx = 1 mΩ). For higher carrier density devices the breakdown current tends to be higher, because the ν = 2 state occurs at a higher magnetic field [29]; this is simply related to the fact that at higher magnetic field the Landau levels are further apart and hence the quantisation is stronger [29]. For epitaxial graphene, I_sd was shown to follow a ∝ B^{3/2} behaviour similar to that observed in semiconductor systems [30]. This effect poses a particular problem for optimising the carrier density for accurate QHR measurements at the low magnetic fields available in our small cryogen-free system. If the carrier density is too low, the maximum in the breakdown current will occur at a very low magnetic field and its value will be equally low. Fig. 5a shows a measurement of R_xx at n = 2.3 × 10^10 cm^-2, very close to the Dirac point. For I_sd = 1 µA we find that the longitudinal resistance is always larger than a few ohms, and consequently the device is not properly quantised. Fig. 5b shows R_xx at n = 5.6 × 10^10 cm^-2, and we can observe proper quantisation in a 2 T range for I_sd = 10 µA; but for I_sd = 20 µA, R_xx is in the mΩ range and the device becomes unquantised (see below). When the carrier density is set even higher (see Fig. 5c), quantisation becomes stronger but the usable magnetic field range shrinks to around 0.5 T. The bottom graph in Fig. 5 shows a high-resolution measurement of R_xx in this range, demonstrating longitudinal resistance of order 10 µΩ and confirming proper quantisation.

Using the magnetic-field-dependent charge-transfer model it is straightforward to estimate the optimum charge carrier density for maximum breakdown current [28]. Assuming that the maximum breakdown current will occur when the ν = 2 filling factor coincides with our maximum magnetic field of 5 T [29] gives a carrier density of ≈ 2.4 × 10^11 cm^-2. Setting this density as n_∞ in the model calculation of Ref. [28], i.e. in n_∞ = Aγ/(1 + e²γ/c_c) − n_g, in which A is the difference in work function between graphene and the donor states in SiC, γ is the density of donor states, c_c is the classical capacitance and n_g is the deposited corona gate charge, allows us to obtain the zero-field carrier density. Using γ as a fit parameter we obtain a value for the optimum carrier density of n_S ≈ 1.3 × 10^11 cm^-2 (see Fig. 6; a numerical sketch of this estimate follows at the end of this section).

Figure 7 shows the maximum breakdown current measured at B = 5 T as a function of zero-field charge carrier density for two sets of data taken 3 months apart. The graph confirms that the optimum carrier density is around n_S = 1.3 × 10^11 cm^-2. For the later data set the breakdown current was almost half the original breakdown current, which could be related to the degradation of one of the current contacts on the device. The cause of this degradation is as yet unclear and needs to be investigated further, because QHR devices for quantum resistance metrology need to be stable and reproducible over long periods of time. The original maximum breakdown current is 60 µA, which for our channel width of 30 µm implies a current density of 2 A m^-1, close to the theoretical maximum [29].
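A minimal sketch of the arithmetic behind this optimisation; the ν = 2 density at a given field follows from n = νeB/h, while the parameter values passed to the model function are illustrative placeholders, not the fitted values of Ref. [28]:

# Carrier density pinned at exact filling nu = 2 at the maximum field.
e = 1.602176634e-19  # C
h = 6.62607015e-34   # J s

B_max = 5.0                    # T
n_nu2 = 2 * e * B_max / h      # n = nu*e*B/h with nu = 2, in m^-2
print(f"n(nu=2, 5 T) = {n_nu2 / 1e4:.2e} cm^-2")   # ~2.4e11 cm^-2

# Charge-transfer model n_inf = A*gamma/(1 + e^2*gamma/c_c) - n_g from
# Ref. [28]; gamma is the fit parameter. Units: A in J, gamma in
# J^-1 m^-2, c_c in F m^-2, n_g in m^-2.
def n_inf(gamma, A, c_c, n_g):
    return A * gamma / (1 + e**2 * gamma / c_c) - n_g

# Placeholder values only, chosen to give a plausible magnitude:
print(f"n_inf = {n_inf(gamma=3e35, A=4.8e-20, c_c=0.04, n_g=1.0e16):.2e} m^-2")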
Quantum Hall resistance measurements

Figure 8 shows the central result of this paper. Here we measured the quantum Hall resistance in terms of a nominally 100 Ω temperature-controlled standard resistor using the CCC bridge. The data in Fig. 8 are normalised to the mean value of the resistor, since we are not concerned with the absolute accuracy of the QHE in graphene, which was established earlier [9]. The measurements are made at two different source-drain currents (≈10 and ≈20 µA) as a function of magnetic field. Comparing the data for 10 µA with that for 20 µA, it is clear that for the larger measurement current the device is not properly quantised. This fact is also confirmed by the measurements of R_xx, which show a significant deviation from zero for this larger current. The low breakdown current is not a major issue, because the sample chip contains a number of devices with a larger width (100 µm) in which the breakdown current will be correspondingly larger (to be published). For the smaller measurement current, accurate quantisation is observed over a 2 T magnetic field range, which is perfectly adequate for primary resistance measurements.

The measurement resolution obtained for most individual measurements of R_xy in Fig. 8 is 5 parts in 10^9 for a 15 minute measuring time. A few measurements are made over a longer time (several hours) and are of order 5 parts in 10^10. This compares very well with traditional QHR systems, especially considering that for the cryogen-free system there is in principle no limit on the available measurement time. Figure 9 shows an Allan deviation plot of the measurement resolution for a long measurement run, together with results obtained from a previous measurement using our standard quantum Hall system [24]. Both curves show the expected 1/√τ behaviour for uncorrelated white noise. The lower measurement resolution of the cryogen-free system can be explained by the lower measurement current used (20 µA versus 100 µA) and the higher current noise of the null detector (A20 null detector versus SQUID null detector), resulting in a factor of 10 difference. For both the cryogen-free system and the traditional system the theoretical optimum measurement resolution is still a factor of 5 better. This is caused by the fact that the noise of the CCC-SQUID combination in our systems is about a factor of 5 higher than that of the bare SQUID sensor [24].
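The 1/√τ behaviour quoted above is straightforward to check numerically; the following is a generic non-overlapping Allan-deviation sketch run on simulated white noise, not the bridge software:

import numpy as np

def allan_deviation(data, m):
    """Non-overlapping Allan deviation for an averaging window of m samples:
    sigma^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2>."""
    n_bins = len(data) // m
    means = data[:n_bins * m].reshape(n_bins, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

rng = np.random.default_rng(0)
readings = rng.normal(0.0, 5e-9, size=4096)   # white noise, sigma = 5 ppb
for m in (1, 4, 16, 64):
    print(m, allan_deviation(readings, m))     # falls roughly as 1/sqrt(m)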
Summary/Outlook

The results presented here demonstrate that with epitaxial graphene on SiC it is possible to achieve part-per-billion accuracy in primary resistance metrology using a simple cryogen-free system. Measurements are presented as a function of magnetic field and different source-drain current densities which demonstrate that the operational parameters are sufficiently wide for easy and reliable use. Care has to be taken to adjust the charge carrier density to the optimum value to ensure a maximum breakdown current density. Corona gating at room temperature and subsequent freezing of the doping is beneficial compared to applying a gate voltage during QHR measurements, because no additional noise is injected into the system; but this comes at the expense of the practical inconvenience of thermally cycling the system. Another practical aspect which needs addressing is the CCC bridge. At the moment this bridge requires a liquid helium dewar to provide the low temperature for the superconducting shield and SQUID. In a separate cryogen-free cryostat we have recently demonstrated that a CCC can be operated in such an environment (to be published). The challenge is to integrate the CCC in the same cryogen-free cryostat as the QHE system, and our plan is to do this in the next design iteration of the system.

An alternative to a CCC would be a room-temperature comparator bridge. In order to obtain the required ppb accuracy, a large (at least 100 µA) source-drain current through the quantum Hall device is needed, which is beyond the breakdown current of a single SiC/G device at low magnetic field and high temperature. In a quantum Hall array many devices can be operated in parallel, lowering the resistance value and increasing the total measurement current. The epitaxial graphene needs to be sufficiently homogeneous so that the operational parameters of all QHR devices overlap, and all contacts need to be low-ohmic. Recently, the first SiC/G quantum Hall array at R_K/200 has been demonstrated [31].

Dissemination and proliferation of primary quantum standards is one of the key objectives of fundamental metrology. The results presented in this paper could be transformative for future resistance metrology by creating the opportunity for many more metrology and calibration laboratories to realise their own primary resistance traceability. This will shorten the calibration chain and lower the uncertainty which can be provided to end users, with all its implicit benefits. A number of technical issues remain to be addressed, but the basic principle of operation has been demonstrated.

Figure 1. Optical microscope image of a typical device used in our experiments (not the one used for the actual experiments). The channel width is 100 µm; the dark area is the graphene channel, the light area is the SiC substrate, and gold are the metallic contacts.

Figure 2. (a) Inside of the cryostat cooler showing the small superconducting magnet mounted at the bottom of the PT2 stage. (b) The system with the vacuum can mounted. The overall height of the system is around 80 cm.

Figure 3. Current noise measurement traces before (blue) and after (red) modification of the pulse-tube cryostat. The black trace was measured with the pulse-tube compressor switched off. Current noise was measured on the sample wires without a sample present.

Figure 4. R_xx (red) and R_xy (blue) on an epitaxial graphene Hall bar device measured at 3.8 K in the cryogen-free cryostat, with a source-drain current of I_sd = 100 nA.

Figure 5. Top graph: R_xx as a function of magnetic field for different charge carrier densities; temperature is ≈3.8 K. Plot (c) also shows the breakdown current I_C as a function of magnetic field. Bottom graph: high-resolution measurement of R_xx in a 1 T magnetic field range for n = 1.6 × 10^11 cm^-2. This resolution was obtained by repeated measurements (typically 50 to 100) of V_xx with positive and negative I_sd.

Figure 9. Allan deviation for a long measurement run compared with previously published data in Ref. [24]. Green triangles: measured using the cryogen-free system (5 T and 3.9 K) with a source-drain current of 20 µA and a CCC bridge with A20 null detector [23]; each data point represents a 90 s measurement section composed of three 30 s measurements of either forward or reverse current direction. Black squares: measured using the traditional system with a 14 T magnet at 300 mK and a source-drain current of 100 µA; the CCC bridge uses a SQUID null detector and each data point represents 30 s of measurement time made up of three blocks of 10 s. Purple dots: theoretical optimum measurement resolution for each system. Blue line: 1/√τ.

Figure 6. n_S versus magnetic field using the model from Ref. [28] (thick black line). Red lines are constant filling factors and green lines are n_S(B, N). Blue line is R_xy measured for a device with n_S ≈ 1.3 × 10^11 cm^-2 (right-hand axis), together with the measured breakdown current (purple squares and second purple right-hand axis).
Vertical dashed line indicates the maximum magnetic field of 5 T and the horizontal dashed line indicates the zero-field carrier density of 1.3 × 10^11 cm^-2.

Figure 7. Breakdown current I_c versus carrier density n_s at B = 5 T and T = 3.9 K. Black squares are for the first measurement run when the device was new and red triangles are for the second run 3 months later. Red lines are polynomial fits which serve as a guide to the eye.

Figure 8. Top panel: R_xy (black) and R_xx (red) as a function of magnetic field measured at a small (100 nA) source-drain current (left axis). Symbols: measurement of R_xy against the standard resistor using the CCC bridge. The deviation is calculated as a difference from the mean value of the standard resistor in the range of 3 to 4.5 T. Bottom panel: measurement of R_xx over the same magnetic field range for two different measurement currents.

Acknowledgments

This work was supported by the NPL Proof-of-Concept fund, NMS Programme, European Union Seventh Framework Programme under Grant Agreement No. 604391 Graphene Flagship, and EMRP Project GraphOhm.

References

K. v. Klitzing, G. Dorda, and M. Pepper. New method for high-accuracy determination of the fine-structure constant based on quantized Hall resistance. Physical Review Letters, 45(6):494-497, 1980.

B. Jeckelmann and B. Jeanneret. The quantum Hall effect as an electrical resistance standard. Measurement Science & Technology, 14(8):1229-1236, 2003.

K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov. Two-dimensional gas of massless Dirac fermions in graphene. Nature, 438(7065):197-200, 2005.

Y. B. Zhang, Y. W. Tan, H. L. Stormer, and P. Kim. Experimental observation of the quantum Hall effect and Berry's phase in graphene. Nature, 438(7065):201-204, 2005.

K. S. Novoselov, Z. Jiang, Y. Zhang, S. V. Morozov, H. L. Stormer, U. Zeitler, J. C. Maan, G. S. Boebinger, P. Kim, and A. K. Geim. Room-temperature quantum Hall effect in graphene. Science, 315(5817):1379, 2007.

A. J. M. Giesbers, G. Rietveld, E. Houtzager, U. Zeitler, R. Yang, K. S. Novoselov, A. K. Geim, and J. C. Maan. Quantum resistance metrology in graphene. Applied Physics Letters, 93(22):222109, 2008.

A. Tzalenchuk, S. Lara-Avila, A. Kalaboukhov, S. Paolillo, M. Syvajarvi, R. Yakimova, O. Kazakova, T. J. B. M. Janssen, V. Fal'ko, and S. Kubatkin. Towards a quantum resistance standard based on epitaxial graphene. Nature Nanotechnology, 5(3):186-189, 2010.

T. J. B. M. Janssen, N. E. Fletcher, R. Goebel, J. M. Williams, A. Tzalenchuk, R. Yakimova, S. Lara-Avila, S. Kubatkin, and V. I. Fal'ko. Graphene, universality of the quantum Hall effect and redefinition of the SI system. New Journal of Physics, 13, 2011.

A. Satrapinski, S. Novikov, and N. Lebedeva. Precision quantum Hall resistance measurement on epitaxial graphene device in low magnetic field. Applied Physics Letters, 103(17), 2013.

C. C. Kalmbach, J. Schurr, F. J. Ahlers, A. Muller, S. Novikov, N. Lebedeva, and A. Satrapinski. Towards a graphene-based quantum impedance standard. Applied Physics Letters, 105(7), 2014.

F. Lafont, R. Ribeiro-Palau, D. Kazazis, A. Michon, O. Couturaud, C. Consejo, T. Chassagne, M. Zielinski, M. Portail, B. Jouault, F. Schopfer, and W. Poirier. Quantum Hall resistance standards from graphene grown by chemical vapour deposition on silicon carbide. Nature Communications, 6, 2015.

S. Kopylov, A. Tzalenchuk, S. Kubatkin, and V. I. Fal'ko. Charge transfer between epitaxial graphene and silicon carbide. Applied Physics Letters, 97(11), 2010.

S. Tanabe, Y. Sekine, H. Kageshima, M. Nagase, and H. Hibino. Half-integer quantum Hall effect in gate-controlled epitaxial graphene devices. Applied Physics Express, 3(7), 2010.

S. Lara-Avila, K. Moth-Poulsen, R. Yakimova, T. Bjornholm, V. Fal'ko, A. Tzalenchuk, and S. Kubatkin. Non-volatile photochemical gating of an epitaxial graphene/polymer heterostructure. Advanced Materials, 23(7), 2011.

D. Waldmann, J. Jobst, F. Speck, T. Seyller, M. Krieger, and H. B. Weber. Bottom-gated epitaxial graphene. Nature Materials, 10(5):357-360, 2011.

A. Lartsev, T. Yager, T. Bergsten, A. Tzalenchuk, T. J. B. M. Janssen, R. Yakimova, S. Lara-Avila, and S. Kubatkin. Tuning carrier density across the Dirac point in epitaxial graphene on SiC by corona discharge. Applied Physics Letters, 105(6), 2014.

J. Martin, N. Akerman, G. Ulbricht, T. Lohmann, J. H. Smet, K. von Klitzing, and A. Yacoby. Observation of electron-hole puddles in graphene using a scanning single-electron transistor. Nature Physics, 4(2):144-148, 2008.

J. Huang, J. A. Alexander-Webber, A. M. R. Baker, T. J. B. M. Janssen, A. Tzalenchuk, A. Antonov, T. Yager, S. Lara-Avila, S. Kubatkin, R. Yakimova, and R. J. Nicholas. Disorder induced Dirac-point physics in epitaxial graphene from temperature-dependent magneto-transport measurements. arXiv:1505.03747, 2015.

F. Delahaye and B. Jeckelmann. Revised technical guidelines for reliable dc measurements of the quantized Hall resistance. Metrologia, 40(5):217-223, 2003.

C. Virojanadara, M. Syvajarvi, R. Yakimova, L. I. Johansson, A. A. Zakharov, and T. Balasubramanian. Homogeneous large-area graphene layer growth on 6H-SiC(0001). Physical Review B, 78(24), 2008.

R. Yakimova, T. Iakimov, and M. Syvajarvi. Process for growth of graphene. Patent granted, CN 103097283A, 2014.

T. Yager et al. Low contact resistance in epitaxial graphene devices for quantum metrology. Submitted to AIP Advances, 2015.

J. M. Williams, T. J. B. M. Janssen, G. Rietveld, and E. Houtzager. An automated cryogenic current comparator resistance ratio bridge for routine resistance measurements. Metrologia, 47(3):167-174, 2010.

T. J. B. M. Janssen, J. M. Williams, N. E. Fletcher, R. Goebel, A. Tzalenchuk, R. Yakimova, S. Lara-Avila, S. Kubatkin, and V. I. Fal'ko. Precision comparison of the quantum Hall effect in graphene and gallium arsenide. Metrologia, 49(3):294-306, 2012.

F. Schopfer and W. Poirier. Quantum resistance standard accuracy close to the zero-dissipation state. Journal of Applied Physics, 114(6), 2013.

X. S. Wu, Y. K. Hu, M. Ruan, N. K. Madiomanana, J. Hankinson, M. Sprinkle, C. Berger, and W. A. de Heer. Half integer quantum Hall effect in high mobility single layer epitaxial graphene. Applied Physics Letters, 95(22), 2009.

T. Shen, J. J. Gu, M. Xu, Y. Q. Wu, M. L. Bolen, M. A. Capano, L. W. Engel, and P. D. Ye. Observation of quantum-Hall effect in gated epitaxial graphene grown on SiC(0001). Applied Physics Letters, 95(17), 2009.

T. J. B. M. Janssen, A. Tzalenchuk, R. Yakimova, S. Kubatkin, S. Lara-Avila, S. Kopylov, and V. I. Fal'ko. Anomalously strong pinning of the filling factor ν = 2 in epitaxial graphene. Physical Review B, 83(23), 2011.

J. A. Alexander-Webber, A. M. R. Baker, T. J. B. M. Janssen, A. Tzalenchuk, S. Lara-Avila, S. Kubatkin, R. Yakimova, B. A. Piot, D. K. Maude, and R. J. Nicholas. Phase space for the breakdown of the quantum Hall effect in epitaxial graphene. Physical Review Letters, 111(9), 2013.

B. Jeckelmann and B. Jeanneret. The quantum Hall effect as an electrical resistance standard. Reports on Progress in Physics, 64(12):1603-1655, 2001.

A. Lartsev et al. A prototype of R_K/200 quantum Hall array resistance standard on epitaxial graphene. Submitted to Journal of Applied Physics, 2015.
[]
[]
[ "Athanassios Tzouvaras [email protected] \nDepartment of Mathematics\nAristotle University of Thessaloniki\n541 24ThessalonikiGreece\n" ]
[ "Department of Mathematics\nAristotle University of Thessaloniki\n541 24ThessalonikiGreece" ]
[]
We reformulate slightly Russell's notion of typicality, so as to eliminate its circularity and make it applicable to elements of any first-order structure. We argue that the notion parallels Martin-Löf (ML) randomness, in the sense that it uses definable sets in place of computable ones and sets of "small" cardinality (i.e., strictly smaller than that of the structure domain) in place of measure zero sets. It is shown that if the domain M satisfies cf(|M|) > ℵ_0, then there exist |M| typical elements and only < |M| non-typical ones. In particular this is true for the standard model R of second-order arithmetic. By allowing parameters in the defining formulas, we are led to relative typicality, which satisfies most of van Lambalgen's axioms for relative randomness. However van Lambalgen's theorem is false for relative typicality. The class of typical reals is incomparable (with respect to ⊆) with the classes of ML-random, Schnorr random and computably random reals. Also the class of typical reals is closed under Turing degrees and under the jump operation (both ways).

Mathematics Subject Classification (2010): 03C98, 03D78. Keywords: B. Russell's typical Englishman, typical property, typical object, Martin-Löf randomness, van Lambalgen's theorem.

Footnote 1: Russell seems to worry not about circularity itself but rather about a contradiction which is supposed to emerge from this. For he continues the above phrase as follows: "You will easily realise that most Englishmen do not possess all the properties that most Englishmen possess, and therefore a typical Englishman, according to your own definition, would be untypical." (ibid.) It is not clear to me how Russell concludes that most Englishmen do not possess all the properties that most Englishmen possess, and thus how the contradiction is derived. I suppose that Russell's conclusion "You will easily realise that..." is empirical rather than logical. In section 2 below we show (Example 2.7 and Theorems 2.9, 2.11) that there are plenty of structures in which the majority of elements, or even the totality of them, can be typical. So in these structures, the property of metalanguage expressing typicality is itself typical.
10.1002/malq.202000038
[ "https://export.arxiv.org/pdf/2303.11741v1.pdf" ]
221,679,351
2303.11741
0dadd5f69219538494aa2226cee5f1041dd96416
Russell's typicality as another randomness notion

Introduction

Bertrand Russell [8, p. 89], in an attempt to explain impredicative definitions (and the need to appeal to the reducibility axiom in order to avoid them), puts under examination the following definition of "typical Englishman": "Suppose, for example, you were to suggest that 'a typical Englishman is one who possesses all the properties possessed by a majority of Englishmen'." Now typicality so defined is itself a property of Englishmen, so in order for someone to check whether a particular Englishman is typical, one has to check, among other things, whether he is typical; so the checking enters a circle and fails (footnote 1). Despite this failure, the core of the definition remains natural and sound, and can easily be restored if we make some changes that eliminate circularity. It is the purpose of this paper to provide such a strict definition of typicality, and then explore its behavior in concrete contexts and its connections with randomness.
We believe that the intended meaning of Russell's notion can be described as follows: given a structured universe of things, which could mathematically be represented by a first-order structure M = (M, . . .), an element a of M is typical if it is not special in any definable sense, that is, if it does not possess any property that makes it belong to a special definable minority X ⊂ M. This can be done rigorously in three steps: (a) Define precisely when a set X ⊆ M contains a majority/minority of elements of M. (b) Specify what a "typical property" (of the first-order language L of M) is in terms of the previously specified majority notion. (c) Define an element a of M to be "typical" if it satisfies all typical properties. Since typical properties are properties of the object language L, while typicality is a property of the metalanguage, no circularity occurs. In section 2 we make these steps precise.

In section 3 we argue that typicality is a kind of randomness notion, especially when we apply it to the standard model of full second-order arithmetic R = (ω, P(ω), +, ·, <, ∈, 0, 1). In the context of reals, the most popular notion of randomness is Martin-Löf (ML-) randomness ([1] is a rich source of information about this notion and its variants). The parallelism between ML-random and typical reals lies in this:

• The definition of a ML-random real is based on the combination of computability and small sets in the sense of measure theory (sets of measure zero). Namely, a is ML-random if it cannot be "trapped" within any set ⋂_n V_n of measure zero, where (V_n)_n is a computable sequence of r.e. open sets such that µ(V_n) ≤ 2^{-n} (a toy illustration follows below).

• The definition of a typical real is based on the combination of definability and small sets in the sense of cardinality (sets of cardinality strictly less than the continuum). Namely, a is typical if it cannot be "trapped" within any set X which is definable and small, i.e., |X| < 2^{ℵ_0}.

Thus typicality arises if we replace computable sets with definable ones, and small sets from the point of view of measure theory with small sets from the point of view of cardinality.

There are structures, like (N, S, +, ·, 0), that contain no typical elements, and others, like (Q, <), all elements of which are typical. We give also some sufficient conditions in order for a structure to contain typical elements. One such simple and basic condition is given in Theorem 2.9. It says that every M = (M, . . .) with a countable language such that cf(|M|) > ℵ_0 contains |M| typical elements, while only < |M| non-typical ones. In particular this is true for the structure R of reals mentioned above, since cf(2^{ℵ_0}) > ℵ_0. If further we allow parameters to occur in properties, we are led to the notion of relative typicality "x is typical with respect to y", denoted Tp(x, y), which is analogous to the relative randomness R(x, y) studied by van Lambalgen in [5], [6] and [7]. We show that most of the basic axioms for R(x, y) considered in these papers hold also for Tp(x, y) in general structures, and in particular in the structure R. On the other hand, the two notions deviate significantly at certain points. For example van Lambalgen's theorem does not hold for relative typicality, while it holds for ML-randomness. Actually the class of typical reals is incomparable (with respect to inclusion) with the classes of ML-random, Schnorr random and computably random reals.
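As a toy illustration of the first bullet above (a sketch for intuition only, not part of the paper's formal development), the simplest ML test one can write down is V_n = the set of binary sequences beginning with n zeros; it is uniformly computable, has µ(V_n) = 2^{-n}, and captures the all-zeros sequence:

from fractions import Fraction

def in_V(prefix, n):
    # Does every infinite extension of `prefix` lie in V_n?  True iff the
    # prefix is at least n bits long and starts with n zeros.  A finite
    # prefix stands in for an infinite sequence here.
    return len(prefix) >= n and all(b == 0 for b in prefix[:n])

def mu_V(n):
    # V_n is a single cylinder determined by n bits, so mu(V_n) = 2^-n
    return Fraction(1, 2 ** n)

prefix = [0] * 10
print([in_V(prefix, n) for n in range(5)], mu_V(4))  # all True, 1/16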
Finally we show that the relation Tp(x, y) is closed with respect to Turing reducibility ≤_T and Turing degrees, as well as with respect to the jump operator a → a′. Some open questions are stated at the end of the paper.

Formalizing typicality

Let L be a first-order language, M = (M, . . .) an L-structure and A ⊆ M. L(A) denotes L augmented with parameters from A. By some abuse of language we refer also to L(A) as the set of formulas of L(A). By a property of L(A) we mean a formula φ(x) ∈ L(A) with one free variable. We have first to make precise what it means for a set X ⊆ M to contain the majority of elements of M. The definition is the expected one: a majority subset is any set that contains strictly more elements than its complement, i.e., any X ⊆ M such that |X| > |M \ X|. We shall refer to such an X as a majority subset (and accordingly to M \ X as a minority subset); mj(M) denotes the collection of majority subsets of M. The definition applies both to finite and infinite domains M. If M is finite, then X is a majority set just if |X| > |M|/2. If M is infinite (as will be the case throughout this paper), the above condition is equivalent to |M \ X| < |M| (which in particular implies that |X| = |M|). Having fixed a rigorous notion of majority, the formalization of Russell's concept comes in two steps: first, define a property φ(x) of a language L for a structure M = (M, . . .) to be typical if it defines a majority subset of M, i.e., if its extension in M belongs to mj(M); second, define an element a ∈ M to be typical if it satisfies all typical properties for M. A weak notion of typicality is obtained (over uncountable structures) if we use the Fréchet filter Fr(M) of cofinite subsets of M in place of mj(M).

Typical and weakly typical properties and elements of a first-order structure. Relative typicality

Given an L-structure M, a set A ⊆ M and a property φ(x) of L(A), we often denote by ext(φ)^M, or just ext(φ), the extension of φ(x) in M, i.e., ext(φ) = {a ∈ M : M |= φ(a)}.

Definition 2.1 A property φ(x) of L(A) is said to be typical over M, or (M, A)-typical, or just A-typical (resp. weakly A-typical), if ext(φ) ∈ mj(M) (resp. ext(φ) ∈ Fr(M)). In particular φ(x) is typical (resp. weakly typical) if it is ∅-typical (resp. weakly ∅-typical).

Fact 2.2 For every A ⊆ M, the set of A-typical (resp. weakly A-typical) properties is finitely satisfiable in M, i.e., it is a type over (M, A).

Proof. It is straightforward from the definition that the conjunction of finitely many A-typical (resp. weakly A-typical) properties is A-typical (resp. weakly A-typical), thus satisfiable. ⊣

We shall be mostly interested in A-typical properties and elements for finite A. In this case the elements of A occur in φ in a certain order, i.e., as a vector a = (a_1, . . . , a_n), so it is convenient to use in parallel the vector notation and write also a-typical (weakly a-typical) instead of A-typical (weakly A-typical). Concerning the existence of A-typical properties over a structure M, we have the following simple facts. All structures M considered below are infinite. The proof of the following is straightforward.

Fact 2.3 (i) Every tautology φ(x) (e.g. ψ(x) ∨ ¬ψ(x)) is M-typical. (ii) For every tuple a = (a_1, . . . , a_n) of M, the property φ_a(x) := (x ≠ a_1) ∧ · · · ∧ (x ≠ a_n) is a-typical. (iii) If b is a tuple of a-definable elements of M, then the property φ_b(x) is equivalent over M to an a-typical property.

We come to typical elements.

Definition 2.4 An element a ∈ M is said to be A-typical (resp. weakly A-typical) if it satisfies all A-typical (resp. weakly A-typical) properties over M.

Typicality of properties can also be phrased in terms of the generalized quantifiers Qmost and Q_inf, interpreted by

M |= (Qmost x)φ(x) ⇐⇒ {a ∈ M : M |= φ(a)} ∈ mj(M), and
M |= (Q_inf x)φ(x) ⇐⇒ {a ∈ M : M |= φ(a)} ∈ Fr(M).

Then a property φ(x) is A-typical (resp. weakly A-typical) over M iff M |= (Qmost x)φ(x) (resp. M |= (Q_inf x)φ(x)).
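Definition 2.1 can be made concrete with a toy computation in a finite structure, where the majority condition is |X| > |M|/2 as noted above; the finite dictionary of Python predicates below is, of course, only a stand-in for the full collection of first-order properties:

# Toy illustration over the finite domain M = {0,...,9} with a fixed,
# hypothetical list of candidate properties.
M = set(range(10))

properties = {
    "x is even": lambda x: x % 2 == 0,
    "x > 2": lambda x: x > 2,
    "x != 0": lambda x: x != 0,
    "x is a square": lambda x: int(x ** 0.5) ** 2 == x,
}

def is_typical_property(ext):
    # majority subset: strictly more elements than its complement
    return len(ext) > len(M - ext)

typical_props = {name: p for name, p in properties.items()
                 if is_typical_property({x for x in M if p(x)})}

# An element is typical iff it satisfies *all* typical properties.
typical_elements = {x for x in M
                    if all(p(x) for p in typical_props.values())}
print(sorted(typical_props), sorted(typical_elements))
# -> ['x != 0', 'x > 2'] and [3, 4, 5, 6, 7, 8, 9]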
Allowing parameters to be used in definitions gives rise to the following relative typicality relations between elements of a structure M:

• Tp(a, b): "a is b-typical",
• wTp(a, b): "a is weakly b-typical".

In particular we write Tp(a), wTp(a) instead of Tp(a, ∅), wTp(a, ∅), respectively. Recall that given an L-structure M and A ⊆ M, an element b ∈ M is said to be A-definable in M if there is a formula φ(x, y) of L and a tuple a ∈ A such that b is the unique element of M that satisfies φ(x, a), i.e., M |= (∀x)(x = b ↔ φ(x, a)). An element is said to be definable if it is ∅-definable. More generally, b ∈ M is said to be A-algebraic in M if there is a formula φ(x, y) and a tuple a ∈ A such that M |= φ(b, a) and ext(φ(x, a)) is finite. For every tuple a ∈ M, let df(a) and cl(a) denote the sets of a-definable and a-algebraic elements of M, respectively. Obviously df(a) ⊆ cl(a). The following are immediate consequences of the definitions.

Fact 2.5 (iv) Tp(a, b) ⇒ wTp(a, b) ⇔ a ∉ cl(b). (v) For countable M, Tp(a, b) ⇔ wTp(a, b).

While for all (infinite) structures there exist typical properties, one cannot expect that all structures contain typical elements. E.g. this is the case with structures all elements of which are "special", i.e., definable without parameters. On the other hand, there exist structures consisting entirely of typical elements. Two such prominent examples of structures, lying at the two opposite ends of the spectrum, are considered below.

Example 2.6 Let M = N = (N, S, +, ·, 0) be the standard structure of natural numbers. Every n ∈ N is definable in N, so the formulas of L(N) coincide with those of L. Thus for any tuple n = (n_1, . . . , n_k) of elements of N, the formula φ_n(x) defined in 2.3 is typical. Coming to elements, it is easy to see that N contains no typical element. For, for every tuple n = (n_1, . . . , n_k) of N, the typical property φ_n(x) := (x ≠ n_1) ∧ · · · ∧ (x ≠ n_k) has extension N \ {n_1, . . . , n_k}. So if a ∈ N were typical, it should belong to the extensions of all properties φ_n(x), i.e., to the intersection of all cofinite subsets of N, but this is empty.

Example 2.7 Let M = Q = (Q, <) be the ordered set of rationals. Here, in full contrast to N, the elements of Q are all of the same type, i.e., for every φ(x) and any a, b ∈ Q, Q |= φ(a) ↔ φ(b). So for every φ(x), either it holds of all elements, i.e., ext(φ) = Q, so φ(x) is typical, or it holds of no element, in which case ext(¬φ) = Q, i.e., ¬φ(x) is typical. It follows that for every φ(x), either φ(x) or ¬φ(x) is typical over Q. Let A ⊆ Q be a set of parameters. The theory DLO* (of dense linear order without end-points) admits quantifier elimination, so each property φ(x) with parameters from A is equivalent over Q to a Boolean combination of formulas x < a_i, a_j < x, x = a_k, and their negations. Of them only the properties x ≠ a_i are A-typical. So the A-typical properties over Q are exactly φ_a(x) := (x ≠ a_1) ∧ · · · ∧ (x ≠ a_n) for a ∈ A. It follows that a is A-typical iff it satisfies all such φ_a(x), i.e., iff it belongs to ⋂_{a_1,...,a_n ∈ A} Q \ {a_1, . . . , a_n}, which equals Q \ A. This is exactly the set of A-typical elements of Q.

Here is a sufficient condition for the existence of typical elements in a structure M.

Theorem 2.8 Let M be a structure. If M is κ-saturated, for some κ ≥ ℵ_0, then for every A ⊆ M such that |A| < κ, M contains A-typical elements.

Proof. We saw that the set TP(M, A) of A-typical properties is a type over (M, A). Since |A| < κ and M is κ-saturated, TP(M, A) is satisfiable in M.
Every a ∈ M satisfying TP(M, A) is A-typical. ⊣

Here is another sufficient condition for the existence of typical elements, independent of saturation.

Theorem 2.9 Let M be an L-structure, for a countable L, and let A ⊆ M be a set of parameters such that cf(|M|) > max(ℵ_0, |A|). Then M contains |M| A-typical elements, while only < |M| non-A-typical ones.

Proof. Let M and A ⊆ M be as stated. By definition an element a ∈ M is non-A-typical iff it satisfies a property with minority extension. So if S = {φ(x, b) : b ∈ A & |ext(φ(x, b))| < |M|}, then the set of all non-A-typical elements of M is X = ⋃{ext(φ) : φ ∈ S}, and hence |X| ≤ Σ{|ext(φ)| : φ ∈ S}. Now |S| ≤ max(ℵ_0, |A|) < cf(|M|), i.e., |S| < cf(|M|), and also for every φ ∈ S, |ext(φ)| < |M|. It follows that Σ{|ext(φ)| : φ ∈ S} < |M|, hence |X| < |M|. The set of A-typical elements is the complement of X, M \ X, so |M \ X| = |M|. ⊣

Corollary 2.10 Let M be an L-structure, for a countable L, such that cf(|M|) > ℵ_0. Then for any A ⊆ M such that |A| ≤ ℵ_0, M contains |M| A-typical elements and < |M| non-A-typical ones.

Let Z_2 be the theory of full second-order arithmetic, whose language is L_2 = {+, ·, <, ∈, 0, 1}. L_2 has variables m, n, i, j, . . . for numbers and variables x, y, z for sets of numbers, i.e., for reals (for details see [9]). The full standard model of Z_2 is R = (ω, P(ω), +, ·, <, ∈, 0, 1). a, b, c, . . . denote arbitrary reals. Since cf(|P(ω)|) = cf(2^{ℵ_0}) > ℵ_0, applying Corollary 2.10 to the structure R we have the following immediate consequence.

Theorem 2.11 For every A ⊆ P(ω) such that |A| ≤ ℵ_0, there exist 2^{ℵ_0} A-typical reals, while only < 2^{ℵ_0} non-A-typical ones. More precisely: for every finite (or even countable) tuple b of reals, |{x : Tp(x, b)}| = 2^{ℵ_0}, while |{x : ¬Tp(x, b)}| < 2^{ℵ_0}.
So a real a is non-typical iff it belongs to a minority set of this hierarchy, i.e., iff a ∈ X for some X ∈ Σ^1_n, for some n ≥ 0, such that |X| < 2^{ℵ_0}. The problem is considerably simplified if we assume CH, in which case |X| < 2^{ℵ_0} means X is countable. Thus we have the following.

Fact 2.14 Assuming CH, a real a ∈ P(ω) is non-typical iff there is a countable X ∈ ⋃_n Σ^1_n such that a ∈ X.

Interestingly enough, the countable sets of the analytical hierarchy have drawn the attention of important researchers in descriptive set theory quite early, with remarkable results (see [2], [3] and [10]). The preceding papers show the existence of largest countable Σ^1_n and Π^1_n sets for certain n, and also give information about their internal structure. Incidentally they provide some nontrivial examples of non-typical reals. For instance, Solovay [10] proved that if there are only countably many constructible reals (which is the case if there exists a measurable cardinal), then they form the largest countable Σ^1_2 set, which means that every constructible real is non-typical. Generalizing this, [2] and [3] showed that, under the assumption of Projective Determinacy (PD), for each n there exists a largest countable Σ^1_{2n} set, C_{2n}, and a largest countable Π^1_{2n+1} set, C_{2n+1}. The above results should perhaps prompt us to refine the general definition of typicality for reals, by considering "degrees of typicality", namely Σ_n- and Π_n-typical reals, for n ≥ 0, when the properties involved in the definitions are Σ_n and Π_n, respectively. The aim should be to find conditions in order for a real to belong to some countable Σ_n/Π_n analytical set, and besides to specify "concrete" reals not belonging to any such set.

Remark 2.15 Although our definitions of typical property and typical element apply to any kind of first-order structures, we shall refrain throughout from dealing with models of set theory, namely structures M = (M, E) for the language L = {∈} that satisfy, say, the axioms of ZF. The reason is that the definition of typical property over a structure M relies heavily on the cardinality of subsets of M, which is judged from outside the structure M (external cardinality). On the other hand, every model (M, E) of ZF possesses a natural notion of internal cardinality |x|_M, for every x ∈ M, which provides a much more natural way to measure the size of elements of M. Now if φ(x) is a property of the language L = {∈}, we cannot use the internal cardinality of M = (M, E) to measure the extension of φ(x), since {x ∈ M : M |= φ(x)} is a class in general, i.e., not an element of M. Thus the criterion of majority extension cannot be directly applied to φ(x) when we employ internal cardinality. Nevertheless, we can get around this problem if we use instead a parameterized version of typicality, α-typicality, for every ordinal α ∈ M, defined as follows: a property φ(x) of the language of set theory (perhaps with parameters) is α-typical over M, if M |= |{x ∈ V_{α+1} : φ(x)}| > |{x ∈ V_{α+1} : ¬φ(x)}|. Then φ(x) is said to be typical over M, if it is α-typical for every ordinal α ∈ M. This adjustment somehow internalizes the definition of typical property over M = (M, E), although at the cost of increased complexity. But the internalization fails irreparably when it comes to the definition of α-typical/typical element of (M, E) (if in the old definition we simply replace "typical property" with "α-typical property"). For then a set A ∈ M would be α-typical iff for every α-typical property φ(x) over M, M |= φ(A). The latter definition obviously cannot be expressed inside M. Thus we are in a situation where one has to judge what a typical set of M is by using both internal and external criteria, a practice not quite natural. Despite the aforementioned subtleties and obstacles, the challenge to find a notion of typicality suitable for models of set theory remains. All the more because such models accommodate naturally generic sets, and it is well-known that genericity is a notion dual to randomness in the contexts of both reals and sets (see e.g. [1], [6]).

3. Relative typicality as a relative randomness notion. Randomness axioms

The notion of typicality considered above has a clear key feature that allows it to be used as a faithful formal representation of the (general) concept of randomness. This feature is the fact that a typical object avoids all "special" predicates, i.e., predicates having "small extension" (as measured by cardinality). All established notions of randomness currently used in the context of reals (namely ML-randomness, Schnorr randomness and computable randomness) have similar features: a random real is one that avoids all sets of measure zero that are generated by certain special computable tests.
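For orientation, it may help to recall the standard notion of test lying behind ML-randomness; this is textbook material (see e.g. [1]), quoted here only for comparison with the typicality definitions, not as something used by the arguments below. In the usual notation for Cantor space:

A Martin-Löf test is a uniformly computably enumerable sequence $(U_n)_{n\in\omega}$ of open subsets of $2^{\omega}$ with $\mu(U_n) \le 2^{-n}$ for every $n$; a real $x$ passes the test if $x \notin \bigcap_n U_n$, and $x$ is ML-random if it passes every such test.

Thus the "small" sets avoided by an ML-random real are the effective null sets $\bigcap_n U_n$, in rough analogy with the minority extensions avoided by typical elements.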
Because of the discrepancy in the criteria of smallness and effectiveness, one should not expect to prove many connections between typical reals and ML-random or Schnorr random reals. Rather the similarities should be sought in the structure of the class of typical sets and that of the other classes of random sets. Such a structural approach to randomness, through axioms, has been set out by van Lambalgen in a series of papers, [5], [6] and [7]. In these papers the author introduced a new primitive relation R(x, y) for relative randomness, with intended meaning "x is random with respect to y" (or "y offers no information for x"), as well as certain axioms Ri about R (randomness axioms). In [5] the relation R(x, y) refers primarily to infinite binary sequences, i.e., elements of 2^ω, while in the sequel papers [6] and [7] R(x, y) is intended to express properties of general random sets, so the axioms, denoted again Ri, are slightly different. The relations Tp(x, y) and wTp(x, y) of relative typicality, defined in the previous section, present obvious analogies with R(x, y), so it is natural to examine closely which of the axioms Ri hold of them and which do not. Recall that when the relations Tp and wTp specialize to the language L_2 and the structure R of reals, they mean the following:
• Tp(a, b) :⇔ for every formula φ(x, b) of L_2 such that R |= φ(a, b), |{x : R |= φ(x, b)}| = 2^{ℵ_0}.
• wTp(a, b) :⇔ for every formula φ(x, b) of L_2 such that R |= φ(a, b), |{x : R |= φ(x, b)}| ≥ ℵ_0.
The randomness axioms considered below are all analogues of axioms proposed by van Lambalgen in [5], [6], and [7]. These papers contain three different but largely overlapping lists of 6 or 7 axioms denoted Ri, for i = 1, . . . , 7. The axioms Ti, for 1 ≤ i ≤ 6, given below are the analogues of the corresponding Ri contained in the list of [5]. Axiom T7 (tailset) corresponds to R6 of [7], while axiom-scheme T8 (zero-one law) corresponds to scheme R5 of [7]. Notice that the Ri's are formulated in a first-order object language L ∪ {R}, i.e., L augmented by R. Moreover in the scheme R6 and the zero-one law, the formulas may contain R. On the other hand our Ti's are formulated in the metalanguage. For finite tuples a, b, we write ab for the concatenation of a and b. It should be stressed that not all axioms Ti are expected to hold of the relation Tp(x, y). The reason is that the Ri were mostly motivated by the intuition about a random sequence of digits 0, 1, as a sequence generated by independent choices (see [7, p. 284]), while the intuition behind a typical object, as this is reflected in its definition, is substantially different. The next definition is needed for the formulation of axiom scheme T8 below.

Definition 3.1 The reals a, b are said to be equivalent, written a ≈ b, if they differ in finitely many elements, i.e., if the symmetric difference a△b is finite. Let [a]_≈ be the equivalence class of a. A set of reals X is said to be a tailset if a ∈ X and b ≈ a imply b ∈ X.

T1. (Existence) (∀y)(∃x)Tp(x, y). In particular, there exists a such that Tp(a).
T2. (Downward monotonicity) Tp(a, cb) ⇒ Tp(a, b).
T3. (Parameters act as a set rather than a vector) (a) Tp(a, b) ⇒ Tp(a, πb), for any permutation π of b. (b) Tp(a, bc) ⇒ Tp(a, bbc).
T4. (Irreflexivity) Tp(a, b) ⇒ a ≠ b.
T5. (Steinitz exchange principle) Tp(a, c) & Tp(b, ac) ⇒ Tp(a, bc). (A motivation for this principle comes from the linear dependence relation. As we said above, the relation R(a, b), and also Tp(a, b), can be read "a is independent from b", so ¬Tp(a, b) can be seen as a dependence relation D(a, b): "a depends on b". Then T5 is equivalently written D(a, bc) ⇒ D(a, c) ∨ D(b, ac). This implication can easily be seen to be true if we take a, b, c_1, . . . , c_n to be vectors and D(a, b) to mean "a is a linear combination of b_1, . . . , b_n".)
T6. If φ(x, y) is a formula of L not containing z free, then (∀x)(Tp(x, zy) ⇒ φ(x, y)) ⇒ (∀x)(Tp(x, y) ⇒ φ(x, y)).
T7. (Tailset, for the structure R of reals) For any reals c, the set {x : Tp(x, c)} is a tailset.
T8. (Zero-one law, for the structure R of reals) Let φ(x, c) be a formula the extension of which is a tailset. Then (∃x)(Tp(x, c) & φ(x, c)) implies (∀x)(Tp(x, c) ⇒ φ(x, c)).

Proposition 3.2 Axioms T1-T4 are true in every structure M. Axiom T7 holds in R.

Proof. T1: This follows immediately from Theorem 2.11. T2 and T3 are straightforward consequences of the definition of Tp(a, b), since what counts is simply the set of elements of the tuple b, not their particular ordering. T4 is equivalently written ¬Tp(a, a), which follows immediately from Fact 2.5 (i). T7: We have to show that for any a, b, c, Tp(a, c) and a ≈ b imply Tp(b, c), or, equivalently, ¬Tp(a, c) and a ≈ b imply ¬Tp(b, c). Assume ¬Tp(a, c) and a ≈ b.
Then there is φ(x, y) such that R |= φ(a, c) and, if X = {x : R |= φ(x, c)}, then |X| < 2^{ℵ_0}. Notice that the predicate Fin(x) := "the real x is finite" is definable in R, and thus so is the relation x ≈ y := Fin(x△y). Put ψ(x, c) := (∃y)(φ(y, c) ∧ x ≈ y). Then R |= ψ(b, c). It suffices to show that if Y = {x : R |= ψ(x, c)}, then |Y| < 2^{ℵ_0}. But Y = ⋃{[x]_≈ : R |= φ(x, c)}, and |[x]_≈| = ℵ_0 for every real x. So |Y| ≤ |X| · ℵ_0 = |X| < 2^{ℵ_0}. ⊣

Concerning the remaining axioms T5, T6 and T8, they either fail (in R or in some other structure) or their truth is open. We discuss separately the case of each one of them.

Axiom T5. This axiom fails in general for both relations Tp and wTp. It suffices to show in particular that

Tp(a) & Tp(b, a) ⇒ Tp(a, b)   (1)

is false over some structure M, for some a, b ∈ M (and c = ∅). We show this by the following counterexample.

Example 3.3 Let M = (M, S, +, ·, 0) be a countable nonstandard model of PA with the following property: there are a, b ∈ M such that a ∉ df(∅), a ∈ df(b) and b ∉ df(a). Then Tp(a), Tp(b, a) and ¬Tp(a, b) hold in M, so (1) is false over M.

Proof. First notice that there is an abundance of countable nonstandard models of PA with the property stated above (e.g. every countable recursively saturated model has this property). Fix such an M. Recall also (Fact 2.5) that, because of countability, Tp(a, b) ⇔ wTp(a, b) over M, while (by Fact 2.5 again) wTp(a, b) ⇔ a ∉ cl(b). In view of these equivalences it suffices to show that there exist a, b ∈ M such that

a ∉ cl(∅) & b ∉ cl(a) & a ∈ cl(b).   (2)

By assumption there are a, b ∈ M such that a ∉ df(∅) & b ∉ df(a) & a ∈ df(b). So in order to show (2) it suffices to establish that for every M |= PA and any b ∈ M, df(b) = cl(b). But the latter is a well-known fact, analogous to Fact 2.12, that holds in every model of PA due to the coding capabilities of integers. ⊣

Question 3.4 Does T5, and in particular (1), hold in the structure R?

Notice that if (1) were true over R, then for any typical reals a, b, i.e., such that Tp(a) and Tp(b), we would have Tp(a, b) ⇔ Tp(b, a). Some further open questions, analogous to 3.4, are cited at the end of the paper.

Axiom scheme T6. The explanation for the corresponding scheme R6 (more precisely, for its simpler version R6′) given in [5] is somewhat obscure and ad hoc. (The formulation of and explanation for R6 itself is much more complicated.) On p. 1146 the author justifies the introduction of R6′ as follows: for any a, b of a structure M, if a is random relative to b, i.e., if R(a, b), then a should not belong to the algebraic closure of b. The author says that this goal is attained when the scheme R6′ is true, and this is the reason for adopting it. He also gives a short proof that R6′ guarantees the implication M |= R(a, b) ⇒ a ∉ cl(b). In our case, however, where R(a, b) is interpreted either as Tp(a, b) or as wTp(a, b), it follows from the definition (see Fact 2.5 (iv)) that Tp(a, b) ⇒ wTp(a, b) ⇔ a ∉ cl(b), so no extra axiom is needed for this derivation. Apart from this, it is very hard to motivate the truth of T6 for the relations Tp(a, b) and wTp(a, b) over the reals. Besides, it is relatively easy to show that T6 fails over appropriate countable non-standard models of PA. However the proof is beyond the main scope of the paper, and we omit it. We conclude that T6 is not a natural principle for typicality.

Axiom scheme T8. Like T7, the scheme refers to reals again. Its truth for general formulas φ(x, y) is open to us. If for some formula φ(x, y) and some given parameters c, either φ(x, c) or ¬φ(x, c) happens to be a typical property, then it is easy to see that T8 is true for such a φ(x, c). Because if φ(x, c) is typical, then every c-typical element satisfies it, that is, (∀x)(Tp(x, c) ⇒ φ(x, c)) is true. If on the other hand ¬φ(x, c) is typical, then no c-typical element satisfies φ(x, c), i.e., (∃x)(Tp(x, c) & φ(x, c)) is false. In both cases T8 holds true.
However it is easy to find properties φ(x) with extension closed under ≈, such that neither φ(x) nor ¬φ(x) is typical. For instance φ(x) := (∀n)(∃m > n)(m is even ∧ m ∈ x) is such a property.

3.1 Van Lambalgen's theorem

In this section we compare typical reals mainly with ML-random reals, and also with Schnorr and computably random reals. For the precise definitions of these notions see [1]. In [4] van Lambalgen proved the following theorem, which since then bears his name (see also [1], §11.6).

Theorem 3.5 For any a, b ⊆ ω, a ⊕ b is ML-random iff a is ML-random and b is a-ML-random.

Here a ⊕ b denotes the set {2n : n ∈ a} ∪ {2n + 1 : n ∈ b}. Does the relation Tp(a, b) satisfy the analogue of Theorem 3.5 in the structure R? That is, is the equivalence Tp(a ⊕ b) ⇔ Tp(a) & Tp(b, a) true for all reals a, b? We shall see that the answer is no. Instead we have the following simple reduction for the typicality of a ⊕ b.

Theorem 3.6 For any reals a, b, c, Tp(a ⊕ b, c) ⇔ Tp(a, c) ∨ Tp(b, c).

Proof. We show equivalently ¬Tp(a ⊕ b, c) ⇔ ¬Tp(a, c) & ¬Tp(b, c).

⇒: Assume ¬Tp(a ⊕ b, c). There is a φ(x, y) such that R |= φ(a ⊕ b, c) and |{x : R |= φ(x, c)}| < 2^{ℵ_0}. Observe that, by the definition of the operation ⊕, every real x is written in a unique way as a sum x_0 ⊕ x_1. So we can write x_0 = (x)_0 and x_1 = (x)_1, i.e., x = (x)_0 ⊕ (x)_1. Consider the formulas ψ_0(x, c) := (∃y)(φ(y, c) ∧ x = (y)_0) and ψ_1(x, c) := (∃y)(φ(y, c) ∧ x = (y)_1). Since R |= φ(a ⊕ b, c), it follows that R |= ψ_0(a, c) and R |= ψ_1(b, c). Let X = {x : R |= φ(x, c)}, X_0 = {x : R |= ψ_0(x, c)} and X_1 = {x : R |= ψ_1(x, c)}. It suffices to show that |X_0|, |X_1| < 2^{ℵ_0}. Now using the functions f_0(x) = (x)_0 and f_1(x) = (x)_1, we have immediately that f_0[X] = X_0 and f_1[X] = X_1. Thus |X_0| ≤ |X| and |X_1| ≤ |X|. Since by assumption |X| < 2^{ℵ_0}, it follows that |X_0|, |X_1| < 2^{ℵ_0}.

⇐: Assume ¬Tp(a, c) and ¬Tp(b, c). There are φ(x, c), ψ(y, c) such that R |= φ(a, c), R |= ψ(b, c), and if X = {x : R |= φ(x, c)} and Y = {y : R |= ψ(y, c)}, then |X|, |Y| < 2^{ℵ_0}. Consider the formula σ(z, c) := (∃x)(∃y)(φ(x, c) ∧ ψ(y, c) ∧ z = x ⊕ y). Then R |= φ(a, c) and R |= ψ(b, c) imply R |= σ(a ⊕ b, c). Let Z = {z : R |= σ(z, c)}. It suffices to show that |Z| < 2^{ℵ_0}. Now since every z ∈ Z is of the form x ⊕ y with x ∈ X and y ∈ Y, it follows that |Z| ≤ |X × Y| = |X| · |Y| = max(|X|, |Y|) < 2^{ℵ_0}. This completes the proof. ⊣

Theorem 3.6 makes the notion of typicality deviate considerably from ML-randomness. For example we have the following immediate consequences.

Corollary 3.7 For every real a and every non-typical b, Tp(a ⊕ b) ⇔ Tp(a). In particular, Tp(a ⊕ ∅) ⇔ Tp(a).

Proof. When ¬Tp(b), by 3.6, Tp(a ⊕ b) ⇔ Tp(a) ∨ Tp(b) ⇔ Tp(a). ⊣

Corollary 3.8 Van Lambalgen's theorem 3.5 is false for the relation Tp(a, b) in R. That is, the equivalence Tp(a ⊕ b) ⇔ Tp(a) & Tp(b, a) is false in general.

Proof. Otherwise we should have for every typical real a, Tp(a ⊕ ∅) ⇔ Tp(a) & Tp(∅, a), hence Tp(∅, a), which is obviously false. ⊣

By 3.7, if a is a typical set then so is a ⊕ ∅ = {2n : n ∈ a}. This is in sharp contrast to ML-random sets. No set consisting of even (or odd) numbers alone can be ML-random. This can be shown easily using the definition of ML-random sets (see Theorem 3.9 below for the proof of a stronger fact), but it follows also immediately from van Lambalgen's theorem 3.5. For if a ⊕ ∅ were ML-random then a should be random and besides ∅ should be a-random, which is obviously false. In my view the fact that an ML-random real turns into a non-ML-random one if we multiply its elements by 2, or by any constant k, is highly counterintuitive. Consequently so is also van Lambalgen's theorem, which agrees with and is related to this fact. On the other hand, it is shown in [12] that van Lambalgen's theorem fails for the two other randomness notions, namely Schnorr randomness and computable randomness. Specifically, direction ⇒ of Theorem 3.5 fails if "ML-random" is replaced by "Schnorr random" or "computably random". Concerning the relationship between the three concepts, it is known that if ML, SR, CR denote the classes of these reals, respectively, then they are strictly nested as follows: ML ⊊ CR ⊊ SR.
Liang Yu ([12]) considers the truth of van Lambalgen's theorem so important that he believes it should be the criterion as to which of the above mentioned three randomness notions should be finally accepted as the "correct" one. He concludes that this is Martin-Löf randomness, since it is the only one that satisfies the theorem. He justifies his belief as follows: "Philosophically, a random set should have the property that no information about any part of it can be obtained from another part. In particular, no information about 'the left part' of a random set should be obtained from 'the right part' and vice versa. In other words, 'the left part' of a random set should be 'the right part'-random and vice versa." The problem with this argument is in the meaning of the term "part". It is implicitly assumed that any part of a random set should be random, and also random relative to any other part. But why couldn't a random set have definable (or in general non-random) parts? I think that a random set can have plenty of non-random parts. According to my intuition about randomness, if we glue together a random set and a (disjoint) definable one, the outcome will be a random set. In general, if we glue together a "bad" entity (random, irregular, undefinable, etc.) with a "good" one (non-random, regular, definable), the "bad" entity prevails and spoils the composite whole. (This general idea was examined in [11], where it was shown that in many countable structures there exist undefinable sets X (called totally non-immune) with an abundance of definable parts, namely, for every definable set A, if A ∩ X is infinite, then it contains an infinite definable subset.)

Corollary 3.7 helps answer the question about the comparability (with respect to inclusion) of the class of typical reals with the classes of ML-random, Schnorr random and computably random reals. Let TP be the class of typical reals.

Theorem 3.9 TP is incomparable with all classes ML, SR and CR.

Proof. Since ML ⊆ CR ⊆ SR, it suffices to show that ML ⊈ TP and TP ⊈ SR. ML ⊈ TP follows from the fact that there is a ∆^0_2-definable ML-random real, namely Chaitin's Ω (see [1], §8.2). Then clearly Ω ∈ ML\TP. To show TP ⊈ SR, pick a real a ∈ TP. Then by 3.7, a ⊕ ∅ ∈ TP. It suffices to show that a ⊕ ∅ ∉ SR. We saw already above that a ⊕ ∅ ∉ ML, but this is not enough since ML ⊊ SR. We have to show that there is a Schnorr test (V_n)_n such that a ⊕ ∅ ∈ ⋂_n V_n. Recall that a Schnorr test is an ML-test with the extra requirement that the measures of the V_n converge to zero in a computable way, rather than a semi-computable one as is the case with the ML-tests. Namely, there must exist a computable strictly increasing g : ω → ω such that for every n, µ(V_n) = 2^{−g(n)} (instead of simply µ(V_n) ≤ 2^{−n}). Since a ⊕ ∅ = {2n : n ∈ a} consists of even numbers alone, we consider the following sequence of sets. (In this argument we identify of course a set a ⊆ ω with its characteristic function.) For every n, let V_n = {x ∈ 2^ω : (∀i ≤ n)(x(2i + 1) = 0)}. Then clearly a ⊕ ∅ ∈ ⋂_n V_n. So it remains to verify that (V_n)_n satisfies the conditions required in order to be a Schnorr test. Recall that for finite strings σ ∈ 2^{<ω}, the sets [σ] = {x ∈ 2^ω : σ ⊂ x} are the basic clopen sets of the topology of 2^ω. If dom(σ) = {0, . . . , n − 1}, we identify σ with the string σ(0)σ(1) · · · σ(n − 1), and let |σ| = n be the length of σ. Then µ([σ]) = 2^{−n}.
It is easy to check that for every n, the set V_n above is written V_n = ⋃{[i_0 0 i_1 0 i_2 · · · 0 i_n 0] : i_0, . . . , i_n ∈ {0, 1}}. Obviously (V_n)_n is a computable sequence of open sets, each generated by finitely many basic sets. Namely, each V_n is the union of the 2^{n+1} (disjoint) basic sets [σ_m] = [i_0 0 i_1 0 i_2 · · · 0 i_n 0], where for every m < 2^{n+1}, |σ_m| = 2n + 2, thus µ([σ_m]) = 2^{−(2n+2)}. Therefore µ(V_n) = Σ_{m<2^{n+1}} µ([σ_m]) = 2^{n+1} · 2^{−(2n+2)} = 2^{−(n+1)}. Thus the sequence (µ(V_n))_n converges to zero in a computable way, with g(n) = n + 1. So (V_n)_n is a Schnorr test. ⊣

3.2 Turing reducibility and jump operation

The next theorem shows that the relation Tp(a, c) is closed with respect to Turing degrees, as well as with respect to the jump operation a → a′ (both ways).

Theorem 3.10 For all reals a, b, c the following hold.
(i) ¬Tp(a, c) & b ≤_T a ⇒ ¬Tp(b, c). (Equivalently: Tp(b, c) & b ≤_T a ⇒ Tp(a, c).)
(ii) Tp(a, c) & b ≡_T a ⇒ Tp(b, c).
(iii) Tp(a, c) ⇔ Tp(a′, c), where a′ is the halting problem relative to a, i.e., a′ = {e ∈ ω : Φ^a_e(e)↓}.

Proof. (ii) follows immediately from (i). (iii) Since for every a, a <_T a′, by (i), ¬Tp(a′, c) ⇒ ¬Tp(a, c). For the converse, assume ¬Tp(a, c); we shall show ¬Tp(a′, c). There is φ(x, c) such that R |= φ(a, c) and, if X = {x : R |= φ(x, c)}, then |X| < 2^{ℵ_0}. Now since for every real x, x′ = {e : Φ^x_e(e)↓}, it is rather straightforward that there is a function f on P(ω), definable in R without parameters, such that f(x) = x′. Consider the formula ψ(x, c) := (∃y)(φ(y, c) ∧ x = f(y)). Since R |= φ(a, c) and a′ = f(a), it follows that R |= ψ(a′, c). Moreover, if Y = {x : R |= ψ(x, c)}, clearly Y = f[X]. Thus |Y| < 2^{ℵ_0} since |X| < 2^{ℵ_0}. Therefore ¬Tp(a′, c). To prove (i) assume ¬Tp(a, c) and b ≤_T a. Then there is a φ(x, y) such that R |= φ(a, c) and |{x : φ(x, c)}| < 2^{ℵ_0}. Now the relation x ≤_T y says: "x is definable by a Σ^0_1 formula that has y as parameter", so it is definable in R with the help of a Σ^0_2 satisfaction class for all Σ^0_1 predicates. Let ψ(x, y) denote the formula formalizing x ≤_T y. Then R |= ψ(b, a). Consider the formula σ(x, c) := (∃y)(φ(y, c) ∧ ψ(x, y)). Since R |= φ(a, c) and ψ(b, a), we have R |= σ(b, c). So in order to show ¬Tp(b, c), it suffices to show that |{x : R |= σ(x, c)}| < 2^{ℵ_0}. If X = {y : R |= φ(y, c)}, by assumption |X| < 2^{ℵ_0}. Also for every real y the set Y_y = {x : ψ(x, y)} = {x : x ≤_T y} is countable. Therefore |{x : R |= σ(x, c)}| ≤ Σ_{y∈X} |Y_y| = |X| · ℵ_0 = |X| < 2^{ℵ_0}. ⊣

3.3 Lower cones and some open questions

In this last part we discuss some open questions about relative typicality of reals. Recall that the relation ¬Tp(x, y) means something like "x depends on y", but since transitivity is (most probably) missing, this is not a preordering. On the other hand we saw in Theorem 2.11 that for every a, the set {x : ¬Tp(x, a)} is "small" (in fact a minority set, or just countable under CH). This reinforces the intuition that the elements of {x : ¬Tp(x, a)}, apart from being relatively few, should not be "more complex" than a. Let us call the set {x : ¬Tp(x, a)} the (lower) cone of a, and denote it by con(a), i.e., con(a) = {x : ¬Tp(x, a)}. We denoted by TP the class of all typical reals. So P(ω)\TP = {x : ¬Tp(x)} is the class of non-typical ones. The following fact relates P(ω)\TP with the cones of reals.

Fact 3.11 P(ω)\TP = con(∅) = ⋂{con(a) : a ∈ P(ω)}.

Proof. By axiom T2, for all x, y, ¬Tp(x) ⇒ ¬Tp(x, y), so P(ω)\TP ⊆ con(a) for every a. Therefore P(ω)\TP ⊆ ⋂{con(a) : a ∈ P(ω)} ⊆ con(∅). On the other hand, by definition ¬Tp(x) means ¬Tp(x, ∅), so P(ω)\TP = con(∅). Thus the three sets above coincide. ⊣

The intuition expressed above on the relationship between the complexity of a and that of the elements of con(a) leads to the following question:

Question 3.12 Is it true that ¬Tp(a) & b ∈ con(a) ⇒ ¬Tp(b)?
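One easy special case is perhaps worth recording (this is our observation, not a claim made in the paper): for countable structures M, with con(a) read as {x : ¬Tp(x, a)} in M, the analogue of Question 3.12 has a positive answer. Indeed, by Fact 2.5 (iv)-(v), over a countable M,

$$ \neg\mathrm{Tp}(a) \iff a \in \mathrm{cl}(\emptyset), \qquad b \in \mathrm{con}(a) \iff b \in \mathrm{cl}(a), $$

and since algebraic closure is monotone and idempotent, $a \in \mathrm{cl}(\emptyset)$ gives $\mathrm{cl}(a) \subseteq \mathrm{cl}(\mathrm{cl}(\emptyset)) = \mathrm{cl}(\emptyset)$, hence $b \in \mathrm{cl}(\emptyset)$, i.e., $\neg\mathrm{Tp}(b)$. For the structure R of reals the question remains open as stated.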
We have seen (Proposition 3.2) that, as a consequence of axiom T7, Tp(a) & b ≈ a ⇒ Tp(b). The following question also seems natural:

Question 3.13 Does there exist a real a such that Tp(a) but (∀x)(x ∈ con(a) & b ≈ a ⇒ ¬Tp(x))?

Another question, that we already met before section 3.1 and which is related to Question 3.4, can be formulated in terms of cones.

Question 3.14 If Tp(a) and Tp(b), is it true that b ∈ con(a) ⇔ a ∈ con(b)?
Acknowledgement. I would like to thank the anonymous referee for some helpful remarks and suggestions.

References

[1] R. Downey and D. Hirschfeldt, Algorithmic Randomness and Complexity, Springer, 2010.
[2] A.S. Kechris and Y.N. Moschovakis, Two theorems about projective sets, Israel J. Math. 12 (1972), 391-399.
[3] A. Kechris, The theory of countable analytical sets, Trans. Amer. Math. Soc. 202 (1975), 259-297.
[4] M. van Lambalgen, Random Sequences, Ph.D. Dissertation, University of Amsterdam, 1987.
[5] M. van Lambalgen, The axiomatization of randomness, J. Symb. Logic 55 (1990), 1143-1167.
[6] M. van Lambalgen, Independence, randomness and the axiom of choice, J. Symb. Logic 57 (1992), no. 4, 1274-1304.
[7] M. van Lambalgen, Independence structures in set theory, in: Logic: From Foundations to Applications (European Logic Colloquium '93), W. Hodges, M. Hyland, C. Steinhorn and J. Truss (Eds.), Clarendon Press, Oxford, 1996, pp. 277-311.
[8] B. Russell, My Philosophical Development, Routledge, revised edition 1995.
[9] S.G. Simpson, Subsystems of Second-Order Arithmetic, Perspectives in Mathematical Logic, Springer, 1999.
[10] R.M. Solovay, On the cardinality of Σ^1_2 sets of reals, in: Foundations of Mathematics (Symposium Commemorating Kurt Gödel, Columbus, Ohio, 1966), Springer, New York, 1966, pp. 58-73.
[11] A. Tzouvaras, Totally non-immune sets, Math. Logic Quarterly 61 (2015), no. 1-2, 103-116.
[12] Liang Yu, When van Lambalgen's theorem fails, Proc. Amer. Math. Soc. 135 (2007), 861-864.
Galactic Structure

Rosemary F. G. Wyse
Department of Physics & Astronomy, The Johns Hopkins University, Baltimore, MD 21218, USA

26 Feb 2004

Abstract. Our Milky Way Galaxy is a typical large spiral galaxy, representative of the most common morphological type in the local Universe. We can determine the properties of individual stars in unusual detail, and use the characteristics of the stellar populations of the Galaxy as templates in understanding more distant galaxies. The star formation history and merging history of the Galaxy are written in its stellar populations; these reveal that the Galaxy has evolved rather quietly over the last ∼ 10 Gyr. More detailed simulations of galaxy formation are needed, but this result apparently makes our Galaxy unusual if ΛCDM is indeed the correct cosmological paradigm for structure formation. While our Milky Way is only one galaxy, a theory in which its properties are very anomalous most probably needs to be revised. Happily, observational capabilities of next-generation facilities should, in the foreseeable future, allow the acquisition of detailed observations for all galaxies in the Local Group.

1. Introduction: The Fossil Record

The origins and evolution of galaxies, such as our own Milky Way, and of their associated dark matter haloes are among the major outstanding questions of astrophysics. Detailed study of the zero-redshift Universe provides complementary constraints on models of galaxy formation to those obtained from direct study of high-redshift objects. Stars of mass similar to that of the Sun live for essentially the present age of the Universe, and nearby low-mass stars can be used to trace conditions in the high-redshift Universe when they formed, perhaps even the 'First Light' that ended the Cosmological Dark Ages. While these stars may well not have formed in the galaxy in which they now reside (especially if the CDM paradigm is valid), several important observable quantities are largely conserved over a star's lifetime - these include surface chemical elemental abundances (modulo effects associated with mass transfer in binaries) and orbital angular momentum (modulo the effects of torques and rapidly changing gravitational potentials). Excavating the fossil record of galaxy evolution from old stars nearby allows us to do Cosmology locally, and is possible to some extent throughout the Local Group, with the most detailed information available from the Milky Way Galaxy. I here discuss our knowledge of the stellar populations of the Milky Way and the implications for models of galaxy formation. Complementary results for M31 are presented by Brown (this volume).

• The thick disk: best characterized close to the solar Galactocentric distance; here the local thick disk consists of old stars that are on average of lower metallicity than that of a typical old star in the thin disk, and are on orbits of lower angular momentum.
• The central bulge: This component is very centrally concentrated and mildly triaxial, with rotational energy close to the expected value if it were an oblate, isotropic rotator. The dominant stellar population is old and metal-rich.
• The stellar halo: The bulk of the stars are old and metal-poor, on low angular-momentum orbits. A few percent of the stellar mass is in globular clusters.
2.1 Large Scale Structure of the Thin Disk

Our knowledge of the stellar populations in the thin disk is in fact rather poor, with only very limited data on age distributions and metallicity distributions in either the inner disk or the outer disk. At the solar neighbourhood, the gross characteristics of the metallicity distribution of the thin disk have been known for a long time - the narrow distribution (peaking somewhat below the solar metallicity) giving rise to the 'G-dwarf Problem', or the deficit of metal-poor stars compared to the predictions of the Simple closed-box model of chemical evolution (e.g. van den Bergh 1962; Pagel & Patchett 1975). The favoured solution to this 'problem' is to lift the 'closed-box' assumption, in particular to allow inflow of unenriched gas (cf. Larson 1972). Such inflow is rather natural in many models of disk formation and evolution (see Tosi's contribution to this volume).

The age distribution of stars in the thin disk is particularly important in setting the epoch of the onset of disk formation (assuming that the bulk of the old stars now in the thin disk were formed in the thin disk - see Steinmetz's contribution to this volume for an alternative view). In Cold-Dark-Matter-dominated cosmologies, the merging by which galaxies grow involves gravitational torques and dynamical friction, which result in significant angular momentum transport away from the central parts of dark matter haloes and their associated galaxies, and into the outer parts. This re-arrangement of angular momentum is particularly effective if the merging involves dense, non-dissipative (i.e. stellar and/or dark matter) substructure (cf. Zurek, Quinn & Salmon 1988; Zhang et al. 2002). However, extended galactic disks as we observe them require detailed angular momentum conservation during the dissipative collapse and spin-up of proto-disk gas, within a dominant dark halo (cf. Fall & Efstathiou 1980). The angular momentum transport inherent in the merging process in a CDM universe results in disks that are too concentrated and have radial scale-lengths that are too short (cf. Navarro, Frenk & White 1995; Navarro & Steinmetz 1997). A proposed solution to this problem delays the formation of stellar disks until after the epoch of most merging, with 'stellar feedback' as the proposed mechanism for the delay (cf. S. ). A delay in radiative cooling - the very first stage of stellar disk formation - until after a redshift of unity, or lookback times of ∼ 8 Gyr, is apparently required (Eke, Efstathiou & Wright 2000). This has the obvious side-effect that the extended disks that eventually form should contain few old stars. Further, the theoretical prediction is that disks form from the inside-out, with lower angular momentum gas settling to the (inner regions of the) disk plane earlier, and only later settling of higher angular momentum material, destined to form the outer disk. The solar circle is some 2-3 exponential scale lengths from the Galactic center, and thus forms later than the inner disk. Detailed predictions are lacking (and should be made), but allowing for 1 Gyr for cooling and star formation, the expectation from this delay in disk formation would then be that there should be very few stars in the local thin disk that are older than ∼ 7 Gyr.

Distances and kinematics derived for nearby stars from parallaxes and proper motions from the Hipparcos satellite have allowed the determination of the colour-absolute magnitude diagram for thin disk stars.
Analysis of the locus in the CMD of subgiant stars gives a lower limit to the age of the oldest stars of 8 Gyr, obtained with an adopted upper limit to the metallicity of [Fe/H] = +0.3 (Jimenez, Flynn & Kotoneva 1998; Sandage, Lubin & VandenBerg 2003); the age limit increases by approximately 1 Gyr for every 0.1 dex decrease in the adopted metallicity (to set the age scale, the VandenBerg isochrones used give ages of ≳ 13 Gyr for metal-poor globular clusters). Analysis of the main sequence turn-off stars in the Hipparcos dataset provides a best-fit age for the oldest disk stars of ≳ 11 Gyr (Binney, Dehnen & Bertelli 2000), adopting a metallicity distribution that peaks below the solar value and using isochrones that provide an age of ≳ 12 Gyr for metal-poor globular clusters. It is clear that metallicity determinations for the Hipparcos sample are needed before a definitive value for the age of the oldest disk stars can be derived, but one should note that the available metallicity distributions, mostly for G/K dwarfs, all peak at [Fe/H] = −0.2 (e.g. Kotoneva et al. 2002), distinctly more metal-poor than the +0.3 dex that gave the lower limit in age of 8 Gyr for the oldest stars. An older age is then expected.

An alternative technique to derive ages uses the observed white dwarf (WD) luminosity function combined with theoretical models of white dwarf cooling. Hansen et al. (2002; see also Richer's contribution to this volume) analysed the disk WD luminosity function of Leggett et al. (1998) together with their own data for the WD sequence in the globular cluster M4. They derived a ∼ 5 Gyr gap between the formation of M4 and the birth of the oldest disk stars, with ages of ∼ 13 Gyr and ∼ 8 Gyr respectively. However, completeness remains an issue for the disk WD luminosity function, and different determinations are available that provide older ages and less of a gap in age (e.g. Knox, Hawkins & Hambly 1999). Indeed, a recent re-analysis of the M4 data (De Marchi et al. 2003) has demonstrated that with a re-assessment of the errors, the derived WD luminosity function is in fact still rising at the last point, and so only a lower limit in age, ≳ 8 Gyr, can be derived. Clearly more and better data are needed, and should be available, from the ACS on HST for M4, and from surveys such as SDSS for the faint, local disk WDs.

The star formation history of the local disk that is derived from the Hipparcos CMD (Hernandez, Valls-Gabaud & Gilmore 2000) has an amplitude that shows a slow overall decline, with quasi-periodic increases of a factor of a few on timescales of ∼ 1 Gyr. This result is consistent with other age indicators such as chromospheric activity (Rocha-Pinto et al. 2000), and with chemical evolution models. The available data are then all consistent with a significant population of stars in the local disk with ages ∼ 8 Gyr, and perhaps as old as 11 Gyr. If these stars formed in the disk, then the formation of extended disks was not delayed until after a redshift of unity, as was proposed to 'solve' the disk angular momentum problem in CDM models. The outer disk of M31 also contains old stars (Ferguson & Johnson 2001; Guhathakurta, this volume) and similar conclusions hold. Further, deep, high-resolution IR observations have revealed apparently relaxed disk galaxies at z ≳ 1 (Dickinson 2000), which presumably formed at least a few dynamical times earlier. Indeed, a candidate old disk has been identified at z ∼ 2.5 (Stockton et al. 2003).
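The correspondence between redshift and lookback time invoked in this subsection (a redshift of unity corresponding to a lookback time of ∼ 8 Gyr) is easy to reproduce. The following minimal Python sketch (ours, for orientation only) uses astropy with illustrative parameter values H0 = 70 km/s/Mpc and Ωm = 0.3, which are assumptions rather than values quoted in the text:

# Minimal sketch (not from the paper): lookback times in a flat LCDM
# cosmology for the redshifts quoted in the text. Parameters are assumed.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # illustrative values

for z in (0.5, 1.0, 2.0, 2.5):
    t = cosmo.lookback_time(z)  # returns an astropy Quantity, in Gyr
    print(f"z = {z:3.1f}  ->  lookback time = {t:.1f}")

With these parameters, z = 1 gives roughly 8 Gyr, consistent with the number used in the argument above.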
2.2 Large Scale Structure of the Thick Disk

The thick disk was defined through star counts 20 years ago (Gilmore & Reid 1983) and is now well-established as a distinct component. Its origins remain the source of considerable debate. Locally, some ∼ 5% of stars are in the thick disk; the vertical scale-height is ∼ 1 kpc, and the radial scale-length ∼ 3 kpc. Assuming a smooth double-exponential spatial distribution with these parameter values, the stellar mass of the thick disk is 10-20% of that of the thin disk (the uncertainty allowing for the uncertainty in the structural parameters), or some 10^10 M_⊙. Again the properties of the stellar populations in this component are rather poorly known far from the solar neighborhood. Locally, within a few kpc of the Sun, the typical thick disk star is of intermediate metallicity, [Fe/H] ∼ −0.6 dex, and old, with an age comparable to that of 47 Tuc, the globular cluster of the same metallicity, ∼ 12 Gyr (see e.g. the review of Wyse 2000). Detailed elemental abundances are now available for statistically significant sample sizes. These show that the pattern of elemental abundances differs between the thick and thin disks, with different values of the ratio [α/Fe] at fixed [Fe/H], implying distinct star formation and enrichment histories for the thick and thin disks (Fuhrmann 1998, 2000; Prochaska et al. 2000; Feltzing, Bensby & Lundström 2003; Nissen 2003; see Figure 1). Such a difference argues against the model (Burkert, Truran & Hensler 1992) whereby the thick disk represents the earliest stages of disk star formation during continuous, self-regulated dissipational settling of gas to the thin disk.

Thick stellar disks can be formed from pre-existing thin stellar disks by heating, and a (minor) merger of a reasonably dense and massive satellite galaxy into a pre-existing thin disk galaxy could be the heating mechanism (cf. Quinn, Hernquist & Fullagar 1993). In the merger, orbital energy is deposited in the internal degrees of freedom of both the thin disk and the satellite, and acts to disrupt the satellite and heat the disk. Depending on the orbit of the satellite, and on its density profile and mass (this last determines the dynamical friction timescale), tidal debris from the satellite will be distributed through the larger galaxy during the merger process. Thus the phase space structure of the debris from the satellite depends on many parameters, but in general one expects that the final 'thick disk' will be a mix of heated thin disk and satellite debris. The age and metallicity distributions of the thick disk can provide constraints on the mix.

Could the thick disk be dominated by the debris of tidally disrupted dwarf galaxies (cf. Abadi et al. 2003)? The removal of material from the satellite occurs under essentially the Roche relative density criterion, so that one expects that the lower density, outer parts of accreted dwarfs will be tidally removed in the outer parts of the larger galaxy, with the inner, denser regions of the dwarf only being removed if the dwarf penetrates further inside the larger galaxy. As noted above, the local (within a few kpc of the Sun) thick disk is old and quite metal-rich, with a mean iron abundance of ∼ −0.6 dex. Further, the bulk of these stars have enhanced, super-solar [α/Fe] abundances (Fuhrmann 1998, 2000; Prochaska et al. 2000; Feltzing et al. 2003; see Figure 1).
Achieving such a high level of enrichment so long ago (the stellar age equals the age of 47 Tuc, at least 10 Gyr, as noted above), in a relatively short time - so that Type II supernovae dominate the enrichment, as evidenced by the enhanced levels of [α/Fe] - implies a high star formation rate within a rather deep overall potential well. This does not favor dwarf galaxies. Indeed, the inner disk of the LMC, our present most massive satellite galaxy, has a derived metallicity distribution (Cole, Smecker-Hane & Gallagher 2000) that is similar to that of the (local) thick disk, but, based on the color-magnitude diagram, these stars are of intermediate age. Thus the LMC apparently took until a few Gyr ago to self-enrich to an overall metallicity that equals that of the typical local thick disk star in the Galaxy. Further, the abundances of the α-elements relative to iron in such metal-rich LMC stars are below the solar ratio (Smith et al. 2002), unlike the local thick disk stars. This may be understood in terms of the different star-formation histories (cf. Gilmore & Wyse 1991). The LMC is not a good template for a putative dwarf to form the thick disk from its debris. What about the Sagittarius dwarf, a galaxy that has clearly penetrated into the disk?

Thus, based on observations, there is no good analogue among the surviving dwarf galaxies for a possible progenitor of the thick disk. Theoretically, based on our (admittedly limited) understanding of supernova feedback, it seems very contrived to envisage a dwarf galaxy that had a deep enough potential well to self-enrich rapidly a long time ago, but that was sufficiently low density to be tidally disrupted to form the thick disk. One might argue that the satellites that were accreted earlier initiated star formation earlier (e.g. Bullock, Kravtsov & Weinberg 2000) and were typically more dense and able to self-enrich faster. However, the analyses of deep color-magnitude diagrams for the extant satellites of the Milky Way are consistent with all containing stars as old as the stellar halo of the Milky Way (e.g. Da Costa 1999), implying that the onset of star formation was coeval and there are no (surviving) satellites that initiated star formation earlier. In summary, it appears implausible that the bulk of the thick disk is the debris of accreted dwarf galaxies.

Heating of a pre-existing thin disk by a minor merger remains a viable mechanism for creating the bulk of the thick disk (e.g. Velazquez & White 1999). In this case, the old age of the thick disk, combined with the fairly continuous star formation in the thin disk, has two important consequences - the first that there was an extended disk in place at a lookback time of greater than 10 Gyr, and the second that there has been no extraordinary heating - by mergers - of the thin disk since that time. Knowing the age distribution of stars in the thick disk - in both the observed thick disk and in predicted theoretical thick disks - is obviously crucial. Semi-analytic modelling of the heating of disks by merging of substructure in CDM cosmologies has shown that thin disks with reasonable scale-heights are produced at the present day (Benson et al. 2003). However the presence or otherwise of thick disks has yet to be demonstrated in such simulations. Further, predictions need to be made for the age distribution of member stars of the thick and the thin disk, to be confronted with the observations.
All this being said, some fraction of the metal-poor stars assigned to the 'thick disk', on the basis of having orbital kinematics that are intermediate between those of the stellar halo and those of the thin disk, may well be debris from a satellite (the one that caused the disk heating, perhaps), and we return to this point below, in section 3.2.

2.3 Large Scale Structure of the Central Bulge

The metallicity distributions of low-mass stars in various low-reddening lines-of-sight towards the bulge (with projected Galactocentric distances of a few 100 pc to a few kpc) have been determined spectroscopically (e.g. McWilliam & Rich 1994; Sadler, Terndrup & Rich 1996) and photometrically (e.g. Zoccali et al. 2003), with the robust result that the peak metallicity is [Fe/H] ∼ −0.3 dex, with a broad range and a tail to low abundances (indeed the distribution is well-fit by the Simple closed-box model, unlike the solar neighborhood data). The available elemental abundances, limited to the brighter stars, show the enhanced [α/Fe] signatures of enrichment by predominantly Type II supernovae (McWilliam & Rich 1994; McWilliam & Rich 2003), indicating rapid star formation. Indeed the chemical abundances favor very rapid star formation and (self-)enrichment (Ferreras, Wyse & Silk 2003). The age distributions derived from the analyses of deep HST and ISO color-magnitude diagrams - again over several degrees across the sky - are consistent with the dominant population being of old age, ≳ 10 Gyr (Ortolani et al. 1995). There is also a small intermediate-age component seen in the ISO data, and traced by OH/IR stars (Sevenster 1999), plus there is ongoing star formation in the plane. The interpretation of these younger stars in terms of the stellar populations in the bulge is complicated by the fact that the scale-height of the thin disk is comparable to that of the central bulge, so that membership in either component is ambiguous. Indeed the relation between the inner triaxial bulge/bar and the larger-scale bulge is as yet unclear (see Merrifield 2003 for a recent review). All that said, the dominant population in the bulge is clearly old and metal-rich.

In the hierarchical clustering scenario, bulges are built up during mergers, with several mechanisms contributing. The dense central regions of massive satellites may survive and sink to the center; the dynamical friction timescale for a satellite of mass M_sat orbiting in a more massive galaxy of mass M_gal is t_dynfric ∼ t_cross M_gal/M_sat, where t_cross is the crossing time of the more massive galaxy. With t_cross ∼ 3 × 10^8 yr for a large galaxy, only the most massive satellites could contribute to the central bulge in a Hubble time (a rough numerical version of this estimate is given below). Gravitational torques during the merger process are also expected to drive disk gas to the central regions, and some fraction of stars in the disk will also be heated sufficiently to be 're-arranged' into a bulge (cf. Kauffmann 1996). The predicted age and metallicity distributions of the stars in the bulge are then dependent on the merger history; however, a uniformly old population is not expected. An alternative scenario for bulge formation appeals to an instability in the disk, forming first a bar which then buckles out of the plane to form a bulge (e.g. Raha et al. 1991) or is destroyed by the orbit-scattering effects of the accumulation of mass at its center (e.g. Hasan & Norman 1990). Again one would expect a significant range of stellar ages in the bulge. As noted above, the bulge is dominated by old, metal-rich stars.
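To make the dynamical-friction estimate quoted above concrete, here is a back-of-the-envelope version (our arithmetic, using only the numbers in the text together with an assumed Hubble time of $t_H \sim 1.3\times10^{10}$ yr):

$$ t_{\rm dyn\,fric} \sim t_{\rm cross}\,\frac{M_{\rm gal}}{M_{\rm sat}} \lesssim t_H \quad\Longrightarrow\quad \frac{M_{\rm sat}}{M_{\rm gal}} \gtrsim \frac{t_{\rm cross}}{t_H} \approx \frac{3\times10^{8}\ {\rm yr}}{1.3\times10^{10}\ {\rm yr}} \approx 0.02, $$

i.e., only satellites above roughly a few percent of the host mass can sink to the central regions within a Hubble time, consistent with the statement above.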
This predominantly old, metal-rich population favors neither of the two scenarios above, but rather points to formation of the bulge by an intense burst of star formation, in situ, a long time ago (cf. Elmegreen 1999; Ferreras, Wyse & Silk 2003). The inferred star formation rate is ≳ 10 M_⊙/yr. A possible source of the gas is ejecta from star-forming regions in the halo; the rotation of the bulge is consistent with collapse and spin-up of halo material (cf. Wyse & Gilmore 1992), and the chemical abundances are also consistent with the mass ratios (see Carney, Latham & Laird 1990 and Gilmore 1992).

2.4 Large Scale Structure of the Stellar Halo

The total stellar mass of the halo is ∼ 2 × 10^9 M_⊙ (cf. Carney, Laird & Latham 1990), modulo uncertainties in the stellar halo density profile in each of the outer halo, where substructure may dominate, and the central regions, where the bulge dominates. Some ∼ 30% of the stars in the halo are on orbits that take them through the solar neighborhood, to be identified by their 'high velocity' with respect to the Sun. These stars form a rather uniform population - old and metal-poor, with enhanced values of the elemental abundance ratio [α/Fe]. The dominant signature of enrichment by Type II supernovae indicates a short duration of star formation. This could naturally arise due to star formation and self-enrichment occurring in low-mass star-forming regions that cannot sustain extended star formation. In contrast, the typical star in a dwarf satellite galaxy now is of intermediate age, and has solar values of [α/Fe] (cf. Tolstoy et al. 2003). These differences in stellar populations between the field stellar halo and dwarf galaxies limit significant (≳ 10% by mass) accretion into the stellar halo from satellite galaxies to have occurred at high redshift only, at lookback times greater than ∼ 8 Gyr (cf. Unavane, Wyse & Gilmore 1996). Typical CDM models predict, in contrast, significant late accretion of sub-haloes, with around 40% of sub-haloes that survive reionization falling into the host galaxy at redshifts less than z = 0.5, or a look-back time of less than 6 Gyr (Bullock, Kravtsov & Weinberg 2000). Again the later accretion is preferentially to the outer parts, and to be consistent with the observations of the Milky Way, these sub-haloes must contain very few young stars, and not over-populate the outer galaxy with visible stars.

2.5 Large Scale Structure: Merging History

The overall properties of the main stellar components of the Milky Way, as discussed above, can be understood if there was little merging or accretion of stars into the Milky Way for the last ∼ 10 Gyr (cf. Wyse 2001). How does this compare with 'merger trees' of N-body simulations? As an example, the publicly available Virgo GIF ΛCDM simulations (Jenkins et al. 1998) have 26 final haloes with mass similar to that of the Milky Way - taken to be 2 × 10^12 M_⊙. Of these, only 7% have not merged with another halo of at least 20% by mass since a redshift of 2, a look-back time of ∼ 11 Gyr in this cosmology (L. Hebb, priv. comm.). A merger with these parameter values could produce a thick disk as observed in the Milky Way. None of these 'Milky Way' analogues pass a more stringent mass ratio limit of no mergers more than 10% by mass (still capable of producing a thick disk, given an appropriate orbit etc.) since a redshift of 2.
Reducing the epoch of last significant merger to unity (a lookback time of 8 Gyr) and adopting a maximum merging mass ratio of 20% makes the Milky Way more typical, with 35% of Milky Way analogues meeting these criteria. However, reducing the highest mass ratio to 10%, while maintaining this lower look-back time limit, results in only 4% of Milky Way analogue haloes passing these criteria. The Milky Way appears to be rather unusual in the ΛCDM cosmology. Note that predictions of smooth average 'universal' mass assembly histories (e.g. Wechsler et al. 2002) are not useful for this comparison, since these curves suppress the detailed information necessary to predict the effect of the mass accretion. More useful is the detailed merging history as a function of radius (cf. Helmi et al. 2003a), since ideally one would like to know what fraction of mergers can penetrate into the realm of the baryonic Galaxy.

3. The Small Scale Structure of the Stellar Components of the Galaxy

While there is no evidence for recent very significant mergers into the Milky Way, mergers are clearly happening, as best evidenced by the Sagittarius dwarf galaxy (Ibata, Gilmore & Irwin 1994, 1995; Ibata et al. 1997; see Majewski's contribution to this volume). While the present and past mass of the Sagittarius dSph are rather uncertain, its assimilation into the Milky Way is best classed as a 'minor merger', meaning a mass ratio of less than 10%. The small-scale structure in the Milky Way may reflect the minor-merger history - or may simply reflect inhomogeneities of different kinds.

3.1 Small Scale Structure in the Thin Disk

The small scale structure of the thin disk is rich and varied, and includes stellar moving groups, the scatter in the age-metallicity relationship, spiral arms, the outer 'ring' structure and the central bar. All star formation appears to occur in clusters (e.g. Elmegreen 2002), which are then subject to both internally and externally driven dynamical processes that operate to disrupt them. Some clusters dissolve almost immediately star formation is initiated, and some remain gravitationally bound for many Gyr. The creation of phase space structure is thus a natural part of the evolution of stellar disks. The scatter in the age-metallicity relationship for stars at the solar neighborhood appears to be well-established (Edvardsson et al. 1993), as is the offset between the metallicity of the Sun, and of younger stars and the interstellar medium, in the solar neighbourhood, with the Sun being more chemically enriched. These may have their origins in some combination of radial gradients and mixing (e.g. Francois & Matteucci 1993; Sellwood & Binney 2002) and infall of metal-poor gas, from the general intergalactic medium, or perhaps from satellite galaxies (e.g. Geiss et al. 2002). It should be noted that further motivation for appeal to accretion events from satellite galaxies had been found in the scatter in element ratios at a given iron abundance in the Edvardsson et al. data. However, more recent data have instead suggested that a more correct interpretation is that the elemental abundances of stars belonging to the thick disk are distinct from those belonging to the thin disk (e.g. Nissen 2003; see also Gilmore, Wyse & Kuijken 1989 and Figure 1 above), with no scatter in elemental ratios within a given component.

The low-latitude 'ring' seen in star counts (Newberg et al. 2002; Ibata et al. 2003; Bellazzini et al.
2004) in the anticenter direction at Galactocentric distances of ∼ 15 kpc may be structure in the outer disk, which has a well-established warp in the gas, and probably in stars (e.g. Carney & Seitzer 1993; Djorgovski & Sosin 1989). The recent detection of structure in HI, interpreted as a newly identified spiral arm, at just this distance (McClure-Griffiths et al. 2003) is intriguing. The ring may also be interpreted as resulting from the accretion of a satellite galaxy (e.g. Helmi et al. 2003b; Martin et al. 2004; Rocha-Pinto et al. 2003); a recent N-body hydrodynamic simulation within a ΛCDM cosmology has shown that it is possible for satellite galaxies to be accreted into a disk, provided they are massive enough for dynamical friction to circularize their orbit quickly enough (Abadi et al. 2003). The available kinematics for 'ring' stars do not discriminate between the two possibilities of satellite or outer disk (Yanny et al. 2003; Crane et al. 2003). Given the complexity of the structure of outer disks, comprehensive color-magnitude data plus metallicity distributions plus kinematics will be needed to rule out the 'Occam's Razor' interpretation of the 'ring' as being a manifestation of structure in the outer disk. The detailed structure of the thin disk will be revealed by large-scale spectroscopic surveys such as RAVE (Steinmetz 2003); the time is ripe to develop models that will distinguish between intrinsic structure due to the normal disk star formation process and other effects (cf. Freeman & Bland-Hawthorn 2002).

Small Scale Structure in the Thick Disk

As noted above, in the minor-merger scenario for formation of the thick disk, one expects the 'thick disk' to be a mixture of heated thin disk plus satellite debris. An identification of satellite debris, made on the basis of distinct kinematics, was made by Gilmore, Wyse & Norris (2002). These authors obtained radial velocities for several thousand faint (V ≲ 19.5) F/G dwarf stars, selected by photometry to be unevolved stars in the thick disk/halo interface at several kpc from the Sun, in key intermediate-latitude lines of sight that probe orbital rotational velocity (particularly ℓ = 270°). They found that the mean lag behind the Sun's azimuthal streaming velocity was significantly larger for the fainter stars than for the brighter stars (see Figure 2). Either there are (discontinuous?) steep kinematic gradients within the thick disk (cf. Majewski 1993), or a separate population exists. In the latter case, a viable explanation would be stars from a shredded satellite. Indeed, these stars have low metallicities, typically −1.5 dex (Norris et al., in prep), a factor of ten below a typical thick disk star (e.g. Gilmore, Wyse & Jones 1995), but typical of the old population in dwarf galaxies. However, the values of critical defining parameters for the 'canonical' thick disk, probed locally, remain variable from study to study. For example, the 'accepted' value for the rotational lag is around 40 km/s (e.g. Carney, Laird & Latham 1989), but values as low as 20 km/s (Chiba & Beers 2000) and as high as 80 km/s (Fuhrmann 2000) have been reported. Some of this variation is undoubtedly due to the difficulty of deconvolving a complex mix of populations. The thin disk will dominate any local sample, and comparison with distant in situ surveys will help (cf. the technique of Wyse & Gilmore 1995), as will using the discrimination inherent in the distinct elemental abundances of thick and thin disk stars (cf. Nissen 2003).
Again, large statistically significant samples in key lines of sight, so that the tails of the distribution functions are well defined, are needed.

Small Scale Structure of the Bulge

The bulge is clearly triaxial, but estimates of its three-dimensional structure are hindered by dust extinction, projection effects and the uncertainties in the structure of the disk along the line of sight (e.g. the spiral arm pattern). The inner bulge, within ∼ 1 kpc of the center, appears symmetric in deep infrared images taken with the ISO satellite (van Loon et al. 2003). The best-fitting bar model (Bissantz & Gerhard 2002) to the COBE data has axial ratios 1:0.3-0.4:0.3 (i.e. barely triaxial) and a length of ∼ 3.5 kpc. The effects of the bar potential may be the cause of the asymmetric stellar kinematics found by Parker, Humphreys & Beers (2003) in samples of stars on either side of the Galactic Center.

Stellar Halo Small Scale Structure

Structure in coordinate space mixes and dissolves on dynamical timescales. The outer regions of the halo, say at Galactocentric distances of greater than 15 kpc where dynamical timescales are ≳ 1 Gyr, are thus most likely to host observable substructure. Indeed, as discussed more fully in Majewski's contribution to this volume, several streams are found in the outer halo, in both coordinate space and kinematics. The vast majority of the confirmed structure is due to a single system, the Sagittarius dwarf spheroidal (e.g. Ibata et al. 2001; Dohm-Palmer et al. 2001; Majewski et al. 2003; Newberg et al. 2002; Newberg et al. 2003). This contrasts with the predictions of many disrupted satellites in CDM models (e.g. Bullock et al. 2000). The present mass of the Sagittarius dwarf is uncertain and model-dependent, but most estimates are within a factor of three of 10⁹ M⊙ (Ibata et al. 1997; Majewski et al. 2003). The mass lost by it to the halo is also model-dependent; presently identified streams are perhaps 15% of the remaining bound mass. The evolutionary history of the Sagittarius dwarf is as yet unclear and much work remains to be done. Tidal streams can be, and are, also associated with dynamically evolving globular clusters. The excellent photometry from the Sloan Digital Sky Survey has allowed the tracing of extended, thin arms from the outer halo globular cluster Palomar 5 over 10 degrees across the sky (Odenkirchen et al. 2003; see Figure 3). Streams are rare in the inner halo (which contains most of the stellar mass!). Simulations suggest that signatures in phase space, particularly if integrals of the motion can be estimated, can survive for ∼ a Hubble time. A moving group has indeed been isolated (Helmi et al. 1999), but its mass is uncertain (see Chiba & Beers 2000), as is its origin: perhaps it is even associated with the Sagittarius dwarf (e.g. Majewski et al. 2003). No structure is seen in coordinate space in the inner halo (within a few kpc of the Sun); the two-point correlation function for main-sequence stars brighter than V = 19 is flat (Gilmore, Reid & Hewett 1985; Lemon et al. 2003). This rules out significant recent accretion events that penetrate into the inner Galaxy, and ongoing disruption of inner globular clusters. Other tests for substructure show low-significance features consistent with known streams from the Sagittarius dwarf (Lemon et al. 2003), in agreement with results from blue horizontal branch stars (Sirko et al. 2003).
Concluding Remarks

The properties of the stellar populations of the Milky Way contain much information about the star formation history and mass assembly history of the Galaxy. The Milky Way has merged with, is merging with, and will merge with, companion galaxies, which contribute stars, gas and dark matter. Debris from the Sagittarius dwarf galaxy dominates recent accretion into the outer Galaxy, while the data are consistent with little stellar accretion into the inner Galaxy, including the disk. Predominantly gaseous accretion is relatively unconstrained, and is favoured by models of chemical evolution (cf. Tosi's contribution). Planned and ongoing large spectroscopic surveys will tightly constrain the existence and origins of stellar phase-space substructure. The relatively quiescent merging history of the Milky Way that is implied by the mean properties of the stellar components is rather atypical in ΛCDM cosmologies. What about the rest of the Local Group?

Figure 1. Taken from Feltzing et al. 2003, their Figure 2. Filled symbols represent stars whose kinematics are consistent with membership of the thick disk, while open symbols represent thin disk stars. The uncertainties in Mg abundance are indicated by the error bars; uncertainties in Fe are smaller than the symbol sizes. At a given value of [Fe/H], the thick and thin disk stars are separated, with thick disk stars having higher [Mg/Fe]. At the typical thick disk metallicity, [Fe/H] ∼ −0.5 dex, the value of [Mg/Fe] in thick disk stars is equal to that seen in the stellar halo, and consistent with enrichment by Type II supernovae. More metal-rich thick disk stars show some enrichment by iron-dominated ejecta from Type Ia supernovae.

… Feltzing & Gilmore 2000 (HST); van Loon et al. 2003 (ISO); Zoccali et al. 2003 (HST), confirming earlier conclusions from ground-based data.

Figure 2. Modified from Gilmore, Wyse & Norris 2002. In each panel the solid histograms are observational data for faint (V ≲ 19.5) F/G stars in lines of sight where, at these distances, the line-of-sight velocity probes ∼ 0.7-0.8 of the azimuthal streaming velocity. The dashed histogram is a model; in the upper panel the model is derived from standard local 'thick disk' kinematics, which provide a good fit to the brighter stars, while the model in the lower panel has a significantly higher lag in rotation behind the Sun. This provides a significantly better fit to the data.

Figure 3. Taken from Odenkirchen et al. 2003, their Figure 3. The contours show the surface density of stars that are selected from their photometry to be members of Pal 5. There are clearly streams associated with this globular cluster. The arrow extending from the core of Pal 5 indicates the estimated direction of its orbit.

…photometry, the stars are on average quite enriched and of intermediate age (cf. the discovery paper of Ibata, Gilmore & Irwin 1994, where the member stars were clearly distinguished from the bulge field stars; see also Layden & Sarajedini 2000 and Cole 2001). The overall metallicity distribution of the Sagittarius dwarf spheroidal galaxy is not well defined, but it contains a significant population of stars with metallicity as high as the solar value (Bonifacio et al. 2000; Smecker-Hane & McWilliam 2003). One of its globular clusters, Terzian 7, has a metallicity equal to that of 47 Tuc, but an age several Gyr younger (Buonanno et al. 1995), and thus, by inference, several Gyr younger than the thick disk stars of the same metallicity.
The derived age-metallicity relationship for the Sgr dSph, based on both the CMD (Layden & Sarajedini 2000) and spectroscopy of selected red giants (Smecker-Hane & McWilliam 2003), is consistent with stars more metal-rich than [Fe/H] = −0.7 dex being less than 8 Gyr old. Further, these stars have essentially solar values of the ratio [α/Fe] (Bonifacio et al. 2000; Smecker-Hane & McWilliam 2003), and are thus different in several important properties from the local thick disk stars. The Ursa Minor dSph is the only satellite galaxy of the present retinue that contains only old stars, and thus has an age distribution similar to that of the local thick disk. However, these stars are exclusively metal-poor, [Fe/H] ∼ −2 dex, and again not a good match to the thick disk stars.

References

Abadi, M., Navarro, J., Steinmetz, M. & Eke, V. 2003 Simulations of galaxy formation in a Lambda CDM Universe II: The fine structure of simulated galactic disks. Astrophys. J. 597, 21-34.
Bellazzini, M., Ibata, R., Monaco, L., Martin, L., Irwin, M. & Lewis, G. 2003 The moon behind the finger. Detection of the Canis Major galaxy in the background of galactic open clusters. Mon. Not. Roy. Astr. Soc. submitted (astro-ph/0311119).
Benson, A., Lacey, C., Frenk, C., Baugh, C. & Cole, S. 2003 Heating of galactic disks by infalling satellites. Mon. Not. Roy. Astr. Soc. submitted (astro-ph/0307298).
Binney, J., Dehnen, W. & Bertelli, G. 2000 The age of the solar neighbourhood. Mon. Not. Roy. Astr. Soc. 318, 658-664.
Bissantz, N. & Gerhard, O. 2002 Spiral arms, bar shape and bulge microlensing in the Milky Way. Mon. Not. Roy. Astr. Soc. 330, 591-608.
Bonifacio, P., Hill, V., Molari, P., Pasquini, L., DiMarcantonio, P. & Santini, P. 2000 First results of UVES at VLT: Abundances in the Sgr dSph. Astron. Astrophys. 359, 663-668.
Buonanno, R. et al. 1995 Terzian 7: A young metal-rich globular cluster in the Milky Way. Astron. J. 109, 663-671.
Burkert, A., Truran, J. & Hensler, G. 1992 The collapse of our Galaxy and the formation of the Galactic disk. Astrophys. J. 391, 651-658.
Bullock, J., Kravtsov, A. & Weinberg, D. 2000 Reionization and the Abundance of Galactic Satellites. Astrophys. J. 539, 517-521.
Carney, B., Latham, D. & Laird, J. 1989 A survey of proper-motion stars VIII: On the Galaxy's third population. Astron. J. 97, 423-430.
Carney, B., Latham, D. & Laird, J. 1990 A survey of proper-motion stars X: The early evolution of the Galaxy's halo. Astron. J. 99, 527-589.
Carney, B. & Seitzer, P. 1993 Optical detection of the Galaxy's southern stellar warp and outer disk. Astron. J. 105, 2127-2137.
Chiba, M. & Beers, T.C. 2000 Kinematics of metal-poor stars in the Galaxy. III. Formation of the stellar halo and thick disk. Astron. J. 119, 2843-2865.
Cole, A. 2001 The 2MASS CMD of the center of the Sagittarius dwarf galaxy. Astrophys. J. 559, L17-L20.
Cole, A., Smecker-Hane, T. & Gallagher, J. 2000 The metallicity distribution function of red giants in the LMC. Astron. J. 120, 1808-1827.
Cole, S., Lacey, C., Baugh, C. & Frenk, C.S. 2000 Hierarchical galaxy formation. Mon. Not. Roy. Astr. Soc. 319, 168-204.
Da Costa, G. 1999 The dwarf spheroidal galaxies in the Galactic halo. In Third Stromlo Symposium: The Galactic Halo (ed. B. Gibson et al.), Astron. Soc. Pacific, San Francisco, 153-166.
Crane, J., Majewski, S., Rocha-Pinto, H., Frinchaboy, P., Strutskie, M. & Law, D. 2003 Exploring halo substructure with giant stars: Spectroscopy of stars in the Galactic anti-center stellar structure. Astrophys. J. 594, L119-L122.
De Marchi, G., Paresce, F., Straniero, O. & Prada Moroni, P. 2003 On the age and mass function of the globular cluster M4: a different interpretation of recent deep HST observations. Astron. Astrophys. submitted (astro-ph/03106646).
Dickinson, M. 2000 Galaxy evolution at 0 < z < 2 from the NICMOS HDF North. In Building galaxies from the primordial Universe to the present, XIXth Moriond meeting (ed. Hammer et al.), World Scientific Publishing, p. 257.
Djorgovski, G. & Sosin, C. 1989 The warp of the Galactic stellar disk detected in IRAS source counts. Astrophys. J. 341, L13-L16.
Dohm-Palmer, R. et al. 2001 Mapping the Galactic halo V. Sagittarius dwarf spheroidal tidal debris 60° from the main body. Astrophys. J. 555, L37-L40.
Edvardsson, B., Andersen, J., Gustafsson, B., Lambert, D.L., Nissen, P.E. & Tomkin, J. 1993 The chemical evolution of the Galactic disk. I. Analysis and results. Astron. Astrophys. 275, 101.
Eke, V., Efstathiou, G. & Wright, L. 2000 The cosmological dependence of galactic specific angular momenta. Mon. Not. Roy. Astr. Soc. 315, L18-L22.
Elmegreen, B. 1999 Galactic bulge formation as a maximum intensity starburst. Astrophys. J. 517, 103-107.
Elmegreen, B. 2002 Star formation from galaxies to globules. Astrophys. J. 557, 206-220.
Fall, S.M. & Efstathiou, G. 1980 Formation and rotation of disk galaxies with haloes. Mon. Not. Roy. Astr. Soc. 193, 189-206.
Feltzing, S. & Gilmore, G. 2000 Age and metallicity gradients in the Galactic bulge. A differential study using HST/WFPC2. Astron. Astrophys. 355, 949-965.
Feltzing, S., Bensby, T. & Lundström, I. 2003 Signatures of SNIa in the Galactic thick disk. Astron. Astrophys. 397, L1-L4.
Ferguson, A.M.N. & Johnson, R. 2001 Constraints on galaxy formation from stars in the far outer disk of M31. Astrophys. J. 559, L13-L16.
Ferreras, I., Wyse, R.F.G. & Silk, J. 2003 The formation history of the Galactic bulge. Mon. Not. Roy. Astr. Soc. 345, 1381-1391.
Francois, P. & Matteucci, F. 1993 On the abundance spread in solar neighbourhood stars. Astron. Astrophys. 280, 136-140.
Freeman, K. & Bland-Hawthorn, J. 2002 The new Galaxy: Signatures of its formation. Ann. Rev. Astron. Astrophys. 40, 487-537.
Fuhrmann, K. 1998 Nearby stars of the Galactic disk and halo. Astron. Astrophys. 338, 161-183.
Fuhrmann, K. 2000 Nearby stars of the Galactic disk and halo. II. http://www.xray.mpe.mpg.de/~fuhrmann
Geiss, J., Gloeckler, G. & Charbonnel, C. 2002 Chemical evolution in our Galaxy during the last 5 Gyr. Astrophys. J. 578, 862-867.
Gilmore, G. & Reid, I.N. 1983 New light on faint stars III. Galactic structure towards the South Pole and the Galactic thick disk. Mon. Not. Roy. Astr. Soc. 202, 1025-1047.
Gilmore, G., Reid, I.N. & Hewett, P. 1985 New light on faint stars VII. Luminosity and mass distributions in two high Galactic latitude fields. Mon. Not. Roy. Astr. Soc. 213, 257-278.
Gilmore, G. & Wyse, R.F.G. 1991 Chemical evolution with bursts of star formation - Element ratios in dwarf galaxies. Astrophys. J. 367, L55-L58.
Gilmore, G., Wyse, R.F.G. & Jones, J.B. 1995 A determination of the thick disk chemical abundance distribution: Implications for galaxy evolution. Astron. J. 109, 1095-1111.
Gilmore, G., Wyse, R.F.G. & Kuijken, K. 1989 Kinematics, chemistry and structure of the Galaxy. Ann. Rev. Astron. Astrophys. 27, 555-627.
Gilmore, G., Wyse, R.F.G. & Norris, J. 2002 Deciphering the last major invasion of the Milky Way. Astrophys. J. 574, L39-L42.
Hansen, B. et al. 2002 The white dwarf cooling sequence of the globular cluster M4. Astrophys. J. 574, L155-L158.
Hasan, H. & Norman, C. 1990 Chaotic orbits in barred galaxies with central mass concentrations. Astrophys. J. 361, 69-77.
Helmi, A., White, S.D.M., de Zeeuw, P.T. & Zhao, H.-S. 1999 Debris streams in the solar neighbourhood as relicts from the formation of the Milky Way. Nature 402, 53-55.
Helmi, A., White, S.D.M. & Springel, V. 2003a The phase-space structure of cold dark matter haloes: insights into the Galactic halo. Mon. Not. Roy. Astr. Soc. 339, 834-848.
Helmi, A., Navarro, J., Meza, A., Steinmetz, M. & Eke, V. 2003b On the nature of the ringlike structure in the outer Galactic disk. Astrophys. J. 592, L25-L28.
Hernandez, X., Valls-Gabaud, D. & Gilmore, G. 2000 The recent star formation history of the HIPPARCOS solar neighbourhood. Mon. Not. Roy. Astr. Soc. 316, 605-612.
Ibata, R. & Gilmore, G. 1995 The outer regions of the Galactic bulge: II. Analysis. Mon. Not. Roy. Astr. Soc. 275, 605-627.
Ibata, R., Gilmore, G. & Irwin, M. 1994 A dwarf satellite galaxy in Sagittarius. Nature 370, 194.
Ibata, R., Lewis, G., Irwin, M., Totten, E. & Quinn, T. 2001 Great circle tidal streams: Evidence for a nearly spherical massive dark halo around the Milky Way. Astrophys. J. 551, 294-311.
Ibata, R., Gilmore, G. & Irwin, M. 1995 Sagittarius, the nearest dwarf galaxy. Mon. Not. Roy. Astr. Soc. 277, 781-800.
Ibata, R., Irwin, M., Lewis, G., Ferguson, A. & Tanvir, N. 2003 One ring to encompass them all: a giant stellar structure that surrounds the Galaxy. Mon. Not. Roy. Astr. Soc. 340, L21-L27.
Ibata, R., Wyse, R.F.G., Gilmore, G., Irwin, M. & Suntzeff, N. 1997 The kinematics, orbit and survival of the Sagittarius dwarf spheroidal galaxy. Astron. J. 113, 634-655.
Jenkins, A. et al. 1998 Evolution of structure in cold dark matter universes. Astrophys. J. 499, 20.
Jimenez, R., Flynn, C. & Kovonta, E. 1998 HIPPARCOS and the age of the Galactic disk. Mon. Not. Roy. Astr. Soc. 299, 515-519.
Kauffmann, G. 1996 The age of elliptical galaxies and bulges in a merger model. Mon. Not. Roy. Astr. Soc. 281, 487-492.
Knox, R., Hawkins, M. & Hambly, N. 1999 A survey for cool white dwarfs and the age of the Galactic disk. Mon. Not. Roy. Astr. Soc. 306, 736-752.
Kotoneva, E., Flynn, C., Matteucci, F. & Chiappini, C. 2002 K dwarfs and the chemical evolution of the solar cylinder. Mon. Not. Roy. Astr. Soc. 336, 879-891.
Larson, R.B. 1972 Effect of infalling matter on the heavy element content of a galaxy. Nature Phys. Sci. 236, 7-8.
Layden, A. & Sarajedini, A. 2000 Photometry of the globular cluster M54 and the Sagittarius dwarf galaxy: The age-metallicity relation. Astron. J. 119, 1760-1792.
Leggett, S., Ruiz, M. & Bergeron, P. 1998 The cool white dwarf luminosity function and the age of the Galactic disk. Astrophys. J. 497, 294.
Lemon, D., Wyse, R.F.G., Liske, J., Driver, S. & Horne, K. 2003 The Millennium Galaxy Catalogue: Star counts and the structure of the Galactic halo. Mon. Not. Roy. Astr. Soc. in press (astro-ph/0308200).
Majewski, S.R. 1993 Galactic structure surveys and the evolution of the Milky Way. Ann. Rev. Astron. Astrophys. 31, 575-638.
Majewski, S., Strutskie, M., Weinberg, M. & Ostheimer, J. 2003 A 2MASS all-sky view of the Sagittarius dwarf galaxy I. Morphology of the Sagittarius core and tidal arms. Astrophys. J. 599, 1082-1115.
Martin, N., Ibata, R., Bellazzini, M., Irwin, M., Lewis, G. & Dehnen, W. 2003 A dwarf galaxy remnant in Canis Major: the fossil of an in-plane accretion onto the Milky Way. Mon. Not. Roy. Astr. Soc. in press (astro-ph/0311010).
McWilliam, A. & Rich, R.M. 1994 The first detailed abundance analysis of Galactic bulge K giants in Baade's window. Astrophys. J. Supp. 91, 749-791.
McWilliam, A. & Rich, R.M. 2003 Composition of the Galactic bulge. In Origin and evolution of the elements, Carnegie Observatories Astrophysics series, vol. 4 (eds A. McWilliam & M. Rauch), Cambridge University Press, Cambridge (astro-ph/0312628).
McClure-Griffiths, N., Dickey, J., Gaensler, B. & Green, A. 2003 A new arm for our Galaxy? Astrophys. J. Letters submitted (see http://www.atnf.csiro.au/news/press/spiralarm/).
Merrifield, M. 2003 The Galactic bar. In Milky Way surveys: the structure and evolution of our Galaxy (eds. D. Clemens, T. Brainerd & R. Shah), ASP Conf. Proc., ASP, San Francisco (astro-ph/0308302).
Navarro, J. & Steinmetz, M. 1997 The effects of a photoionizing ultraviolet background on the formation of disk galaxies. Astrophys. J. 478, 13-28.
Navarro, J., Frenk, C.S. & White, S.D.M. 1995 The assembly of galaxies in a hierarchically clustering universe. Mon. Not. Roy. Astr. Soc. 275, 56-66.
Newberg, H. et al. 2003 Sagittarius tidal debris 90 kpc from the Galactic center. Astrophys. J. 596, L191-L194.
Newberg, H. et al. 2002 The ghost of Sagittarius and lumps in the halo of the Milky Way. Astrophys. J. 569, 245-274.
Odenkirchen, M. et al. 2003 The extended tails of Palomar 5: A 10° arc of globular cluster tidal debris. Astron. J. 126, 2385-2407.
Nissen, P. 2003 Thin and thick Galactic disks. In Origin and evolution of the elements, Carnegie Observatories Astrophysics series, vol. 4 (eds A. McWilliam & M. Rauch), Cambridge University Press, Cambridge (astro-ph/0310326).
Ortolani, S., Renzini, A., Gilmozzi, R., Marconi, G., Barbuy, B., Bica, E. & Rich, R.M. 1995 Near coeval formation of the Galactic bulge and halo. Nature 377, 701-703.
Pagel, B. & Patchett, B. 1975 Metal abundances in nearby stars and the chemical history of the solar neighbourhood. Mon. Not. Roy. Astr. Soc. 172, 13-40.
Parker, J., Humphreys, R. & Beers, T. 2003 The asymmetric thick disk: A star count and kinematic analysis. Astron. J. in press (astro-ph/0312017).
Prochaska, J.X., Naumov, S., Carney, B., McWilliam, A. & Wolfe, A. 2000 The Galactic thick disk stellar abundances. Astron. J. 120, 2513-2549.
Quinn, P., Hernquist, L. & Fullagar, D. 1993 Heating of galactic disks by mergers. Astrophys. J. 403, 74-93.
Raha, N., Sellwood, J., James, R. & Kahn, F. 1991 A dynamical instability of bars in disk galaxies. Nature 352, 411-412.
Rocha-Pinto, H.J., Scalo, J., Maciel, W.J. & Flynn, C. 2000 Chemical enrichment and star formation in the Milky Way disk. II. Star formation history. Astron. Astrophys. 358, 869-885.
Rocha-Pinto, H.J., Majewski, S., Skrutskie, M. & Crane, J. 2003 Tracing the Galactic anticenter stellar stream with 2MASS M giants. Astrophys. J. 594, L115-L118.
Sadler, E., Rich, R.M. & Terndrup, D. 1996 K giants in Baade's window II. The abundance distribution. Astron. J. 112, 171-190.
Sandage, A., Lubin, L. & VandenBerg, D.A. 2003 The age of the oldest stars in the local Galactic disk from HIPPARCOS parallaxes of G and K subgiants. Pub. Astr. Soc. Pac. 115, 1187-1206.
Sellwood, J. & Binney, J. 2002 Radial mixing in galactic discs. Mon. Not. Roy. Astr. Soc. 336, 785-796.
Sevenster, M. 1999 Something about the structure of the Galaxy. Mon. Not. Roy. Astr. Soc. 310, 629-644.
Sirko, E. et al. 2003 BHB stars in the SDSS I. Sample selection and structure in the Galactic halo. Astron. J. submitted (astro-ph/0311324).
Smecker-Hane, T. & McWilliam, A. 2003 The complex chemical abundances and evolution of the Sagittarius dwarf spheroidal galaxy. Astrophys. J. accepted (astro-ph/0205411).
Smith, V. et al. 2002 Chemical abundances in 12 red giants of the LMC from high-resolution IR spectroscopy. Astron. J. 124, 3241-3254.
Steinmetz, M. 2003 RAVE, the RAdial Velocity Experiment. In GAIA spectroscopy, science and technology (ed. U. Munnari), ASP Conf. Proc. vol. 298, ASP, San Francisco, 381-386.
Stockton, A., Canalizo, G. & Maihara, T. 2003 A disk galaxy of old stars at z ∼ 2.5. Astrophys. J. submitted (astro-ph/0312550).
Tolstoy, E., Venn, K., Shetrone, M., Primas, F., Hill, V., Kaufer, A. & Szeifert, T. 2003 VLT/UVES abundances in four nearby dwarf spheroidal galaxies II. Implications for understanding galaxy evolution. Astron. J. 125, 707-726.
Unavane, M., Wyse, R.F.G. & Gilmore, G. 1996 The merging history of the Milky Way. Mon. Not. Roy. Astr. Soc. 278, 727-736.
van den Bergh, S. 1962 The frequency of stars with different metal abundances. Astron. J. 67, 486-490.
van Loon, J. et al. 2003 Infrared stellar populations in the central parts of the Milky Way galaxy. Mon. Not. Roy. Astr. Soc. 338, 857-879.
Velazquez, V. & White, S.D.M. 1999 Sinking satellites and the heating of galaxy discs. Mon. Not. Roy. Astr. Soc. 304, 254-270.
Wechsler, R., Bullock, J., Primack, J., Kravsov, A. & Dekel, A. 2002 Concentration of halos from their assembly history. Astrophys. J. 568, 52-70.
Wyse, R.F.G. 2000 Formation Scenarios. In The Galactic Halo: From Globular Clusters to Field Stars (ed. A. Noels et al.), Institut d'Astrophysique et de Geophysique, Liège, 305-322.
Wyse, R.F.G. 2001 The merging history of the Milky Way disk. In Galactic disks and disk galaxies (eds. J. Funes & E. Corsini), ASP Conference series vol. 230, ASP, San Francisco, 71-80.
Wyse, R.F.G. & Gilmore, G. 1992 Formation and evolution of the Galactic bulge and spheroid - Where did the spheroid gas go? Astron. J. 104, 114-153.
Wyse, R.F.G. & Gilmore, G. 1995 Chemistry and kinematics in the solar neighbourhood. Astron. J. 110, 2771.
Yanny, B. et al. 2003 A low-latitude halo stream around the Milky Way. Astrophys. J. 588, 824-841.
Zhang, B., Wyse, R.F.G., Stiavelli, M. & Silk, J. 2002 The dynamical evolution of substructure. Mon. Not. Roy. Astr. Soc. 332, 647-675.
Zoccali, M. et al. 2003 Age and metallicity distribution of the galactic bulge from extensive optical and near-IR stellar photometry. Astron. Astrophys. 399, 931-956.
Zurek, W., Quinn, P.J. & Salmon, J. 1988 Rotation of halos in open and closed universes - Differentiated merging and natural selection of galaxy types. Astrophys. J. 330, 519-534.
Superconducting Coherence and the Helicity Modulus in Vortex Line Models

Jack Lidmar and Mats Wallin
Department of Theoretical Physics, Royal Institute of Technology, SE-100 44 Stockholm, Sweden
(21 Dec 1998; arXiv:cond-mat/9812343)

We show how commonly used models for vortex lines in three dimensional superconductors can be modified to include k = 0 excitations. We construct a formula for the k = 0 helicity modulus in terms of fluctuations in the projected area of vortex loops. This gives a convenient criterion for the presence of superconducting coherence. We also present Monte Carlo simulations of a continuum vortex line model for the melting of the Abrikosov vortex lattice in pure YBCO.

PACS numbers: 74.60.-w (Type-II Sup.), 05.70.Fh (Phase Trans.), 75.40.Mg (Num. Simulations)

Phase transitions involving vortices in high temperature superconductors are the subject of intense study both experimentally and theoretically [1]. The enhanced thermal fluctuations strongly alter large parts of the mean field phase diagram [2,3], with new phases appearing, e.g., vortex line liquids, vortex glass phases, etc. A convenient quantity to study theoretically is the helicity modulus Υ (or spin-wave stiffness), which measures the free-energy increment associated with an externally imposed twist in the phase of the superconducting order parameter [4], and is proportional to the macroscopic superfluid density. Much work has been based on XY-like models, defined in terms of this phase, where vortices appear only as topological defects. However, a formulation directly in terms of vortex degrees of freedom has several advantages. In this paper we show how the uniform helicity modulus can be defined directly in terms of the vortex lines, without reference to the phase. We also show Monte Carlo results for Υ and other quantities in a continuum model of interacting vortex lines.

One of the advantages of the vortex representation is the possibility to define the model on a continuum, and so avoid artificial pinning to a discretization lattice. Furthermore, one may include interactions coupling directly to the vortex lines, such as core energies and various types of disorder. The vortex representation therefore allows new parameter regimes to be reached compared to the phase representation. Both representations are frequently used in computer simulations [5-8]. Υ is straightforward to calculate in the phase representation [5]. However, in the vortex representation usually only k ≠ 0 fluctuations are included, making the uniform response Υ exactly zero, so that extrapolations from finite k become necessary [6]. Furthermore, when screening from gauge field fluctuations is taken into account, Υ(k) ∼ k² for small but finite k, since an imposed phase twist can be compensated by the gauge field. Extrapolation to k = 0 then gives zero, but the small-k behavior of Υ(k) may still be related to the Meissner effect in a superconductor and used to detect phase transitions [6]. The methods mentioned above involve extrapolations from the smallest available wave vectors to k = 0, thereby severely complicating the data analysis. An alternative may be to study winding number fluctuations, which are related to the magnetic permeability µ, but they suffer from being difficult to equilibrate for large system sizes.
In this paper we take a different route by modifying the vortex model in order to incorporate fluctuations with zero wave vector. The form of this modification is obtained using a duality transformation between the phase and vortex representations, paying due attention to the role of the boundary conditions. We show that periodic boundary conditions for the phases enter as an additional term in the Hamiltonian of the vortex representation. This allows direct evaluation of the k = 0 helicity modulus in terms of fluctuations of the total net area of vortex loops, which can indeed be finite also in the presence of screening. The role of boundary conditions in the duality transformation has previously been explored in 2D lattice models [9-11] and 3D gauge glass models [12]. Here we generalize this idea to continuous 3D systems, with finite magnetic field, penetration depth, and temperature. Furthermore, we report on Monte Carlo simulations of a continuum London model. In contrast to previous continuum simulations, which used 2D Bose models with planar interaction [8], we take into account the full 3D long range interaction. Our model has the essential features to describe the vortex lattice melting transition in pure YBCO, where a continuum description should apply.

The starting point for our discussion is the Ginzburg-Landau theory in the London limit, where amplitude fluctuations of the superconducting order parameter Ψ = |Ψ| exp(iθ) are neglected. For simplicity we will use an isotropic continuum description, since our results are independent of microscopic details. The generalization to other cases is straightforward. The Hamiltonian reads

$$H = \int_\Omega d^3r \left[ \frac{J}{2}\left(\nabla\theta - \frac{2\pi}{\Phi_0}\mathbf{A}\right)^2 + \frac{B^2}{8\pi} - \frac{\mathbf{B}\cdot\mathbf{H}}{4\pi} \right], \qquad (1)$$

where Ω = L_x L_y L_z is the size of the system, θ(r) is the phase of the superconducting order parameter, B = ∇ × A is the magnetic flux density, H is an externally applied magnetic field, Φ₀ = hc/(2e) is the flux quantum, and J = Φ₀²/(16π³λ₀²) is a coupling constant with λ₀ the bare magnetic penetration depth. The first term in Eq. (1) is the kinetic energy with the superfluid velocity v = ∇θ − (2π/Φ₀)A, while the second describes the magnetic energy. The partition function is obtained by integrating over the phases θ(r) and gauge field A(r), subject to some gauge fixing condition: $Z = \int \mathcal{D}\theta\, \mathcal{D}'\!A\; e^{-\beta H}$. In order to get a finite result a short distance regularization has to be imposed, e.g., by defining the model on a lattice. Physically this cutoff is of the order of the Ginzburg-Landau coherence length ξ₀, and gives the size of the vortex cores.

We now discuss the transformation of the model, Eq. (1), to a system of interacting vortex lines in some detail. For simplicity we start by considering the case without any externally applied magnetic field, H = 0. The interaction can be linearized by an integration over an auxiliary field b(r), upon which the kinetic energy becomes $\int_\Omega d^3r\, J\left(i\,\mathbf{b}\cdot\mathbf{v} + \tfrac{1}{2}\mathbf{b}^2\right)$. The superfluid velocity splits into a longitudinal part, describing the smooth spin-wave fluctuations of the order parameter, and a transverse part describing the singular vortices: $\mathbf{v} = \mathbf{v}_\parallel + \mathbf{v}_\perp$. Integrating over the longitudinal part leads to the constraint ∇ · b = 0, which can be enforced by setting b = ∇ × a. After a partial integration of the first term and subsequent integration over B the Hamiltonian becomes

$$H = \int_\Omega d^3r\, J\left[ 2\pi i\,\mathbf{a}\cdot\mathbf{m} + \tfrac{1}{2}(\nabla\times\mathbf{a})^2 + \frac{1}{2\lambda_0^2}\mathbf{a}_\perp^2 \right], \qquad (2)$$

where a⊥ is the transverse part of a and m(r) denotes the vorticity, ∇ × ∇θ = 2πm.
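For readers who want the linearization step spelled out, the following display (our addition, not part of the original text) records the Gaussian identity behind the auxiliary-field trick, with normalization constants omitted; it follows by completing the square in b and is applied independently at each point r:

$$\int d^3b\; e^{-\beta J\left(i\,\mathbf{b}\cdot\mathbf{v} + \frac{1}{2}\mathbf{b}^2\right)} = \int d^3b\; e^{-\frac{\beta J}{2}\left(\mathbf{b} + i\mathbf{v}\right)^2 \,-\, \frac{\beta J}{2}\mathbf{v}^2} \;\propto\; e^{-\frac{\beta J}{2}\mathbf{v}^2},$$

so integrating over b indeed reproduces the kinetic term of Eq. (1).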
Integrating out the auxiliary field a leaves us with the vortex Hamiltonian [13,6,1]

$$H = \int_\Omega \frac{K}{2}\, \mathbf{m}(\mathbf{r}) \cdot V(\mathbf{r}-\mathbf{r}')\, \mathbf{m}(\mathbf{r}')\, d^3r\, d^3r', \qquad (3)$$

where K = (2π)²J and V is the London interaction

$$V(\mathbf{r}) = \frac{1}{\Omega} \sum_{\mathbf{k}} \frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{k^2 + \lambda_0^{-2}}, \qquad (4)$$

with k_µ = 2πn_µ/L_µ, n_µ ∈ ℤ, µ = x, y, z. With periodic boundary conditions for the phases θ we also get the constraint of zero net vorticity, $\int_\Omega \mathbf{m}(\mathbf{r})\, d^3r = 0$.

In going from Eq. (1) to Eq. (2) we implicitly assumed that v had no k = 0 component, allowing us to throw away a surface integral. However, if the phases obey periodic boundary conditions, θ(r + L_µ) = θ(r) (where L_µ = L_µ e_µ), there will be an additional energy term coming from uniform fluctuations of v. An important point here is that the integration over A should not include the uniform part A₀, since such fluctuations correspond to fluctuations in the boundary conditions. The additional energy is simply $H' = \frac{J}{2\Omega}\, v_0^2$, with $\mathbf{v}_0 = \int_\Omega \mathbf{v}(\mathbf{r})\, d^3r$. This is now related to the vortices as follows. The contribution to the vorticity m from a single vortex loop can be written

$$\mathbf{m}(\mathbf{r}) = \oint_\Gamma \delta(\mathbf{r}-\mathbf{r}')\, d\mathbf{r}' = \nabla \times \int_S \delta(\mathbf{r}-\mathbf{r}')\, d\mathbf{S}', \qquad (5)$$

where Γ is a contour describing the vortex loop, and S denotes an oriented surface which has Γ as a boundary. Summing over all vortex loops gives the total vortex density. Now, since the vorticity is the rotation of the superfluid velocity, we may define

$$\mathbf{Q} = \frac{1}{2\pi} \int_\Omega \mathbf{v}\, d^3r = \sum_i \int_{S_i} d\mathbf{S}_i, \qquad (6)$$

where the sum is over all vortices. This has the interpretation of the total projected net area of the vortex loops in each direction. Due to the periodic boundary conditions, the value of Q is uniquely determined by the positions of the vortices only up to an integer multiple of Ω/L_µ, reflecting the need to specify the total phase twist of the system. Thus, the variable Q keeps track of the total phase twist of the system and must be independently specified, in addition to the vortices, in order to completely specify the state of the system. The additional k = 0 component of the energy is now given by

$$H' = \frac{K Q^2}{2\Omega}, \qquad (7)$$

and the total Hamiltonian is given by the sum of Eqs. (3) and (7): H_tot = H + H'.

External magnetic field.- Assume now that n flux quanta Φ₀ penetrate the system in the z-direction. In this case the periodic boundary conditions for the phases have to be changed so that the system can accommodate a net number of vortices. For the vortices penetrating the whole system the area Q should now be measured with respect to a given reference line at some arbitrary but fixed position determined by the boundary conditions. A possible choice is to let $\theta(\mathbf{r} + \mathbf{L}_\mu) - \theta(\mathbf{r}) = n\pi - \frac{2\pi}{\Phi_0}\int_{\mathbf{r}}^{\mathbf{r}+\mathbf{L}_\mu} \mathbf{A}\cdot d\mathbf{r}'$, with the integral taken along a straight line across the system. In the gauge ∇ · A = 0 we may write $\mathbf{A}(\mathbf{r}) = \mathbf{A}_{\rm per}(\mathbf{r}) + \frac{1}{2}\bar{\mathbf{B}}\times(\mathbf{r}-\mathbf{r}_0)$, where A_per satisfies periodic boundary conditions and $\bar{B} = n\Phi_0 L_z/\Omega$ is the uniform part of the flux density. In this case the fixed reference line goes through r₀ in the direction of B̄.

Helicity modulus.- The full importance of the new term in the energy becomes evident when one considers the superfluid response of the system. Replacing the periodic boundary conditions by twisted ones, θ(r + L_µ) → θ(r + L_µ) + Θ, leads to the replacement $Q_\mu \to Q_\mu - \tilde{A}_\mu$ with $\tilde{A}_\mu = \Omega\Theta/L_\mu$ in Eq. (7). This allows us to define the zero wave vector helicity modulus by

$$\Upsilon_\mu = \frac{\Omega}{K}\frac{\partial^2 F}{\partial \tilde{A}_\mu^2} = 1 - \frac{K}{\Omega T}\left\langle \delta Q_\mu^2 \right\rangle, \qquad (8)$$

where $\langle \delta Q^2\rangle = \langle Q^2\rangle - \langle Q\rangle^2$, and F = −T ln Z is the free energy. Υ is non-zero in the superconducting state and vanishes at the phase transition.
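To make Eqs. (6)-(8) concrete, here is a minimal Python sketch (our own illustration, not the authors' code) that evaluates the projected vector area of polygonal vortex loops and the resulting estimator for Υ_µ; the function names and the sample layout are assumptions made for the example.

```python
import numpy as np

def vector_area(loop):
    """Projected (vector) area of a closed polygonal vortex loop.

    loop: (n, 3) array of vertices; the loop closes from the last vertex
    back to the first.  The vector area S = (1/2) sum_k r_k x r_{k+1}
    is the loop's contribution to Q in Eq. (6): its three components
    are the areas projected along x, y, and z.
    """
    r = np.asarray(loop, dtype=float)
    return 0.5 * np.cross(r, np.roll(r, -1, axis=0)).sum(axis=0)

def helicity_modulus(Q_samples, K, T, volume):
    """Estimate Upsilon_mu = 1 - (K / (Omega T)) <dQ_mu^2>, cf. Eq. (8).

    Q_samples: (n_samples, 3) array holding the total projected area Q
    (summed over all loops) for each Monte Carlo configuration.
    """
    Q = np.asarray(Q_samples, dtype=float)
    return 1.0 - K * Q.var(axis=0) / (volume * T)  # var = <Q^2> - <Q>^2

# A square loop of side 2 in the xy-plane has vector area (0, 0, 4):
print(vector_area([(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)]))
```

Note that, as stressed in the text, Q is defined only up to multiples of Ω/L_µ, so in an actual simulation one accumulates the changes δQ produced by the moves rather than recomputing Q from the loop positions.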
In the critical region of a continuous phase transition it obeys the Josephson scaling relation Υ ∼ ξ⁻¹, where ξ(T) is the correlation length [4]. The ordinary vortex Hamiltonian, Eq. (3), without the additional area term H' is recovered by integrating over Θ, and thus corresponds to having fluctuating boundary conditions [9]. In this case the k = 0 response is zero. Fluctuations in the winding number $\mathbf{W} = \int_\Omega \mathbf{m}(\mathbf{r})\, d^3r$ are obtained only if the k = 0 component of the magnetic flux density B is allowed to fluctuate. Then the magnetic permeability $\mu_\nu = 4\pi\,\partial\langle B_\nu\rangle/\partial H_\nu = \langle B_\nu^2\rangle/\Omega T = \langle W_\nu^2\rangle \Phi_0^2/\Omega T$ can be calculated.

Monte Carlo simulations.- To clearly demonstrate the practical usefulness of these ideas we now discuss Monte Carlo (MC) simulations. The inclusion of the area term in a simulation is straightforward. With the model defined in terms of vortices, the Monte Carlo moves consist of deformations of the vortex lines and (possibly) creations and destructions of closed loops. The change in the projected area coming from these local updates is accumulated in Q, and the change in total energy, including the area term Eq. (7), must be used to calculate the transition probabilities of the Markov chain. Optionally, these moves may be supplemented by global moves where Q_µ is changed by ±Ω/L_µ, corresponding to dragging a whole vortex across the entire system. The acceptance ratio for such moves can be expected to be quite low because of the high energies involved. Alternatively these global moves can be integrated out exactly, leading to the replacement of H' by a periodic Gaussian [14]:

$$e^{-\beta H'} \to \sum_{\{M_\mu\}} \exp\!\left(-\frac{1}{2\Omega T}\sum_\mu K\left(Q_\mu - M_\mu \Omega/L_\mu\right)^2\right).$$

Since this form of the area term leads to a somewhat more complicated expression for the helicity modulus it will not be used in what follows.

We present simulation results for two different models: (i) a lattice superconductor in zero magnetic field with different values of λ₀, and (ii) a continuum vortex model of the melting of the Abrikosov vortex lattice. In both cases 10⁵-10⁶ Monte Carlo sweeps were used, with the initial ∼ 10% discarded for equilibration. In Fig. 1 we present results from the first case, in which the phase transition is continuous. The critical temperature T_c is determined from finite size scaling of the helicity modulus, $\Upsilon(T, L) = L^{-1}\tilde{\Upsilon}([T - T_c]L^{1/\nu})$, as shown in the inset. Due to the new length scale given by λ₀, scaling works only for rather large system sizes and corrections are clearly visible for small sizes, but the determination of T_c is still quite accurate. In the limits λ₀ → ∞ and λ₀ → 0 we recover known results, showing that our method works properly. Furthermore, a scaling collapse of MC data for λ = 0.25 is obtained using the expected 3D XY value ν ≈ 2/3.

FIG. 1. Monte Carlo results for the (λ₀, T) phase diagram for B = 0. The dotted line indicates the value of T_c for the inverted XY model obtained in the limit λ₀ → 0, T_c ≈ Ka(λ₀/a)²/3.0 (a is the lattice constant ∼ ξ₀). In the opposite limit, λ₀ → ∞, T_c ≈ 3.0 Ka/(2π)². Inset: T_c is located at the intersection of curves of LΥ for different L.

In our continuum simulations in an applied magnetic field, we discretize the vortex lines only along the z-direction, using straight segments to interpolate between the xy-planes where the positions are continuous. We exclude overhangs and isolated loops, which should be of importance only close to the zero field T_c. In addition to local MC moves of the positions in the xy-plane, we include moves where two flux lines are cut off and reconnected to each other, allowing different permutations of the boundary conditions to be sampled. We use the full 3D long range interaction, given by Eq. (4) (with λ₀ = ∞), supplemented by a Gaussian short distance cutoff $e^{-k^2\xi_0^2}$, which acts between the midpoints of the vortex line segments. The vortex lattice constant was set to 4ξ₀ and the layer separation d to 2ξ₀. The number of layers for a system of N vortices was set to 4√N. To avoid frustration effects in the vortex lattice phase we use a hexagonal simulation cell with periodic boundary conditions in all directions.
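As a schematic illustration of the bookkeeping just described, the following Python fragment (our own sketch, not the authors' implementation) shows a Metropolis update of a single node of line i in layer z, with the area term of Eq. (7) entering the acceptance test. Here `delta_interaction_energy` is a hypothetical stand-in for the change in the London energy, Eq. (3), and for brevity the Ω/L_µ ambiguity and boundary crossings are ignored.

```python
import numpy as np

def metropolis_node_move(state, i, z, rng, K, T, volume,
                         delta_interaction_energy, max_step=0.5):
    """Attempt a local displacement of node z on vortex line i.

    Assumed layout: state.lines[i][z] is the 2D xy position of line i
    in layer z (a numpy array), and state.Q is the running 3-vector of
    total projected area.  For a field along z we track only Q_z; the
    other components follow the same pattern.
    """
    old = state.lines[i][z].copy()
    new = old + rng.uniform(-max_step, max_step, size=2)

    # Change in z-projected area from the two segments touching node z:
    # dQ_z = (1/2) [ r_prev x r + r x r_next ]_z, new minus old.
    r_prev = state.lines[i][z - 1]
    r_next = state.lines[i][(z + 1) % len(state.lines[i])]
    cross_z = lambda a, b: a[0] * b[1] - a[1] * b[0]
    dQz = 0.5 * (cross_z(r_prev, new) + cross_z(new, r_next)
                 - cross_z(r_prev, old) - cross_z(old, r_next))

    # Total energy change: London interaction plus the area term,
    # dH' = K (2 Q_z dQ_z + dQ_z^2) / (2 Omega), cf. Eq. (7).
    dE = delta_interaction_energy(state, i, z, new)
    dE += K * (2.0 * state.Q[2] * dQz + dQz**2) / (2.0 * volume)

    if dE <= 0 or rng.random() < np.exp(-dE / T):
        state.lines[i][z] = new
        state.Q[2] += dQz      # accumulate dQ, as described in the text
        return True
    return False
```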
In Fig. 2 we show the helicity modulus in the direction of the applied field, Υ_z, and the structure function $S_q = \langle |m_z(\mathbf{q})|^2\rangle/(N L_z)^2$ at a reciprocal vector q of the Abrikosov vortex lattice, as a function of temperature. Υ_x = Υ_y = 0 for all T, reflecting the absence of vortex pinning. At the transition both Υ_z and S_q drop quite sharply, suggesting a first order melting transition to an entangled vortex liquid with no intermediate disentangled phase. Right at the transition, the time series of the internal energy or the structure function, obtained from the simulation, fluctuate around two different values, giving further support for a first order transition. The inset shows how the average energy per vortex and layer approaches a jump at the transition as system size increases, with a latent heat of roughly 0.0015 N L_z K. Taking into account the internal temperature dependence of the parameters [15], and using values for YBCO, gives an entropy jump ΔS ≈ 0.5 k_B per vortex and layer, in rough agreement with experiments [16].

FIG. 2. Helicity modulus Υ_z and structure function S_q at an ordering vector of the vortex lattice. Inset shows the jump in energy per vortex and layer at the transition. Also shown are two typical snapshots from the simulation, one below and one above T_c ≈ 0.008 Kd.
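For completeness, a small Python sketch (again our own, with an assumed data layout) of how the structure function defined above can be accumulated from stored configurations:

```python
import numpy as np

def structure_function(configs, q):
    """Estimate S_q = <|m_z(q)|^2> / (N L_z)^2 at wave vector q.

    configs: iterable of arrays of shape (N, L_z, 2) holding the xy
    positions of the N field-induced lines in L_z layers, one array
    per Monte Carlo sample.  Each node carries one unit of vorticity
    along z, so m_z(q) = sum_{i,z} exp(i q . r_{i,z}); q is a
    2-vector in the xy-plane.  S_q = 1 for a perfect lattice.
    """
    q = np.asarray(q, dtype=float)
    vals = []
    for pos in configs:
        N, Lz, _ = pos.shape
        m_q = np.exp(1j * (pos @ q)).sum()
        vals.append(np.abs(m_q) ** 2 / (N * Lz) ** 2)
    return np.mean(vals)

# Usage: q is taken at a reciprocal vector of the triangular Abrikosov
# lattice, e.g. of magnitude 4*pi/(sqrt(3)*a0) for lattice constant a0.
```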
FIG. 2. Helicity modulus Υz and structure function S_q at an ordering vector of the vortex lattice. Inset shows the jump in energy per vortex and layer at the transition. Also shown are two typical snapshots from the simulation, one below and one above Tc ≈ 0.008Kd.

We thank Steve Girvin and Peter Olsson for valuable discussions. This work was supported by the Swedish Natural Science Research Council, and by the Swedish Council for Planning and Coordination of Research (FRN) and Parallelldatorcentrum (PDC), Royal Institute of Technology.

[1] G. Blatter et al., Rev. Mod. Phys. 66, 1125 (1994).
[2] D. R. Nelson, Phys. Rev. Lett. 60, 1973 (1988); D. R. Nelson and H. S. Seung, Phys. Rev. B 39, 9153 (1989).
[3] D. S. Fisher, M. P. A. Fisher, and D. A. Huse, Phys. Rev. B 43, 130 (1991).
[4] M. E. Fisher, M. N. Barber, and D. Jasnow, Phys. Rev. A 8, 1111 (1973).
[5] Y.-H. Li and S. Teitel, Phys. Rev. B 49, 4136 (1994); Phys. Rev. B 47, 359 (1993).
[6] Tao Chen and S. Teitel, Phys. Rev. Lett. 72, 2085 (1994); Phys. Rev. Lett. 74, 2792 (1995); Phys. Rev. B 55, 15197 (1997).
[7] R. E. Hetzel, A. Sudbø, and D. A. Huse, Phys. Rev. Lett. 69, 518 (1992); G. Carneiro, ibid. 75, 521 (1995); A. K. Nguyen, A. Sudbø, and R. E. Hetzel, ibid. 77, 1592 (1996); S. Ryu and D. Stroud, ibid. 78, 4629 (1997); X. Hu, S. Miyashita, and M. Tachiki, ibid. 79, 3498 (1997); Tao Chen and S. Teitel, Phys. Rev. B 55, 11766 (1997); T. J. Hagenaars et al., ibid. 55, 11706 (1997); A. E. Koshelev, ibid. 56, 11201 (1997); A. K. Nguyen and A. Sudbø, ibid. 57, 3123 (1998); P. Olsson and S. Teitel, Phys. Rev. Lett. 80, 1964 (1998); S. Ryu and D. Stroud, Phys. Rev. B 57, 14476 (1998).
[8] H. Nordborg and G. Blatter, Phys. Rev. Lett. 79, 1925 (1997).
[9] P. Olsson, Phys. Rev. B 52, 4511 (1995).
[10] A. Vallat and H. Beck, Phys. Rev. B 50, 4015 (1994).
[11] M. Ney-Nifle and H. J. Hilhorst, Phys. Rev. B 51, 8357 (1995).
[12] H. S. Bokil and A. P. Young, Phys. Rev. Lett. 74, 3021 (1995).
[13] C. Dasgupta and B. I. Halperin, Phys. Rev. Lett. 47, 1556 (1981).
[14] The form of the area term after summing over the global moves may be related to the work in Ref. [12]. Note that global moves should not be included if dynamical properties of the system are being considered.
[15] M. J. W. Dogson et al., Phys. Rev. Lett. 80, 837 (1998).
[16] A. Schilling et al., Nature (London) 382, 791 (1996).
[17] M. P. A. Fisher and D. H. Lee, Phys. Rev. B 39, 2756 (1989).
[18] E. L. Pollock and D. M. Ceperley, Phys. Rev. B 36, 8343 (1987); D. M. Ceperley, Rev. Mod. Phys. 67, 279 (1995).
[19] M. V. Feigel'man et al., Phys. Rev. B 48, 16641 (1993).
CMB lensing tomography with the DES Science Verification galaxies

T. Giannantonio, P. Fosalba, R. Cawthon, Y. Omori, M. Crocce, F. Elsner, B. Leistedt, S. Dodelson, A. Benoit-Lévy, E. Gaztañaga, G. Holder, H. V. Peiris, W. J. Percival, D. Kirk, A. H. Bauer, B. A. Benson, G. M. Bernstein, J. Carretero, T. M. Crawford, R. Crittenden, D. Huterer, B. Jain, E. Krause, C. L. Reichardt, A. J. Ross, G. Simard, B. Soergel, A. Stark, K. T. Story, J. D. Vieira, J. Weller, T. Abbott, F. B. Abdalla, S. Allam, R. Armstrong, M. Banerji, R. A. Bernstein, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, D. Capozzi, J. E. Carlstrom, A. Carnero Rosell, M. Carrasco, F. J. Castander, C. L. Chang, C. E. Cunha, L. N. da Costa, C. B. D'Andrea, D. L. DePoy, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, T. F. Eifler, A. E. Evrard, A. Fausti Neto, E. Fernandez, D. A. Finley, B. Flaugher, J. Frieman, D. Gerdes, D. Gruen, J. L. Marshall, et al.

MNRAS, 2016 March 10. Preprint compiled 19 January 2016 using the MNRAS LaTeX style file v3.0. Last updated 19 January 2016. Affiliations are listed at the end of the paper.

Key words: cosmic background radiation - gravitational lensing: weak - large-scale structure of the Universe

ABSTRACT

We measure the cross-correlation between the galaxy density in the Dark Energy Survey (DES) Science Verification data and the lensing of the cosmic microwave background (CMB) as reconstructed with the Planck satellite and the South Pole Telescope (SPT). When using the DES main galaxy sample over the full redshift range 0.2 < z_phot < 1.2, a cross-correlation signal is detected at 6σ and 4σ with SPT and Planck respectively. We then divide the DES galaxies into five photometric redshift bins, finding significant (>2σ) detections in all bins. Comparing to the fiducial Planck cosmology, we find the redshift evolution of the signal matches expectations, although the amplitude is consistently lower than predicted across redshift bins. We test for possible systematics that could affect our result and find no evidence for significant contamination. Finally, we demonstrate how these measurements can be used to constrain the growth of structure across cosmic time. We find the data are fit by a model in which the amplitude of structure in the z < 1.2 universe is 0.73 ± 0.16 times as large as predicted in the ΛCDM Planck cosmology, a 1.7σ deviation.

1 INTRODUCTION

The cosmic microwave background (CMB) radiation, released at the time of hydrogen recombination, provides a view of the Universe when it was only 380,000 years old. However, this image has been slightly altered since the last-scattering surface, as the CMB photons had to travel through an inhomogeneous distribution of matter before reaching us today. Beyond the simple background cooling due to the Hubble expansion, the intervening large-scale structure (LSS) of the Universe can alter the energies and paths of the CMB photons, producing a range of effects beyond the primary CMB power spectrum; these are collectively known as secondary CMB anisotropies. The CMB photons freely stream through neutral hydrogen after recombination, but they can undergo Compton scattering once again at late times in the re-ionised intergalactic medium or in the hot, ionised gas in the potential wells of massive clusters of galaxies. This latter phenomenon is known as the Sunyaev-Zel'dovich (SZ) effect (see e.g. Sunyaev & Zeldovich 1980; Carlstrom et al. 2002).
When travelling in and out of gravitational potential wells, they may gain a net energy when the potentials are evolving in time (the integrated Sachs-Wolfe, ISW, effect) (Sachs & Wolfe 1967; Rees & Sciama 1968; Crittenden & Turok 1996; Fosalba et al. 2003; Boughn & Crittenden 2004; Fosalba & Gaztañaga 2004; Cabré et al. 2006; Giannantonio et al. 2006, 2008, 2012b). Finally, as they travel through the LSS, the CMB photons are gravitationally deflected by the mass distribution along their way, distorting the image we eventually observe. Here we focus on this last effect, CMB lensing.

As described in the review by Lewis & Challinor (2006), the typical gravitational deflections of the CMB photons are of order a few arcminutes (Cole & Efstathiou 1989). These deflections, integrated along the entire line of sight, alter the CMB anisotropies we observe in a number of ways. First, lensing smooths out the peaks and troughs in the temperature and polarisation angular power spectra on arcminute scales (Seljak 1996). Lensing leads to power leakage from large into smaller angular scales (Linder 1990), and from E- to B-mode polarisation (Zaldarriaga & Seljak 1998). Lensing also introduces non-vanishing higher-order statistics of the temperature and polarisation fields, which can be used to reconstruct the lensing potential (Okamoto & Hu 2003; Hirata & Seljak 2003), provided a sufficiently high-resolution and low-noise map is available.

Such reconstructed maps of the lensing potential contain the integrated information of the entire matter distribution in the Universe, out to the surface of last scattering. In order to interpret this information to optimally constrain cosmology, and in particular the evolution of structure formation, it is desirable to study the lensing contribution as a function of redshift: this can be achieved by cross-correlating the full reconstructed CMB lensing maps with tracers of matter at known redshift, such as galaxy surveys (Lewis & Challinor 2006). By cross-correlating the CMB lensing potential with the LSS, we can measure the growth of structure as a function of time in redshift bins; this measurement can be used, for example, to help identify the mechanism driving the current epoch of cosmic acceleration. In addition, CMB lensing-galaxy correlations can be used to improve the control of systematics in weak lensing analyses (Das et al. 2013).

The power of the cross-correlation technique is made evident by the early works in this field: while the CMB lensing potential itself was only weakly detectable (at <2σ) from the WMAP temperature maps, due to their comparatively low resolution and high noise (Smidt et al. 2011; Feng et al. 2012), the first significant detection of CMB lensing was achieved by Smith et al. (2007) at the 3.4σ level by cross-correlating WMAP data with radio galaxies from the NRAO VLA Sky Survey (NVSS, Condon et al. 1998). This was later extended by Hirata et al. (2008) using multiple galaxy catalogues, in a first attempt at studying the redshift evolution of the signal, finding a lower combined evidence of 2.5σ. The field is now flourishing: CMB lensing has been detected not only indirectly from the smearing of the CMB temperature power spectrum (Das et al. 2011b; Keisler et al. 2011; Story et al. 2013; Planck Collaboration et al. 2014b), but also directly at high significance from the non-Gaussianity of the CMB temperature field using high-resolution data from ACT (Das et al. 2011a, 2014), SPT
(van Engelen et al. 2012; Story et al. 2015), and Planck (Planck Collaboration et al. 2014c, 2015a). The latest analyses of these experiments achieved detections of CMB lensing at the 4.6σ, 14σ, and 40σ levels respectively; the different significance levels depend on the different beam resolutions, detector noise levels, and sky coverage fractions. With respect to the last, the Planck satellite has a clear advantage, thanks to its large sky coverage, even in the galaxy-masked maps, while the small-scale resolution and noise are superior for the ground-based surveys. CMB lensing has also been detected through its impact on the B-mode signal in CMB polarisation data with BICEP2 (BICEP2 Collaboration 2014), with a joint BICEP2-Planck polarisation analysis (BICEP2/Keck and Planck Collaborations 2015), the Keck Array (Keck Array and BICEP2 Collaborations 2015), POLARBEAR (POLARBEAR Collaboration 2014b), and SPT (Keisler et al. 2015), as well as from the four-point function of POLARBEAR polarisation data (Ade et al. 2014).

The ACT lensing data, reconstructed over six regions within the SDSS Stripe 82 covering a total of 320 deg², were used by Sherwin et al. (2012) for cross-correlation with optically-selected, photometric quasars from SDSS (Bovy et al. 2011), finding a detection of significance 3.8σ. The ACTPol data, including information from CMB polarisation, were cross-correlated by van Engelen et al. (2015) with cosmic infrared background (CIB) maps reconstructed by Planck, finding a detection at the 9σ level. The SPT lensing maps were cross-correlated by Bleem et al. (2012) over four distinct fields of ∼50 deg² each with optically-selected galaxies from the Blanco Cosmology Survey (Desai et al. 2012; Bleem et al. 2015), IR sources from the SPT Spitzer Deep Field (Ashby et al. 2009), and from the WISE all-sky IR survey (Wright et al. 2010; Geach et al. 2013). Significant correlations (>4σ) were found in all cases, although the interpretation was complicated by the large uncertainties on the redshift of these sources. Additionally, Holder et al. (2013) detected the correlation between the SPT lensing maps and the diffuse CIB maps measured by Herschel/SPIRE (Griffin et al. 2010), finding positive detections at significances between 6.7σ and 8.8σ in three sub-mm frequency bands. Cross-correlation between SPTPol and the CIB was detected at 7.7σ by Hanson et al. (2013). The CMB lensing-CIB correlation was also detected with POLARBEAR data (POLARBEAR Collaboration 2014a). These works further demonstrate that the CIB is well-suited for CMB lensing cross-correlations, due to its broad and deep redshift distribution, leading to a significant overlap with the CMB lensing kernel; on the other hand, the interpretation of the results is more challenging than for resolved sources, due to the relative uncertainty on the CIB redshift distribution.

The Planck team took immediate advantage of their data (Planck Collaboration et al. 2014c), by cross-correlating their CMB lensing map with four tracers of the LSS: NVSS, SDSS LRGs (Ross et al. 2011b), SDSS clusters (Koester et al. 2007), and the WISE sub-mm satellite survey. These cross-correlations were measured at high significance: 7σ for WISE and clusters, 10σ for the SDSS LRGs, and 20σ for NVSS, thanks to the dramatic extension of sky coverage with respect to previous CMB lensing data. These results were also confirmed and extended to the final photometric SDSS main galaxies
(Aihara et al. 2011), SDSS photometric quasars, the X-ray background (Boldt 1987), and the 2MASS IR survey (Skrutskie et al. 2006). The cross-correlation between Planck lensing and quasars from WISE and SDSS was measured by Geach et al. (2013) and DiPompeo et al. (2015). A further study of the cross-correlation between Planck lensing and Herschel was performed recently by Bianchini et al. (2015), while Omori & Holder (2015) measured at >5σ the cross-correlation with the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) galaxy number density.

However, none of the existing galaxy surveys has the depth and density of sources over a contiguous area required for a comprehensive tomographic analysis of the CMB lensing signal; this is finally possible with the Dark Energy Survey (DES) and SPT (see e.g. Vallinotto 2013), and it is the main focus of this work. The DES finished its second of five years of operations in March 2015, and will eventually image 5000 square degrees in the Southern Hemisphere from the Blanco Telescope in Chile, in the bands g, r, i, z, and Y, using the Dark Energy Camera (Flaugher et al. 2015). Its depth makes it well-suited for measuring CMB lensing tomography, because it allows the survey to detect a larger fraction of the CMB lensing signal, whose contribution peaks at redshifts z > 1.

In this paper, we cross-correlate the initial DES Science Verification (SV) data with the CMB lensing maps reconstructed by the Planck and SPT surveys, and report a detection of the correlation in broad agreement with the expectations under the assumption of a concordance ΛCDM model, with a significance of 6σ and 4σ for SPT and Planck respectively. The DES SV data consist of near full-depth imaging of ∼300 deg², of which we use the ∼200 deg² of the SPT-E field, which is reduced to 131 deg² after masking. The SPT lensing data we use were derived by van Engelen et al. (2012) from the 2500 deg² SPT-SZ survey (Story et al. 2013), which fully overlaps by design with the DES footprint, while the Planck public data (Planck Collaboration et al. 2014c) cover the entire extra-galactic sky. Motivated by the high significance of the SPT detection, we measure this cross-correlation in redshift bins, reconstructing the time evolution of CMB lensing.

The plan of this paper is as follows: after briefly reviewing the theoretical expectations in Section 2, we present the data in Section 3 and the mocks we use to estimate the covariances in Section 4; we then report our results in Section 5, tests for possible systematics in Section 6, and present some basic cosmological implications in Section 7, before concluding in Section 8.

2 THEORY

2.1 Power spectra

Gravitational lensing deflects the primordial CMB temperature anisotropies, so that the temperature we observe in a direction n̂ corresponds to the primordial unlensed anisotropy in the direction n̂ + ∇φ(n̂). Here φ(n̂) is the CMB lensing potential, defined in a flat universe as (Lewis & Challinor 2006)

\[ \phi(\hat n) = -\int_0^{\chi_*} \mathrm{d}\chi\, \frac{\chi_* - \chi}{\chi_*\, \chi}\, \left[\Phi + \Psi\right](\chi\hat n,\, \eta_0 - \chi)\,, \quad (1) \]

where χ is the comoving distance, asterisks denote quantities evaluated at the last-scattering surface, η₀ is the conformal time today, and Φ, Ψ are the matter and light gravitational potentials, which are effectively equal in the standard ΛCDM model in linear theory. The convergence field κ(n̂) can be used in place of the lensing potential φ(n̂); the two are related in multipole space as

\[ \kappa_{\ell m} = \frac{\ell(\ell+1)}{2}\, \phi_{\ell m}\,. \quad (2) \]
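The multipole-space relation of Eq. (2) is a simple filter operation on the spherical-harmonic coefficients. A minimal sketch with healpy, assuming a lensing-potential map is available on disk ("phi_map.fits" is a placeholder file name):

```python
import healpy as hp
import numpy as np

# kappa_lm = l(l+1)/2 * phi_lm, as in Eq. (2).
nside, lmax = 2048, 2048
phi_map = hp.read_map("phi_map.fits")        # placeholder input
phi_lm = hp.map2alm(phi_map, lmax=lmax)
ell = np.arange(lmax + 1)
kappa_map = hp.alm2map(hp.almxfl(phi_lm, ell * (ell + 1) / 2.0), nside)
```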
By applying the Poisson equation, the CMB convergence in a direction n̂ can be rewritten as a function of the matter overdensity δ (see e.g. Bleem et al. 2012):

\[ \kappa(\hat n) = \frac{3\Omega_m H_0^2}{2} \int_0^{\chi_*} \mathrm{d}\chi\, \frac{\chi^2}{a(\chi)}\, \frac{\chi_* - \chi}{\chi_*\, \chi}\, \delta(\chi\hat n,\, \eta_0 - \chi)\,, \quad (3) \]

where H₀ is the Hubble parameter today, Ω_m the matter energy density, and a(χ) the scale factor. In the local bias model (Fry & Gaztanaga 1993), the smoothed galaxy overdensity δ_g is related to the smoothed matter overdensity δ by a Taylor expansion, so that if the bias is assumed to be deterministic, δ_g(x, z) = Σ_{i=0}^∞ b_i(z) δ^i(x, z)/i!. In the present analysis of DES data we will consider only scales where the linear bias suffices, as demonstrated by Crocce et al. (2016). In this case, δ_g(x, z) = b(z) δ(x, z). A galaxy catalogue with redshift distribution dn/dz(z) thus provides an estimate of the projected overdensity in a direction n̂ as

\[ \delta_g(\hat n) = \int_0^\infty b(z)\, \frac{\mathrm{d}n}{\mathrm{d}z}(z)\, \delta(\chi\hat n, z)\, \mathrm{d}z\,, \quad (4) \]

where b(z) is the galaxy bias (assumed here linear, deterministic, and scale-independent) and δ the total matter overdensity field. The two-point statistics of the galaxy-galaxy and galaxy-CMB lensing correlations can be written in harmonic space as

\[ C_\ell^{gg} = \frac{2}{\pi}\int_0^\infty \mathrm{d}k\, k^2\, P(k)\, W_\ell^g(k)\, W_\ell^g(k)\,, \quad (5) \]

\[ C_\ell^{\kappa g} = \frac{2}{\pi}\int_0^\infty \mathrm{d}k\, k^2\, P(k)\, W_\ell^\kappa(k)\, W_\ell^g(k)\,, \quad (6) \]

where P(k) is the matter power spectrum at z = 0, and the kernels for galaxies and CMB lensing convergence are, in the standard model (Φ = Ψ) for a flat universe (Lewis & Challinor 2006; Bleem et al. 2012; Sherwin et al. 2012),

\[ W_\ell^g(k) = \int_0^\infty \mathrm{d}z\, b(z)\, \frac{\mathrm{d}n}{\mathrm{d}z}(z)\, D(z)\, j_\ell[k\chi(z)]\,, \quad (7) \]

\[ W_\ell^\kappa(k) = \frac{3\Omega_m H_0^2}{2} \int_0^{\chi_*} \mathrm{d}\chi\, \frac{\chi^2}{a(\chi)}\, \frac{\chi_* - \chi}{\chi_*\, \chi}\, D(z(\chi))\, j_\ell(k\chi)\,, \quad (8) \]

where the convergence kernel follows directly from Eq. (3), D(z) is the linear growth function defined so that δ(z) = D(z) δ(z = 0), j_ℓ are the spherical Bessel functions, and we have assumed c = 1; the lensing potential power spectra can be readily obtained using Eq. (2). The equivalent expressions in real space can be derived with a Legendre transformation. We will indicate in the following the two-point statistics of generic fields a, b as C_ℓ^{ab} and w^{ab}(ϑ), related by

\[ w^{ab}(\vartheta) = \sum_{\ell=0}^{\infty} \frac{2\ell+1}{4\pi}\, P_\ell(\cos\vartheta)\, C_\ell^{ab}\,, \quad (9) \]

where P_ℓ are the Legendre polynomials, and in practice the sum is limited to ℓ_max, chosen to be sufficiently high to ensure convergence. From the definitions of Eqs (5)-(8) it is clear that, to first approximation, valid in the limit of a narrow redshift range for local and deterministic linear bias,

\[ C_\ell^{gg}(z) \propto b^2(z)\, D^2(z)\,, \qquad C_\ell^{\kappa g}(z) \propto b(z)\, D^2(z)\,, \quad (10) \]

so that a joint measurement of these two quantities can break the degeneracy between bias and linear growth (see e.g. Gaztañaga et al. 2012). We develop this idea in Section 7 below.

2.2 Stochasticity

Alternatively, it is possible to assume the cosmology to be fixed, and to use the data to constrain galaxy bias instead. Non-linear bias is expected at small scales, as well as a stochastic component due to the discrete sampling and the physical processes affecting halo collapse and galaxy formation. As bias non-linearities on the scales considered have been excluded for our sample by Crocce et al. (2016), we will consider stochasticity, which changes the biasing law to (Tegmark & Peebles 1998; Pen 1998; Dekel & Lahav 1999)

\[ \delta_g(\mathbf x, z) = b(z)\, \delta(\mathbf x, z) + \epsilon(\mathbf x, z)\,, \quad (11) \]

which leads to the power spectra

\[ C_\ell^{gg}(z) \simeq b^2(z)\, C_\ell^{mm}(z) + C_\ell^{\epsilon\epsilon}(z)\,, \quad (12) \]

\[ C_\ell^{\kappa g}(z) \simeq b(z)\, C_\ell^{\kappa m}(z)\,, \quad (13) \]

where C_ℓ^{mm} and C_ℓ^{εε} are the matter and stochasticity power spectra respectively.
It is clear that, in the absence of non-linearity, a measurement of the correlation coefficient r ≡ C_ℓ^{gm}/√(C_ℓ^{mm} C_ℓ^{gg}) constrains the stochastic component, as

\[ r = \left(1 + \frac{C_\ell^{\epsilon\epsilon}}{b^2\, C_\ell^{mm}}\right)^{-1/2} \simeq 1 - \frac{C_\ell^{\epsilon\epsilon}}{2\, b^2\, C_\ell^{mm}}\,. \quad (14) \]

Notice that if stochasticity is present, the bias inferred from the measured galaxy auto-correlation, b_auto = (C_ℓ^{gg}/C_ℓ^{mm})^{1/2}, will absorb the stochastic component, and it will thus be different from what is obtained from the galaxy-CMB lensing cross-correlation, b_cross = C_ℓ^{κg}/C_ℓ^{κm}; the mismatch is simply given by r = b_cross/b_auto. In the following, we will assume no stochasticity throughout, thus assuming b_cross = b_auto = b, except in Section 7.4, where we discuss the possible interpretation of our results as a measurement of stochasticity.

Stochasticity has been studied with N-body simulations and constrained with observations. Recent simulation studies report a negligible stochastic component, except on the smallest scales (see e.g. Cai et al. 2011; Manera & Gaztañaga 2011). Observational constraints have been obtained by combining galaxy clustering with weak gravitational lensing data, using the methods by Schneider (1998) and van Waerbeke (1998). The most recent results were obtained by Jullo et al. (2012) using the COSMOS survey, finding no evidence for stochasticity; this is however in tension with the significant stochasticity found by Hoekstra et al. (2002) using the Red-Sequence Cluster Survey and the VIRMOS-DESCART survey, by Sheldon et al. (2004) with the Sloan Digital Sky Survey, and by Simon et al. (2007) with the GaBoDS survey. The current and upcoming DES clustering and weak lensing data, including CMB lensing, are well-suited to obtain better constraints on this issue.

We calculate all theoretical power spectra and correlation functions using a full Boltzmann code implemented in camb (Lewis et al. 2000; Challinor & Lewis 2011), including the (small) effect of redshift-space distortions. We include the effects of non-linear matter clustering using the Halofit formalism (Smith et al. 2003; Takahashi et al. 2012). We have tested from the slopes of the number counts that the effect of cosmic magnification (see e.g. van Waerbeke 2010) is negligible for all cases considered in this paper, so that we neglect this contribution. Unless otherwise specified, we assume a fiducial Planck 2013 (+ WMAP polarisation + ACT/SPT + BAO) best-fit flat ΛCDM+ν (1 massive neutrino) cosmology with parameters: ω_b = 0.0222, ω_c = 0.119, ω_ν = 0.00064, h = 0.678, τ = 0.0952, A_s = 2.21 × 10⁻⁹, n_s = 0.961 at a pivot scale k₀ = 0.05 Mpc⁻¹, corresponding to σ₈ = 0.829, where h ≡ H₀/100 km s⁻¹ Mpc⁻¹ and ω_i ≡ Ω_i h² for each species i (Planck Collaboration et al. 2014b). (We have checked that assuming a Planck 2015 cosmology has negligible impact on the results.)

2.3 Expected signal-to-noise

We first estimate the signal-to-noise expected for the detection of the CMB lensing-galaxy cross-correlation with current and upcoming data. We include the uncertainties from cosmic variance and the noise, N_ℓ, which is due to shot noise for the galaxy counts and to the primary CMB, instrumental, and atmospheric noise for the CMB lensing maps. The top panel of Fig. 1 shows the noise N_ℓ^{κκ} compared with the CMB lensing auto-spectrum C_ℓ^{κκ} for currently available CMB lensing data. We compare the Planck 2015 CMB lensing noise (Planck Collaboration et al. 2015a) with the mean noise level of the SPT-SZ lensing maps we use.
The effective SPT-SZ noise is lower than Planck's on most scales (ℓ > 100), while on larger scales the 2015 Planck lensing data have higher sensitivity. We also show how future CMB data from the SPT-3G survey (Benson et al. 2014) are expected to lower the lensing noise by an order of magnitude. For SPT we have assumed a minimum mode ℓ_min = 30, given the smaller sky coverage of this survey, while for Planck we use ℓ_min = 8, as specified by the public data provided. The SPT-3G forecast is based on a minimum-variance lensing reconstruction up to ℓ = 3000, without explicitly considering the effect of foreground contamination.

In the second panel of Fig. 1 we show the theoretical prediction for the cross-spectrum C_ℓ^{κg} and the corresponding theoretical noise per multipole (see e.g. Ross et al. 2011a),

\[ \sigma\!\left(C_\ell^{\kappa g}\right) = \left[ \frac{\left(C_\ell^{\kappa g}\right)^2 + \left(C_\ell^{\kappa\kappa}+N_\ell^{\kappa\kappa}\right)\left(C_\ell^{gg}+N_\ell^{gg}\right)}{f_{\mathrm{sky}}\,(2\ell+1)} \right]^{1/2}\,, \quad (15) \]

where f_sky is the overlapping sky fraction of the surveys, the CMB lensing noise N_ℓ^{κκ} is discussed above, and the galaxy noise is N_ℓ^{gg} = 1/n̄, where n̄ is the galaxy density per steradian. For the signal-to-noise projection on the DES-SV area, we use the specifications of the real galaxy catalogue described below in Section 3: we assume the real redshift distribution of the full sample, a galaxy number density of 5.39 arcmin⁻², and a sky coverage of 131 deg², fully overlapping both SPT and Planck. For the forecasts of the DES 5-year survey, we instead assume that galaxies follow the simple redshift distribution by Smail et al. (1995) with the original proposed specifications of DES (The Dark Energy Survey Collaboration 2005), i.e. a median redshift z̄ = 0.7 and a galaxy number density of 10 arcmin⁻². We further assume a sky coverage of 5000 deg² (fully overlapping Planck, but of which only 50% overlaps SPT). We finally assume a constant bias, equal to 1 at all scales. In reality, galaxy bias will be estimated from the galaxy auto-correlations; we show below in Section 5 that the bias for the DES main galaxy sample is only marginally larger than 1. We can see that the noise per multipole for DES-SV is large compared with the theory, but this is significantly reduced once binning is used. The noise per multipole is reduced to the same level of the signal with DES 5-year data; in this case the noise level of Planck is lower than SPT-SZ at low ℓ, given the larger area overlap with the full DES footprint.

Figure 1 (caption, continued). Central panel: the theoretical CMB lensing-galaxy cross-spectrum compared with the analytical errors estimated for the Planck and SPT cases, considering DES SV (top, darker colours) and 5-year data (bottom, lighter colours). The errors are large as they are shown per individual multipole ℓ, and are correspondingly reduced once binned. Bottom panel: the cumulative signal-to-noise ratio of the CMB lensing-galaxy cross-correlations for the same cases, compared with the theoretical maximum fixed by cosmic variance. Note that a 5-10σ detection is expected for SV data, with the information coming from ℓ < 2000. For the full DES 5-year data, the measurement with Planck is expected to yield a similar significance to SPT-SZ, given the larger overlapping area. SPT-3G will achieve the most accurate measurement.

Finally, the expected signal-to-noise is

\[ \left(\frac{S}{N}\right)^2_{\kappa g} = f_{\mathrm{sky}} \sum_{\ell=\ell_{\mathrm{min}}}^{\ell_{\mathrm{max}}} (2\ell+1)\, \frac{\left(C_\ell^{\kappa g}\right)^2}{\left(C_\ell^{\kappa g}\right)^2 + \left(C_\ell^{\kappa\kappa}+N_\ell^{\kappa\kappa}\right)\left(C_\ell^{gg}+N_\ell^{gg}\right)}\,. \quad (16) \]

We show in the third panel of Fig. 1 the cumulative signal-to-noise using different assumptions for the CMB and galaxy data. Here we can see that using DES SV data only, a S/N ≈ 8 (5) is expected using current SPT (Planck) data, thus motivating the analysis in this study. Beyond the current analysis, we can see that the theoretical maximum S/N determined by cosmic variance is significantly larger than what is possible at present; we further discuss in Section 8 the prospects for future improvements of this measurement.
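Eq. (16) is straightforward to evaluate numerically once the signal and noise spectra are in hand. A minimal sketch, in which the input arrays and the example values are placeholders, not the spectra used in the paper:

```python
import numpy as np

def cumulative_sn(ell, cl_kg, cl_kk, nl_kk, cl_gg, nl_gg, fsky):
    """Cumulative S/N of Eq. (16); all spectra are arrays indexed by ell."""
    per_ell = (fsky * (2 * ell + 1) * cl_kg**2
               / (cl_kg**2 + (cl_kk + nl_kk) * (cl_gg + nl_gg)))
    return np.sqrt(np.cumsum(per_ell))

# Example with crude power-law placeholders (illustrative values only):
ell = np.arange(30, 2001)
sn = cumulative_sn(ell, cl_kg=1e-8 / ell, cl_kk=1e-7 / ell, nl_kk=2e-7 / ell,
                   cl_gg=1e-6 / ell,
                   nl_gg=1.6e-8 * np.ones_like(ell, dtype=float),
                   fsky=3.2e-3)
print(f"total S/N up to ell=2000: {sn[-1]:.1f}")
```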
3 DATA

3.1 Galaxy catalogue

The DES Science Verification (SV) data include imaging of ∼300 square degrees over multiple disconnected fields; the largest contiguous areas are the SPT-E and SPT-W fields, covering ∼200 and ∼50 deg² respectively, which overlap the SPT-SZ survey. We consider here the larger SPT-E field only. The SV area was imaged over 78 nights from November 2012 until February 2013, and includes ∼4 × 10⁷ unique co-add objects. The raw data were processed as described by Rykoff et al. (2015) and Crocce et al. (2016).

From the DES-SV final ('Gold') main galaxy catalogue (Rykoff et al. 2015), we use the 'Benchmark' galaxy selection introduced by Crocce et al. (2016), which we also briefly describe here. The 'Gold' catalogue covers 254.4 deg² with Dec. > −61 deg after masking, thus removing the Large Magellanic Cloud and R Doradus regions, unsuitable for extra-galactic science. Only regions with at least one CCD coverage in each band (except Y) were included. Star-galaxy separation is achieved with a cut on the wavg_spread_model quantity (Crocce et al. 2016). The 'Gold' catalogue includes a total of 25,227,559 galaxies over the whole SV area. From them, we select the 'Benchmark' galaxy sample over the SPT-E field by imposing the following cuts:

• 18.0 < i < 22.5 (completeness, 10σ detection);
• −1 < g − r < 3, −1 < r − i < 2, and −1 < i − z < 2 (removal of strong colours from diffraction artefacts);
• wavg_spread_model(i) > 0.003 (star-galaxy separation);
• 60 < R.A. < 95 and −61 < Dec. < −40 (SPT-E field).

Notice that we use two different choices of magnitude definition for the completeness cut (slr_mag_auto) and for the colour cuts (mag_detmodel); see details in Rykoff et al. (2015). We have checked that using a different magnitude definition for the completeness cuts does not change the results significantly; likewise, the galaxy-CMB lensing cross-correlation results remain consistent if using a different classifier for star-galaxy separation (modest_class, Rykoff et al. 2015). Finally, note that our declination cut at Dec. > −61 is marginally less conservative than the cut applied by Crocce et al. (2016) at Dec. > −60.

Photometric redshifts of DES galaxies were estimated using a variety of techniques (Sánchez et al. 2014). We consider here the machine-learning 'Trees for photometric redshifts' (TPZ, Carrasco Kind & Brunner 2013) and the template-based 'Bayesian photometric redshifts' (BPZ, Benítez 2000) methods. TPZ was shown to perform well compared with a validation sample of known redshifts (Sánchez et al. 2014), and we therefore use this method for our main results. We show however in Section 6.2 below that using BPZ does not change our results significantly. Briefly, TPZ is a machine-learning algorithm using prediction trees and a random forest method that was shown to minimise the number of catastrophic outliers with respect to other techniques. The TPZ implementation we use does not include information from Y-band observations.
In addition to the above-mentioned cuts, we discard the tails of the photometric redshift distribution, by selecting only galaxies with maximum-likelihood photo-z in the range 0.2 < z_phot < 1.2, which reduces the sample by ∼5%. This leaves us with 3,207,934 objects. Our selection agrees with Crocce et al. (2016) except for the small difference in the declination cut, so that the results of the two papers can be directly compared. We then pixelise the data on the sky using the Healpix scheme (Górski et al. 2005) at resolution N_side = 2048 (the corresponding pixel side is d_pix ≈ 1.7 arcmin), which is sufficient to capture all the information in both the SPT and Planck lensing data. The mask is constructed by excluding regions of photometry shallower than the completeness cut at i < 22.5; in addition, pixels are discarded unless >80% of their area has detections. After masking, the SPT-E field is left with 2,544,276 objects. The sky fraction covered is f_sky = 3.176 × 10⁻³, corresponding to 131.02 deg², with number density n̄ = 6.37 × 10⁷ sr⁻¹, or 5.39 arcmin⁻². Future DES catalogues will be denser as the magnitude limit is pushed fainter. We refer to Fig. 2 of Crocce et al. (2016) for the stacked probability distribution of the photometric redshifts of the 'Benchmark' main galaxies, for both the TPZ and BPZ methods.

In addition to the full sample, we also use five redshift bins of width ∆z_phot = 0.2 in the tomographic analysis below. Also in this case, the cuts are applied on the maximum-likelihood photo-z; in all cases, the stacked photo-z PDF has tails outside the cut boundaries. The number of galaxies in each bin is: 509,456; 818,376; 673,881; 424,437; and 118,126, from low to high z respectively. While the number of galaxies in the last bin is significantly lower than in the others, we choose the current binning in order to explore the clustering and the CMB lensing correlation up to the highest redshifts that are accessible to DES. We show the masked map of the DES galaxy sample we use in our analysis in Fig. 2.
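The pixelisation step described above is easy to sketch with healpy. The following is a minimal illustration, not the pipeline code; `ra`, `dec` (in degrees) and a binary `mask` map are assumed inputs:

```python
import healpy as hp
import numpy as np

nside = 2048

def galaxy_overdensity(ra, dec, mask):
    """Count galaxies per Healpix pixel and form the masked overdensity."""
    ipix = hp.ang2pix(nside, ra, dec, lonlat=True)
    counts = np.bincount(ipix, minlength=hp.nside2npix(nside)).astype(float)
    nbar = counts[mask > 0].mean()       # mean count per unmasked pixel
    delta = np.zeros_like(counts)
    delta[mask > 0] = counts[mask > 0] / nbar - 1.0
    return delta
```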
3.2 CMB lensing maps

We consider the lensing convergence maps reconstructed from observations of the CMB temperature anisotropies by the South Pole Telescope (SPT) and by the Planck satellite, shown in Fig. 3. For each experiment, we also use simulated CMB observations to characterise the noise properties in the cross-correlation analysis with the DES data. We present in Fig. 9 below the angular power spectra of the CMB lensing maps together with their noise properties inferred from the mocks.

Figure 3. Maps of the CMB lensing convergence in the SPT-E field for SPT (left) and Planck (right), pixellated on the Healpix N_side = 2048 scheme (pixel side: 1.7 arcmin) in Equatorial coordinates, smoothed on angular scales of 10 arcmin to improve visualisation. The grid lines are 2.5 deg apart. Grey areas indicate masked data. The DES mask has been applied for clarity, but we do not impose it onto the CMB data in our cross-correlation estimation. The Planck lensing map also includes the Planck lensing mask. Planck shows higher amplitude variations, but this is due to higher noise caused by the lower spatial resolution of its map.

3.2.1 The South Pole Telescope lensing maps

The SPT-SZ survey was assembled from hundreds of individual observations of each of 19 contiguous fields that together covered the full survey area. For the SPT-E field, a 25° × 25° 150 GHz map was made by forming an inverse-variance weighted coadd of all the overlapping observations. A lensing map was constructed from this CMB map following the procedures described in detail by van Engelen et al. (2012), which we briefly outline below. Individual sources detected with signal-to-noise greater than 15 (in any of the 3 SPT frequencies) were masked, with the masked regions in the CMB map filled in using Wiener interpolation. These maps were filtered in Fourier space (using the flat-sky approximation) with an anisotropic filter that removed Fourier modes along the scan direction with ℓ_x < 500, and an isotropic filter that removed modes with ℓ > 4000. A flat-sky lensing map was generated from the filtered maps using a quadratic estimator technique (Okamoto & Hu 2003). This map was then projected into spherical coordinates for the cross-correlation.

The details of point-source masking, anisotropic noise, non-stationary noise, and spatially varying beams are sufficiently complex that calibrations and noise estimates were obtained from simulated data. Starting with 100 mock lensed skies and 100 mock unlensed skies, synthetic time-streams were generated, masked, and filtered identically to the data. By cross-correlating the 100 lensed output reconstructions with the known input lensing potential, a lensing transfer function was estimated. This lensing transfer function was applied to both the data and the output from the unlensed simulations, which provided 100 noise realisations to characterise the noise properties of the cross-correlations. We use multipoles 30 < ℓ < 2000 of the SPT lensing map, as including higher multipoles changes the overall signal-to-noise in the SPT lensing data only negligibly.

3.2.2 The Planck lensing maps

We use the 2015 CMB lensing map provided by the Planck collaboration (Planck Collaboration et al. 2015a). As in the SPT case, this was derived with the quadratic estimator by Okamoto & Hu (2003), which was extended to use the combined information from both CMB temperature and polarisation, thus reducing the noise with respect to the 2013 lensing map (Planck Collaboration et al. 2014c). The Planck lensing map is provided as a table of spherical harmonic coefficients up to ℓ_max = 2048. The reconstructed map covers the full sky, but the comparatively low sensitivity of Planck means that, averaged over the full sky, the maps are noise-dominated on most scales, as can be seen in the first panel of Fig. 1. However, we can see in Fig. 4 that Planck observed the DES-SV SPT-E area, on which we perform the current analysis, with significantly better than average accuracy, as the SPT-E area lies near the South Ecliptic Pole that was repeatedly scanned at every rotation of the Planck satellite. We therefore expect the typical lensing noise over this region to be significantly reduced with respect to its full-sky level; we confirm this in Section 5.2.3 below. For the current analysis we apply the Planck mask provided with the lensing map, shown in grey in Fig. 4, which masks the Galactic plane and resolved point sources.

Figure 4. Map of the hit counts in the 143 GHz Planck channel. The stripy structure reflects the scanning strategy of the Planck satellite, with nodes near the Ecliptic poles. We also show the Planck lensing mask (grey area) and the outline of the DES-SV SPT-E area (black), on which we perform the present analysis. The SPT-E region's noise properties are clearly atypical, as the field lies near the South Ecliptic Pole, where the Planck hit count is approximately five times higher than the full-sky average.
As this map does not include multipoles ℓ < 8, these modes are also removed from the modelling. The Planck collaboration also provided 100 realisations of the CMB lensing sky, either including only the cosmological signal based on the fiducial ΛCDM model, or together with the experimental Planck noise. We can therefore reconstruct 100 noise realisations by taking the difference between the two.

4 MOCKS

Here we describe the two approaches to building galaxy mocks that we use to estimate covariance matrices in our analysis, as described in Section 5.1.2. We use both methods to demonstrate robustness, but we use the N-body mocks for our nominal results.

4.1 Monte Carlo Gaussian mocks

The first method to build simulated DES-SV galaxy mocks is based on a Monte Carlo (MC) procedure. In this approach we generate Gaussian random realisations of the maps we use: galaxies, SPT lensing, and Planck lensing, all with their (average) noise properties. We produce 1000 random realisations using the synfast code from the Healpix package, using random seeds and based on the non-linear Planck best-fit fiducial theory described above, assuming a linear, constant bias b = 1.15, which we find to be consistent with our auto-correlation clustering measurements. In addition to the fiducial cosmological power spectrum, the mock CMB lensing maps also include fluctuations from the effective noise of SPT and Planck, so that we can generate a larger number of MC mocks than the number of realistic noise realisations available. We discuss in detail below, in Section 5.2.3, how the effective CMB lensing noise levels over the DES-SV area compare with the simplified average noise presented above in Section 2.

The random maps are generated in such a way as to include their correlations, as described e.g. by Boughn et al. (1998), Cabré et al. (2007), and Giannantonio et al. (2008). For the galaxy mocks, after generating random overdensity maps with the correct statistical properties, we transform them into number-count maps assuming the actual galaxy number density of the real data. At this point, we add the appropriate galaxy shot noise in each pixel by random sampling from a Poisson distribution of expected value equal to the pixel occupation number. We finally smooth all mock maps with the same Gaussian beam used for the data: ϑ_FWHM = 5.4 arcmin for the maps intended for the DES auto-correlation and the DES-SPT cross-correlation, and ϑ_FWHM = 10.8 arcmin for the DES-Planck cross-correlation. We expect the MC method to yield covariances that are similar to a purely analytic Gaussian estimate (see Eq. 15), except for the effect of the angular survey mask, which adds non-trivial correlations between angular multipoles. Such analytic and MC covariances are expected to be more accurate in their diagonal elements and on large (linear) scales, where the fields are close to Gaussian distributed (Cabré et al. 2007).
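A minimal sketch of one such correlated Gaussian mock, assuming the theory spectra are available as arrays (the Cholesky-style construction below is one standard way to draw two correlated fields; the mean count per pixel is a placeholder):

```python
import healpy as hp
import numpy as np

def make_mock(cl_kk_tot, cl_gg, cl_kg, nside=2048, nbar_pix=5.0,
              fwhm_arcmin=5.4, seed=0):
    """One correlated (kappa, galaxy) Gaussian mock with Poisson sampling.

    cl_kk_tot : convergence spectrum including the effective noise.
    cl_gg, cl_kg : galaxy auto- and cross-spectra (same lmax).
    """
    rng = np.random.default_rng(seed)
    lmax = len(cl_kg) - 1
    # Correlated alm's via a 2x2 Cholesky construction:
    alm_k = hp.synalm(cl_kk_tot, lmax=lmax)
    ratio = np.divide(cl_kg, cl_kk_tot, out=np.zeros_like(cl_kg),
                      where=cl_kk_tot > 0)
    cl_g_uncorr = np.clip(cl_gg - ratio * cl_kg, 0, None)
    alm_g = hp.almxfl(alm_k, ratio) + hp.synalm(cl_g_uncorr, lmax=lmax)
    kappa = hp.alm2map(alm_k, nside)
    delta_g = hp.alm2map(alm_g, nside)
    # Poisson-sampled counts, then back to an observed overdensity:
    counts = rng.poisson(np.clip(nbar_pix * (1.0 + delta_g), 0, None))
    delta_obs = counts / nbar_pix - 1.0
    fwhm = np.radians(fwhm_arcmin / 60.0)
    return hp.smoothing(kappa, fwhm=fwhm), hp.smoothing(delta_obs, fwhm=fwhm)
```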
4.2 N-body mocks

The second method to produce simulated DES-SV galaxy mocks uses N-body outputs from the MICE Grand Challenge N-body light-cone simulation (MICE-GC hereafter). The MICE simulations are based on the following fiducial cosmology: ω_b = 0.02156, ω_c = 0.10094, ω_ν = 0, h = 0.7, A_s = 2.44 × 10⁻⁹, n_s = 0.95, which has a lower matter content than currently preferred by the Planck results. For further details about this simulation see Fosalba et al. (2015b), Crocce et al. (2015), and Fosalba et al. (2015a). We generate CMB lensing mocks by using an all-sky lensing potential map of the MICE-GC. This map includes lenses at 0 < z < 100 and sources at the last-scattering surface (z ≈ 1100). We have checked that lenses at z > 100 give a negligible contribution. The lensing map has been pixelised in the Healpix scheme at N_side = 8192 (0.43 arcmin pixels) and downgraded to the resolution required by our analysis (N_side = 2048) for covariance estimation.

As for the mock galaxy number density map, we match as closely as possible the 'Benchmark' main galaxy sample. For this purpose we have used the dark-matter counts in the light-cone (i.e. unbiased galaxies) as a good approximation to the overall DES-SV galaxy population, given the low bias recovered for the main galaxies. We have then weighted the dark-matter counts with the redshift distribution and bias of our DES-SV galaxy sample in the range 0.2 < z_phot < 1.2; we assume b = 1.15. The resulting mock galaxy number density is projected onto a Healpix map of N_side = 8192, and downgraded in the same way as the lensing map described above. We add Poisson noise matching that of the SV galaxy sample to the N-body mocks, as described for the MC mocks above. From the full sky we produce 100 non-overlapping rotations of the SPT-E mask. This procedure yields 100 effectively independent realisations of the galaxy and CMB lensing fields, as described and validated in Appendix A. Onto each CMB lensing mock we then add one mock CMB lensing noise realisation, as provided by the SPT and Planck collaborations. Finally, we apply a Gaussian smoothing of ϑ_FWHM = 5.4 or 10.8 arcmin to all mock maps, as we do to the data.

5 RESULTS

We present here the results of the clustering analysis of the DES-SV galaxies and their correlations with the CMB lensing data. For robustness, we set up two independent analysis methods, measuring all quantities in real and harmonic space.

As both the SPT and Planck lensing data only contain meaningful information at multipoles ℓ < ℓ_max, we enforce a cutoff in our analysis by applying a Gaussian smoothing to all data maps and mocks. For the DES auto-correlation and the DES-SPT cross-correlation, we choose a beam size ϑ_FWHM = 5.4 arcmin, which corresponds to a multipole ℓ_FWHM ≈ π/ϑ_FWHM ≈ 2000. For the DES-Planck cross-correlation, given the lower resolution and sensitivity, we use instead ϑ_FWHM = 10.8 arcmin, corresponding to ℓ_FWHM ≈ 1000. We explore in Section 6.3 below the robustness of the results for different choices of ℓ_max. The theoretical power spectrum predictions are thus suppressed by a Gaussian beam B_ℓ² = e^{−ℓ(ℓ+1)σ²}, where σ = ϑ_FWHM/√(8 ln 2); we have checked that this beam indeed suppresses the signal by >85% at ℓ = ℓ_FWHM. As we do not enforce on the maps a sharp cut-off at ℓ < ℓ_max, a small fraction of heavily suppressed power from higher multipoles is retained in the maps and, consistently, in the theoretical predictions. A sharp cutoff at ℓ = ℓ_FWHM would also be a reasonable choice for the harmonic-space analysis, and we have indeed confirmed that our results remain consistent with this choice; the real-space analysis, on the other hand, is ill-behaved for sharp cutoffs in ℓ, so that the Gaussian smoothing is a better strategy for maintaining consistency between the two methods. Finally, we note that we apply the Gaussian smoothing on masked data consistently by smoothing both the masked map and the mask itself, and then dividing the smoothed masked map by the smoothed mask. This method removes the effect of the mask from the smoothing procedure, and we have tested that it is equivalent to first applying the smoothing on full-sky mock data and masking afterwards.
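The mask-corrected smoothing just described takes only a few lines with healpy; a minimal sketch, with the low-coverage threshold an arbitrary illustrative choice:

```python
import healpy as hp
import numpy as np

def smooth_masked(m, mask, fwhm_arcmin):
    """Smooth (map*mask) and the mask with the same beam, then divide."""
    fwhm = np.radians(fwhm_arcmin / 60.0)
    sm_map = hp.smoothing(m * mask, fwhm=fwhm)
    sm_mask = hp.smoothing(mask.astype(float), fwhm=fwhm)
    out = np.zeros_like(sm_map)
    good = sm_mask > 1e-2            # avoid dividing by ~zero coverage
    out[good] = sm_map[good] / sm_mask[good]
    return out
```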
5.1 Real space

We begin with the real-space analysis, where we measure the projected two-point correlation functions w(ϑ) of the pixellated maps.

5.1.1 Correlation function estimators

Given the observed number of objects n_i in each pixel i = 1, ..., N_pix, and given a binary coverage mask f_i = {0, 1}, we first estimate the average number density per pixel, n̄. We can then use, for the correlation between two galaxy density maps a, b, the estimator

\[ \hat w^{ab}(\vartheta) = \frac{1}{N^{ab}_\vartheta} \sum_{i,j=1}^{N_{\mathrm{pix}}} f^a_i\, f^b_j\, \frac{\left(n^a_i-\bar n^a\right)\left(n^b_j-\bar n^b\right)}{\bar n^a\, \bar n^b}\, \Theta_{ij}\,, \quad (17) \]

where Θ_ij is 1 if the pixel pair i, j is at angular separation ϑ within the bin size ∆ϑ, and 0 otherwise, and the number of pixel pairs at angular separation ϑ is N^{ab}_ϑ = Σ_{i,j=1}^{N_pix} f^a_i f^b_j Θ_ij. The CMB lensing maps κ_i have zero mean, so that the correlation between a galaxy density map and a convergence map can be estimated as

\[ \hat w^{\kappa g}(\vartheta) = \frac{1}{N^{\kappa g}_\vartheta} \sum_{i,j=1}^{N_{\mathrm{pix}}} f^g_i\, \frac{\left(n^g_i-\bar n^g\right)}{\bar n^g}\, f^\kappa_j\, \kappa_j\, \Theta_{ij}\,, \quad (18) \]

where the coverages of galaxies and CMB lensing, f^g_i and f^κ_j, are both binary masks defining the sky area used.

We use this estimator to measure the correlations in p = 12 angular bins, equally spaced in logarithm between 0.04 deg (= 2.4 arcmin) and 5 deg. We have tested with mock data and analytical covariances that this binning optimally recovers the maximum possible information available for maps smoothed at ϑ_FWHM = 5.4 arcmin, as the addition of extra bins does not increase the signal-to-noise any further. Notice that the smallest angle we consider is ϑ_min = 2.4 arcmin < ϑ_FWHM, as the imposed cut-off on the maps is Gaussian and not top-hat. Due to the Gaussian smoothing, the shot-noise contribution to the galaxy auto-correlation functions affects angular scales at separations ϑ > 0 deg: we describe in detail in Appendix B how we model the shot-noise component, which we subtract from all measured auto-correlation functions.
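A brute-force sketch of the cross-correlation estimator of Eq. (18) is given below. It builds the full pair matrix, so it is O(N_pix²) in memory and time and only suitable for a small patch, not a full N_side = 2048 map (production analyses use optimised pair-counting codes); `delta_g` is the masked overdensity (n − n̄)/n̄ per pixel, and all inputs are assumptions for illustration:

```python
import healpy as hp
import numpy as np

def wtheta_kg(delta_g, fg, kappa, fk, nside, nbins=12,
              theta_min=0.04, theta_max=5.0):
    """Binned w^{kappa g}(theta), log-spaced bins in degrees (cf. Eq. 18)."""
    edges = np.radians(np.logspace(np.log10(theta_min),
                                   np.log10(theta_max), nbins + 1))
    ig, ik = np.where(fg > 0)[0], np.where(fk > 0)[0]
    vg = np.array(hp.pix2vec(nside, ig))   # (3, n_g) unit vectors
    vk = np.array(hp.pix2vec(nside, ik))
    ang = np.arccos(np.clip(np.tensordot(vg, vk, axes=(0, 0)), -1.0, 1.0))
    prod = np.outer(delta_g[ig], kappa[ik])
    w = np.zeros(nbins)
    for b in range(nbins):
        sel = (ang >= edges[b]) & (ang < edges[b + 1])
        npairs = sel.sum()
        w[b] = prod[sel].sum() / max(npairs, 1)
    return w
```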
5.1.2 Covariance matrix

We estimate the covariances with several different methods: MC realisations, N-body mocks, an analytic method, and jack-knife techniques. We describe these methods and demonstrate their consistency in Appendix C. Differently from the MC and analytic covariances, the N-body method fully reproduces the anisotropic nature of the CMB lensing noise, and it also includes the non-Gaussian contributions to the covariance matrix produced by non-linear clustering, while being more stable than the JK estimator. We thus deem the N-body method to be our most realistic noise estimator, and we use it for our main results.

We estimate the covariance matrix from the mocks as follows. We first measure the correlation functions of the mock maps, using the same estimator and keeping the same angular binning as done for the data. We use N = 1000 MC and 100 N-body realisations, and the covariance matrix is then estimated from the scatter of the mock correlations in each angular bin i, with ŵ^{ab}_{α,i} ≡ ŵ^{ab}_α(ϑ_i), where α labels a given realisation:

\[ \hat C^{ab}_{ij} = \frac{1}{N} \sum_{\alpha=1}^{N} \left( \hat w^{ab}_{\alpha,i} - \bar w^{ab}_i \right) \left( \hat w^{ab}_{\alpha,j} - \bar w^{ab}_j \right)\,, \quad (19) \]

where w̄^{ab}_i is the mean correlation function over all realisations in bin i. Notice that, for all covariance estimators based on multiple realisations N, the unbiased estimator for the inverse covariance matrix is not simply (Ĉ^{ab}_{ij})⁻¹, but (Hartlap et al. 2007)

\[ \left( C^{ab}_{ij} \right)^{-1} = \beta\, \left( \hat C^{ab}_{ij} \right)^{-1}\,, \quad (20) \]

where β = (N − p − 2)/(N − 1) and p is the number of angular bins; β tends to one in the limit of large N. We can also define the correlation matrices as

\[ R^{ab}_{ij} \equiv \frac{C^{ab}_{ij}}{\sqrt{C^{ab}_{ii}\, C^{ab}_{jj}}}\,, \quad (21) \]

which we show below in this section and in Appendix C. Note that even a covariance matrix that is diagonal in harmonic space corresponds to a real-space correlation matrix with significant off-diagonal components.

If we assume the likelihood distribution to be Gaussian, the above estimate of the inverse covariance matrix (Eq. 20) can then be used to calculate the likelihood distribution of some parameters x given the data ŵ^{ab}_i as

\[ \mathcal L(\mathbf x) = (2\pi)^{-p/2} \left( \det \hat C^{ab}_{ij} \right)^{-1/2} \exp\!\left[ -\frac{1}{2} \sum_{i,j=1}^{p} \left( C^{ab}_{ij} \right)^{-1} \left( \hat w^{ab}_i - w^{ab}_i(\mathbf x) \right) \left( \hat w^{ab}_j - w^{ab}_j(\mathbf x) \right) \right]\,, \quad (22) \]

where w^{ab}_i(x) are the binned theoretical correlation functions predicted from the parameters x. As a consequence of the central limit theorem, the Gaussian likelihood is a good approximation on all but the largest angular scales, whose contribution to our measurement is negligible. The effect of the uncertainty on the data covariance itself on the final parameter variance can be estimated (Taylor et al. 2013; Dodelson & Schneider 2013; Percival et al. 2014). We have tested that this contribution is small throughout this work; the central values of the fit parameters are unchanged, while the error bars are affected at the <10% level.

In the following, we use a theory template based on the fiducial (fid) Planck cosmology, and we fit its amplitude. We therefore have for the auto- and cross-correlations:

\[ w^{gg}_i = b^2\, w^{gg}_{i,\mathrm{fid}}\,, \qquad w^{\kappa g}_i = A\, w^{\kappa g}_{i,\mathrm{fid}}\,. \quad (23) \]

The amplitude of the auto-correlations is given by the galaxy bias b². The amplitude of the cross-correlations A depends on both the galaxy bias and the actual amplitude of the CMB lensing signal A_Lens, so that A = b A_Lens. If the underlying true cosmology matches our fiducial ΛCDM model, so that A_Lens = 1, the expectation value for the amplitude should be equal to the galaxy bias from the auto-correlation, A = b, if the same scales are considered; if instead the scales considered do not match precisely, we expect this to hold only approximately. A and b are the parameters that we fit from our measurements on data and mocks below by calculating the likelihood (Eq. 22) over a grid of parameter values.
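Eqs (19) and (20) amount to a few lines of numpy once the mock measurements are stacked into an array; a minimal sketch, assuming `w_mocks` holds one binned correlation function per row:

```python
import numpy as np

def mock_covariance(w_mocks):
    """Covariance from mocks (Eq. 19) and Hartlap-corrected inverse (Eq. 20).

    w_mocks : (N, p) array, one binned correlation function per realisation.
    """
    N, p = w_mocks.shape
    dev = w_mocks - w_mocks.mean(axis=0)
    cov = dev.T @ dev / N                  # Eq. (19)
    beta = (N - p - 2) / (N - 1)           # Hartlap et al. (2007)
    inv_cov = beta * np.linalg.inv(cov)    # Eq. (20)
    return cov, inv_cov
```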
5.1.3 Real-space results: full sample

We show in Fig. 5 the measured two-point correlation functions in real space of the DES-SV main galaxies in the SPT-E field. The three panels show, from top to bottom, the galaxy auto-correlation function, and the cross-correlation functions with SPT and Planck CMB lensing. We compare the measurements with the predictions from our fiducial cosmology, where we use the non-linear matter power spectrum from the Halofit formalism (Smith et al. 2003; Takahashi et al. 2012). We fit the amplitudes of the auto- and cross-correlations given this model, binned consistently with the data, with simple one-parameter likelihood fits.

Figure 5 (caption, continued). The thick lines show the theoretical expectations from our Planck fiducial cosmology, rescaled by the best-fit bias b for the auto-correlation (dashed) and the best-fit amplitude A = bA_Lens for the cross-correlation functions (solid). The thin dotted lines refer to linear theory; the scale below which linear and non-linear theories differ by >20%, ϑ_NL, is marked in the first panel. The dark and light grey bands represent the 1 and 2σ uncertainties on the best fit respectively. The error bars are from the N-body covariance, and they are highly correlated. The correlation shapes for the DES-SPT and DES-Planck correlations differ because the Planck map is smoothed on larger scales.

Table 1. Summary of the results for the main galaxy sample in real (left) and harmonic (right) space: best-fit linear bias b and correlation amplitudes A = bA_Lens for the three correlation functions, using the N-body covariance estimator. The results are consistent with each other and with the theoretical expectations for our fiducial model, but the cross-correlation amplitude is lower than the auto-correlation by 2-3σ. The recovered χ² per degree of freedom indicates that the models and covariance estimators are in all cases appropriate for the data.

Full sample, 0.2 < z_phot < 1.2

                               Real space                    Harmonic space
Correlation   Covariance   b ± σ_b       S/N   χ²/d.o.f.   b ± σ_b       S/N   χ²/d.o.f.
Gal-Gal       N-body       1.22 ± 0.03   41    3.8 / 8     1.22 ± 0.04   34    2.7 / 3

Correlation   Covariance   A ± σ_A       S/N   χ²/d.o.f.   A ± σ_A       S/N   χ²/d.o.f.
Gal-SPT       N-body       0.84 ± 0.13   6.3   8.4 / 11    0.84 ± 0.15   5.6   8.7 / 19
Gal-Planck    N-body       0.78 ± 0.21   3.7   11 / 10     0.81 ± 0.20   3.8   7.7 / 9

Figure 6. Correlation matrices for the three cases we consider, estimated with the N-body method. The matrices refer to galaxy-galaxy, galaxy-SPT, and galaxy-Planck lensing respectively. The angular range is from 2.4 arcmin to 5 deg, as in Fig. 5. We see that the galaxy-CMB lensing correlation matrix is more diagonal than the galaxy-galaxy case, as the auto-correlation theory is more non-linear, and thus more non-Gaussian and less diagonal. Furthermore, all matrices become less diagonal in the first few angular bins due to the Gaussian smoothing applied to the maps, which effectively blurs information on scales ϑ < ϑ_FWHM = 5.4 arcmin (DES-SPT) and 10.8 arcmin (DES-Planck).

In the case of the auto-correlation we determine the galaxy bias b, assumed constant and linear. Given the comparatively large effect of non-linearities compared with the statistical error bars, and in order to obtain a physically meaningful value for the linear galaxy bias, we restrict the fit to the bins at angular scales ϑ > ϑ_NL, where ϑ_NL is defined as the scale where the non-linear auto-correlation function diverges from the linear theory by >20%. In the case of the cross-correlations, our main purpose is instead to extract as much signal as possible, and the theoretical uncertainties due to non-linearities are much smaller than the statistical errors. For these reasons, we fit in this case the overall amplitude A to the galaxy-CMB lensing cross-correlation functions at all scales. For the DES-Planck correlation, we exclude the first angular bin, as it is ∼100% correlated with the second bin due to the larger smoothing applied.

We can see in Fig. 5 that the galaxy auto-correlation is in agreement with our fiducial ΛCDM model with a linear bias b = 1.22 ± 0.03 (N-body covariance). The physically crude approximation of an effective average bias across the full redshift range is actually able to correctly model the observed auto-correlation of the full galaxy sample; we study in our tomographic analysis below the actual redshift evolution of the galaxy bias. The CMB lensing cross-correlations prefer a lower amplitude: A = 0.84 ± 0.13 and A = 0.78 ± 0.21 using the SPT and Planck maps, respectively. These results are quoted for our most reliable covariance matrix (N-body), which we show in Fig. 6 for the three correlation functions considered;
We estimate the significance of the detections by evaluating the best fits of the linear bias b ± σ_b and amplitude A ± σ_A for the auto- and cross-correlations obtained with a simple one-parameter χ² fit from the measured correlation functions. We show a summary of the results in the left section of Table 1, from which we can already anticipate that the real- and harmonic-space results presented in Section 5.2.3 below yield consistent results in all cases. For both SPT and Planck, the cross-correlation amplitude is lower than the auto-correlation by 2 − 3σ. We later discuss possible explanations for this result: in Section 6 we discuss systematic uncertainties, and in Section 7 we discuss possible cosmological interpretations. If we define the final significance of the detection to be A/σ_A, we find it to be ∼ 6σ for the DES-SPT and ∼ 4σ for the DES-Planck cases respectively. These numbers should be compared with the (ideal) theoretical signal-to-noise levels expected from Eq. (15), which are ∼ 8 and ∼ 5 respectively. Hence our results are consistent with the expectations; the lower significance recovered is mainly due to the actual best fit being lower than expected in the fiducial model, and to the more realistic N-body covariance matrix we use. Finally, we see that our best fits are in most cases good fits, as the χ² per degree of freedom is generally close to (or below) unity, which confirms that our estimate of the covariance is realistic given the scatter observed in the data.

Redshift tomography in real space

Given the significance of the recovered detection in the DES-SPT case, we then study the evolution of the correlations as a function of redshift. We measure the DES-SPT cross-correlations in each of the photo-z bins, and we present the results in Fig. 7. The covariances are estimated with the most reliable N-body method only, constructed for each redshift bin from its photo-z redshift distribution, and assuming in each case a constant bias equal to the best fit to that bin's auto-correlation (we cross-checked that analytic covariances yield consistent results on the scales we consider). From each bin's auto-correlation we fit the best-fit bias b, considering only quasi-linear scales ϑ > ϑ_NL, where non-linearities are less than 20% of the total auto-correlation function; we see that ϑ_NL decreases with redshift as expected, allowing us to consider all data points in the highest-redshift bin. We fit the cross-correlation amplitude A = bA_Lens from the DES-SPT lensing cross-correlations, using in this case all the available scales, as discussed above for the full sample. We can see that the auto-correlation observations are in agreement with our fiducial model and a set of constant linear bias parameters that increase with redshift.

Figure 7. Measured auto- (left) and cross-correlation functions (right) of DES-SV main galaxies as a function of photometric redshift. The panels refer to thin photo-z bins, from low to high redshift. The error bars are derived from the N-body covariance matrix. The lines show the fiducial Planck cosmology rescaled by the best-fit linear bias or amplitude obtained from the auto- (dashed) and from the cross-correlations (solid); for each case, the linear theory is shown with thin dotted lines. The best-fit bias values and their 1σ errors are also shown in each panel; the coloured bands represent 1 and 2σ uncertainties on the best fits. When fitting the auto-correlation bias, the points at ϑ < ϑ_NL have been excluded from the fit, consistently with Crocce et al. (2016), as they lie in the non-linear regime where the non-linear corrections are > 20%. All points are included in the cross-correlation fits. The auto-correlation results are presented and discussed in more detail by Crocce et al. (2016), including a further discussion on the anomalous behaviour of the lowest-redshift bin at small angular scales.
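For a single-amplitude template and Gaussian errors, the one-parameter χ² fits quoted above have a closed-form solution, so no grid search is strictly needed; a minimal sketch, again with hypothetical input names:

```python
import numpy as np

def fit_amplitude(w_hat, w_fid, C_inv):
    """Closed-form best fit for a one-parameter template w(A) = A * w_fid."""
    F = w_fid @ C_inv @ w_fid            # Fisher information on A
    A_hat = (w_fid @ C_inv @ w_hat) / F  # maximum-likelihood amplitude
    sigma_A = F ** -0.5                  # Gaussian 1-sigma error
    r = w_hat - A_hat * w_fid
    chi2 = r @ C_inv @ r                 # d.o.f. = len(w_hat) - 1
    return A_hat, sigma_A, A_hat / sigma_A, chi2
```

The same function applies to the bias fit after replacing the template by b² [w^gg]_fid and fitting b² as the linear parameter.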
The bias values we obtain are fully consistent with the main results by Crocce et al. (2016), thus validating both analyses. In the cross-correlation case, we also find an agreement with the same model, although the uncertainties and the scatter are larger than what we find for the full sample, especially at low redshift. Both auto- and cross-correlations agree less well with the expectations in the first bin at 0.2 < z_phot < 0.4; see Crocce et al. (2016) for a more detailed discussion of the possible residual systematics in this bin. We summarise in Table 2 the best-fit biases and amplitudes of the cross-correlations with their errors, assumed Gaussian. We see that we do recover a significant correlation (at > 2σ) in all bins and > 3σ in all but the lowest redshift bin; however, the best-fit cross-correlation amplitude recovered fluctuates significantly with respect to the expectation, and with respect to the best-fit bias. We see that the trend of obtaining A(z) < b(z) is recovered in most redshift bins, confirming what we find for the full sample. We also show that the reduced χ² values associated with the best-fit bias and amplitudes are close to 1 in most cases, indicating that our estimate of the covariances is realistic, and that our best-fit model is consistent with the observations. The only notable exceptions are the galaxy auto-correlations in the first and last redshift bins. We discuss below in Section 7 the cosmological implications of these results.

Table 2.
Redshift tomography
                                                Real space                     Harmonic space
Correlation   Covariance   Photo-z bin          b ± σ_b       S/N   χ²/d.o.f.  b ± σ_b       S/N   χ²/d.o.f.
Gal-Gal       N-body       0.2 < z_phot < 0.4   1.03 ± 0.06   17    20 / 7     1.14 ± 0.05   22    1.4 / 1
                           0.4 < z_phot < 0.6   1.28 ± 0.04   31    2.2 / 8    1.29 ± 0.05   28    0.6 / 3
                           0.6 < z_phot < 0.8   1.32 ± 0.03   46    6.9 / 9    1.29 ± 0.03   40    2.7 / 5
                           0.8 < z_phot < 1.0   1.57 ± 0.03   59    4.3 / 10   1.58 ± 0.03   54    2.5 / 7
                           1.0 < z_phot < 1.2   1.95 ± 0.04   50    29 / 11    1.98 ± 0.05   44    26 / 9
Correlation   Covariance   Photo-z bin          A ± σ_A       S/N   χ²/d.o.f.  A ± σ_A       S/N   χ²/d.o.f.
Gal-SPT       N-body       0.2 < z_phot < 0.4   0.41 ± 0. [...]

Harmonic space analysis

While measurements of the angular correlation function are formally fully equivalent to the information contained in the power spectrum, there are fundamental differences that warrant a detailed comparison. The harmonic space has some well-known advantages over real-space correlation estimators. The covariance matrix, for a given survey mask, is more diagonal than in real space, and measurements of the power spectrum in multipole bins are significantly less correlated, so that it is more straightforward to isolate clustering contributions at different physical scales, and to apply band-pass filters if required. Nonetheless, harmonic-space estimators need efficient ways to deconvolve the mask, which is more difficult than in configuration space, thus making the analysis more expensive.
Different power spectrum estimators exist: computationally expensive optimal estimators that extract all information contained in the data (Tegmark 1997; Bond et al. 1998), and pseudo-C_ℓ estimators that are sub-optimal, but have a much lower computational complexity (e.g., Hivon et al. 2002; Chon et al. 2004).

Power spectrum estimators

In the following, we repeat our cross-correlation analysis in harmonic space using two different estimators of the angular power spectra C_ℓ: the pseudo-C_ℓ estimator PolSpice (Szapudi et al. 2001; Chon et al. 2004; Fosalba & Szapudi 2004) for our main results of Sections 5.2.2, 5.2.3, 5.2.4, and as a cross-check, a quadratic maximum likelihood estimator described in Section 5.2.5. Masks and data remain the same as for the real-space analysis presented above. We measure here the power spectra C_ℓ with the nearly-optimal and unbiased pseudo-C_ℓ estimator implemented in the PolSpice code. This public code measures the two-point auto- (or cross-) correlation functions w(ϑ) and the angular auto- (or cross-) power spectra C_ℓ from one (or two) sky map(s). It is based on the fast spherical harmonic transforms allowed by isolatitude pixelisations such as Healpix; for N_pix pixels over the whole sky, and a C_ℓ computed up to ℓ = ℓ_max, the PolSpice complexity scales like N_pix^{1/2} ℓ_max² instead of N_pix ℓ_max². The algorithm corrects for the effects of the masks and can deal with inhomogeneous weights given to the map pixels. In detail, PolSpice computes the (pseudo-)C_ℓ of the map and weights/masks, calculates their (fast) Legendre transforms, i.e., the corresponding correlation functions, computes their ratio, applies apodisation if needed, and transforms back to harmonic space, where pixel deconvolution is simply applied to get the final C_ℓ.
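The core of this algorithm is compact. Below is a minimal sketch of the mask-correction idea under strong simplifications (scalar map, no apodisation, no pixel-window deconvolution); it is not the full PolSpice code. The inputs `pcl_map` and `pcl_mask` are assumed to be the pseudo-C_ℓ of the masked map and of the mask, as could be produced e.g. by healpy's anafast.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval
from scipy.special import eval_legendre

def cl_to_w(cl, x):
    """Legendre transform: w(x) = sum_l (2l+1)/(4 pi) C_l P_l(x), x = cos(theta)."""
    coeff = (2.0 * np.arange(cl.size) + 1.0) / (4.0 * np.pi) * cl
    return legval(x, coeff)

def w_to_cl(w, x, wq, lmax):
    """Inverse transform C_l = 2 pi * Int_{-1}^{1} w(x) P_l(x) dx by quadrature."""
    return np.array([2.0 * np.pi * np.sum(wq * w * eval_legendre(l, x))
                     for l in range(lmax + 1)])

def spice_like(pcl_map, pcl_mask, lmax):
    """Divide out the mask correlation function in real space, transform back."""
    x, wq = leggauss(2 * lmax + 2)     # Gauss-Legendre nodes and weights
    w_corrected = cl_to_w(pcl_map, x) / cl_to_w(pcl_mask, x)
    return w_to_cl(w_corrected, x, wq, lmax)
```

The design rests on the fact that, for a mask uncorrelated with the signal, the correlation function of the masked map is the product of the true correlation function and that of the mask, so the mask convolution in harmonic space becomes a simple division in real space.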
Covariance matrix

Similarly to the real space case presented in Section 5.1.2, we compute covariance matrices in harmonic space. This involves computing the covariance between different C_ℓ multipoles, by formally replacing the angular correlation function by the power spectrum in Eq. (19) above. We first estimate the covariance with the MC method. From our analysis we find that the covariance matrix of the galaxy-CMB lensing cross-correlation is approximately diagonal up to ℓ_max = 2000, for a multipole bin width Δℓ = 98. We only sample scales down to ℓ_min = 30 (i.e., ϑ < 6 deg), as lower multipoles are poorly constrained by DES-SV data over the SPT-E area. This yields 20 multipole bins in the range used. We then estimate the covariance with the N-body method described in Section 4.2, which provides us with 100 independent, realistic realisations of the galaxy and CMB lensing maps in the SPT-E field. We derive the correlation matrices using the normalisation of Eq. (21). We show in Fig. 8 the resulting binned correlation matrices for galaxy-galaxy, galaxy-SPT and galaxy-Planck lensing convergence. In particular, the covariances show band-powers of ℓ(ℓ+1)C_ℓ^{gg} and ℓ²(ℓ+1)C_ℓ^{gκ}. We find that the galaxy-CMB lensing and CMB lensing-CMB lensing covariances stay block diagonal even when non-linear growth is taken into account using N-body simulations. The galaxy-galaxy covariance, however, displays large off-diagonal elements for ℓ ≳ 200 (depending on the z bin) due to non-linear mode coupling that induces a non-Gaussian contribution to the covariance, sourced by the gravitational matter trispectrum. We show in Appendix C a comparison of the different covariance matrix estimators in harmonic space, where we demonstrate consistency of the results.

Harmonic-space results: full sample

We first show in Fig. 9 the CMB lensing auto-spectrum for SPT and Planck. As we showed from the covariance analysis in Section 5.2.2, we can bin the data in multipole bins of width Δℓ ≃ 100, in order to get uncorrelated bandpower measurements. For plotting purposes, we use broader (uncorrelated) bins with Δℓ ≃ 200, in order to get smaller errors per bandpower. As the expected true spectrum is smooth, this step is not expected to destroy any information. In Fig. 9 we can see that for both surveys, the convergence maps are noise-dominated at all scales, as the auto-spectrum is always larger than the fiducial cosmological signal shown in magenta; SPT has higher sensitivity at small scales ℓ > 300, while Planck has an advantage on the largest angular scales. In the case of SPT, we see that the convergence power spectrum of the data (black points) is well characterised by the mean SPT noise over the 100 anisotropic noise realisations (dashed black line). The small (∼ 10%) errors of the lensing auto-power are due to the low level of scatter among lensing noise realisations. In the case of Planck, we find that the convergence spectrum over the DES-SV area (solid blue square points) is ∼ 25% lower than the spectrum over the full Planck lensing mask (empty blue squares); this is confirmed by the mean of the 100 mock Planck lensing realisations (dashed and dotted lines for the DES-SV area and full CMB lensing mask respectively). We can understand this given the especially convenient location of the DES-SV footprint, shown in Fig. 4, which justifies the atypical noise properties over this area. For our theoretical and MC covariances, we use the convergence noise levels observed from the mocks over the DES-SV area, as these are the most realistic noise estimations.

Figure 10. Auto- and cross-correlations between our DES main galaxy sample and the CMB lensing convergence, in harmonic space. The first panel shows the galaxy auto-spectrum, while the second and third panels refer to the galaxy-SPT and galaxy-Planck CMB lensing cross-spectra respectively. The lines show our fiducial cosmology rescaled by the best-fitting constant bias and amplitude to the auto-spectrum (dashed) and to the cross-spectra (solid lines). Dotted lines refer to linear theory. The arrow in the first panel indicates the multipole ℓ_NL after which the full non-linear auto-correlation theory exceeds linear theory by > 50%, which is our cutoff in the galaxy bias fit, while the arrow in the bottom panel indicates our cutoff scale for the DES-Planck correlation at ℓ < 1000. The amplitude of the cross-correlation is fit using 30 < ℓ < 2000 for DES-SPT and 30 < ℓ < 1000 for DES-Planck. The error bars are the diagonal elements of the N-body covariance. The different shape of the DES-Planck correlation is due to the stronger smoothing we apply.

We then show in Fig. 10 the auto- and cross-correlations between the DES-SV main galaxy sample and the SPT and Planck CMB lensing convergence, with the diagonal errors from the N-body covariance. Using the measured spectra and the N-body covariance matrices, we can estimate the best-fit amplitudes and corresponding detection significances for the cross-correlations.
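A simple bandpower compression of the kind described above could look as follows; a sketch assuming a dense C_ℓ array and uniform weights within each bin, which may differ from the actual pipeline choices.

```python
import numpy as np

def bin_cl(cl, lmin=30, lmax=2000, delta_l=98):
    """Average a dense C_l array into flat bandpowers of width delta_l;
    returns bin-centre multipoles and bandpower values."""
    edges = np.arange(lmin, lmax + 1, delta_l)
    lc, bp = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ells = np.arange(lo, hi)
        lc.append(ells.mean())
        bp.append(cl[ells].mean())
    return np.array(lc), np.array(bp)
```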
As in the real-space analysis of Section 5.1, we apply a cut to the non-linear scales when fitting the galaxy bias b from the auto-spectrum; in this case, the scale of non-linearity ℓ_NL is defined so that the non-linear theory exceeds the linear model by > 50% at ℓ > ℓ_NL; this scale is marked with an arrow in Fig. 10. This threshold is less stringent than the 20% we use in the real-space analysis above; this is because, even with linear ℓ binning and logarithmic ϑ binning, the harmonic space analysis is more sensitive to non-linear scales than the real-space measurement, where information from all scales is mixed. Applying a 20% threshold in harmonic space would leave only one data point in the galaxy auto-spectrum, while using a 50% criterion in real space would lead to the inclusion of all data points. For the same motivations as above, we do not apply such a scale cutoff (beyond our Gaussian smoothing of the maps) when fitting the cross-correlation amplitudes A, so that we do not expect a perfect match between the two amplitudes. The upper panel of Fig. 10 shows that the galaxy auto-power is best fit by our fiducial cosmology with linear galaxy bias b = 1.22 ± 0.04, up to ℓ_NL (dashed line) and assuming the N-body covariance. From the central panel, we see that the cross-correlation with SPT is best fit by a lower amplitude value, A = 0.84 ± 0.15 (solid line), which is ∼ 2σ smaller. Likewise, the bottom panel shows that the cross-correlation with Planck is also lower than expected from the galaxy auto-spectrum: A = 0.81 ± 0.20. We summarise our harmonic-space results in detail in the right section of Table 1, where we show the results with the N-body covariance matrix. The best-fit linear galaxy bias from the auto-spectrum is typically ∼ 2σ higher than the best-fit amplitude of the galaxy-CMB lensing cross-correlations, in agreement with what we find in real space. The cross-correlation significance of a detection is ∼ 6σ for SPT and ∼ 4σ for Planck; these numbers are in agreement with the real-space analysis results. We note that we do not expect a perfect agreement between the two analyses as they involve different estimators that weight physical scales in a different way; however, thanks to the Gaussian smoothing we apply to data and mocks, which effectively makes both estimators band-limited, we do manage to recover a good agreement. We test in Section 5.3 below the consistency between the real- and harmonic-space estimators, and their degree of correlation. We can see from the χ² per degree of freedom that our best fits are good fits.
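The non-linear cutoff described above reduces to a simple threshold search; a minimal sketch, assuming linear and non-linear theory curves sampled on the same multipole grid (the real-space cut ϑ_NL is the analogous search with a 0.2 threshold):

```python
import numpy as np

def ell_nl(ells, cl_nonlin, cl_lin, threshold=0.5):
    """First multipole where non-linear theory exceeds linear theory by
    more than `threshold` (0.5 here; 0.2 for the real-space theta_NL cut)."""
    excess = cl_nonlin / cl_lin - 1.0
    above = ells[excess > threshold]
    return above.min() if above.size else None
```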
Finally, we point to Section 6.3 for an analysis of the stability of the results with respect to different choices in the multipole range considered.

Redshift tomography in harmonic space

In analogy to the real space results discussed above, we then measure the redshift tomography of the auto- and cross-spectra. The left column of Fig. 11 shows the auto-power of DES galaxies for the five photo-z bins we consider. The solid lines show the best-fit linear galaxy bias to the measured spectra, and the error bars are from the N-body estimator. In each case, we only include in the bias fit data points that are to the left of the non-linear scale ℓ_NL, marked with an arrow.

Figure 11. Harmonic space redshift tomography of the auto- (left column) and cross- (right column) correlations. The panels from top to bottom describe the results of photo-z bins of increasing redshift. The solid lines show our fiducial cosmology rescaled by the best-fit linear bias b up to weakly non-linear scales ℓ_NL marked with an arrow (for the auto-spectra) or the cross-correlation amplitudes A = bA_Lens over the whole range of scales (cross-spectra); the best-fit biases and amplitudes are reported in the captions with their 1σ errors. The dotted lines are linear theory predictions, and the error bars are from the full N-body covariance estimator. In agreement with the real-space analysis and with Crocce et al. (2016), the auto-correlation in the lowest (and less significantly in the highest) redshift bins does not match the theoretical expectation on non-linear scales, which are discarded from our bias fits anyway.

We find that the recovered galaxy bias grows smoothly with redshift, as expected, in agreement with Crocce et al. (2016). We note that neighbouring photo-z bins are significantly correlated, due to photo-z errors. In agreement with Crocce et al. (2016) and consistently with the real-space analysis, we find that for the lowest photo-z bin (and more mildly for the highest one), the auto-correlation on non-linear scales disagrees with the theoretical expectations. While the harmonic-space analysis highlights this mismatch more significantly than what we see in Fig. 7 in real space, we remind the reader that these scales are not used for finding the best-fit bias. Crocce et al. (2016) attribute these discrepancies to possible non-linear bias in the lowest-redshift bin, and to systematic contaminations related to inaccurate photo-z determination of blue galaxies in the absence of u-band photometry. In the right column of Fig. 11 we show the corresponding cross-correlations with SPT lensing. Although the signal is clearly more noisy than the auto-spectra, we do find a 2-4σ detection in every bin, and we also see that the cross-correlation amplitude grows with redshift, as expected, although typically we find A(z) < b(z), confirming the general trend observed in the analysis of the full galaxy sample. Note also that the scatter seen in the C_ℓs is larger than that of the corresponding two-point correlation functions shown in Fig. 7 above, since band-power measurements are much less correlated than the real-space angular bins used. All results are summarised in detail in Table 2, where they can be directly compared with their real-space counterparts. From this table, we can see that there is a good agreement between real- and harmonic-space analyses. One point that stands out as marginally inconsistent (at the > 2σ level) is the cross-correlation amplitude in the third photo-z bin (0.6 < z_phot < 0.8), which is significantly lower in harmonic space than in real space. By inspecting the data in Fig. 11 for the third photo-z bin (green), it is clear that this anomaly is driven by the low correlation observed in the first multipole bin at ℓ < 300, also seen in real space in the form of an oscillating cross-correlation function at scales ϑ > 0.5 deg; we found that by discarding this point, we obtain an excellent agreement also in this bin, but we decide not to apply such a cut in our final result, to avoid any ad hoc manipulation of the data.

Optimal quadratic estimator

As a cross-check of the harmonic space results with an independent pipeline, we now present the analysis using the optimal estimator introduced by Tegmark (1997) in an implementation described in Leistedt et al. (2013). In short, it computes the power spectrum estimates from a quadratic combination of the data vector, normalised by the Fisher matrix.
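The pixel-space algebra behind such an estimator is compact; the following is a minimal dense-matrix sketch with hypothetical inputs, not the optimised implementation of Leistedt et al. (2013). Here `P` holds the response matrices dC/dC_b of the pixel covariance to each bandpower b.

```python
import numpy as np

def qml_bandpowers(x, C, N, P):
    """Quadratic (Tegmark 1997) bandpower estimator.
    x: data vector (N_pix,); C: total pixel covariance S + N;
    N: noise covariance; P: list of response matrices dC/dC_b."""
    Ci = np.linalg.inv(C)
    # quadratic combination of the data, with the noise bias subtracted
    y = np.array([x @ Ci @ Pb @ Ci @ x - np.trace(Ci @ Pb @ Ci @ N)
                  for Pb in P])
    # Fisher matrix F_ab = (1/2) tr(C^-1 P_a C^-1 P_b)
    F = 0.5 * np.array([[np.trace(Ci @ Pa @ Ci @ Pb) for Pb in P]
                        for Pa in P])
    C_hat = 0.5 * np.linalg.solve(F, y)   # unbiased bandpower estimates
    return C_hat, np.linalg.inv(F)        # estimator covariance = F^-1
```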
It can be shown that the variance of the estimator saturates the Cramér-Rao bound, i.e. the estimator is optimal. As well as producing the power spectrum estimates themselves, the algorithm also allows the mathematically exact calculation of the C_ℓ covariance matrix under the assumption of a Gaussian signal. While some of the variance of the galaxy auto power spectrum will not be captured (galaxy fields can more realistically be described by a log-normal distribution), error bars of the galaxy-CMB lensing convergence power spectrum are represented to sufficient accuracy (see Section 5.1.2). Unfortunately, the computations involved in constructing the optimal estimator scale with the third power of the number of pixels of the input map, restricting its application to data vectors of moderate size (Borrill 1999). To accommodate this requirement, we therefore downgrade all maps to resolution N_side = 512. In downgrading the mask, we set to zero all low-resolution pixels that contain a masked pixel at the original resolution, thereby reducing the sky fraction available. To avoid aliasing, we further bandpass filter the data by applying a top-hat kernel in harmonic space that restricts fluctuations to the multipole range used in the analysis, 30 ≤ ℓ ≤ 1210. Here, the lower limit reflects the restricted sky coverage of the observed region, and the upper limit is a conservative estimate up to which signals can be represented well at the given resolution parameter N_side. Since lowering the resolution comes at the expense of a loss of information, we primarily use the method to demonstrate robustness and independently verify the results presented in Section 5.2.3. We summarise the details of the implementation and test our analysis pipeline on simulations in Appendix D. In Fig. 12, we show results for the galaxy-galaxy and galaxy-CMB lensing convergence power spectrum, convolved with the window function of a Gaussian kernel with a FWHM of 5.4 arcmin. We treat the bias parameter as a free parameter and obtain its numerical value from a fit to the galaxy-galaxy power spectrum assuming the best fit cosmological model of Planck. Using analytic covariance matrices, we then compute χ² values and reach a quantitative agreement with the theoretical model in the multipole range probed. Compared to the results derived with the pseudo-C_ℓ estimator in Section 5.2.3, we find a good agreement, in particular in the intermediate multipole regime. On large scales, the optimal estimator has an advantage since the effect of the mask is taken into account in a mathematically exact way.

Figure 13. Consistency of the results in real and harmonic spaces, assuming the fiducial Planck cosmology (left) and the MICE cosmology with which the mocks were generated (right). In each case, the two scatter plots show a comparison of the best-fit bias b from the galaxy auto-correlation (top) and best-fit galaxy-CMB lensing cross-correlation amplitude A = bA_Lens (bottom), obtained with the two methods. The red square points with error bars represent the results from real data, while the grey circles refer to the 100 N-body mocks. The black empty circles with error bars show the mean of the mocks and their standard deviation. The blue triangle is the input value for the simulations (b = A = 1.15).
The results from real data are largely consistent with the distribution of the mocks, although we see that the bias value assumed for the mocks is lower than the value recovered from the DES galaxies auto-correlation, and higher than what is measured from the galaxy-CMB lensing cross-correlation. The harmonic and real space estimators are correlated, but a significant scatter exists. The mock cross-correlation results are displaced from the fiducial input amplitude when they are interpreted with the Planck cosmology, but they agree with the fiducial when interpreted assuming their own MICE cosmology. The data closely follow the behaviour of the mocks, which in turn suggests the data prefer a lower ω_m σ_8 than expected in the Planck cosmology. This is further discussed in terms of the linear growth estimator, D_G, in Section 7.

At high multipoles, however, the lower computational complexity of the pseudo-C_ℓ estimator allows it to work at higher map and mask resolution, yielding a larger effective sky area that can be retained for analysis. Finally, keeping the cosmological parameters fixed, we compute the likelihood function of the galaxy bias parameter, finding a Gaussian distribution with mean and standard deviation b = 1.189 ± 0.015. For the DES-SPT cross-correlation amplitude, we find A ≡ bA_Lens = 0.83 ± 0.19, i.e. a detection at the 4.5σ level. This result is in agreement with the pseudo-C_ℓ analysis of Section 5.2.3, although with a larger error bar due to the smaller range of scales probed. This further validates the robustness of our analysis.

Consistency of the results

In order to demonstrate the consistency of the results obtained with different estimators in real and harmonic spaces, we repeat our analysis, measuring auto- and cross-correlations of the 100 N-body mock realisations available for the DES galaxy and the SPT lensing convergence. We show in Fig. 13 the best-fit bias obtained from the galaxy auto-correlations and the best-fit amplitude from the DES-SPT lensing cross-correlations, comparing the real data and N-body results in real and harmonic spaces. As the N-body mocks we use were generated assuming the MICE cosmology, we repeat this test assuming both Planck and MICE parameter values. From these scatter plots, we first see that the harmonic and real-space estimators are correlated as expected, but the scatter between them is significant. Considering the 100 mock results, we obtain a Pearson correlation coefficient r_P = 0.7 for both auto- and cross-correlations. Further, we see that the results from the real data cross-correlations are largely consistent with being a random draw from the distribution of the N-body results; however in the auto-correlation case, the bias recovered from the real data (b ≃ 1.2) is marginally higher than what we assumed for producing the mocks (b = 1.15). While we could generate new mocks with a bias value matching the data more closely, we expect this to have only a minor effect on the covariance of the measurements: this is confirmed by the observation that the current mocks and a jack-knife method independent of bias yield consistent results, as shown in Appendix C. Furthermore, as the bias recovered from the cross-correlation of real data is actually b < 1.15, the current value can be seen as a compromise choice. We then compare the results obtained assuming the two cosmologies.

Figure 14. Maps of the main potential DES systematics we consider, plotted in the masked region of the SPT-E field we use for our analysis.
We show in order extinction as estimated by Planck (Planck Collaboration et al. 2014a), and seeing, sky brightness and airmass estimated from DES data by Leistedt et al. (2015).

Here we see that the mocks and their mean fully agree with their input bias value when interpreted with their native MICE cosmology, as expected. Instead, the values of b and A inferred from the mocks naturally deviate from the fiducial if the different Planck cosmology is assumed. If we focus again on the results from the real data, we notice that their behaviour is not dissimilar from the MICE mocks: the tension between b and A that is observed when the Planck cosmology is assumed is significantly alleviated by the MICE model; we discuss this point further in Section 7 below.

SYSTEMATIC TESTS

We summarise here a series of tests done to ensure the cross-correlation signal we measure is not significantly affected by systematics. We first consider the impact of possible DES systematic contaminants at the map and catalogue levels in Section 6.1, we then assess the impact of photo-z uncertainties in Section 6.2, and we finally test for possible CMB systematics in Section 6.3. With the exception of the photo-z case, we perform these tests on the full redshift sample only in the current work with DES-SV data. We will extend the entire systematics analysis to the tomographic bins for future DES data releases.

DES systematics

We consider a broad range of possible DES systematics, which include potential sources of contamination at both the map and catalogue levels. The first category includes extinction of distant sources by dust in our galaxy, degradation of image quality due to observing conditions such as atmospheric seeing, the brightness level of the sky and its fluctuations (sky sigma), and the amount of air mass, which depends on the distance of the observed field from the zenith; the second category includes errors on galaxy magnitudes, photo-zs, the amount of nearby bright stars, and the goodness of the point spread function (PSF) and magnitude determination. The values of all such properties were mapped across the DES-SV area as described in a companion paper (Leistedt et al. 2015); considering that some of these potential contaminants are mapped in different photometric bands, the total number of maps we can consider amounts to 19. All were checked and were found to be consistent with the null hypothesis of no significant contamination to the DES-CMB cross-correlations. We describe here for simplicity the null test results of four possible systematics that are likely to bring strong contaminations: extinction from the Planck colour excess map (Planck Collaboration et al. 2014a), seeing, sky brightness, and airmass in the DES i band. Our aim is to demonstrate that the results are stable with respect to them. We show in Fig. 14 the sky maps of these contaminants in the masked region of the SPT-E field that we use for our analysis. We study the properties of these potential systematics by plotting the histogram of their pixel distributions, which we show in Fig. 15. Here we can see that typical contaminants have a tail in their histogram distribution, corresponding to the most affected areas in the map; for example in the extinction case, the tail at E(B − V) > 0.05 mag corresponds to the dusty region in the lower left corner of the map shown in Fig. 14.
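Cutting the tail of such a distribution amounts to a percentile threshold on the systematics map; a minimal sketch of the worst-area cuts used for the stability test described next, assuming hypothetical per-pixel arrays `sys_map` and binary `mask`:

```python
import numpy as np

def cut_worst_area(sys_map, mask, frac=0.2, high_is_bad=True):
    """Remove the `frac` worst-affected sky fraction according to a
    systematics map (e.g. extinction, seeing, sky brightness, airmass)."""
    vals = sys_map[mask > 0]
    thr = np.quantile(vals, 1 - frac if high_is_bad else frac)
    keep = sys_map < thr if high_is_bad else sys_map > thr
    return mask * keep
```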
A first method for assessing whether any of these potential contaminants has a significant impact on the results of the DES clustering and the DES-CMB lensing correlation is to test whether the results change significantly compared with the statistical uncertainty when the worst-affected areas are masked. We thus measure the DES auto- and DES-SPT cross-correlation functions applying different cuts in these contaminants, in order to assess the stability of our results.

Figure 16. Measured DES auto- (top) and cross- (bottom) correlation functions with SPT lensing obtained from the full mask (red dots) and applying cuts in the main contaminants we consider (coloured lines). For each potential systematic, we remove 20% of the area, corresponding to the most affected regions. The results are stable: the correlation functions do not deviate significantly compared with the statistical error bars.

We show in Fig. 16 the correlation functions we obtain when masking the 20% worst-affected areas for each potential systematic we consider. Here we can see that the results are stable, as the correlation functions of the cut data are consistent with the full data, given the statistical errors. We report in the top section of Table 3 how the best-fit bias and cross-correlation amplitude change when the cut maps are used. The same test in harmonic space yields comparable results. These results are indicative of the full set of 19 systematic maps considered. These tests are reassuring and indicate that our claimed detection is not likely to be dominated by this class of systematics. A second method of controlling potential systematics involves measuring cross-correlations with the systematics maps themselves; these cross-correlations can then be used to correct the measurements from contamination. We assume that some systematic source s, whose value at a given angular position is given by δ_s, may add a linear contribution to our maps of galaxy overdensity or lensing potential. In the galaxy case this assumption means (Ross et al. 2011b; Ho et al. 2012; Crocce et al. 2016):

$$\delta_{g,{\rm obs}} = \delta_{g,{\rm true}} + \sum_s \alpha_s\,\delta_s, \qquad (24)$$

if the corrections are small, with an identical treatment for the lensing potential map. If we consider only one possible systematic at a time, the true value of our measurements can be related to the observed correlations between data and systematics. In the cross-correlation case, this is given in harmonic space by

$$C^{g\kappa,{\rm true}}_{\ell} = C^{g\kappa,{\rm obs}}_{\ell} - \frac{C^{gs}_{\ell}\,C^{\kappa s}_{\ell}}{C^{ss}_{\ell}}, \qquad (25)$$

where the last term on the right represents a correction factor to the measurements.

Figure 17. The galaxy-galaxy (top) and galaxy-SPT lensing (bottom) power spectra, including systematic corrections. Corrections are overall small, and especially so for the cross-spectrum, with all data points within 1σ in this case. The best-fit bias and level of detection are negligibly changed when including these corrections.

Figure 18. Correlations between the SPT convergence map and the potential systematics maps. Data points are offset on the x-axis for clarity. The galaxy-lensing correlation (red circles) is detected at 6σ, while the majority of the SPT cross systematic maps data points are consistent with zero. Correlating the SPT maps with DES potential systematic maps is expected to produce a null result, which we recover.

We investigated the size of these corrections for all potential systematic maps, consistently finding the corrections to be small compared with the statistical uncertainties on the measurements.
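Applied per contaminant, the correction of Eq. (25) is a one-liner; a sketch with hypothetical per-multipole spectra as inputs:

```python
import numpy as np

def deproject_systematic(cl_gk_obs, cl_gs, cl_ks, cl_ss):
    """Correct the cross-spectrum for one contaminant map s, Eq. (25):
    C_l^{gk,true} = C_l^{gk,obs} - C_l^{gs} * C_l^{ks} / C_l^{ss}."""
    return cl_gk_obs - np.asarray(cl_gs) * np.asarray(cl_ks) / np.asarray(cl_ss)
```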
We show in Fig. 17 the corrected power spectra for the four systematic maps of Fig. 14, for both the DES auto- and the DES-SPT lensing cross-power spectra in harmonic space. We summarise in the bottom section of Table 3 the changes in the best-fit bias and cross-correlation amplitude when we apply the systematic corrections. Also in this case, the results are robust. We finally show in Fig. 18 the direct cross-correlations of the SPT lensing maps with the contaminant maps, which enter into Equation (25). This figure shows that the cross-correlations are consistent with zero, which is a good null test. For further details on the minimisation of potential systematics in the DES galaxy catalogue see Crocce et al. (2016). The systematics cuts shift the cross-correlation by less than 1σ in all cases; thus we do not apply any of these cuts to the main analysis.

Table 3. Systematic cuts, real space (columns: b ± σ_b, A ± σ_A).

A further possible source of systematics is stellar contamination to the galaxy sample, which can potentially alter the measured auto-correlations and dilute the cross-correlations. Crocce et al. (2016) demonstrate by using a spectroscopic subset of DES galaxies derived from the COSMOS survey that the amount of contamination to the 'Benchmark' galaxy sample is < 2% in all redshift bins, so that we can ignore it for the present analysis. We have also tested the stability of our results with respect to a range of possible choices in the analysis method, finding overall stability in the recovered cross-correlation function. Additional items that we tested include: measurements done on Galactic or Equatorial coordinate maps; using cuts in a different magnitude definition (mag_detmodel_i instead of slr_mag_auto_i); using the intersection of the galaxy and CMB masks versus keeping the two masks distinct; and reducing the catalogue to a magnitude cut of 18 < i < 22.

Photo-z uncertainties

Another source of systematics can be introduced by potential inaccuracies in the photometric redshifts of the galaxy sample.

Changes in the TPZ photo-z distribution

We first test the effect of smoothing the photo-z redshift distribution for the full galaxy sample at 0.2 < z_phot < 1.2. Smoothing this distribution with a Gaussian kernel broad enough to remove its oscillations does not affect the predicted cross-correlations, while it affects the auto-correlation only marginally. This results in an identical value of A and a value of b that is only ∼ 2% higher than our main result, so that our results are not significantly affected. We further explore how wrong the photo-zs would need to be in order to significantly change our results. We test this by warping the fiducial redshift distribution, which we implement by first fitting the actual TPZ distribution with a Gaussian, and then changing the width of this Gaussian. We consider as two extreme cases a top hat within 0.2 < z < 1.2, and a narrow distribution centred around the median redshift z = 0.6, with σ = 0.1. We find that the galaxy-CMB lensing cross-correlations are extremely robust with respect to such warping, due to the broadness of the CMB lensing kernel. For all cases we tested, including the top hat and the narrow Gaussian, the best-fit amplitude A we recover from the cross-correlation is within 5% of the result obtained assuming the TPZ distribution. This highlights that the significance of our detection is robust with respect to changes in the photo-zs.
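A warping of this kind can be sketched directly on the tabulated redshift distribution; the moment-based version below rescales the spread about the mean rather than refitting a Gaussian as done in the text, so it is an approximation of the procedure, not a reproduction of it. The inputs `z` (regular grid) and `dndz` are hypothetical arrays.

```python
import numpy as np

def warp_dndz(z, dndz, width_factor):
    """Rescale the width of n(z) about its mean by `width_factor`
    (0.5 halves the width, cf. the ~50% warping discussed below),
    preserving the overall normalisation."""
    dz = z[1] - z[0]
    norm = dndz.sum() * dz
    zbar = (z * dndz).sum() * dz / norm          # mean redshift
    z_src = zbar + (z - zbar) / width_factor     # sample the original n(z)
    warped = np.interp(z_src, z, dndz, left=0.0, right=0.0)
    return warped * norm / (warped.sum() * dz)   # renormalise
```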
In the case of the auto-correlations we find that whenever the redshift distribution becomes smoother and broader, the expected auto-correlation becomes lower, as the galaxies are on average further apart in physical distance; conversely, the auto-correlation increases for a more peaked redshift distribution. When assuming the top-hat distribution, the recovered bias b increases by 15% compared with the TPZ distribution, while b decreases by 30% if we assume the narrow Gaussian. Therefore, it is in principle possible to alleviate the observed tension between auto- and cross-correlations by assuming that the true redshift distribution of the DES galaxies is significantly narrower than what is determined with the TPZ method. However, we find that in order to bring auto- and cross-correlations into full agreement, we need to assume a warping of ∼ 50%, i.e. we need to use a Gaussian distribution of width σ = 0.15, which is twice as narrow as the TPZ distribution, of σ ≃ 0.3. In other words, the stacked probability distribution produced by the TPZ estimator for the full galaxy sample at 0.2 < z_phot < 1.2 would need to be twice as broad as the true redshift distribution of the DES galaxies. But such a dramatic error is unlikely, as the mean r.m.s. error on the TPZ photo-zs was found to be σ_z = 0.078 by Sánchez et al. (2014); furthermore, the fraction of 3σ outliers was found to be 2% only, thus reducing the potential impact of catastrophic redshift errors. Therefore, we consider our main results to be robust, and we discard the photo-z errors as the main reason for the discrepancy we observe.

Comparison of two photo-z estimators

We then demonstrate the robustness of the results with respect to a different choice of photo-z estimator: besides our baseline choice of TPZ, we also consider here a galaxy catalogue selected on photo-zs obtained with the BPZ method. Given the radical differences between the two methods (TPZ is a machine learning algorithm while BPZ is template-based), it is important to test the robustness of our results with respect to this change. We therefore change the selection of the galaxy sample according to the alternative BPZ estimator, and we derive modified theoretical predictions with the corresponding BPZ redshift distribution. We compare in Fig. 19 the measured DES auto- and DES-SPT lensing cross-correlation functions with the different photo-z methods for the full sample 0.2 < z_phot < 1.2. Here we can see that the results with TPZ (red points and curves) and BPZ (navy) photo-zs are generally consistent. The change of the cross-correlation amplitude is consistent with the statistical errors, from A = 0.84 to 0.81, so that the significance of our measurement remains unaffected. On the other hand, the bias from the auto-correlation shifts more significantly, from b = 1.22 to 1.09. This happens because the BPZ redshift distribution is narrower than TPZ, as BPZ assigns fewer objects to high redshift.
As discussed above in Section 6.2.1, this causes the predicted auto-correlation to be higher, thus requiring a lower bias to match the nearly identical data. Using BPZ therefore removes approximately half of the observed tension between auto- and cross-correlations for the full redshift sample. However, we think it is unlikely that changes in the photo-z alone can fully remove the tension: as shown below for the cross-correlation and by Crocce et al. (2016) for the auto-correlation, the recovered bias and amplitude values in the tomographic analysis are significantly more robust. For the tomography, assuming TPZ or BPZ yields consistent results for both auto- and cross-correlation amplitudes. As the tension we find between auto- and cross-spectra in the tomography is consistent with the full sample, our main results do not appear to be dominated by the photo-z uncertainty. Furthermore, the BPZ redshift distribution appears to be a poorer description of the DES-SV galaxies than the TPZ one, given that the galaxy-galaxy cross-correlations observed by Crocce et al. (2016) between different redshift bins are less consistent than what is seen for TPZ. We then test the robustness of the redshift tomography cross-correlations, which we show in Fig. 20.

Figure 20. Redshift tomography using two different photo-z methods: TPZ (red) and BPZ (navy). The theoretical curves and best-fit amplitudes for the cross-correlations are also shown for each method. The recovered results agree. See Crocce et al. (2016) for similar tests on the galaxy auto-correlations. In the first photo-z bin at 0.2 < z_phot < 0.4 the best-fit curves do not trace the data closely, given the mismatch between the measurements and the template shape, and the high covariance between the points.

Here we see once again that the change of photo-z selection method does not change significantly the recovered best-fit cross-correlation amplitudes in any redshift bin. We show more quantitatively the resulting best-fit amplitudes for the full sample and tomography in Table 4.

Table 4. Real-space comparison of the galaxy-CMB lensing cross-correlations for two different photo-z estimators (TPZ vs. BPZ) for the full sample and the redshift tomography, for the case of N-body covariance. The recovered cross-correlation amplitudes are consistent within the statistical errors. See Crocce et al. (2016) for the corresponding results from the galaxy auto-correlations.

Photo-z bin            (A ± σ_A)_TPZ   (A ± σ_A)_BPZ
0.2 < z_phot < 1.2     0.84 ± 0.13     0.81 ± 0.14
0.2 < z_phot < 0.4     0.41 ± 0.21     0.36 ± 0.22
0.4 < z_phot < 0.6     0.75 ± 0.25     0.76 ± 0.24
0.6 < z_phot < 0.8     1.25 ± 0.25     1.13 ± 0.25
0.8 < z_phot < 1.0     1.08 ± 0.29     1.21 ± 0.29
1.0 < z_phot < 1.2     1.95 ± 0.37     1.83 ± 0.34

Here we can see that the fluctuations of the results are generally small. The cross-correlations are stable, as the variations due to the photo-z differences are small compared with the statistical error bars. Crocce et al. (2016) show that the results of the auto-correlation tomography are equally robust for changes between the TPZ and BPZ distributions. This is because the lack of high-redshift objects in BPZ, which makes the full distribution narrower, has no significant effect on the narrow dn/dz of the five redshift bins. In fact, the bias of the full redshift sample b_full is expected to be approximated by a weighted average b_avg over the number of pairs in the N_bin redshift bins:

$$b_{\rm full} \simeq b_{\rm avg} = \sum_{i=1}^{N_{\rm bin}} n_i^2\, b_i, \qquad (26)$$

where n_i is the number of galaxies in the bin i. We test whether this consistency check is satisfied by the TPZ and BPZ estimators, and we confirm it, by looking at the ratios for the full and averaged biases between the two photo-z methods:

$$b^{\rm TPZ}_{\rm full}/b^{\rm BPZ}_{\rm full} = 1.11 \pm 0.04, \qquad b^{\rm TPZ}_{\rm avg}/b^{\rm BPZ}_{\rm avg} = 1.08 \pm 0.03. \qquad (27)$$

This result confirms that the larger bias change seen between TPZ and BPZ for the full sample auto-correlation is not in disagreement with the smaller bias changes seen in the tomography, which are shown by Crocce et al. (2016).
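A sketch of the pair-weighted average of Eq. (26) follows; we assume the weights are implicitly normalised in the text, so the code normalises the galaxy counts to fractions before squaring.

```python
import numpy as np

def bias_weighted_average(n_gal, b):
    """Pair-weighted average bias, Eq. (26), with n_gal the per-bin galaxy
    counts (normalised to fractions here; an assumption, see lead-in)."""
    f = np.asarray(n_gal, dtype=float)
    f /= f.sum()
    w = f ** 2                       # number of pairs scales as n_i^2
    return (w * np.asarray(b)).sum() / w.sum()
```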
In this sense, the stability of the auto-correlation tomography confirms that the tension we observe between galaxy clustering and the CMB lensing correlation is unlikely to be fully removed by changes in the photo-z alone, since switching from TPZ to BPZ removes only half the discrepancy (in the full sample) and leaves the discrepancy unchanged (in the tomography). We refer to Crocce et al. (2016) for a more detailed study of the effect of the photometric redshifts on the determination of galaxy bias.

CMB systematics and cuts in multipole range

We then investigate the sensitivity of the results to the range of multipoles used. This is an important consistency check, and it is especially useful to detect possible systematic contaminations in the CMB lensing maps, as these would typically affect distinct scales differently (Story et al. 2015). For example, a type of possible systematics that could affect our results would be any residual foreground contamination of the CMB lensing map that is correlated with the galaxies, such as e.g. thermal SZ (tSZ). It was shown by van Engelen et al. (2015) that using modes out to ℓ = 4000 in the original CMB temperature map used for the lensing reconstruction could lead to more than 5% bias in the total CMB lensing signal; such bias is more pronounced on the largest angular scales. Bleem et al. (2012) also found a 5% bias in their cross-correlation sample based on cross-correlating mock galaxy catalogues with simulated CMB lensing using a tSZ prescription. In general any remaining un-subtracted foregrounds will bias the cross-correlation low; such bias will be worse for SPT than Planck because of the smaller scales used for the lensing reconstruction. The bias could be larger or smaller in our case, so that a more detailed quantification of these effects will be necessary for future work along these lines. While the most instructive test would be to apply cuts in the range of multipoles used to reconstruct CMB lensing from the temperature map, this is beyond the scope of this work, and we instead apply cuts in the CMB lensing maps themselves. We perform this test in harmonic space only, as cuts in the multipole range are easier to implement in this case.

Table 5. Stability of the cross-correlation results with respect to cuts in the range of multipoles considered, for the DES-SPT (top) and DES-Planck correlations (bottom). In this case, both maps are smoothed at 5.4 arcmin, to permit the use of the entire multipole range. The cross-correlations are significantly detected in all cases, and the amplitude of the cross-correlations A = bA_Lens is always significantly smaller than the best-fit linear bias b = 1.22 ± 0.03. In the DES-SPT case, we find that the most aggressive choice of including all multipoles at 30 < ℓ < 2000 is robust, while in the DES-Planck case this choice leads to a high S/N and a poor χ², which is due to the outlying points at ℓ > 1000. For this reason, we adopt the more conservative cut 30 < ℓ < 1000 in this case. Bold font indicates the values used in the main analysis.

Correlation   ℓ_min   ℓ_max   A ± σ_A       S/N   χ²/d.o.f.
[...]                         1.14 ± 0.31   3.6   1.2 / 2

We show the results of this test in Table 5 for both the DES-SPT and DES-Planck cases (for this test, we smooth both maps at the same scale of 5.4 arcmin). We first see that both cross-correlations are detected at high significance (S/N > 3) in all cases.
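Scanning the multipole range, as in Table 5, amounts to repeating the closed-form amplitude fit on sub-selections of the bandpowers; a self-contained sketch with hypothetical inputs (bin-centre multipoles `lc`, observed and fiducial bandpowers, and their covariance):

```python
import numpy as np

def stability_scan(lc, cl_obs, cl_fid, cov, cuts):
    """Refit the cross-correlation amplitude A over different multipole
    sub-ranges; `cuts` is a list of (lmin, lmax) pairs, e.g.
    [(30, 2000), (30, 1000), (30, 420)]."""
    results = {}
    for lmin, lmax in cuts:
        sel = (lc >= lmin) & (lc <= lmax)
        Ci = np.linalg.inv(cov[np.ix_(sel, sel)])
        t, d = cl_fid[sel], cl_obs[sel]
        F = t @ Ci @ t                   # Fisher information on A
        A = (t @ Ci @ d) / F
        r = d - A * t
        results[(lmin, lmax)] = (A, F**-0.5, r @ Ci @ r)  # A, sigma_A, chi2
    return results
```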
Our method for selecting the multipole range to be used in the main analysis is to keep a range as wide as possible, unless there is evidence of inconsistencies such as large deviations of the results (> 1σ) or significant outliers leading to a poor reduced χ². More accurately, it was shown by Planck Collaboration et al. (2015b) that the variance of a parameter between different data cuts should be approximately given by the difference between the variances obtained when using the two data sets. This criterion is satisfied in all cases. For the SPT case, we find that the most aggressive choice of including all multipoles at 30 < ℓ < 2000 is robust, as the result only fluctuates within the statistical error when more restrictive choices are made. The χ² per degree of freedom is also good in all cases. We therefore adopt this choice for our main DES-SPT results. In the case of the DES-Planck correlation, we already noticed in Fig. 10 the presence of significant outliers at ℓ > 1000. We also know from the Planck analysis (Planck Collaboration et al. 2015a) that, due to the lower sensitivity, the Planck CMB lensing maps are fully noise-dominated at high multipoles, and the 'conservative' Planck analysis of the lensing auto-spectrum was performed at ℓ < 400 only, recovering most of the S/N available over the entire multipole range. While we do expect both noise and systematics to be less critical in a cross-correlation measurement, we need to take a conservative approach on the higher multipole range of these data. We see in Table 5 that indeed the results including all 30 < ℓ < 2000 yield the highest best-fit amplitude and S/N, but this is driven by the significant outlier at ℓ ≈ 1500; the reduced χ² is poor (χ²/d.o.f. = 31/19, corresponding to a PTE = 4%). The situation improves significantly if a more conservative cut at 30 < ℓ < 1000 is applied, which retains a nearly unchanged error bar while yielding a much more reasonable χ²/d.o.f. = 8.8/9. Further cuts, down to the most conservative case at 30 < ℓ < 420, give statistically consistent results, and reasonable χ² values. We therefore adopt the 30 < ℓ < 1000 multipole range for our main DES-Planck results.

COSMOLOGICAL IMPLICATIONS

While a thorough study of the cosmological implications of the DES CMB lensing tomography is deferred to future work with DES year-1 data, we present here a simple proof of concept of the potential applications of lensing tomography measurements.

Bias and growth estimators

From the theoretical form of the CMB lensing spectra presented in Section 2, it is evident that CMB lensing tomography is a measurement of structure growth across cosmic time, potentially constraining departures from the standard cosmological model at the linear growth level. Indeed, it is clear that the joint measurement of the auto- and cross-correlations C_ℓ^{gg}, C_ℓ^{κg} allows one to break the degeneracy that exists between bias and structure growth. We use here the simplest possible assumptions, and consider linear, local forms of both the galaxy bias and the growth function, given by b(z), D(z), while keeping the cosmology fixed to the Planck best-fit fiducial model. A potential caveat of this analysis is that, given the results from Section 6.2, the statistical errors on the bias evolution obtained from the galaxy auto-correlations can be comparable with systematic errors due to the uncertainties in the photometric redshift estimations, which are not taken into account in this section.
For a more complete analysis of the bias evolution and a more detailed treatment of the systematics, see Crocce et al. (2016). Our estimator for the bias in the i-th redshift bin is simply $\hat{b}_i = b_i$, i.e. the best-fit value from each auto-correlation, while a basic estimator for the growth function D_i can be derived from the ratio between the observed (obs) cross-spectrum and a normalising fiducial (the) cross-spectrum:

$$\hat{D}^0_i \equiv \left\langle C^{\kappa g}_{i,{\rm obs}} \big/ \slashed{C}^{\kappa g}_{i,{\rm the}} \right\rangle, \qquad (28)$$

where the expression is averaged over all multipoles considered. Here we have defined with a slash the normalising power spectrum $\slashed{C}^{\kappa g}$, which we define as the usual power spectrum of Section 2, where the kernels have the growth function removed:

$$\slashed{C}^{\kappa g}_{\ell} = \frac{2}{\pi}\int_0^\infty dk\,k^2\,P(k)\,\slashed{W}^{\kappa}_{\ell}(k)\,\slashed{W}^{g}_{\ell}(k), \qquad (29)$$

$$\slashed{W}^{g}_{\ell}(k) = \int_0^\infty dz\,b(z)\,\frac{dn}{dz}(z)\,j_{\ell}[k\chi(z)], \qquad (30)$$

$$\slashed{W}^{\kappa}_{\ell}(k) = \frac{3\Omega_m H_0^2}{2}\int_0^\infty dz\,\frac{\chi_*-\chi}{\chi_*\,\chi}(z)\,j_{\ell}[k\chi(z)]. \qquad (31)$$

Notice that, while the CMB convergence kernel is formally not bound to the narrow redshift range where dn/dz(z) ≠ 0, its overall contribution to the cross-spectrum from redshifts outside this range is negligible; this can be seen more clearly by using the Limber approximation. In this case, as shown e.g. by Eq. (27) in Giannantonio et al. (2012a), the angular power spectrum is given by a single integral over redshift, and when one of the two source terms W^κ, W^g vanishes, so does the total C_ℓ. Therefore the $\hat{D}^0_i$ estimator correctly recovers the linear growth function in the redshift bin i. In order to estimate the theoretical power spectrum in the denominator of Eq. (28), we still need the galaxy bias. We can remove the dependence on bias by introducing the following estimator:

$$\hat{D}^G_i \equiv \left\langle \frac{C^{\kappa g}_{i,{\rm obs}}}{\slashed{C}^{\kappa g}_{i,{\rm the}}}\,\sqrt{\frac{\slashed{C}^{gg}_{i,{\rm the}}}{C^{gg}_{i,{\rm obs}}}} \right\rangle. \qquad (32)$$

We can see that D_G does not directly depend on the galaxy bias, as its observed and theoretical values cancel exactly in the limit of narrow redshift bins, and that it contains no direct dependence on the theoretical growth function either: we therefore propose this estimator as a novel simplified method for extracting cosmic growth information. The D_G estimator still includes a dependence on the combination of cosmological parameters Ω_m H_0² σ_8 from the CMB lensing kernel of Eqs. (29, 31); this dependence is degenerate with the growth function information in any redshift bin, but the degeneracy can be broken by a multi-bin tomography. We evaluate D_G directly using the harmonic space bandpowers and the real-space correlation functions; we further improve the estimator of Eq. (32) by weighting the averages with the diagonal errors on the power spectra and correlation functions respectively. While the expectation value is D_G = D on linear scales, we note that the dependence on non-linearities will largely cancel between the theoretical and observed parts of the estimator. We nonetheless use scales at ℓ < 1000 only, to reduce potential contamination by non-linear contributions. We estimate the errors on D_G and the full covariance matrix between the redshift bins by repeating the D_G calculation for our set of 100 N-body realisations of the galaxy density and CMB lensing data. Our estimator D_G is related to, but different from, the E_G estimator introduced by Zhang et al. (2007), used to confirm GR with observations by Reyes et al. (2010), and studied for projections with future surveys by Pullen et al. (2015).
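A minimal sketch of the D_G evaluation of Eq. (32) with inverse-variance weighting follows; it assumes per-multipole arrays of the observed and growth-free theory spectra, and it propagates only the diagonal errors while ignoring the covariance between the two spectra — a simplification of the full procedure.

```python
import numpy as np

def d_g(cl_kg_obs, cl_kg_slash, cl_gg_obs, cl_gg_slash, var_kg, var_gg):
    """Inverse-variance-weighted D_G estimator of Eq. (32); the `_slash`
    inputs are the growth-free theory spectra of Eqs. (29-31)."""
    ratio = (cl_kg_obs / cl_kg_slash) * np.sqrt(cl_gg_slash / cl_gg_obs)
    # per-multipole variance: D ~ C_kg * C_gg^{-1/2}, errors added in quadrature
    sigma2 = ratio**2 * (var_kg / cl_kg_obs**2 + 0.25 * var_gg / cl_gg_obs**2)
    w = 1.0 / sigma2
    return (w * ratio).sum() / w.sum()
```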
The E_G estimator is defined as

$$E_G \propto \frac{C^{\kappa g}_{\ell}}{C^{\theta g}_{\ell}} = \frac{C^{\kappa g}_{\ell}}{\beta\,C^{gg}_{\ell}}, \qquad (33)$$

where θ indicates the linear velocity perturbations, given by θ = fδ, where f = d ln D/d ln a is the linear growth rate, and β = f/b is observable from redshift-space distortions (RSD). Both E_G and D_G have the advantage of being independent of galaxy bias by construction. E_G has the additional bonus of being more easily related to modified gravity theories, as it can be directly connected to departures from the Poisson equation and the anisotropic stress; furthermore, it is scale-independent in GR. On the other hand, E_G can only be accurately measured from a spectroscopic survey. In the case of photometric data, such as DES, a further possible alternative to E_G would be to simply test the ratio C_ℓ^{κg}/C_ℓ^{gg}, which would retain many of the desirable features of E_G, as this is still scale-independent in GR and easily related to modified gravity theories. However, this simple ratio requires external information on the galaxy bias, which is a serious drawback. For this reason, we propose to use the D_G estimator as an alternative for photometric surveys.

Results and interpretation

By applying the D_G estimator described above to our tomographic data in real and harmonic space, we obtain the results shown in Fig. 21. The evolution of galaxy bias is presented and discussed in more detail by Crocce et al. (2016); we follow this study, and compare the bias with a simple third-order polynomial fit, which was shown in Appendix A of Crocce et al. (2016) to be in good agreement with results from the MICE N-body simulations:

$$b(z) = 1 + a_1 z + a_2 z^2 + a_3 z^3. \qquad (34)$$

We show in the top panel of Fig. 21 that the best-fit model by Crocce et al. (2016), of parameters a_1 = 0.87, a_2 = −1.83, a_3 = 1.77, is also an excellent fit to our measurements in both real and harmonic spaces, further validating both analyses. We show in the central panel of Fig. 21 the redshift evolution of the galaxy-CMB lensing correlation amplitude A = bA_Lens: as shown above in Table 2, A is in most cases lower than the expected value given the auto-correlations. We can see once again that real- and harmonic-space results agree well, with the one exception of the third bin cross-correlation, as discussed above in Section 5.2.4. We then focus on the linear growth function: we show in the bottom panel of Fig. 21 the results from the D_G estimator of Eq. (32) for real and harmonic spaces, where we use scales at ℓ < 1000 only. We see that the data prefer a smaller growth of structure than what is expected in the fiducial Planck ΛCDM model: this result is driven by the lower than expected values of the observed galaxy-CMB lensing correlations. The estimators in real and harmonic space agree well in most bins. If we assume the template shape of D_G(z) to be fixed by the fiducial Planck cosmology and we fit its amplitude A_D, so that

$$D_G(z) = A_D\,\left[D_G(z)\right]_{\rm fid}, \qquad (35)$$

we find A_D = 0.76 ± 0.17 from the real-space analysis and A_D = 0.70 ± 0.15 in harmonic space. As the two results are consistent and there is no reason to prefer one over the other, we take their mean as our main result:

$$A_D = 0.73 \pm 0.16, \qquad (36)$$

where the error is also the mean of the errors, as the two methods are based on the same data. This result includes the full covariance between the photo-z bins, which is typically 30% between neighbours.
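The template fit of Eq. (35) and the significance test quoted just below reduce to a few lines; a sketch assuming hypothetical arrays `DG_obs`, `DG_fid` and the bin-bin covariance `cov`, with the 4 degrees of freedom for the PTE taken from the text.

```python
import numpy as np
from scipy.stats import chi2

def fit_growth_amplitude(DG_obs, DG_fid, cov):
    """Fit D_G(z) = A_D [D_G(z)]_fid (Eq. 35) with the full covariance
    between photo-z bins, and compute the null-test PTE used below."""
    Ci = np.linalg.inv(cov)
    F = DG_fid @ Ci @ DG_fid
    A_D = (DG_fid @ Ci @ DG_obs) / F
    sigma_A = F ** -0.5
    r_fid = DG_obs - DG_fid                  # fiducial model, A_D = 1
    r_fit = DG_obs - A_D * DG_fid
    dchi2 = r_fid @ Ci @ r_fid - r_fit @ Ci @ r_fit
    pte = chi2.sf(dchi2, df=4)               # 4 d.o.f., as quoted in the text
    return A_D, sigma_A, dchi2, pte
```

For reference, chi2.sf(7.2, 4) ≈ 0.13 and chi2.sf(10.5, 4) ≈ 0.033, consistent with the PTE values quoted below.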
We note that, as discussed above in Section 6.2, if the real redshift distribution of the galaxies in all bins is narrower than our assumption, the tension could be alleviated, but the photo-z alone are unlikely to be responsible for this discrepancy in full. In particular, we have tested that, if we use the alternative BPZ photo-zs, we obtain $A_D = 0.70 \pm 0.16$, in agreement with the TPZ results. We can then assess the significance of the discrepancy with respect to the fiducial Planck cosmology. From the point of view of template fitting, the mean best-fit value is 1.7σ away from the fiducial value $A_D = 1$. Alternatively, we perform a null hypothesis test and find that the χ² difference between the best fit and the fiducial model is Δχ² = 7.2 in real space (10.5 in harmonic space) for 4 degrees of freedom, corresponding to a PTE of 13% in real space (3.3% in harmonic space). We therefore conclude that the observed tension is only weakly significant. We discuss, however, in the following what the implications could be if the lower $A_D$ persists with more accurate measurements. The $D_G$ estimator retains a dependence on the ratio between the real and the fiducial values of the background parameters $\Omega_m h^2 \sigma_8 \equiv \omega_m \sigma_8$; it is thus in principle possible to attribute the observed mismatch to a preference for different parameter values. The parameter shift required is large compared with the current CMB constraints from Planck (Planck Collaboration et al. 2015c): shifting the amplitude $A_D$ from its best-fit value 0.73 ± 0.16 to 1 would require a fractional decrease in $\omega_m \sigma_8$ of 27% (see the sketch below). It is worth mentioning that in the last few years several independent measurements of LSS probes have hinted, at low significance, towards low growth in recent times, including measurements of σ₈ from galaxy clusters (Bocquet et al. 2015), weak lensing (MacCrann et al. 2015), redshift-space distortions (Beutler et al. 2014), and a combination of probes (Ruiz & Huterer 2015). It is important to stress that, in most cases, alternative analyses showing weaker or no tension do exist, e.g. by Samushia et al. (2014) for RSD, and by Mantz et al. (2015) for galaxy clusters. Only better data in the near future will clarify whether statistical flukes, systematic effects or new physics are behind these observations; we prefer for the moment to avoid over-interpreting the results, and we defer to the upcoming DES year-1 data a more detailed study that will include a more rigorous quantification of the photo-z and SZ systematic uncertainties, varying cosmological parameters and the full covariance between all data.
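For reference, the fractional shift quoted above follows from the linear dependence of the estimator on the background combination; a sketch of the arithmetic, assuming exact proportionality of $D_G$ to $\omega_m \sigma_8$ (the analogous MICE number appears in the next section):

% A_D rescales linearly when the fiducial omega_m*sigma_8 is changed,
% so recovering A_D = 1 requires
\[
  \frac{(\omega_m\sigma_8)_{\mathrm{new}}}{(\omega_m\sigma_8)_{\mathrm{fid}}} = A_D
  \;\Longrightarrow\;
  1 - A_D = 1 - 0.73 = 0.27 \;\; (27\%) \,, \qquad
  1 - 0.86 = 0.14 \;\; (14\%) \,.
\]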
Relaxing cosmology

Motivated by the results of the previous section, we test how the interpretation of our results changes when we assume a different fiducial cosmology. We first adopt the baseline MICE cosmology defined above in Section 4; notably, in this case $\Omega_m = 0.25$, so that a significant reduction of the tension between auto- and cross-correlations is expected. We repeat the amplitude fitting of Section 5 for the measured auto- and cross-correlations in real and harmonic spaces, and we find for the full redshift sample the best-fit values of Table 6.

[Table 6. Summary of the results obtained when assuming the low-matter-density MICE cosmology (full sample, 0.2 < z_phot < 1.2) in real and harmonic spaces. We use the N-body covariance matrix in all cases. Assuming this fiducial model relieves most of the tension: the disagreement between auto- and cross-correlation best-fit amplitudes is in this case at the ∼1σ level only.]

Here we can see that the change in the fiducial cosmology indeed relieves most of this tension: the remaining differences are at the 1σ level only. We further proceed to a revised interpretation of the growth function estimator $D_G$, based on the MICE cosmology. We find that, as expected, the tension is significantly alleviated: we obtain $A_D = 0.86 \pm 0.19$, which is consistent within 1σ with the MICE cosmology expectations. Shifting the best-fit value to $A_D = 1$ would require in this case a fractional decrease in $\omega_m \sigma_8$ of 14%. In the upper panel of Fig. 22 we illustrate how shifting from the Planck best fit to other ΛCDM cosmologies could bring the theoretical model closer to the observations. We consider here the MICE cosmology used in our N-body simulations ($\Omega_m = 0.25$, h = 0.70, $\sigma_8 = 0.80$) and the best-fit ΛCDM model to the CFHTLenS + WMAP 7 data by Heymans et al. (2013) ($\Omega_m = 0.255$, h = 0.717, $\sigma_8 = 0.794$). Note that for the Planck cosmology we normalise $D_G = 1$ today, while for any other model i, $D_G$ is rescaled by the factor $(\omega_m \sigma_8)_i / (\omega_m \sigma_8)_{\rm Planck}$, as the fiducial Planck value of $\omega_m \sigma_8$ was assumed in the measured $D_G$. A further interesting possibility is to use the growth function measurement to constrain modified gravity theories. We compare in the lower panel of Fig. 22 our data with a selection of parameterised departures from the ΛCDM model. In order to avoid the ambiguities related to scale-dependent growth, for simplicity we only consider models where the growth function remains approximately scale-independent. These include Linder's γ parameterisation (Linder & Cahn 2007), in which the growth of structure evolves as $f(z) \propto \Omega_m^\gamma$, where γ ≃ 0.55 in ΛCDM (see the sketch below); a dark energy model with equation of state $w(z) = w_0 + w_a z/(1+z)$ (Chevallier & Polarski 2001); and two modifications of gravity at the perturbative level as described by Battye & Pearson (2013) and recently constrained by Soergel et al. (2015), in which the dark fluid is described by an entropy perturbation (wΓ model) or anisotropic stress (wΠ model). For all models i, their growth function D is normalised to recover the ΛCDM behaviour at early times; in addition, $D_G$ is rescaled by the factor $(\omega_m \sigma_8)_i / (\omega_m \sigma_8)_{\rm Planck}$. We can see that some of these models succeed in explaining the low-growth behaviour at low redshifts, although clearly the current data are not accurate enough for a solid model selection, which we defer to future DES data releases.
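A minimal sketch of the scale-independent growth computation for the Linder γ parameterisation, assuming a flat ΛCDM background and the matter-era normalisation D ≃ a; the parameter values and function name are illustrative:

import numpy as np

def growth_D(z, omega_m=0.25, gamma=0.55):
    # Integrate dlnD/dlna = f(a) = Omega_m(a)^gamma from deep in the
    # matter era (where D ~ a) up to a = 1.
    a = np.logspace(-3, 0, 2000)
    om_a = omega_m / (omega_m + (1.0 - omega_m) * a**3)  # Omega_m(a), flat LCDM
    f = om_a**gamma
    lnD = np.concatenate(([0.0], np.cumsum(
        0.5 * (f[1:] + f[:-1]) * np.diff(np.log(a)))))   # trapezoid rule
    D = a[0] * np.exp(lnD)           # matter-era normalisation: D(a) ~ a
    return np.interp(1.0 / (1.0 + np.atleast_1d(z)), a, D)

With this normalisation D(z=0) is below unity; dividing by the z = 0 value renormalises to D = 1 today, and the rescaling by $(\omega_m \sigma_8)_i / (\omega_m \sigma_8)_{\rm Planck}$ described above can then be applied before comparing with the measured $D_G$.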
Stochasticity

Crocce et al. (2016) demonstrate that bias non-linearities can be excluded for the DES-SV 'Benchmark' galaxy sample on the scales we consider. In this case, as discussed in Section 2 above, it is possible to interpret our results by assuming that any tension between auto- and cross-correlations is due to stochasticity. If we do so, and assume that the cosmology is fixed to our fiducial model, we can directly interpret our constraint on $D_G$ as a constraint on r, as this quantity, defined in Eq. (14), can be simply estimated as $r = b_{\rm cross}/b_{\rm auto}$. Thus, under the assumption of the Planck fiducial cosmology, our measurement at face value translates to $r = 0.73 \pm 0.16$. Such a result would indicate a 1.7σ preference for non-negligible stochasticity in our sample; this appears to be close to the early results by Hoekstra et al. (2002), but in disagreement with the more recent work by Jullo et al. (2012). Nonetheless, an analysis of stochasticity from the galaxy-matter correlation function of the MICE-GC simulations, which were shown to reproduce most aspects of the DES-SV data correctly, finds r = 1 to 1% precision on all scales of interest (Crocce et al. 2016), which strongly suggests that the mismatch between auto- and cross-correlation amplitudes cannot be entirely due to stochasticity.

CONCLUSIONS

We have detected the cross-correlation between the matter overdensities in the Universe, as traced by the DES-SV galaxies, and the CMB lensing maps reconstructed by the SPT and Planck collaborations. The total significance of the detections is 6σ for the SPT case and 4σ for Planck when using the DES main galaxies in the SPT-E field over 130 square degrees. Given the sufficient signal-to-noise available and the well-tested photometric redshifts for our galaxy sample, we have studied the redshift evolution of the cross-correlation signal. Ours is the first study to examine this evolution from a single survey. We divided the DES main galaxies into five photometric redshift bins of width Δz = 0.2. We found that the auto- and cross-correlations evolve in redshift as expected, recovering a significant detection at > 2σ in all bins and > 3σ in all but the lowest redshift bin. We finally applied these tomographic measurements of auto- and cross-correlations to reconstruct the evolution of galaxy bias and the linear growth of structure in our redshift range. While the results are overall consistent with the ΛCDM expectations, we do find a ∼ 2σ tension (including statistical errors only) between the observed amplitudes of the auto- and cross-correlations when using the full galaxy sample at 0.2 < z_phot < 1.2, which we confirm with two fully independent analyses in real and harmonic space. This tension is observed when using either the DES-SPT or DES-Planck cross-correlations. When dividing the galaxy sample into five redshift bins, we also found that the amplitude of the DES-SPT cross-correlations is consistently lower than expected from the DES auto-correlations. We then introduced a new linear growth estimator, $D_G(z)$, which combines auto- and cross-correlations so that it is independent of galaxy bias on linear scales. Using this new estimator, we measured the evolution of the linear growth function in five redshift bins. We then compared the $D_G(z)$ measurements with a template based on the fiducial ΛCDM cosmology with a free constant amplitude $A_D$, obtaining $A_D = 0.73 \pm 0.16$, which is the final result of this work. This result shows a weak (1.7σ) tension with the fiducial ΛCDM cosmology based on Planck. We have quantified the impact of photo-zs on our results by repeating the analysis with two photo-z estimators: TPZ and BPZ. We have found that using either method leaves the significance of the cross-correlation detections unaffected. If assuming BPZ, the inferred tension between auto- and cross-correlations of the full galaxy sample is reduced by ∼ 50%, but the results are nearly unchanged in the tomography.
In particular, our final result on the growth function estimator $D_G$ is unaffected by the choice of BPZ, as in this case we find $A_D = 0.70 \pm 0.16$. Further work with the upcoming DES and SPT data of extended coverage and sensitivity will be accompanied by more thorough tests of the possible systematics, including a quantitative estimation of the systematic errors from photometric redshifts and from foreground contamination by the SZ effect. If taken at face value, the mild tension we observe can be interpreted as the data favouring a lower growth of structure in the late Universe than expected from the fiducial model, or equivalently a lower value of $\omega_m \sigma_8$ with respect to what is fixed by the CMB at recombination. An alternative possibility that would eliminate the tension we observe is a significant stochastic component in the galaxy density of the DES-SV sample; this interpretation leads at face value to a correlation coefficient $r = 0.73 \pm 0.16$, assuming non-linear bias can be safely ignored on the scales of interest (see Crocce et al. 2016 for a companion analysis supporting this assumption). However, this is at variance with the most recent results on the subject from observations (Jullo et al. 2012) and N-body simulations (Crocce et al. 2016). The inferred low amplitude of the cross-correlation signal can be compared with the literature, which reports a wide range of A values. Some authors (e.g. van Engelen et al. 2015) found A to be consistent with the expectations, while others have found values of the CMB lensing amplitude that are < 1 with modest statistical significance, such as Liu & Hill (2015), who cross-correlated the CMB lensing map from Planck and the cosmic shear map from CFHTLenS, and Omori & Holder (2015), who correlated Planck lensing and CFHTLenS galaxy density data. We have tested that our significance levels are reliable by running two independent analysis pipelines in real and harmonic space, and by estimating the covariances with four different methods. We have checked that the results are robust by estimating the impact of nineteen possible DES systematics, by exploring the stability of the signal with a broad range of cuts in the scales considered, and with different estimators of the photometric redshifts, showing that their impact on our measurements is not statistically significant. The CMB lensing tomography with DES will improve dramatically in the upcoming years. As shown in Fig. 1, the area increase alone from SV to the full survey (5000 deg²) is expected to boost the signal-to-noise to ∼ 30σ with either Planck or SPT data (a simple area scaling is sketched below), as the lower level of noise in SPT is compensated by the larger overlap between DES and Planck. Notably, this projection does not account for improvements in the CMB lensing data. If we include the expected advances from the upcoming SPT-3G survey, we obtain a signal-to-noise of ∼ 90σ with the final DES data. The ACT survey and its successor are also of interest: there is modest overlap with the DES footprint already, and with the Advanced ACT survey we expect to have close to complete overlap, allowing for promising cross-correlation studies similar to the SPT-3G case. Similar sensitivity will also be achievable with the Simons Array (Arnold et al. 2014).
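The quoted forecast is consistent with a simple sky-area argument: at fixed noise levels, the signal-to-noise of a cross-correlation grows roughly as the square root of the overlapping area, so, as a rough check under that assumption,

\[
  \left(\frac{S}{N}\right)_{5000\,\mathrm{deg}^2} \sim
  6\,\sigma \times \sqrt{\frac{5000}{130}} \approx 37\,\sigma \,,
\]

of the same order as the $\sim 30\sigma$ quoted above, which additionally accounts for the survey-dependent noise levels and overlaps.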
Looking to the future, it is most likely that an optimal reconstruction of the CMB lensing-matter correlation will be accomplished by a multi-probe, multi-wavelength approach: optical galaxy surveys (such as DES now, and the Dark Energy Spectroscopic Instrument, the Euclid satellite and the Large Synoptic Survey Telescope in the future) will probe the full LSS up to redshift ∼ 2, while the higher-redshift matter distribution will be reconstructed with other techniques, such as the CIB and, later, 21cm radiation intensity mapping. This multi-probe approach will eventually allow a full reconstruction of the process of structure formation across cosmic time, and help determine the nature of dark energy and gravity on cosmological scales.

The South Pole Telescope program is supported by the National Science Foundation through grant PLR-1248097. Partial support is also provided by the NSF Physics Frontier Center grant PHY-0114422 to the Kavli Institute of Cosmological Physics at the University of Chicago, the Kavli Foundation, and the Gordon and Betty Moore Foundation through Grant GBMF#947 to the University of Chicago.

APPENDIX A: MOCKS GENERATION AND VALIDATION

From the full-sky projection of the MICE-GC N-body simulations described in Section 4.2, we produce 100 non-overlapping rotations of the SPT-E mask. We then rotate instead the simulated overdensity map 100 times into the real DES SV mask, thus generating 100 independent realisations of the data. We do the same for the MICE CMB lensing map, which also covers the full sky and includes the lensing effect of all sources at z < 100. Onto each CMB lensing mock we then add one mock CMB lensing noise realisation, as provided by the SPT or Planck collaboration. In the Planck case, what is provided are actually 100 realisations of the full observable signal, which include both cosmological signal and noise, and the corresponding 100 realisations of the cosmological signal only, so that we reconstruct 100 noise-only maps by taking the difference between the two. We have checked using the Monte Carlo realisations that this method of all-sky map rotations yields the same covariance matrix as a statistically independent set of realisations. Furthermore, the method yields an unbiased estimate of the auto- and cross-correlations. By this we mean that the average correlations (or power spectra) of the suite of rotated mocks are equal to those of the un-rotated all-sky map, as shown in Fig. A1. We have performed a similar validation procedure in real space, also obtaining consistent results. After generating the mock galaxy maps, we add Poisson noise on each pixel by randomly resampling each pixel number density from a Poisson distribution. Finally, we smooth the mock maps with the same Gaussian beam we apply to the real data. We demonstrate in Fig. A2 that the mean auto- and cross-power spectra of the mocks and their scatter agree well with the properties of the real data.
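A schematic of one mock galaxy realisation, written with healpy; the function and argument names are illustrative, and the real pipeline details (mask treatment, lensing noise mocks) are as described above:

import numpy as np
import healpy as hp

def make_galaxy_mock(delta_fullsky, mask, nbar_pix, rot_deg, fwhm_arcmin, seed=None):
    # Rigidly rotate the all-sky overdensity into the survey footprint,
    # Poisson-sample galaxy counts per pixel, and smooth with the same
    # Gaussian beam applied to the real maps.
    rng = np.random.default_rng(seed)
    rot = hp.Rotator(rot=rot_deg, deg=True)
    delta = rot.rotate_map_pixel(delta_fullsky)
    counts = rng.poisson(nbar_pix * np.clip(1.0 + delta, 0.0, None))
    delta_obs = counts / nbar_pix - 1.0
    delta_obs = hp.smoothing(delta_obs, fwhm=np.radians(fwhm_arcmin / 60.0))
    return np.where(mask > 0, delta_obs, hp.UNSEEN)   # apply the survey mask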
[Figure A1. Comparison between the rotated and un-rotated (all-sky) power spectra (from top to bottom: galaxy-galaxy, galaxy-CMB lensing, and lensing-lensing). Rotated mocks (in blue and red for N_side = 1024 and 2048 respectively) yield unbiased results with respect to the un-rotated maps (black). The Gaussian smoothing was not applied for this particular test.]

[Figure A2. Comparison between the data and the mocks. The red points show the measured auto- and cross-spectra of the real DES, SPT and Planck data, while the black dashed and solid lines describe the mean value and the 1σ scatter of the same spectra measured on our N-body mocks. The different shapes of the SPT and Planck cross-spectra are due to the different smoothing we apply.]

[Figure B1. Modelling the shot-noise contribution to a smoothed galaxy map. The mean auto-correlation function of 100 unsmoothed galaxy mocks (green circles) in the redshift bin 1.0 < z_phot < 1.2 is in good agreement with the fiducial cosmological model w^gg(ϑ) (green dashed line); the effect of shot noise is limited to an additional contribution at zero separation, not shown in the plot. When the mocks are smoothed, the shot-noise component spreads to non-zero angles: the observed mean auto-correlation (orange squares) does not match the smoothed cosmological model w^gg_smooth(ϑ) (dot-dashed red curve), as the shot-noise contribution is missing from the model. We plot with blue triangles the shot-noise component measured from the mocks with the estimator of Eq. (B4), which is well fit by the model w^shot_smooth(ϑ) with an amplitude A_shot = 0.047 (dotted blue line). By adding this to the smoothed cosmological theory, we obtain the full model of Eq. (B2) (black solid line), which is a good match to the smoothed mocks.]

APPENDIX B: SHOT NOISE

As we estimate the matter overdensity via the number of observed galaxies per unit area, shot noise is introduced in the analysis. In harmonic space, this is described in the ideal case by the contribution $N^{gg}_\ell = 1/\bar{n}$ mentioned in Section 2; this is constant on the full sky, but it is affected in the same way as the cosmological signal $C^{gg}_\ell$ by the effects of the survey mask, pixellation, and any additional smoothing applied to the map. In a real-space analysis of a pixellated map, shot noise only affects the auto-correlation function at zero lag, by adding to the cosmological signal $w^{gg}(\vartheta)$ a contribution $w^{\rm shot}(0\,{\rm deg}) = 1/\bar{n}_{\rm pix}$, where $\bar{n}_{\rm pix}$ is the number density of galaxies per pixel. However, as in our analysis we apply an additional Gaussian smoothing, the effect of shot noise is diluted onto angular separations ϑ > 0 deg. The effective auto-correlation can be written as
$$ w^{gg+{\rm shot}}_{\rm smooth}(\vartheta) = w^{gg}_{\rm smooth}(\vartheta) + w^{\rm shot}_{\rm smooth}(\vartheta) \,, \qquad (B2) $$
where $w^{gg}_{\rm smooth}(\vartheta)$ is the smoothed galaxy auto-correlation of cosmological origin, while the shot-noise contribution is (Boughn et al. 2002)
$$ w^{\rm shot}_{\rm smooth}(\vartheta) = A_{\rm shot}\, \frac{1}{\bar{n}_{\rm pix}}\, e^{-\vartheta^2 / 4\sigma^2} \,. \qquad (B3) $$
The constant $A_{\rm shot}$ depends on the relative size of the pixel $d_{\rm pix}$ and the smoothing beam: for $\sigma \ll d_{\rm pix}$, $A_{\rm shot} \to 1$ and the shot noise is returned to the zero-lag limit, while the dilution will affect larger scales, and $A_{\rm shot} \to 0$, if $\sigma \gg d_{\rm pix}$. We determine $A_{\rm shot}$ from our set of N-body simulations as follows. We first estimate the shot-noise contribution in each angular bin by averaging over the 100 mock maps as
$$ \hat{w}^{\rm shot}_{\rm smooth}(\vartheta) = \left\langle \hat{w}^i_{\rm smooth}(\vartheta) - \hat{w}^i(\vartheta)\, \frac{w^{gg}_{\rm smooth}(\vartheta)}{w^{gg}(\vartheta)} \right\rangle_i \,, \qquad (B4) $$
where $\hat{w}^i_{\rm smooth}(\vartheta)$ is the measured auto-correlation from the smoothed mock i, $\hat{w}^i(\vartheta)$ is the measured auto-correlation from the unsmoothed mock i, and $w^{gg}_{\rm smooth}(\vartheta)/w^{gg}(\vartheta)$ is the ratio between the smoothed and unsmoothed theoretical predictions for the cosmological signal. We focus on the highest redshift bin, 1.0 < z_phot < 1.2, which has the lowest number density, and thus the highest shot noise. We then derive $A_{\rm shot}$ with a one-parameter likelihood fit, minimising the χ² between the mock data of Eq. (B4) and the model of Eq. (B3), using the full covariance matrix from the same mocks. We thus obtain $A_{\rm shot} = 0.047 \pm 0.002$. As we can see in Fig. B1, this model is in good agreement with the measured auto-correlations of the smoothed mocks. We have confirmed that the same value of $A_{\rm shot}$ is accurate for all redshift bins, as expected. We use this model to subtract the shot-noise contribution from all measured real-space auto-correlations. The cross-correlations are naturally unaffected.
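Because the model of Eq. (B3) is linear in the single parameter $A_{\rm shot}$, the χ² minimisation has a closed form; a sketch with illustrative names, assuming the Eq. (B4) mock average and its covariance are precomputed:

import numpy as np

def w_shot_model(theta, nbar_pix, sigma, a_shot=1.0):
    # Smoothed shot-noise template of Eq. (B3).
    return a_shot / nbar_pix * np.exp(-theta**2 / (4.0 * sigma**2))

def fit_a_shot(theta, w_shot_mocks, cov, nbar_pix, sigma):
    # One-parameter chi^2 fit of A_shot to the mock estimate of Eq. (B4),
    # using the full covariance from the same mocks.
    t = w_shot_model(theta, nbar_pix, sigma)    # template at A_shot = 1
    cinv = np.linalg.inv(cov)
    var = 1.0 / (t @ cinv @ t)
    return var * (t @ cinv @ w_shot_mocks), np.sqrt(var)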
APPENDIX C: ROBUSTNESS OF THE COVARIANCE MATRIX ESTIMATION

We demonstrate in this section the robustness of our covariance matrix estimation in real and harmonic spaces.

[Figure C1. Comparison of the real-space diagonal error bars for our four estimators of the covariance matrix: theoretical (orange dashed), MC (green dot-dashed), JK (red dotted) and from N-body simulations (blue solid). The three panels refer, from top to bottom, to the galaxy auto-correlation and the cross-correlations with SPT and Planck. The different methods agree for the cross-correlations, while the N-body covariance yields marginally larger error bars for the auto-correlation, as expected due to the effect of non-Gaussianities. We use the N-body errors for our main results.]

[Figure C2. Comparison of four correlation matrix estimators in real space, for the three correlations we consider. The first row refers to the theoretical covariance, the second row is obtained from 1000 MC realisations, the third row is the JK method (100 regions), and the final row shows the 100 N-body realisations. The three columns refer to galaxy-galaxy, galaxy-SPT and galaxy-Planck lensing convergence correlations respectively. The angular range is from 2.4 arcmin to 5 deg, as in Fig. 5. The different methods produce consistent covariances; by comparing the different correlation matrices, we see that the galaxy-CMB lensing correlation matrices are more diagonal than the galaxy-galaxy case, which is related to the auto-correlation theory being more non-linear, and thus more non-Gaussian and covariant, at these scales. Furthermore, all matrices become more covariant in the first few angular bins due to the introduction of the Gaussian smoothing of the maps, which effectively blurs information on scales ϑ < ϑ_FWHM = 5.4 arcmin (DES-SPT) and ϑ < ϑ_FWHM = 10.8 arcmin (DES-Planck).]

We estimate the covariance matrix of the results in four different ways. In addition to the MC and N-body methods described above in Section 5.1.2, we first use an analytic approach: we (optimistically) assume a diagonal covariance in harmonic space, including the uncertainties from cosmic variance, shot noise in the galaxy counts, and CMB lensing noise, as described in Eq. (15). The galaxy shot noise is determined by the observed galaxy number density in our sample, and we use for the CMB lensing noise its level as determined from the CMB lensing maps' auto-spectrum, as discussed below in Section 5.2.3. For any pair of maps a, b, the harmonic-space covariance $\sigma^2_{C^{ab}_\ell}$ defined above in Section 2 is readily transformed to real space:
$$ \mathrm{Cov}[w^{ab}](\vartheta_i, \vartheta_j) = \sum_\ell \frac{(2\ell+1)^2}{(4\pi)^2}\, P_\ell(\cos\vartheta_i)\, P_\ell(\cos\vartheta_j)\, \sigma^2_{C^{ab}_\ell} \,. \qquad (C1) $$
Notice the sum goes in principle up to infinity, but it is in practice possible to truncate it at a finite value, given the Gaussian smoothing we apply. We also estimate the covariance matrices with a jack-knife (JK) technique. This consists of removing in turn $N_{\rm JK}$ subsets of the data, to obtain $N_{\rm JK}$ pseudo-random realisations of the correlations, whose scatter can be used to estimate the covariance as in Eq. (19), but with an additional factor:
$$ \hat{C}^{ab}_{ij\,{\rm JK}} = \frac{N_{\rm JK}-1}{N_{\rm JK}} \sum_{\alpha=1}^{N_{\rm JK}} \left( \hat{w}^{ab}_{\alpha,i} - \bar{w}^{ab}_i \right) \left( \hat{w}^{ab}_{\alpha,j} - \bar{w}^{ab}_j \right) \,. \qquad (C2) $$
The advantage in this case is that the method is completely model-independent; it is nevertheless not uniquely defined, as the number of patches that can be removed is limited, and it typically yields different results depending on the particular procedure chosen. Also in this case we use the β correction of the inverse covariance (Eq. 20); while not mathematically exact in the case of non-independent realisations, it was shown by Hartlap et al. (2007) to yield accurate results also in this case. We tested several JK methods; we show below the results for a scheme where we have divided the galaxy mask into $N_{\rm JK} = 100$ mostly contiguous patches with the same number of pixels. We have achieved this by selecting 100 sets of pixels whose pixel ID is contiguous in the Healpix nested scheme; this ensures the patches are mostly contiguous.

[Table C1. Summary of the results for the main galaxy sample for real (left) and harmonic (right) spaces: best-fit linear bias b and correlation amplitudes A = bA_Lens for the three correlation functions and four covariance estimators. The results are consistent with each other and with the theoretical expectations for our fiducial model, but the cross-correlation amplitude is lower than the auto-correlation by 2-3σ. The recovered χ² per degree of freedom indicates the models and covariance estimators are in all cases appropriate for the data, with the only exception of the DES-Planck theoretical covariance in real space.]

[Figure C3. Comparison of two different correlation matrix estimators in harmonic space: the first row shows the results from MC realisations, while the second row refers to 100 N-body realisations. We show correlations among C_ℓ band-powers for the galaxy auto-, galaxy-SPT lensing and galaxy-Planck cross-correlations, from left to right, respectively. We use ten linear multipole band-powers from ℓ_min = 30 to ℓ_max = 2000, with Δℓ = 197, matching the bins of Fig. 10.]

We summarise in Table C1 the best-fit results obtained using the four covariance estimators in real and harmonic spaces, where we can see that all methods agree, and they all yield realistic reduced χ² values. The only exception is the DES-Planck case with the theoretical covariance estimator (PTE < 1%); the N-body covariance nonetheless yields a typical χ² (PTE: 35%). We then show in Fig. C1 the real-space diagonal error bars obtained with the four estimators we consider: theoretical prediction, jack-knife, Monte Carlo, and full N-body. The error bars obtained with all methods are in excellent agreement: we can see that all methods fully agree for the cross-correlations, while in the auto-correlation the N-body errors are larger than the others; this is reasonable, as this is the only method, besides the less stable JK, that incorporates the non-Gaussian variance produced on small scales by non-linear structure formation. In order to also compare the off-diagonal part of the covariance matrices, we show in Fig. C2 the real-space correlation matrices of the three correlations we study, obtained with all four methods we consider. We can see that the agreement between the methods is excellent; the JK results are marginally noisier, but the general behaviour consistently shows a high off-diagonal covariance for the galaxy auto-correlation on small scales, and a lower covariance for the cross-correlations.
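Both covariance constructions are short in code; a sketch of Eqs. (C1) and (C2), with illustrative inputs (a diagonal harmonic-space variance per multipole, and the array of jack-knife measurements):

import numpy as np
from scipy.special import eval_legendre

def realspace_cov(theta_deg, var_cl):
    # Eq. (C1): transform a diagonal harmonic-space covariance sigma^2(C_l)
    # into the covariance of w(theta). The sum is truncated at the lmax of
    # var_cl, which suffices once the Gaussian smoothing is included.
    ell = np.arange(len(var_cl))
    pref = (2.0 * ell + 1.0)**2 / (4.0 * np.pi)**2 * var_cl
    x = np.cos(np.radians(theta_deg))
    P = np.array([eval_legendre(l, x) for l in ell])   # (lmax+1, n_theta)
    return np.einsum('l,li,lj->ij', pref, P, P)

def jackknife_cov(w_jk):
    # Eq. (C2): JK covariance from the (N_JK, n_bins) array of correlations
    # measured with each of the N_JK patches removed in turn.
    n = w_jk.shape[0]
    d = w_jk - w_jk.mean(axis=0)
    return (n - 1.0) / n * (d.T @ d)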
Finally, we show in Fig. C3 a comparison of the harmonic-space correlation matrices. In this case, the theoretical correlation matrix is fully diagonal, as the effect of the survey mask is not included, so we do not show it, but limit the comparison to the MC and N-body estimators. We find that the MC covariances are still mostly diagonal, while more significant off-diagonal contributions emerge in the case of the full N-body estimator, especially for the auto-spectrum. This is expected, and is due to the significant non-Gaussianities produced by non-linearities, which are non-negligible on these scales.

APPENDIX D: THE OPTIMAL QUADRATIC C_ℓ ESTIMATOR

D1 Implementation

In the following, we briefly reiterate the basic equations of the optimal quadratic estimator proposed by Tegmark (1997). We then summarise the necessary extensions for power spectrum estimation in wide multipole bins, and discuss a simple regularisation approach for bandpass-filtered data. We start by defining the $n_{\rm pix} \times n_{\rm pix}$ data covariance matrix in pixel space as a sum of contributions from signal and noise, C = S + N. For an isotropic signal with power spectrum $C_\ell$,
$$ S_{nn'} = \sum_{\ell=\ell_{\rm min}}^{\ell_{\rm max}} \frac{2\ell+1}{4\pi}\, C_\ell\, P_\ell(\hat{n} \cdot \hat{n}') \,, \qquad (D1) $$
where we have introduced the Legendre polynomials $P_\ell$ with the argument given by the dot product between the normal vectors of pixels n and n'. Then, the optimal power spectrum estimate $\hat{C}_\ell$ is given by a quadratic combination of the data vector d,
$$ \hat{C}_\ell = \frac{1}{2} \sum_{\ell'} F^{-1}_{\ell\ell'}\, d^\dagger E_{\ell'}\, d \,, \qquad (D2) $$
where
$$ E_\ell = C^{-1} \frac{\partial C}{\partial C_\ell} C^{-1} \,, \qquad (D3) $$
and the Fisher matrix is
$$ F_{\ell\ell'} = \frac{1}{2}\, \mathrm{tr}\left[ C^{-1} \frac{\partial C}{\partial C_\ell}\, C^{-1} \frac{\partial C}{\partial C_{\ell'}} \right] \,. \qquad (D4) $$
For unbiased power spectrum estimation in multipole bins b, we now extend Eq. (D1) by introducing weight functions $w_\ell$,
$$ S_{nn'} = \sum_b C_b \sum_{\ell \in b} \frac{2\ell+1}{4\pi} \frac{1}{w_\ell}\, P_\ell(\hat{n} \cdot \hat{n}') \,, \qquad (D5) $$
where the equality holds if $C_b = w_\ell C_\ell = \mathrm{constant}$ within each bin. We therefore choose the weights $w_\ell \propto 1/C_\ell$, normalised to unity. The derivative with respect to individual power spectrum elements in Eq. (D3) is then simply replaced by $\partial/\partial C_b$. We note that, to compare power spectrum estimates to a theoretical model, the model has to be binned with the same weight function $w_\ell$. For a signal covariance matrix with a smaller number of Fourier modes than pixels, Eq. (D1) will return a rank-deficient matrix. In case the same holds true for the noise covariance matrix (e.g., for bandpass-filtered noise), the inverse of the covariance matrix does not exist. One way to solve this problem is to restrict all computations to the non-singular subspace of C. Here, we adopt the simpler approach of regularisation: we multiply the diagonal elements of the covariance matrix, $\tilde{C}_{nn} = (1 + \epsilon) \cdot C_{nn}$, where typically $\epsilon \approx 10^{-7}$. In the next section, we demonstrate that our pipeline as described here produces reliable results.
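For a small pixelised patch, the estimator of Eqs. (D2)-(D4), including the diagonal regularisation described above, can be written compactly. This sketch assumes the pixel-space derivative matrices $\partial S/\partial C_b$ of Eq. (D5) and the fiducial signal covariance are precomputed, and a production version would iterate around the fiducial model; all names are illustrative.

import numpy as np

def quadratic_cl(d, dS_dCb, N, S_fid, eps=1e-7):
    # Tegmark (1997) optimal quadratic estimator for binned band-powers.
    # d: pixel data vector; dS_dCb: list of matrices dS/dC_b (Eq. D5);
    # N: noise covariance; S_fid: fiducial signal covariance (Eq. D1).
    C = S_fid + N
    C[np.diag_indices_from(C)] *= (1.0 + eps)   # regularisation of App. D1
    Cinv = np.linalg.inv(C)
    y = np.array([0.5 * d @ Cinv @ P @ Cinv @ d for P in dS_dCb])
    F = np.array([[0.5 * np.trace(Cinv @ Pa @ Cinv @ Pb) for Pb in dS_dCb]
                  for Pa in dS_dCb])            # Fisher matrix, Eq. (D4)
    return np.linalg.solve(F, y)                # C_b = F^{-1} y, Eq. (D2)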
[Figure D1. The optimal quadratic estimator is unbiased. We compare the power spectrum estimates, averaged over all 1000 simulations, to the input power spectrum (top panel). We also plot the residuals after dividing by the fiducial model (bottom panel), together with the 1 and 2σ errors on the mean (grey regions), i.e., the error bars of an individual simulation divided by √1000. We find no evidence for a detectable bias, even in the highest multipole bins, close to the resolution limit of a Healpix map at N_side = 512.]

D2 Verification

We generated 1000 realisations of a Gaussian random field at N_side = 2048 and ℓ_max = 2500 from the theoretical model of the SPT-E galaxy × galaxy power spectrum. We added an isotropic contribution of white noise at a level of $N_\ell = 2.1 \times 10^{-8}$ to the maps, roughly consistent with the observed level of shot noise in this field. For each simulated map, we computed auto power spectra at a downgraded resolution of N_side = 512, using the SPT-E mask, in 10 uniform multipole bins up to the applied bandpass limit of ℓ_max = 1500. We further calculated the $C_\ell$ Fisher matrix using its mathematically exact analytical formula. In Fig. D1, we show the results of our comparison. We plot the averaged power spectrum estimates, compare them to the inputs, and find no evidence for a significant bias.

[Figure 1. Signal-to-noise forecasts for the DES-CMB lensing correlations, for a range of different CMB and DES data sets. Top panel: the theoretical CMB lensing auto-spectrum compared with the noise of Planck 2015 and SPT-SZ, as well as the projected noise of the upcoming SPT-3G survey.]

[Figure 2. Map of the main galaxies used for our analysis in the SPT-E field, pixellated on the Healpix N_side = 2048 scheme (pixel side: 1.7 arcmin) in Equatorial coordinates, after masking. The colour scale indicates the number of galaxy counts in each pixel. The grid lines are 2.5 deg apart. Grey areas indicate masked data or areas outside the SV footprint. The coordinates (74.6, -52.7) indicate the position of the map centre.]

[Figure 5. Measured two-point correlation functions of DES-SV main galaxies and their correlations with CMB lensing maps. The red dots show the measured results using our full galaxy catalogue. The top panel shows the galaxy auto-correlation, the central panel is the correlation with the SPT lensing convergence, while the bottom panel shows the same with Planck.]

[Figure 6. Correlation matrices for the three cases we consider, estimated with the N-body method. The matrices refer to galaxy-galaxy, galaxy-SPT, and galaxy-Planck lensing respectively. The angular range is from 2.4 arcmin to 5 deg, as in Fig. 5. We see that the galaxy-CMB lensing correlation matrix is more diagonal than the galaxy-galaxy case, as the auto-correlation theory is more non-linear, and thus more non-Gaussian and less diagonal. Furthermore, all matrices become less diagonal in the first few angular bins due to the introduction of the Gaussian smoothing of the maps, which effectively blurs information on scales ϑ < ϑ_FWHM = 5.4 arcmin (DES-SPT) and 10.8 arcmin (DES-Planck).]

[Figure 8. Correlation matrices from N-body realisations in harmonic space: we show correlations among C_ℓ band-powers for the galaxy auto-, galaxy-SPT lensing and galaxy-Planck cross-correlations, from left to right, respectively. We use ten linear multipole band-powers from ℓ_min = 30 to ℓ_max = 2000, with Δℓ = 197, matching the bins of Fig. 10.]

[Figure 9. Auto-spectra measured from the CMB lensing convergence maps (points with error bars) from Planck (blue squares) and SPT (black circles), compared with the fiducial cosmological signal (magenta solid line). The dashed lines describe the average of 100 mock realisations that fully characterise the Planck and SPT maps respectively. For the Planck case we show two sets of data: measured over the full Planck lensing mask (dotted line and empty points), and over the intersection of the lensing mask with the DES-SV SPT-E mask (dashed line, full points). We can see that the convenient position of the DES-SV SPT-E area next to the South Ecliptic Pole results in a 25% noise reduction. No smoothing is applied to the maps for this figure.]
[Figure 12. The power spectra derived with the optimal quadratic estimator are in quantitative agreement with theoretical predictions and with the pseudo-C_ℓ estimator. We compare the galaxy-galaxy (upper panel) and the galaxy-CMB lensing potential power spectra (middle panel) of SPT-E (blue circles) to the theoretical model (black solid line). For comparison, we include results of the pseudo-C_ℓ estimator (open grey symbols, plotted with a small offset in ℓ for better visualisation). Residuals of the cross-correlation power spectrum, shown on a linear scale (bottom panel), are consistent with zero within the error bars.]

[Figure 15. Pixel distributions of the potential DES systematics we consider. The histograms show the number of pixels where each systematic assumes the value shown in the abscissa. In addition, five possible cuts of the worst-affected areas are shown, ranging from 2 to 50%.]

[Figure 19. Measured DES auto- (top) and DES-SPT lensing cross-correlation (bottom) functions for two different choices of photometric redshift estimators: our baseline TPZ choice is shown in red, while the alternative BPZ catalogue is in navy blue. The theory lines are produced according to each catalogue's redshift distribution. The recovered best-fit biases and cross-correlation amplitudes are shown in the caption for both photo-z methods.]

[Figure 21. Reconstructed measurements of the redshift evolution of linear bias b(z) from galaxy auto-correlations, as also presented by Crocce et al. (2016) (top panel), galaxy-CMB lensing cross-correlation amplitudes A(z) from the cross-correlations (central panel), and the linear growth function from the D_G(z) estimator (bottom panel) from the combined tomography of galaxy clustering and galaxy-CMB lensing correlations. The red (round) points are derived from the correlation functions, while the blue (square) points are from the angular power spectra. The purple dashed line shows the mean best-fit amplitude to D_G with 1 and 2σ uncertainty bands. We also show for comparison the best-fit bias model of Eq. (34) in the top and central panels (dotted lines), and the theoretical growth function for the Planck fiducial cosmology in the bottom panel (thick solid line). The low values of A we observe translate into a preference for a lower D_G in most redshift bins.]

[Figure 22. The red circles and blue squares show our growth function measurements with the D_G estimator, compared with the fiducial Planck best-fit ΛCDM prediction (thick black line), different choices of the ΛCDM parameters (top panel), and a selection of dark energy and modified gravity models (bottom panel). Top panel: the green dashed line shows the prediction for the MICE cosmology, while the orange dot-dashed line refers to the best-fit ΛCDM model to the CFHTLenS + WMAP 7 data by Heymans et al. (2013). Bottom panel: the coloured lines display, in order: a Linder γ model (Linder & Cahn 2007), a dark energy model parameterised by w_0, w_a (Chevallier & Polarski 2001), and two models of modified gravity at the perturbative level, the entropy perturbation (wΓ) and anisotropic stress (wΠ) models, as described by Battye & Pearson (2013).]

[Table 2. Summary of the main results of the redshift tomography in real and harmonic spaces. The top half of the table shows the best-fit biases b to the DES auto-correlations, while the lower half illustrates the best fits to the DES-SPT lensing cross-correlation amplitudes A = bA_Lens. All results are shown for the N-body covariance matrix.]
The real- and harmonic-space results are in good agreement, with few exceptions, most notably the third-bin cross-correlation, which we discuss in Section 5.2.4 below. The reduced χ² values are consistent with 1 in most cases, except for the auto-correlations in the first and last redshift bins.

[Table 2 data (DES-SPT cross-correlation amplitudes A ± σ_A, S/N, χ²/d.o.f.; real space | harmonic space):
0.2 < z_phot < 0.4: … ± 0.21, 2.0, 10/11 | 0.57 ± 0.25, 2.3, 16/19
0.4 < z_phot < 0.6: 0.75 ± 0.25, 3.1, 11/11 | 0.91 ± 0.22, 4.2, 24/19
0.6 < z_phot < 0.8: 1.25 ± 0.25, 5.1, 9.5/11 | 0.68 ± 0.28, 2.4, 29/19
0.8 < z_phot < 1.0: 1.08 ± 0.29, 3.8, 7.3/11 | 1.02 ± 0.31, 3.3, 22/19
1.0 < z_phot < 1.2: 1.95 ± 0.37, 5.3, 9.3/11 | 1.83 ± 0.42, 4.4, 23/19]

[Table C1 data (full sample, 0.2 < z_phot < 1.2; entries are best fit ± error, S/N, χ²/d.o.f., for real space | harmonic space):
Gal-Gal (b):    N-body 1.22 ± 0.03, 41, 3.8/8  | 1.22 ± 0.04, 34, 2.7/3
                Theory 1.23 ± 0.02, 51, 5.8/8  | 1.26 ± 0.03, 51, 1.3/3
                MC     1.23 ± 0.03, 47, 5.8/8  | 1.26 ± 0.04, 33, 0.54/3
                JK     1.22 ± 0.03, 48, 5.4/8  | -
Gal-SPT (A):    N-body 0.84 ± 0.13, 6.3, 8.4/11 | 0.84 ± 0.15, 5.6, 8.7/19
                Theory 0.86 ± 0.13, 6.6, 13/11  | 0.85 ± 0.13, 6.6, 11/19
                MC     0.91 ± 0.13, 6.9, 9.2/11 | 0.81 ± 0.15, 5.4, 15/19
                JK     0.91 ± 0.14, 6.5, 5.3/11 | -
Gal-Planck (A): N-body 0.78 ± 0.21, 3.7, 11/10  | 0.81 ± 0.20, 3.8, 7.7/9
                Theory 0.86 ± 0.24, 3.6, 25/10  | 0.82 ± 0.21, 3.8, 8.3/9
                MC     0.77 ± 0.20, 3.8, 10/10  | 0.82 ± 0.25, 3.3, 5.3/3
                JK     0.77 ± 0.18, 4.4, 7.8/10 | -]

ACKNOWLEDGEMENTS

TG thanks Anthony Challinor and George Efstathiou for comments on a draft version of this paper, and James Fergusson, Martin Kilbinger and Ariel Sánchez for useful discussions. TG acknowledges support from the Kavli Foundation, STFC grant ST/L000636/1, and from the Excellence Cluster 'Universe' of Garching, Germany, as well as the Institut de Ciències de l'Espai, IEEC-CSIC, Universitat Autònoma de Barcelona, for hospitality. PF acknowledges support from the MareNostrum supercomputer (BSC-CNS, www.bsc.es), grants AECT-2008-1-0009 to 2010-1-0007, Port d'Informació Científica (www.pic.es), and the CosmoHUB portal (cosmohub.pic.es), where the MICE simulations were run, stored, and distributed, respectively. PF is funded by MINECO, project ESP2013-48274-C3-1-P. FE, BL and HVP were partially supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no 306478-CosmicDawn. CR acknowledges support from the University of Melbourne and from the Australian Research Council's Discovery Projects scheme (DP150103208). This paper has gone through internal review by the DES collaboration. We are grateful for the extraordinary contributions of our CTIO colleagues and the DECam Construction, Commissioning and Science Verification teams in achieving the excellent instrument and telescope conditions that have made this work possible. The success of this project also relies critically on the expertise and dedication of the DES Data Management group. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S.
National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. The DES data management system is supported by the National Science Foundation under Grant Number AST-1138766.

The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenössische Technische Hochschule (ETH) Zürich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciències de l'Espai (IEEC/CSIC), the Institut de Física d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universität München and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, The Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University.

The DES participants from Spanish institutions are partially supported by MINECO under grants AYA2012-39559, ESP2013-48274, FPA2013-47986, and Centro de Excelencia Severo Ochoa SEV-2012-0234. Research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013), including ERC grant agreements 240672, 291329, and 306478.

REFERENCES

Ade P. A. R., et al., 2014, Phys. Rev. Lett., 113, 021301
Aihara H., et al., 2011, ApJS, 193, 29
Arnold K., et al., 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, p. 1
Ashby M. L. N., et al., 2009, ApJ, 701, 428
BICEP2 Collaboration, 2014, Physical Review Letters, 112, 241101
BICEP2/Keck and Planck Collaborations, 2015, Physical Review Letters, 114, 101301
Baldauf T., Smith R. E., Seljak U., Mandelbaum R., 2010, Phys. Rev. D, 81, 063531
Battye R. A., Pearson J. A., 2013, Phys. Rev. D, 88, 061301
Benítez N., 2000, ApJ, 536, 571
Benson B. A., et al., 2014, in SPIE Conference Series, p. 1 (arXiv:1407.2973)
Beutler F., et al., 2014, MNRAS, 443, 1065
Bianchini F., et al., 2015, ApJ, 802, 64
Bleem L. E., et al., 2012, ApJ, 753, L9
Bleem L. E., Stalder B., Brodwin M., Busha M. T., Gladders M. D., High F. W., Rest A., Wechsler R. H., 2015, ApJS, 216, 20
Bocquet S., et al., 2015, ApJ, 799, 214
Boldt E., 1987, Phys. Rep., 146, 215
Bond J. R., Jaffe A. H., Knox L., 1998, Phys. Rev. D, 57, 2117
Borrill J., 1999, Phys. Rev. D, 59, 027302
Boughn S., Crittenden R., 2004, Nature, 427, 45
Boughn S. P., Crittenden R. G., Turok N. G., 1998, New Astr., 3, 275
Boughn S. P., Crittenden R. G., Koehrsen G. P., 2002, ApJ, 580, 672
Bovy J., et al., 2011, ApJ, 729, 141
Cabré A., Gaztañaga E., Manera M., Fosalba P., Castander F., 2006, MNRAS, 372, L23
Cabré A., Fosalba P., Gaztañaga E., Manera M., 2007, MNRAS, 381, 1347
Cai Y.-C., Bernstein G., Sheth R. K., 2011, MNRAS, 412, 995
Carlstrom J. E., Holder G. P., Reese E. D., 2002, ARA&A, 40, 643
Carrasco Kind M., Brunner R. J., 2013, MNRAS, 432, 1483
Challinor A., Lewis A., 2011, Phys. Rev. D, 84, 043516
Chevallier M., Polarski D., 2001, International Journal of Modern Physics D, 10, 213
Chon G., Challinor A., Prunet S., Hivon E., Szapudi I., 2004, MNRAS, 350, 914
Cole S., Efstathiou G., 1989, MNRAS, 239, 195
Condon J. J., Cotton W. D., Greisen E. W., Yin Q. F., Perley R. A., Taylor G. B., Broderick J. J., 1998, AJ, 115, 1693
Crittenden R. G., Turok N., 1996, Physical Review Letters, 76, 575
Crocce M., Cabré A., Gaztañaga E., 2011, MNRAS, 414, 329
Crocce M., Castander F. J., Gaztañaga E., Fosalba P., Carretero J., 2015, MNRAS, 453, 1513
Crocce M., et al., 2016, MNRAS, 455, 4301
Das S., et al., 2011a, Physical Review Letters, 107, 021301
Das S., et al., 2011b, ApJ, 729, 62
Das S., Errard J., Spergel D., 2013, preprint (arXiv:1311.2338)
Das S., et al., 2014, J. Cosmology Astropart. Phys., 4, 14
Dekel A., Lahav O., 1999, ApJ, 520, 24
Desai S., et al., 2012, ApJ, 757, 83
DiPompeo M. A., Myers A. D., Hickox R. C., Geach J. E., Holder G., Hainline K. N., Hall S. W., 2015, MNRAS, 446, 3492
Dodelson S., Schneider M. D., 2013, Phys. Rev. D, 88, 063537
Feng C., Keating B., Paar H. P., Zahn O., 2012, Phys. Rev. D, 85, 043513
Flaugher B., et al., 2015, preprint (arXiv:1504.02900)
Fosalba P., Gaztañaga E., 2004, MNRAS, 350, L37
Fosalba P., Szapudi I., 2004, ApJ, 617, L95
Fosalba P., Gaztañaga E., Castander F. J., 2003, ApJ, 597, L89
Fosalba P., Gaztañaga E., Castander F. J., Crocce M., 2015a, MNRAS, 447, 1319
Fosalba P., Crocce M., Gaztañaga E., Castander F., 2015b, MNRAS, 448, 2987
Fry J. N., Gaztanaga E., 1993, ApJ, 413, 447
Gaztañaga E., Eriksen M., Crocce M., Castander F. J., Fosalba P., Marti P., Miquel R., Cabré A., 2012, MNRAS, 422, 2904
Geach J. E., et al., 2013, ApJ, 776, L41
Giannantonio T., Percival W. J., 2014, MNRAS, 441, L16
Giannantonio T., et al., 2006, Phys. Rev. D, 74, 063520
Giannantonio T., Scranton R., Crittenden R. G., Nichol R. C., Boughn S. P., Myers A. D., Richards G. T., 2008, Phys. Rev. D, 77, 123520
Giannantonio T., Porciani C., Carron J., Amara A., Pillepich A., 2012a, MNRAS, 422, 2854
Giannantonio T., Crittenden R., Nichol R., Ross A. J., 2012b, MNRAS, 426, 2581
Giannantonio T., Ross A. J., Percival W. J., Crittenden R., Bacher D., Kilbinger M., Nichol R., Weller J., 2014, Phys. Rev. D, 89, 023511
Górski K. M., Hivon E., Banday A. J., Wandelt B. D., Hansen F. K., Reinecke M., Bartelmann M., 2005, ApJ, 622, 759
Griffin M. J., et al., 2010, A&A, 518, L3
Hanson D., et al., 2013, Physical Review Letters, 111, 141301
Hartlap J., Simon P., Schneider P., 2007, A&A, 464, 399
Heymans C., et al., 2013, MNRAS, 432, 2433
Hirata C. M., Seljak U., 2003, Phys. Rev. D, 67, 043001
Hirata C. M., Ho S., Padmanabhan N., Seljak U., Bahcall N. A., 2008, Phys. Rev. D, 78, 043520
Hivon E., Górski K. M., Netterfield C. B., Crill B. P., Prunet S., Hansen F., 2002, ApJ, 567, 2
Ho S., et al., 2012, ApJ, 761, 14
Hoekstra H., van Waerbeke L., Gladders M. D., Mellier Y., Yee H. K. C., 2002, ApJ, 577, 604
Holder G. P., et al., 2013, ApJ, 771, L16
Jullo E., et al., 2012, ApJ, 750, 37
Keck Array and BICEP2 Collaborations, 2015, ApJ, 811, 126
Keisler R., et al., 2011, ApJ, 743, 28
Keisler R., et al., 2015, ApJ, 807, 151
Koester B. P., et al., 2007, ApJ, 660, 239
Leistedt B., Peiris H. V., Mortlock D. J., Benoit-Lévy A., Pontzen A., 2013, MNRAS, 435, 1857
Leistedt B., et al., 2015, preprint (arXiv:1507.05647)
Lewis A., Challinor A., 2006, Phys. Rep., 429, 1
Lewis A., Challinor A., Lasenby A., 2000, ApJ, 538, 473
Linder E. V., 1990, MNRAS, 243, 353
Linder E. V., Cahn R. N., 2007, Astroparticle Physics, 28, 481
Liu J., Hill J. C., 2015, Phys. Rev. D, 92, 063517
MacCrann N., Zuntz J., Bridle S., Jain B., Becker M. R., 2015, MNRAS, 451, 2877
Manera M., Gaztañaga E., 2011, MNRAS, 415, 383
Mantz A. B., et al., 2015, MNRAS, 446, 2205
Okamoto T., Hu W., 2003, Phys. Rev. D, 67, 083002
Omori Y., Holder G., 2015, preprint (arXiv:1502.03405)
POLARBEAR Collaboration, 2014a, Physical Review Letters, 112, 131302
POLARBEAR Collaboration, 2014b, ApJ, 794, 171
Pen U.-L., 1998, ApJ, 504, 601
Percival W. J., et al., 2014, MNRAS, 439, 2531
Planck Collaboration et al., 2014a, A&A, 571, A11
Planck Collaboration et al., 2014b, A&A, 571, A16
Planck Collaboration et al., 2014c, A&A, 571, A17
Planck Collaboration et al., 2015a, preprint (arXiv:1502.01591)
Planck Collaboration et al., 2015b, preprint (arXiv:1507.02704)
Planck Collaboration et al., 2015c, preprint (arXiv:1502.01589)
Pullen A. R., Alam S., Ho S., 2015, MNRAS, 449, 4326
Rees M. J., Sciama D. W., 1968, Nature, 217, 511
Reyes R., Mandelbaum R., Seljak U., Baldauf T., Gunn J. E., Lombriser L., Smith R. E., 2010, Nature, 464, 256
Ross A. J., Percival W. J., Crocce M., Cabré A., Gaztañaga E., 2011a, MNRAS, 415, 2193
Ross A. J., et al., 2011b, MNRAS, 417, 1350
Ruiz E. J., Huterer D., 2015, Phys. Rev. D, 91, 063009
Rykoff E., et al., 2015, in prep.
Sachs R. K., Wolfe A. M., 1967, ApJ, 147, 73
Samushia L., et al., 2014, MNRAS, 439, 3504
Sánchez C., et al., 2014, MNRAS, 445, 1482
Schneider P., 1998, ApJ, 498, 43
Seljak U., 1996, ApJ, 463, 1
Sheldon E. S., et al., 2004, AJ, 127, 2544
Sherwin B. D., et al., 2012, Phys. Rev. D, 86, 083006
Simon P., Hetterscheidt M., Schirmer M., Erben T., Schneider P., Wolf C., Meisenheimer K., 2007, A&A, 461, 861
Skrutskie M. F., et al., 2006, AJ, 131, 1163
Smail I., Ellis R. S., Fitchett M. J., Edge A. C., 1995, MNRAS, 273, 277
Smidt J., Cooray A., Amblard A., Joudaki S., Munshi D., Santos M. G., Serra P., 2011, ApJ, 728, L1
Smith R. E., et al., 2003, MNRAS, 341, 1311
Smith K. M., Zahn O., Doré O., 2007, Phys. Rev. D, 76, 043510
Soergel B., Giannantonio T., Weller J., Battye R. A., 2015, J. Cosmology Astropart. Phys., 2, 37
T., et al., 2013, ApJ, 779, 86 . K T Story, 10.1088/0004-637X/810/1/50ApJ. 81050Story K. T., et al., 2015, ApJ, 810, 50 . R A Sunyaev, I B Zeldovich, 10.1146/annurev.aa.18.090180.002541ARA&A. 18537Sunyaev R. A., Zeldovich I. B., 1980, ARA&A, 18, 537 . I Szapudi, S Prunet, S Colombi, 10.1086/324312ApJ. 56111Szapudi I., Prunet S., Colombi S., 2001, ApJ, 561, L11 . R Takahashi, M Sato, T Nishimichi, A Taruya, M Oguri, 10.1088/0004-637X/761/2/152ApJ. 761152Takahashi R., Sato M., Nishimichi T., Taruya A., Oguri M., 2012, ApJ, 761, 152 . A Taylor, B Joachimi, T Kitching, 10.1093/mnras/stt270MNRAS. 432Taylor A., Joachimi B., Kitching T., 2013, MNRAS, 432, 1928 . M Tegmark, 10.1103/PhysRevD.55.5895Phys. Rev. D. 555895Tegmark M., 1997, Phys. Rev. D, 55, 5895 . M Tegmark, P J E Peebles, 10.1086/311426ApJ. 50079Tegmark M., Peebles P. J. E., 1998, ApJ, 500, L79 The Dark Energy Survey Collaboration. arXiv:astro-ph/0510346preprintThe Dark Energy Survey Collaboration 2005, preprint, (arXiv:astro-ph/0510346) . A Vallinotto, 10.1088/0004-637X/778/2/108Astrophys. J. 778108Vallinotto A., 2013, Astrophys. J., 778, 108 . E L Wright, 10.1088/0004-6256/140/6/1868AJ. 1401868Wright E. L., et al., 2010, AJ, 140, 1868 . M Zaldarriaga, U Seljak, 10.1103/PhysRevD.58.023003Phys. Rev. D. 5823003Zaldarriaga M., Seljak U., 1998, Phys. Rev. D, 58, 023003 . P Zhang, M Liguori, R Bean, S Dodelson, 10.1103/PhysRevLett.99.141302Physical Review Letters. 99141302Zhang P., Liguori M., Bean R., Dodelson S., 2007, Physical Review Letters, 99, 141302 . A Van Engelen, 10.1088/0004-637X/808/1/7ApJ. 7567ApJvan Engelen A., et al., 2012, ApJ, 756, 142 van Engelen A., et al., 2015, ApJ, 808, 7 . L Van Waerbeke, L Van Waerbeke, 10.1111/j.1365-2966.2009.15809.xMNRAS. 33412093A&Avan Waerbeke L., 1998, A&A, 334, 1 van Waerbeke L., 2010, MNRAS, 401, 2093
[]
[ "Fine-Tuning Language Models with Advantage-Induced Policy Alignment", "Fine-Tuning Language Models with Advantage-Induced Policy Alignment" ]
[ "Banghua Zhu ", "Hiteshi Sharma ", "Felipe Vieira ", "Frujeri Shi ", "Dong Chenguang ", "Zhu Michael ", "I Jordan ", "Jiantao Jiao " ]
[]
[]
Reinforcement learning from human feedback (RLHF) has emerged as a reliable approach to aligning large language models (LLMs) to human preferences. Among the plethora of RLHF techniques, proximal policy optimization (PPO) is one of the most widely used methods. Despite its popularity, however, PPO may suffer from mode collapse, instability, and poor sample efficiency. We show that these issues can be alleviated by a novel algorithm that we refer to as Advantage-Induced Policy Alignment (APA), which leverages a squared error loss function based on the estimated advantages. We demonstrate empirically that APA consistently outperforms PPO in language tasks by a large margin when a separate reward model is employed as the evaluator. Moreover, compared with PPO, APA offers a more stable form of control over the deviation from the model's initial policy, ensuring that the model improves its performance without collapsing to deterministic output. In addition to empirical results, we also provide a theoretical justification supporting the design of our loss function.
null
[ "https://export.arxiv.org/pdf/2306.02231v2.pdf" ]
259,075,917
2306.02231
370e51386abb7b999728e08b74f0a77fbd064834
Fine-Tuning Language Models with Advantage-Induced Policy Alignment. June 8, 2023. Banghua Zhu, Hiteshi Sharma, Felipe Vieira Frujeri, Shi Dong, Chenguang Zhu, Michael I. Jordan, Jiantao Jiao.

Abstract: Reinforcement learning from human feedback (RLHF) has emerged as a reliable approach to aligning large language models (LLMs) to human preferences. Among the plethora of RLHF techniques, proximal policy optimization (PPO) is one of the most widely used methods. Despite its popularity, however, PPO may suffer from mode collapse, instability, and poor sample efficiency. We show that these issues can be alleviated by a novel algorithm that we refer to as Advantage-Induced Policy Alignment (APA), which leverages a squared error loss function based on the estimated advantages. We demonstrate empirically that APA consistently outperforms PPO in language tasks by a large margin when a separate reward model is employed as the evaluator. Moreover, compared with PPO, APA offers a more stable form of control over the deviation from the model's initial policy, ensuring that the model improves its performance without collapsing to deterministic output. In addition to empirical results, we also provide a theoretical justification supporting the design of our loss function.

Introduction. Reinforcement learning from human feedback (RLHF, or preference-based reinforcement learning) (Knox and Stone, 2008; Wirth et al., 2017) has delivered significant empirical successes in several fields, including game playing (Christiano et al., 2017), robotics (Sadigh et al., 2017; Kupcsik et al., 2018), and recommender systems (Maghakian et al., 2022). Recently, RLHF has also exhibited striking potential for integrating human knowledge with large language models (Ziegler et al., 2019; Ouyang et al., 2022; OpenAI, 2023; Beeching et al., 2023; Zhu et al., 2023; Bai et al., 2022b). To employ RLHF in the training pipeline of language models, a common protocol is as follows.

• Pre-training (PT): training the language model on a large amount of unlabeled or weakly labeled text data to produce general features and patterns that can be useful for downstream tasks (Vaswani et al., 2017; Devlin et al., 2018; Brown et al., 2020);

• Supervised fine-tuning (SFT): training the model on a smaller amount of labeled data to improve the performance and accuracy of the model on specific tasks;

• Reinforcement learning with human feedback (RLHF): using a human-labeled dataset together with reinforcement learning (RL) algorithms to further align the model with complex and subjective human values or preferences (Ziegler et al., 2019; Ouyang et al., 2022).

Both PT and SFT rely on the use of distributional loss functions, such as cross entropy, to minimize the distance between the text distributions in the training dataset and in the model output (Vaswani et al., 2017; Devlin et al., 2018; Brown et al., 2020). Such a simple strategy is not viable, however, for the RLHF stage. As the ultimate target is to make the language model output conform to human linguistic norms, which are difficult to define or quantify, researchers usually resort to a reward model that is trained separately from the language model on a meticulously collected, human-labeled dataset (Ouyang et al., 2022). Such a reward model produces a scalar score for each generated response, and is therefore able to provide the language model with online feedback.
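As a concrete illustration of this interface, the sketch below scores a prompt-response pair with a scalar reward model. The checkpoint name is the one used later in Section 4, but loading it through the generic sequence-classification class is an assumption about a standard Hugging Face setup, not the authors' actual code.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical loading code: the real checkpoint may require custom classes.
tokenizer = AutoTokenizer.from_pretrained("Dahoas/gptj-rm-static")
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "Dahoas/gptj-rm-static", num_labels=1
)

def score(prompt, response):
    # Concatenate prompt and response and read off the scalar head output.
    inputs = tokenizer(prompt + response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return reward_model(**inputs).logits.item()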
This accessibility to online feedback allows the language model to be trained via reinforcement learning (RL), giving rise to the RLHF stage. Among the RL techniques that are applied to language models, one of the most prominent algorithms is proximal policy optimization (PPO) (Schulman et al., 2017). Despite the acclaimed effectiveness of PPO (Ouyang et al., 2022; Stiennon et al., 2020; Nakano et al., 2021), recent research has identified the following three issues that require additional study:

• Mode collapse. It has been observed that PPO can reduce the output randomness of language models, misleading the model into producing deterministic responses.¹

• Instability. PPO regularizes each new policy towards the previous policy via an adaptive KL controller, and applies importance sampling to approximate the current policy distribution. As we observe in our experiments, these steps sometimes trigger instability in training, leading to an abrupt drop in model performance.

• Poor sample efficiency. The family of policy gradient algorithms suffers from slow convergence and can yield poor ultimate policies (Yuan et al., 2022; Dong et al., 2022). As a variant of policy gradient, PPO is susceptible to the same problems.

To address these issues, we introduce a novel algorithm, Advantage-Induced Policy Alignment (APA), which leverages a squared error loss function that directly aligns the output policy of the language model with a target policy in each training epoch. The target policies combine the initial language model policy before the RLHF stage and an advantage-induced correction term, where the advantage function is estimated from previous online samples. We compare PPO, APA, and advantage-weighted regression (AWR) in both theory and experiments. At a high level, the two existing algorithms directly solve a KL-constrained policy optimization problem, whereas APA does not rely on the estimated importance ratio between consecutive policies, thus providing a stable way to update the model policy.

To demonstrate the efficacy of APA, we apply APA, PPO and AWR to the fine-tuning of the GPT-J-6B language model (Wang and Komatsuzaki, 2021) using the human-labeled Helpfulness and Harmlessness dataset (Ganguli et al., 2022). A separate GPT-J-6B model, trained on the same dataset to produce a scalar reward for each prompt-response pair, serves as the reward model and the evaluator. Our empirical results highlight three major advantages of APA over PPO. (i) APA is more sample-efficient: fine-tuned on the same number of samples, the language model obtained via APA scores consistently higher on the evaluation set than the one obtained with PPO. (ii) APA affords steadier control over the deviation from the language model's initial policy: in terms of KL divergence, the deviation of the ultimate policy yielded by APA is comparable with that of PPO, yet APA is much less prone to sudden performance degradation during training. The control over such deviations is critical in preventing over-optimization on reward models (Gao et al., 2022). (iii) APA has fewer hyperparameters: the loss function in APA involves only one major tunable parameter for KL control, whereas in PPO one has to carefully calibrate the combination of various extra hyperparameters, such as the clipping ranges for importance ratio and value estimates, and the coefficients of the KL controller.

More broadly, this work is related to the line of literature on leveraging ideas from RL to improve the performance of language models.
A few notable examples in this literature include Paulus et al. (2017), who propose a loss function based on the policy gradient objective to tackle the abstractive summarization task, using ROUGE scores as reward; and Snell et al. (2022), who present the implicit language Q-learning (ILQL) algorithm to facilitate learning from offline human-labeled samples without a reward model. A thorough comparison between different RL algorithms is also made in Ramamurthy et al. (2022) on GRUE benchmarks. There have also been alternative frameworks for RLHF that replace PPO with SFT on the best generated samples (Yuan et al., 2023), or with direct preference-based offline learning (Rafailov et al., 2023).

The remainder of this paper is organized as follows. In Section 2, we introduce our notation. In Section 3, we formally specify the algorithm APA, and discuss the intuitions behind the algorithmic elements. Experimental results are presented in Section 4. Section 5 concludes by summarizing and discussing the experimental results.

Preliminaries. In this section, we overview the standard RL setting in Section 2.1, and discuss how language model training fits into this setting in Section 2.2. We use the following notation. For a positive integer n, we use the bracket notation [n] to refer to the set of integers {1, . . . , n}; for a finite set Z, we denote by ∆(Z) the set of probability distributions on Z, and by |Z| the cardinality of Z. We use B^d to denote the unit ball in d-dimensional space.

Reinforcement Learning. Reinforcement learning (RL) captures the interaction between an agent and an environment via the formalism of a Markov decision process (MDP). We consider a finite-horizon MDP represented by a tuple M = (S, A, H, P, r, ρ), where S is a finite state space, A is a finite action space, H is the horizon, P : S × A → ∆(S) is a probability transition matrix, r : S × A → [0, 1] is a reward function, and ρ ∈ ∆(S) is the initial state distribution. When the agent takes action a in state s, it receives a scalar reward r(s, a) and transitions to a state s′ drawn from the distribution P(· | s, a). Each episode consists of H consecutive steps. At the end of an episode, the agent is reset to a state drawn from ρ(·), and a new episode begins.

A policy π : S → ∆(A) is a function that maps a state to a distribution over actions. Let γ ∈ [0, 1] be the discount factor. The value function V^π : S → R of policy π is defined as the expected sum of discounted rewards when the agent starts from initial state s and follows policy π throughout the episode: for any s ∈ S,
V^π(s) := E[ Σ_{τ=0}^{H} γ^τ r(s_τ, a_τ) | s_0 = s, a_τ ∼ π(· | s_τ), s_{τ+1} ∼ P(· | s_τ, a_τ) ].
Given a policy π, the state-action value function, also known as the Q-function, is defined analogously: for s ∈ S and a ∈ A,
Q^π(s, a) := E[ Σ_{τ=0}^{H} γ^τ r(s_τ, a_τ) | s_0 = s, a_0 = a, a_τ ∼ π(· | s_τ), s_{τ+1} ∼ P(· | s_τ, a_τ) ].
We also define the important notion of an advantage function. For a policy π, state s and action a, the advantage, defined as Adv^π(s, a) = Q^π(s, a) − V^π(s), quantifies the extra value that is obtained by replacing the immediate action prescribed by π with the action a when the agent is in state s. We also define the occupancy measures d^π_state(s) ∝ P(s_h = s | π) and d^π_action(s, a) ∝ P(s_h = s, a_h = a | π), where P(· | π) signifies that all actions are drawn from π. To avoid clutter, we overload the notation d^π so that d^π(s) refers to d^π_state(s) and d^π(s, a) refers to d^π_action(s, a).
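To make these definitions concrete, here is a minimal sketch (not from the paper) that estimates V^π, Q^π and hence Adv^π by Monte Carlo rollouts in a generic finite-horizon MDP; policy, env_step and reward are hypothetical stand-ins for π, the transition kernel and r.

def rollout_return(state, policy, env_step, reward, gamma, H):
    # Discounted return of one rollout of length H+1 from `state` under `policy`.
    total, discount, s = 0.0, 1.0, state
    for _ in range(H + 1):
        a = policy(s)                     # a ~ pi(.|s)
        total += discount * reward(s, a)
        discount *= gamma
        s = env_step(s, a)                # s' ~ P(.|s, a)
    return total

def advantage(s, a, policy, env_step, reward, gamma, H, n=1000):
    # Adv^pi(s, a) = Q^pi(s, a) - V^pi(s), both estimated over n rollouts.
    q = sum(reward(s, a) + gamma * rollout_return(env_step(s, a), policy,
            env_step, reward, gamma, H - 1) for _ in range(n)) / n
    v = sum(rollout_return(s, policy, env_step, reward, gamma, H)
            for _ in range(n)) / n
    return q - v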
A language model as a reinforcement learning agent. In its simplest form, a language model receives as input a sequence of tokens (x_1, . . . , x_n) and generates a distribution over the next token x_{n+1}. All tokens lie in a finite set X. Whenever the agent selects a token that represents the completion of a response (e.g., the end-of-sequence token), or the total number of tokens reaches a specific value, the entire sequence is scored by a reward model, which produces a scalar reward r. Comparing with the RL formulation in Section 2.1, a language model can be viewed as an agent that operates in an environment with state space S = ∪_{k=0}^{H} X^k and action space A = X, where H is the maximum number of tokens. The transitions are always deterministic, with the next state equal to the concatenation of all the previous tokens and the current token:
P(s_{h+1} = (x_1, · · · , x_k) | s_h = (x_1, · · · , x_{k−1}), a_h = x_k) = 1.
Each episode involves the generation of one complete sequence, and a non-zero reward is delivered only when an episode terminates. In this context, fine-tuning is equivalent to improving the agent policy π, and the field of RL offers a formidable arsenal for this task. In this work, we focus on policy-based RL algorithms, which parameterize the set of agent policies by a set of parameters θ and optimize in the parameter space. In what follows, we omit the step index h, as its information is already encoded in each state. We note that most transformer-based language models map a state (context) s and an action (next token) a to a logit q_θ(s, a), and the next token is sampled according to the distribution induced by the logits {q_θ(s, a)}_{a∈A}. This gives rise to the following natural parameterization of a language model policy:
π_θ(a | s) = exp(q_θ(s, a)) / Σ_{a′∈A} exp(q_θ(s, a′)).

Fine-Tuning Based on Reinforcement Learning. As mentioned in Section 1, the RLHF stage is usually composed of two steps. First, a reward model is trained from a human-labeled dataset. An RL algorithm is then applied to improve the language model policy, using the rewards generated by the reward model. Here we focus mainly on the second step with a given reward function. We summarize a typical policy-based RL algorithm in Algorithm 1. In practice, the parameter update in Equation (1) usually involves several gradient steps rather than a full minimization.

Algorithm 1 (Policy Iteration). 1: Input: an initial policy parameter θ_0 and a loss function L(θ; D). 2: Set π_0 = π_init. 3: For iteration t = 1, 2, · · · , T: 4: roll out π_{θ_{t−1}} to produce the dataset D_t = {(s_1^{(t)}, a_1^{(t)}, r_1^{(t)}), · · · , (s_n^{(t)}, a_n^{(t)}, r_n^{(t)})}; 5: update the policy parameter according to
θ_t = arg min_θ L(θ; D_t).   (1)
A schematic rendering of this loop is sketched below.
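The following sketch renders Algorithm 1 in code; generate_rollouts, loss_fn and optimizer are hypothetical placeholders for the components discussed in the rest of this section, not the released implementation.

def policy_iteration(policy, loss_fn, optimizer, generate_rollouts, T, epochs):
    # Algorithm 1: alternate between rolling out the current policy and
    # taking a few gradient epochs on L(theta; D_t) instead of a full arg-min.
    for t in range(T):
        dataset = generate_rollouts(policy)   # step 4: D_t from pi_{theta_{t-1}}
        for _ in range(epochs):               # step 5, approximately
            for batch in dataset:
                loss = loss_fn(policy, batch)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return policy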
In the remainder of this section, we discuss several potential choices for L(θ; D), each targeting the goal of maximizing regularized advantages. We also introduce the new algorithm APA and discuss the intuitions behind it. As a first step, for each fixed state s, we consider the following KL-regularized optimization problem as a target of policy improvement:
maximize_θ F(θ; s, π) := E_{a∼π_θ(·|s)}[Adv^π(s, a)] − λ · KL(π_θ(· | s) ‖ π_init(· | s)).   (2)
Here π_init refers to the initial policy of the language model before the RLHF stage, and π is an arbitrary policy that we hope to improve upon. The first term in the objective function F(θ; s, π) is an expected advantage; to maximize the expected advantage, the agent is encouraged to move toward the optimal action in state s. The second term in F(θ; s, π), a KL regularizer, controls the deviation of π_θ from π_init. Such regularization is essential, as language models are prone to over-optimization when rewards are generated by an imperfect reward model, a phenomenon observed in Gao et al. (2022). Combined, the single-state optimization problem in (2) aims at improving upon policy π in state s within the proximity of π_init.

The optimization (2) is usually broken down into multiple iterations. In each iteration, we maximize F(θ; s, π_old), where π_old is the policy that the agent arrives at in the previous iteration. This technique, referred to as Conservative Policy Iteration (CPI), was first presented in Kakade and Langford (2002). The optimization was subsequently generalized to KL-constrained and regularized methods referred to as Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a) and Proximal Policy Optimization (PPO) (Schulman et al., 2017), respectively. In addition to these core methods, there have been several other policy optimization methods inspired by (2), with one notable example being the Advantage-Weighted Regression (AWR) method (Peng et al., 2019; Nair et al., 2020). In the following subsections, we discuss how F(θ; s, π) is connected with the loss function L(θ; D) in various algorithms, and propose a new proximal optimization problem whose solution approximates that of (2). The loss function in APA will be based on this new proximal optimization problem.

Proximal policy optimization. PPO leverages importance sampling to circumvent sampling from π_θ, arriving at
E_{a∼π_θ(·|s)}[Adv^{π_old}(s, a)] = E_{a∼π_old(·|s)}[ (π_θ(a | s) / π_old(a | s)) · Adv^{π_old}(s, a) ],
where the expectation on the right-hand side can be estimated in an unbiased manner from finite samples. PPO also involves the following innovation: instead of penalizing the expected advantage with the estimated KL-divergence as in (2), PPO directly subtracts the KL penalty term from the reward received by the agent, and adaptively adjusts the penalty weight λ based on the deviation of π_θ from π_init (Schulman et al., 2017; Dhariwal et al., 2017; Ziegler et al., 2019). The KL-penalized reward is then used to estimate a new advantage function Âdv. To avoid ill-conditioned gradients caused by large value or importance-ratio estimates, PPO applies clipping to the objective function. The final loss function is thus
L_PPO(θ; D) = −(1/|D|) Σ_{(s,a)∈D} min( (π_θ(a | s) / π_old(a | s)) · Âdv(s, a), clip(π_θ(a | s) / π_old(a | s), 1 − ε, 1 + ε) · Âdv(s, a) ).
Note that the loss function relies on extra tunable hyperparameters, such as the clipping range ε. The clipping also makes the estimator biased. A sketch of this clipped loss in code is given below.
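A minimal PyTorch rendering of the clipped surrogate above, working from per-sample log-probabilities (a generic PPO2-style sketch, not the trlX code):

import torch

def ppo_loss(logp_new, logp_old, adv, eps=0.2):
    # Clipped surrogate: -mean(min(r * A, clip(r, 1 - eps, 1 + eps) * A)),
    # where r = pi_theta(a|s) / pi_old(a|s) is recovered from log-probabilities.
    ratio = torch.exp(logp_new - logp_old)
    return -torch.min(ratio * adv,
                      torch.clamp(ratio, 1 - eps, 1 + eps) * adv).mean()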
Advantage weighted regression. If the parameterized policy space {π_θ} contained all possible policies, the maximizer of F(θ; s, π_old) in (2) would induce a policy π* that satisfies
π*(a | s) = (1/Z(s)) · π_init(a | s) · exp(Adv^{π_old}(s, a)/λ),   (3)
where Z(s) = Σ_{a′∈A} π_init(a′ | s) · exp(Adv^{π_old}(s, a′)/λ) is a normalizing factor. In the case that {π_θ} does not contain all policies, a natural way to maximize F(θ; s, π_old) is to project π* onto {π_θ} with respect to the KL-divergence, which gives rise to the AWR algorithm. From (3),
KL(π*(· | s) ‖ π_θ(· | s)) = −Σ_a (π_init(a | s)/Z(s)) · exp(Adv^{π_old}(s, a)/λ) · log π_θ(a | s) + C(s),   (4)
where C(s) is a constant that does not depend on θ. To minimize the KL divergence in (4), in our implementation we make three changes to the objective that help to set the stage for our new method (both resulting losses are sketched in code after Theorem 1 below):
• We replace π_init with π_old, which can be approximated with finite samples.
• The KL-divergence in (4) only accounts for one state s. To incorporate other states, we minimize a weighted sum of KL-divergences, with states sampled from the state-action occupancy measure d^{π_old}.
• We use the approximation Z(s) ≈ 1. A discussion of why such an approximation is warranted is provided in Appendix A.
With these changes, we arrive at the following population loss for AWR:
L_AWR(θ) = −E_{(s,a)∼d^{π_old}}[ exp(Adv^{π_old}(s, a)/λ) · log π_θ(a | s) ].   (5)
Given a finite dataset D = {(s_i, a_i) : i = 1, . . . , n} sampled from d^{π_old}, the corresponding empirical loss can be written as
L̂_AWR(θ; D) = −(1/|D|) Σ_{(s,a)∈D} exp(Adv^{π_old}(s, a)/λ) · log π_θ(a | s).   (6)
For the well-specified case where the parameterized family {π_θ} contains the estimated policy, the minimizer of the population loss is
π̃(a | s) = π_old(a | s) · exp(Adv^{π_old}(s, a)/λ) / Σ_{a′} π_old(a′ | s) · exp(Adv^{π_old}(s, a′)/λ).   (7)
Note that Equation (6) is a maximum likelihood estimator, or a log loss weighted by the advantages, that converges to the same policy π̃. However, π̃ is based on the sampling policy π_old, which changes in each iteration of Algorithm 1. Thus, when Equation (6) is plugged into the online learning framework of Algorithm 1, the policy may not converge due to the changing value of π_old. As we observe in Section 4 and Appendix D.3, AWR can indeed be unstable in the online case, although it performs better with offline data collected by a fixed logging policy π_old.

Advantage-Induced Policy Alignment. To project the optimal policy π* in (3) onto the parameterized policy space, we may also consider another distance instead of the KL-divergence. In APA, we employ the squared error between log probabilities in place of the KL-divergence:
(log π*(a | s) − log π_θ(a | s))² = (log π_θ(a | s) + log Z(s) − Adv^{π_old}(s, a)/λ − log π_init(a | s))².
Similar to our implementation of AWR, we also apply Z(s) ≈ 1 and consider a weighted sum of squared errors with states sampled from d^{π_old}, giving rise to the following population loss:
L_APA(θ) = E_{(s,a)∼d^{π_old}}[ (log π_θ(a | s) − Adv^{π_old}(s, a)/λ − log π_init(a | s))² ].   (8)
The empirical loss on a finite dataset D sampled from d^{π_old} is thus
L̂_APA(θ; D) = (1/|D|) Σ_{(s,a)∈D} (log π_θ(a | s) − Adv^{π_old}(s, a)/λ − log π_init(a | s))².   (9)
Assuming that the parameter space is Θ = B^d and that the parameterized policy space is well-specified so that π* ∈ {π_θ | θ ∈ Θ}, where π* is defined in Equation (3), we can establish theoretically that the empirical loss is a reasonable surrogate for the population loss.

Theorem 1. Let θ* ∈ arg min_{θ∈Θ} L_APA(θ) be a minimizer of the population loss. Then π_{θ*}(a | s) = π*(a | s) for all (s, a) ∈ supp(π_old). Furthermore, let θ̂ ∈ arg min_{θ∈Θ} L̂_APA(θ; D) be an empirical loss minimizer. Assume that min(π_θ(a | s), π_init(a | s)) ≥ B_1 and |Adv(s, a)| ≤ B_2 for any s, a, and that log(π_θ) is L-Lipschitz with respect to θ under the ℓ_2-norm for any s, a. Then for all δ > 0, with probability at least 1 − δ, for some universal constant C,
L_APA(θ̂) ≤ C · L · (B_2 − log(B_1))² · sqrt(d log(nL/δ) / n).

The proof is deferred to Appendix E. From the theorem, we see that the minimizer of the population APA loss is exactly the target policy π* if the policy π_old is supported on all state-action pairs. In contrast, as we discussed earlier, convergence properties of the PPO and AWR algorithms have not yet been established.
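The empirical losses (6) and (9) translate directly into code. This is a minimal sketch over per-token log-probabilities, with Z(s) ≈ 1 as in the text; the advantage tensors are assumed to be held fixed (no gradient).

import torch

def awr_loss(logp_theta, adv, lam=1.0):
    # Eq. (6): advantage-weighted log loss; the weights exp(A/lambda) are fixed.
    return -(torch.exp(adv / lam) * logp_theta).mean()

def apa_loss(logp_theta, logp_init, adv, lam=0.1):
    # Eq. (9): squared error aligning log pi_theta(a|s) with
    # log pi_init(a|s) + A/lambda, using the approximation log Z(s) ~ 0.
    target = logp_init + adv / lam
    return ((logp_theta - target) ** 2).mean()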
We also provide alternative interpretations of the proposed loss in terms of f-divergence and soft-Q learning in Appendix B.

Experimental Results. In our implementation of all of the algorithms that we test, including APA, we define the advantage function to be Âdv^{π_old}(s, a), which is estimated from data. We implement the same generalized advantage estimation approach to estimate the advantage as discussed in earlier work (Mnih et al., 2016; Schulman et al., 2015b). In particular, for the rollout (s_0, a_0, r_0, s_1, a_1, r_1, · · · , s_{T−1}, a_{T−1}, r_{T−1}, s_T), the generalized advantage estimator is
Âdv^{π_old}(s_t, a_t) = δ_t + (λγ)δ_{t+1} + · · · + (λγ)^{T−1−t} δ_{T−1}, where δ_t = r(s_t, a_t) + γV^{π_old}(s_{t+1}) − V^{π_old}(s_t).
Here the value function is another standalone network that we fit throughout the training process with a squared loss,
L̂_V(D) = Σ_{s_i,a_i} (V(s_i) − Âdv^{π_old}(s_i, a_i) − V^{π_old}(s_i))².
Thus the overall loss functions are
L̂^APA_θ(D) = L̂_APA(D) + η · L̂_V(D),   L̂^AWR_θ(D) = L̂_AWR(D) + η · L̂_V(D).
A sketch of the advantage estimator and value loss is given below.
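A minimal sketch of the generalized advantage estimator and the value loss just described, assuming 1-D tensors of scalar rewards and bootstrapped values (mirroring Schulman et al. (2015b), not the released code):

import torch

def gae(rewards, values, gamma=1.0, lam=0.95):
    # delta_t = r_t + gamma * V(s_{t+1}) - V(s_t); `values` holds V(s_0..s_T).
    T = rewards.shape[0]
    deltas = rewards + gamma * values[1:] - values[:-1]
    adv, running = torch.zeros(T), 0.0
    for t in reversed(range(T)):
        running = deltas[t] + lam * gamma * running  # A_t = sum_k (lam*gamma)^k delta_{t+k}
        adv[t] = running
    return adv

def value_loss(v_new, v_old, adv):
    # L_V = sum_i (V(s_i) - A_hat(s_i, a_i) - V^{pi_old}(s_i))^2,
    # with the targets treated as fixed regression targets.
    return ((v_new - (v_old + adv).detach()) ** 2).sum()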
For the implementation of PPO, we use the PPO2 version from Dhariwal et al. (2017), with the adaptive KL controller from Ziegler et al. (2019). We implement PPO with the same hyperparameters as the implementation in trlX,² which also follows the default hyperparameters suggested by Schulman et al. (2017). The main difference between our version of PPO and that in trlX is that we create a completely separate value network rather than creating a value head on top of the language model. In APA, we take λ = 0.1 to impose a weaker constraint on the KL coefficient. For AWR, we find that setting λ = 0.1 leads to an explosion of the loss; thus we take λ = 1 to stabilize training. The code is available at https://github.com/microsoft/RLHF-APA.

Results on the HH dataset. In this section, we compare PPO, AWR and APA on the human-labeled Helpfulness and Harmlessness (HH) dataset from Bai et al. (2022a) (https://huggingface.co/datasets/Dahoas/static-hh). Each item in the dataset is comprised of a prompt, a chosen response and a rejected response, labeled by humans to evaluate the helpfulness and harmlessness of the responses. For the reward model, we use the proxy reward model Dahoas/gptj-rm-static (https://huggingface.co/Dahoas/gptj-rm-static) with 6B parameters, trained from the same dataset based on EleutherAI/gpt-j-6b. We fine-tune three models, Dahoas/pythia-125M-static-sft, Dahoas/pythia-1B-static-sft, and Dahoas/pythia-6B-static-sft, all available on the Hugging Face hub. All the models have gone through supervised fine-tuning with labeled prompt-response pairs, similar to the protocol in Ouyang et al. (2022) and Ramamurthy et al. (2022).

For all three algorithms, we run two epochs of updates after generating 64 responses from randomly sampled prompts. For the 125M model, we use batch size 8 and learning rate 8 × 10⁻⁶. For the 1B model, we use batch size 2 and learning rate 10⁻⁶. For the 6B and larger models, we use batch size 1 and learning rate 10⁻⁶. We use a 32GB Nvidia V100 GPU for fine-tuning the 125M and 1B models, and a 64GB AMD Mi200 GPU for fine-tuning the 6B and larger models. The maximum response length is set to 128 tokens, and the maximum total sequence length is set to 1024 tokens. We unfreeze the last two layers during fine-tuning. For each experiment, we run 20k steps in total. The results are plotted below.

Figure 1: Comparison of the performance of three methods on the HH dataset. Left: the x-axis represents the total steps, which are proportional to the amount of data used in the training procedure; the y-axis is the reward evaluated by the same reward model. Right: the x-axis represents the total steps; the y-axis is the KL divergence between the trained model and the initial model.

In the left panel of Figure 1, we compare the three methods on the HH dataset. For all three models, we repeat the experiments with three random seeds (0, 100, 1000) and plot their min, mean and max. We see that with the same amount of data, APA achieves the highest reward in all three cases. We also observe that PPO becomes more stable with larger models, potentially due to the smaller batch size, or the ability to obtain a higher reward with a smaller deviation in KL divergence. In the right panel of Figure 1, we show how the KL divergence between the current policy and the initial policy changes over the course of training for the three seeds; a sketch of how such KL curves can be estimated from samples is given below. We can see that for all three models, APA provides similar or better KL control than PPO and AWR, although we note that for the 6B model the KL control for PPO is slightly better than for APA. Combined with the left part of the figure, we can see that APA is more KL-efficient than PPO and AWR; i.e., it attains a better performance on the reward model under the same KL divergence.

We include more experiment results in Appendix C, where we fine-tune databricks/dolly-v2-7b on the same HH dataset, and 2.7B and 6B models on the TLDR dataset (https://huggingface.co/datasets/CarperAI/openai_summarize_comparisons) for the summarization task. We also conduct ablation studies on the effect of the adaptive KL controller on PPO and the effect of different choices of λ for AWR; see Appendix D. We show in Appendix D.2 that without KL control, PPO can be as sample-efficient as APA, but less KL-efficient; we also observe instability even without the KL controller. On the other hand, we observe that changing λ provides a straightforward tradeoff between KL control and performance in APA.
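A minimal sketch of the KL estimate behind these curves, assuming lists of per-token log-probability tensors for responses sampled from the trained policy, scored under both the trained and the initial model:

import torch

def estimated_kl(logp_trained, logp_init):
    # Monte Carlo estimate of KL(pi_theta || pi_init): average over sampled
    # sequences of the summed per-token log-probability differences.
    per_sequence = [(lt - li).sum() for lt, li in zip(logp_trained, logp_init)]
    return torch.stack(per_sequence).mean()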
Conclusions. We have studied the problem of online policy optimization in RLHF. We benchmarked the performance of the existing algorithms PPO and AWR, and introduced a new method, APA, that has a theoretical convergence guarantee and empirically outperforms existing algorithms. The key takeaways from the comparisons of the three RLHF algorithms that we have studied can be summarized as follows.

Mode collapse. As we discussed in Section 1, RLHF faces the challenge of mode collapse: RL algorithms may encourage the language model to produce less diverse output, in that the policy maximizing total reward is deterministic. To rein in such behavior, it is crucial to impose control on the divergence between new policies and the initial policy after SFT. However, the clipping of the objective function and the adaptive KL controller make the behavior of PPO unpredictable; for AWR, the update in (7), which reweights the previous policy by a multiplicative factor in each iteration, also has unknown ramifications. APA, on the other hand, provably converges to π* when the advantage function is fixed, and π* is close to the initial policy in KL divergence. From the experimental results, we see that APA is able to provide better and easier-to-adjust KL control by explicitly tuning the hyperparameter λ, which helps mitigate mode collapse.

Stability. Our experiments reveal different levels of instability for PPO and AWR. Specifically, PPO suffers from significant performance degradation whenever the model policy diverges too much from the initial policy π_init, an effect which is more pronounced for smaller models. We attribute this to the KL controller in PPO. In Appendix D, we demonstrate that PPO can achieve a similar sample efficiency as APA without the KL penalty, albeit at the cost of weaker KL efficiency.

Sample efficiency. With the same level of control over KL-divergence, APA shows higher sample efficiency than PPO and AWR. One possible explanation is that in both PPO and AWR, policy improvement critically depends on using finite samples to reconstruct the sampling policy π_old, whereas in APA, minimizing the population loss (8) hinges less on the reconstruction of π_old. In fact, the APA population loss can be effectively minimized as long as the dataset D has good coverage over state-action pairs that are frequently visited by π_old. We provide more discussion of sample efficiency in Appendix B.

Online vs. offline learning. Our experiments primarily examine the online case, where new data can be collected during the training process. The offline setting, where a fixed dataset is given and new samples are not available, may yield qualitatively different results. In particular, suppose that the offline dataset consists of rollouts from a policy π_off. In this case, if it were trained with infinitely many samples, AWR would converge to the policy specified in (3). However, the performance of APA may suffer from distribution shift, because it can only learn from state-action pairs covered by π_off, and there is no guarantee that the learned policy performs well on the state-action pairs visited by the current policy. Such distribution mismatch can lead to a significant performance drop for APA, as we observe in Appendix D.3. We also observe that AWR typically outperforms ILQL for offline learning, although both perform poorly with larger models.

A. Argument for Z(s) ≈ 1. Note that in both advantage-weighted regression and the advantage-based squared loss, we approximate Z(s) with 1. Here we justify why this does not hurt the performance. Consider an infinitesimal scenario where |Adv/λ| ≪ |log π_init|. In the setting of language models this is usually the case, since π_init is supported on approximately 50k distinct tokens and can be very close to zero, while Adv/λ can be brought down to small values by adjusting λ. In this case, we have
Z(s) = Σ_{a∈A} π_init(a | s) exp(Adv(s, a)/λ) = E_{a∼π_init}[exp(Adv(s, a)/λ)] = E_{a∼π_init}[1 + Adv(s, a)/λ + o(Adv²(s, a)/λ²)].
The advantage is usually estimated as Adv^{π_old}, which can be close to Adv^{π_init}, and we have
E_{a∼π_init}[Adv^{π_old}(s, a)/λ] ≈ E_{a∼π_init}[Adv^{π_init}(s, a)/λ] = 0.
Thus we know that
Z(s) ≈ 1 + E_{a∼π_init}[o(Adv²(s, a)/λ²)] ≈ 1.
In practice, we observe that the squared loss decreases very slowly due to a small learning rate (8 × 10⁻⁶). This suggests that the policy changes very slowly, which is another reason why the normalizing factor is not important. A small numerical check of this approximation is sketched below.
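A quick numerical check of the first-order argument with synthetic advantages (all numbers illustrative, not taken from the experiments):

import numpy as np

rng = np.random.default_rng(0)
pi_init = rng.dirichlet(np.ones(50_000))   # diffuse policy over ~50k tokens
adv = rng.normal(scale=0.02, size=50_000)
adv -= pi_init @ adv                       # center so that E_{pi_init}[Adv] = 0
lam = 0.1
Z = pi_init @ np.exp(adv / lam)
print(Z)                                   # stays close to 1, as the expansion predicts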
B. Alternative Interpretation of APA. Recall that the APA population loss can be written as
L_APA(θ) = E_{(s,a)∼d^{π_old}}[ (log π_θ(a | s) − log π*(a | s))² ],
where π* = π_init · exp(Adv/λ) (using Z(s) ≈ 1). In the case when π_old is close to π_θ, minimizing the squared loss in APA is equivalent to minimizing the following distance between π* and π_θ:
d(π*(· | s), π_θ(· | s)) = Σ_a π_θ(a | s) log²(π_θ(a | s) / π*(a | s)).
This can be viewed as a new f-divergence with f(x) = x log²(x). We can show by the Cauchy-Schwarz inequality that the square root of this divergence is always an upper bound for the KL divergence:
sqrt(d(π*(· | s), π_θ(· | s))) = sqrt(Σ_a π_θ(a | s)) · sqrt(Σ_a π_θ(a | s) log²(π_θ(a | s)/π*(a | s))) ≥ Σ_a π_θ(a | s) |log(π_θ(a | s)/π*(a | s))| ≥ |Σ_a π_θ(a | s) log(π_θ(a | s)/π*(a | s))| = KL(π_θ(· | s) ‖ π*(· | s)).
A quick numerical sanity check of this bound is sketched below.
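An illustrative check of the bound on random distributions (a minimal sketch; the distributions are synthetic stand-ins for π_θ and π*):

import numpy as np

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(50))   # stands in for pi_theta(.|s)
q = rng.dirichlet(np.ones(50))   # stands in for pi*(.|s)
log_ratio = np.log(p / q)
kl = p @ log_ratio               # KL(p || q)
d = p @ log_ratio ** 2           # the f-divergence above
assert np.sqrt(d) >= kl          # the Cauchy-Schwarz bound
print(np.sqrt(d), kl)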
C. Additional Experiments.

C.1 Results on the TLDR Dataset. We fine-tune the EleutherAI/gpt-neo-2.7B (https://huggingface.co/EleutherAI/gpt-neo-2.7B) and 6B CarperAI/openai_summarize_tldr_sft (https://huggingface.co/CarperAI/openai_summarize_tldr_sft) models on the TLDR dataset (https://huggingface.co/datasets/CarperAI/openai_summarize_comparisons) for the summarization task. For EleutherAI/gpt-neo-2.7B, we first fine-tune it with supervised fine-tuning on the labeled responses in the same summarization dataset, and run RLHF on the supervised fine-tuned policy. The 6B model CarperAI/openai_summarize_tldr_sft has already gone through the supervised fine-tuning stage. The reward model is a pre-trained EleutherAI/gpt-j-6b (https://huggingface.co/EleutherAI/gpt-j-6b) reward model for the summarization dataset CarperAI/openai_summarize_comparisons. We follow the default setting in trlX with seeds 0 and 100, and plot the results in Figure 2. One can see that APA is more sample-efficient and provides better KL control than PPO for both the 2.7B and 6B models.

Figure 2: Comparisons of the performance on the TLDR dataset. Left: the x-axis represents the total steps, which are proportional to the amount of data used in the training procedure; the y-axis is the reward evaluated by the same reward model. Right: the x-axis represents the total steps; the y-axis is the KL divergence between the trained model and the initial model.

C.2 Results on the Dolly Model. We fine-tune the databricks/dolly-v2-7b (https://huggingface.co/databricks/dolly-v2-7b) model on the HH dataset. We follow the default setting in trlX with seeds 0 and 100, and plot the results in Figure 3. We only include the results for APA and PPO, since the reward for AWR drops directly. Different from all other experiments, here for APA we set λ = 1 rather than 0.1 to stabilize the training and impose stronger KL control. One can see that APA can still improve over the original dolly 7B model and provide better KL control, while PPO fails to bring further improvement.

Figure 3: Comparisons of the performance on the dolly 7B model. Left: reward versus total steps. Right: KL divergence between the trained model and the initial model versus total steps.

D. Ablation Studies.

D.1 KL control in APA. In this section, we show how the performance and KL divergence change with different values of λ. We set λ = 0.1, 1 for the 125M model and plot the performances in Figure 4 with seed 1000. One can see that the choice of λ directly determines the level of KL control, along with the convergent point APA reaches. This shows that λ provides a clear trade-off between KL control and model performance.

Figure 4: Comparisons of the performance for different λ on the 125M model. Left: reward versus total steps. Right: KL divergence between the trained model and the initial model versus total steps.

D.2 KL control in PPO. We show how the performance and KL divergence change with or without adaptive KL control in PPO. We plot the performances in Figure 5 for the 125M model with seed 1000. For PPO with the adaptive KL controller, the initial KL coefficient is set to 0.05. One can see that without KL control, PPO converges to a higher reward than APA in Figure 4, at the cost of a significantly higher KL divergence. On the other hand, the reward of PPO with adaptive KL control begins to drop in the middle of training. This is due to the large deviation from the original policy, which leads to a much larger KL regularization term that dominates the reward. Compared with Figure 4, one can see that APA provides more stable and controllable KL regularization.

Figure 5: Comparisons of the performance of PPO on the 125M model. Left: reward versus total steps. Right: KL divergence between the trained model and the initial model versus total steps.

D.3 Experiments for Offline Learning. We conduct experiments for offline learning as well. The offline dataset is selected to be all the prompts and responses from the HH dataset, with rewards labeled by the reward model. We use the trained GPT-J reward function to label the reward for all the offline data, and compare ILQL, AWR and APA on the same 125M and 1B models after supervised fine-tuning with seed 1000. The results are given in Figure 6. From the results, one can see that AWR performs better than ILQL, and that APA cannot be directly adapted to the offline case. Furthermore, offline learning does not help much after the supervised fine-tuning stage, potentially due to the large distribution shift between the offline data and the current policy.

Figure 6: Comparisons of the performance of ILQL, AWR and APA on the offline learning dataset. [Panels: "Offline Policy Optimization with 125M model" and "Offline Policy Optimization with 1B model"; reward versus steps for ILQL, AWR and APA.]

E. Proof of Theorem 1.

Proof. From the well-specifiedness assumption π* ∈ {π_θ | θ ∈ Θ}, we know that there exists some θ* ∈ Θ such that π_{θ*} = π*. For the population loss, we know that
L_APA(θ*) = E_{(s,a)∼d^{π_old}}[(log π_{θ*}(a | s) − Adv(s, a)/λ − log π_init(a | s))²] = E_{(s,a)∼d^{π_old}}[(log π*(a | s) − Adv(s, a)/λ − log π_init(a | s))²] = 0.
Thus for any θ′ ∈ arg min_{θ∈Θ} L_APA(θ), there must be L_APA(θ′) = 0, which is equivalent to
E_{(s,a)∼d^{π_old}}[(log π_{θ′}(a | s) − Adv(s, a)/λ − log π_init(a | s))²] = 0.
This means that for any (s, a) on the support of d^{π_old}, we have π_{θ′}(a | s) = π*(a | s).

For the second part of the theorem, we know from Hoeffding's inequality that for any fixed θ ∈ Θ, with probability at least 1 − δ,
|L̂_APA(θ; D) − L_APA(θ)| = |(1/n) Σ_{i=1}^{n} (log π_θ(a_i | s_i) − Adv(s_i, a_i)/λ − log π_init(a_i | s_i))² − E[(log π_θ(a | s) − Adv(s, a)/λ − log π_init(a | s))²]| ≤ C · (B_2/λ − 2 log(B_1))² · sqrt(log(1/δ)/n).   (10)
Let Θ′ be an ε-covering of Θ under the ℓ_2-norm, i.e., for any θ ∈ Θ, one can find some θ″ ∈ Θ′ such that ‖θ − θ″‖_2 ≤ ε; such a covering exists with |Θ′| ≤ (1/ε)^d. By taking a union bound, we know that with probability at least 1 − δ, for all θ ∈ Θ′,
|L̂_APA(θ; D) − L_APA(θ)| ≤ C · (B_2/λ − log(B_1))² · sqrt(d log(1/(εδ))/n).
Let θ̂ be the minimizer of L̂_APA(θ; D). Then we know that there exists some θ̄ ∈ Θ′ such that ‖θ̂ − θ̄‖_2 ≤ ε. Since log π_θ is L-Lipschitz in θ, this further implies that
|L_APA(θ̂) − L_APA(θ̄)| = |E[(log π_{θ̂}(a | s) − Adv(s, a)/λ − log π_init(a | s))²] − E[(log π_{θ̄}(a | s) − Adv(s, a)/λ − log π_init(a | s))²]| ≤ C · (B_2/λ − log(B_1)) · L · ε.   (11)
Similarly, we also have |L̂_APA(θ̂; D) − L̂_APA(θ̄; D)| ≤ C · (B_2/λ − log(B_1)) · L · ε. Overall, we have
L_APA(θ̂) = (L_APA(θ̂) − L_APA(θ̄)) + (L_APA(θ̄) − L̂_APA(θ̄; D)) + (L̂_APA(θ̄; D) − L̂_APA(θ̂; D)) + L̂_APA(θ̂; D).
For the first and third differences, from Equation (11) we know that they are both bounded by C · (B_2/λ − log(B_1)) · L · ε. For the second difference, we know from Equation (10) and the union bound that it is bounded by C · (B_2/λ − log(B_1))² · sqrt(d log(1/(εδ))/n). Lastly, we know that L̂_APA(θ̂; D) = 0, since θ̂ ∈ arg min_θ L̂_APA(θ; D) and L̂_APA(θ*; D) = 0. Thus overall, we have
L_APA(θ̂) ≤ C · ((B_2/λ − log(B_1)) · L · ε + (B_2/λ − log(B_1))² · sqrt(d log(1/(εδ))/n)).
Taking ε = 1/(Ln) finishes the proof.
Footnotes. 1. An interesting discussion on mode collapse caused by PPO can be found here. 2. https://github.com/CarperAI/trlx

References.

Y. Bai, A. Jones, K. Ndousse, A. Askell, A. Chen, N. DasSarma, D. Drain, S. Fort, D. Ganguli, T. Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022a.

Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirhoseini, C. McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022b.

E. Beeching, Y. Belkada, K. Rasul, L. Tunstall, L. von Werra, N. Rajani, and N. Lambert. StackLLaMA: An RL fine-tuned LLaMA model for Stack Exchange question and answering, 2023. URL https://huggingface.co/blog/stackllama.

T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.

P. F. Christiano, J. Leike, T. Brown, M. Martic, S. Legg, and D. Amodei. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, pages 4299-4307, 2017.

J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

P. Dhariwal, C. Hesse, O. Klimov, A. Nichol, M. Plappert, A. Radford, J. Schulman, S. Sidor, Y. Wu, and P. Zhokhov. OpenAI baselines. https://github.com/openai/baselines, 2017.
S. Dong, B. Van Roy, and Z. Zhou. Simple agent, complex environment: Efficient reinforcement learning with agent states. Journal of Machine Learning Research, 23(255):1-54, 2022.

D. Ganguli, L. Lovitt, J. Kernion, A. Askell, Y. Bai, S. Kadavath, B. Mann, E. Perez, N. Schiefer, K. Ndousse, et al. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858, 2022.

L. Gao, J. Schulman, and J. Hilton. Scaling laws for reward model overoptimization. arXiv preprint arXiv:2210.10760, 2022.

S. Kakade and J. Langford. Approximately optimal approximate reinforcement learning. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 267-274, 2002.

W. B. Knox and P. Stone. TAMER: Training an agent manually via evaluative reinforcement. In 7th IEEE International Conference on Development and Learning, pages 292-297. IEEE, 2008.

A. Kupcsik, D. Hsu, and W. S. Lee. Learning dynamic robot-to-human object handover from human feedback. In Robotics Research, pages 161-176. Springer, 2018.

J. Maghakian, P. Mineiro, K. Panaganti, M. Rucker, A. Saran, and C. Tan. Personalized reward learning with interaction-grounded learning (IGL). arXiv preprint arXiv:2211.15823, 2022.

V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928-1937. PMLR, 2016.

A. Nair, A. Gupta, M. Dalal, and S. Levine. AWAC: Accelerating online reinforcement learning with offline datasets. arXiv preprint arXiv:2006.09359, 2020.

R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, et al. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.

R. Paulus, C. Xiong, and R. Socher. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304, 2017.

X. B. Peng, A. Kumar, G. Zhang, and S. Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177, 2019.

R. Rafailov, A. Sharma, E. Mitchell, S. Ermon, C. D. Manning, and C. Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023.

R. Ramamurthy, P. Ammanabrolu, K. Brantley, J. Hessel, R. Sifa, C. Bauckhage, H. Hajishirzi, and Y. Choi. Is reinforcement learning (not) for natural language processing? Benchmarks, baselines, and building blocks for natural language policy optimization. arXiv preprint arXiv:2210.01241, 2022.

D. Sadigh, A. D. Dragan, S. Sastry, and S. A. Seshia. Active preference-based learning of reward functions. In Robotics: Science and Systems, 2017.

J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In International Conference on Machine Learning, pages 1889-1897. PMLR, 2015a.

J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015b.

J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

C. Snell, I. Kostrikov, Y. Su, M. Yang, and S. Levine. Offline RL for natural language generation with implicit language Q-learning. arXiv preprint arXiv:2206.11871, 2022.
N. Stiennon, L. Ouyang, J. Wu, D. Ziegler, R. Lowe, C. Voss, A. Radford, D. Amodei, and P. F. Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008-3021, 2020.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.

B. Wang and A. Komatsuzaki. GPT-J-6B: A 6 billion parameter autoregressive language model. https://github.com/kingoflolz/mesh-transformer-jax, May 2021.

C. Wirth, R. Akrour, G. Neumann, and J. Fürnkranz. A survey of preference-based reinforcement learning methods. The Journal of Machine Learning Research, 18(1):4945-4990, 2017.

R. Yuan, R. M. Gower, and A. Lazaric. A general sample complexity analysis of vanilla policy gradient. In Proceedings of the 25th International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.

Z. Yuan, H. Yuan, C. Tan, W. Wang, S. Huang, and F. Huang. RRHF: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302, 2023.

B. Zhu, J. Jiao, and M. I. Jordan. Principled reinforcement learning with human feedback from pairwise or k-wise comparisons. International Conference on Machine Learning, 2023.

D. M. Ziegler, N. Stiennon, J. Wu, T. B. Brown, A. Radford, D. Amodei, P. Christiano, and G. Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.
[ "https://github.com/microsoft/RLHF-APA.", "https://github.com/CarperAI/trlx", "https://github.com/openai/baselines," ]
[ "On Size-Independent Sample Complexity of ReLU Networks", "On Size-Independent Sample Complexity of ReLU Networks" ]
[ "Mark Sellke " ]
[]
[]
We study the sample complexity of learning ReLU neural networks from the point of view of generalization. Given norm constraints on the weight matrices, a common approach is to estimate the Rademacher complexity of the associated function class. Previously [GRS20] obtained a bound independent of the network size (scaling with a product of Frobenius norms) except for a factor of the square-root depth. We give a refinement which often has no explicit depth-dependence at all.
null
[ "https://export.arxiv.org/pdf/2306.01992v1.pdf" ]
259,075,992
2306.01992
aed71660ede42da9a8df3027892f9a39ba1d89c8
On Size-Independent Sample Complexity of ReLU Networks

Mark Sellke

arXiv:2306.01992

Abstract. We study the sample complexity of learning ReLU neural networks from the point of view of generalization. Given norm constraints on the weight matrices, a common approach is to estimate the Rademacher complexity of the associated function class. Previously [GRS20] obtained a bound independent of the network size (scaling with a product of Frobenius norms) except for a factor of the square-root depth. We give a refinement which often has no explicit depth-dependence at all.

1 Introduction

Given the stunning empirical successes of deep neural networks, a pressing need has emerged to explain their ability to generalize. The traditional approach to generalization proceeds via bounds on VC dimension or Rademacher complexity and provides uniform convergence guarantees for a given function class [BM02, SSBD14]. Historically, the classical VC bounds for neural networks, given in [AB99], scale with the number of neurons. Recent advances have focused on Rademacher complexity bounds that scale with the product of operator norms of the weight matrices $W_i$. For instance, [BFT17, NBS18] achieve such results using covering number and PAC-Bayes techniques respectively. However these bounds contain additional polynomial dependence on the depth of the network; [GRS20, Theorem 5.1] later showed that such depth dependence cannot be avoided in general.

Surprisingly, [GRS20] showed this issue can be mostly avoided if one is willing to consider a product of Frobenius norms; they obtain mild depth dependence of only a square-root factor, which can be removed entirely at the cost of worse decay in the number of samples. Their approach stems from the natural idea of iteratively peeling off layers and using the Ledoux-Talagrand contraction lemma [LT91] to handle each application of the non-linearity $\sigma$. A previous work [NTS15] used this idea directly and paid exponentially in the depth for repeated use of the contraction lemma; the technical innovation of [GRS20] was to apply the contraction lemma inside an auxiliary exponential moment.

We give a refinement of the main result of [GRS20] which depends on upper bounds $M_F(i)$ and $M_{op}(i)$ on both the Frobenius and operator norm of each $W_i$. Our bound is never worse, and is fully depth-independent unless $M_{op}(i)/M_F(i) \approx 1$ for nearly all of the initial layers. The idea is to use their argument repeatedly along a well-chosen subsequence of the layers and take advantage of improved concentration estimates at the intermediate stages.

Finally we mention that although this work, along with the papers referenced above, applies to essentially arbitrary neural networks, more refined results have been obtained under further assumptions as well as for structured classes of neural networks [AGNZ18, WM19, CLZ20, LS20, GJJ20].

1.1 Problem Formulation and Main Result

A feedforward neural network is a function of the form

$$x \mapsto W_D\, \sigma(W_{D-1}\, \sigma(\cdots W_1 x)). \tag{1.1}$$

Here each $W_i$ is a $w_i \times w_{i-1}$ weight matrix and $x \in \mathcal{X} \subseteq \mathbb{R}^{w_0}$. The ReLU non-linearity $\sigma(x) = \max(x, 0)$ is applied coordinate-wise, and we require $w_D = 1$ so the output is a scalar.

Fix Frobenius norm bounds $M_F(1), \dots, M_F(D)$ and operator norm bounds $M_{op}(1), \dots, M_{op}(D)$, where without loss of generality $M_{op}(d) \le M_F(d)$. For each $1 \le d \le D$, consider the class $\mathcal{F}_d$ of $d$-layer ReLU neural networks of the form (1.1) with $D = d$, such that for all $1 \le m \le d$ the $m$-th weight matrix $W_m$ satisfies

$$\|W_m\|_F \le M_F(m), \qquad \|W_m\|_{op} \le M_{op}(m).$$

We assume $\mathcal{X}$ is contained in the radius-$B$ ball in $\mathbb{R}^{w_0}$, so that $\|x\|_2 \le B$ for all $x \in \mathcal{X}$. The other widths $w_1, \dots, w_{D-1}$ are arbitrary and may vary across $\mathcal{F}_d$. It will also be convenient to set

$$P_{op}(d) = \prod_{m=1}^{d} M_{op}(m), \qquad P_F(d) = \prod_{m=1}^{d} M_F(m), \qquad R(d) = P_{op}(d)/P_F(d). \tag{1.2}$$
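To make the quantities in (1.2) concrete, here is a small numerical sketch (Python/NumPy; the function names are ours, not from the paper) that computes the norm products and evaluates the right-hand side of the main bound (1.4) stated below.

```python
import numpy as np

def norm_products(weights):
    """Cumulative products P_F(d), P_op(d) of Frobenius/operator norms for
    weight matrices W_1..W_D, plus the ratios R(d) = P_op(d)/P_F(d), R(0)=1."""
    MF = np.array([np.linalg.norm(W, 'fro') for W in weights])
    Mop = np.array([np.linalg.norm(W, 2) for W in weights])  # spectral norm
    PF, Pop = np.cumprod(MF), np.cumprod(Mop)
    R = np.concatenate([[1.0], Pop / PF])
    return PF, Pop, R

def theorem1_rhs(weights, B, n):
    """Right-hand side of (1.4): 10*B*n**-0.5*P_F(D)*sqrt(sum_{d<D} R(d))."""
    PF, _, R = norm_products(weights)
    D = len(weights)
    return 10.0 * B * n ** -0.5 * PF[-1] * np.sqrt(R[:D].sum())

# For generic (e.g. random) layers R(d) decays with depth, so the sum over
# R(d) stays bounded and the bound carries no explicit sqrt(D) factor.
rng = np.random.default_rng(0)
ws = [rng.normal(size=(50, 50)) / np.sqrt(50) for _ in range(8)]
print(theorem1_rhs(ws, B=1.0, n=10_000))
```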
Note that $1 = R(0) \ge R(1) \ge \cdots \ge R(D)$. Next, let $\mathcal{R}_n(\mathcal{F})$ denote the Rademacher complexity of a class $\mathcal{F}$ of functions $f : \mathcal{X} \to \mathbb{R}$:

$$\mathcal{R}_n(\mathcal{F}; x_1, \dots, x_n; \vec{\varepsilon}) = \frac{1}{n} \sup_{f \in \mathcal{F}} \sum_{i=1}^{n} \varepsilon_i f(x_i); \quad \mathcal{R}_n(\mathcal{F}; x_1, \dots, x_n) = \mathbb{E}_{\vec{\varepsilon}}\, \mathcal{R}_n(\mathcal{F}; x_1, \dots, x_n; \vec{\varepsilon}); \quad \mathcal{R}_n(\mathcal{F}) = \sup_{x_1, \dots, x_n \in \mathcal{X}} \mathcal{R}_n(\mathcal{F}; x_1, \dots, x_n). \tag{1.3}$$

It is well-known (see e.g. [SSBD14, Chapter 26]) that an upper bound on $\mathcal{R}_n(\mathcal{F})$ implies a uniform generalization guarantee for $\mathcal{F}$. Our main result is as follows.

Theorem 1. In the setting of Subsection 1.1, we have the Rademacher complexity bound

$$\mathcal{R}_n(\mathcal{F}_D) \le 10\, B\, n^{-1/2}\, P_F(D) \sqrt{\sum_{d=0}^{D-1} R(d)}. \tag{1.4}$$

Since $\sum_{d=0}^{D-1} R(d) \le D$, this recovers [GRS20, Theorem 3.1]. Moreover one expects that generically $R(d)$ decays exponentially with $d$, so this sum can be viewed as "usually" constant. Additionally the widths $w_1, \dots, w_{D-1}$ do not enter at all, so we could also directly allow arbitrary-width networks in defining $\mathcal{F}_d$.

2 Main Argument

The general version of our improved bound will depend on an arbitrary subsequence $0 \le d_0 < d_1 < \cdots < d_k = D$ of the layers. We will then optimize this sequence based on the values $R(1), R(2), \dots, R(D)$. We remark that [GRS20, Proof of Theorem 3.1] is essentially the one-step case $(d_0, d_1) = (0, D)$.

Theorem 2. For any $0 = d_0 < d_1 < \cdots < d_k = D$ we have (recall (1.2)):

$$\mathcal{R}_n(\mathcal{F}_D) \le 5\, B\, n^{-1/2}\, P_F(D) \cdot \sum_{i=1}^{k} R(d_{i-1}) \sqrt{d_i - d_{i-1}}. \tag{2.1}$$

Proof. We assume $B = 1$ for simplicity and prove inductively on $1 \le j \le k$ the bound

$$\mathcal{R}_n(\mathcal{F}_{d_j}) \le 5\, n^{-1/2}\, P_F(d_j) \sum_{i=1}^{j} R(d_{i-1}) \sqrt{d_i - d_{i-1}}. \tag{2.2}$$

To induct from $d_j$ to $d_{j+1}$, we will apply the technique of [GRS20] between $d_j$ and $d_{j+1}$. We fix inputs $x_1, \dots, x_n \in \mathcal{X}$ throughout the proof and define $X_j = \mathcal{R}(\mathcal{F}_{d_j}, x_1, \dots, x_n; \vec{\varepsilon})$, which is random since $\vec{\varepsilon}$ is. Our goal will be to iteratively bound $\mathbb{E}[X_j]$. Note that if $\|x\| \le 1$ and $f \in \mathcal{F}_{d_j}$ then $|f(x)| \le P_{op}(d_j)$. By the bounded differences inequality (e.g. [BLM13, Theorem 6.2]), it follows that $X_j - \mathbb{E}[X_j]$ is sub-Gaussian with variance factor $P_{op}(d_j)^2 / n$. In particular for $\lambda_j > 0$ to be chosen later, we have

$$\log \mathbb{E} \exp\!\left( \frac{\lambda_j (X_j - \mathbb{E}[X_j])}{P_F(d_j)} \right) \le \frac{5\, P_{op}(d_j)^2\, \lambda_j^2}{P_F(d_j)^2\, n}. \tag{2.3}$$

Next we apply [GRS20, Lemma 3.1] to peel the layers between $d_j$ and $d_{j+1}$, obtaining:

$$\mathbb{E}\, e^{\lambda_j X_{j+1} / P_F(d_{j+1})} \le 2^{d_{j+1} - d_j}\, \mathbb{E}\, e^{\lambda_j X_j / P_F(d_j)} \implies \log \mathbb{E}\, e^{\lambda_j X_{j+1} / P_F(d_{j+1})} \le \log \mathbb{E}\, e^{\lambda_j X_j / P_F(d_j)} + d_{j+1} - d_j.$$

Combining, we find that

$$\mathbb{E}[X_{j+1}] \le \frac{P_F(d_{j+1})}{\lambda_j} \cdot \log \mathbb{E}\, e^{\lambda_j X_{j+1} / P_F(d_{j+1})} \le \frac{P_F(d_{j+1})}{\lambda_j} \left( \log \mathbb{E}\, e^{\lambda_j X_j / P_F(d_j)} + d_{j+1} - d_j \right)$$

$$\overset{(2.3)}{\le} \frac{P_F(d_{j+1})}{\lambda_j} \left( \frac{\lambda_j\, \mathbb{E}[X_j]}{P_F(d_j)} + \frac{5\, P_{op}(d_j)^2\, \lambda_j^2}{P_F(d_j)^2\, n} + d_{j+1} - d_j \right) = \frac{P_F(d_{j+1})}{P_F(d_j)} \cdot \mathbb{E}[X_j] + \frac{5\, P_{op}(d_j)^2\, P_F(d_{j+1})}{P_F(d_j)^2\, n} \cdot \lambda_j + \frac{P_F(d_{j+1})\, (d_{j+1} - d_j)}{\lambda_j}.$$

Taking

$$\lambda_j = \frac{P_F(d_j)\, n^{1/2} \sqrt{d_{j+1} - d_j}}{2\, P_{op}(d_j)}$$

and defining $Y_j = X_j / P_F(d_j)$, we obtain

$$\mathbb{E}[Y_{j+1}] \le \mathbb{E}[Y_j] + \frac{5\, P_{op}(d_j) \sqrt{d_{j+1} - d_j}}{P_F(d_j)\, n^{1/2}} = \mathbb{E}[Y_j] + 5\, n^{-1/2}\, R(d_j) \sqrt{d_{j+1} - d_j}.$$

This completes the inductive step for (2.2) and hence the proof.

3 Optimizing the Choice of Subsequence

Proof of Theorem 1. The result follows from Theorem 2. Indeed, we may take $d_i$ minimal such that $R(d_i) \le 2^{-i}$, with $d_0 = 0$ and the last value $d_k$ equal to $D$. Then using Cauchy-Schwarz in the second step,

$$\sum_{i=1}^{k} R(d_{i-1}) \sqrt{d_i - d_{i-1}} \le 2 \sum_{i=1}^{k} \frac{\sqrt{d_i - d_{i-1}}}{2^{i}} \le 2 \sqrt{\sum_{i=1}^{k} \frac{d_i - d_{i-1}}{2^{i}}} \cdot \sqrt{\sum_{i=1}^{k} \frac{1}{2^{i}}} \le 2 \sqrt{\sum_{i=1}^{k} \frac{d_i - d_{i-1}}{2^{i}}} \le 2 \sqrt{\sum_{d=0}^{D-1} R(d)},$$

where the final inequality holds since, by minimality of $d_i$, we have $R(d) > 2^{-i}$ for all $d < d_i$, so that $(d_i - d_{i-1})\, 2^{-i} \le \sum_{d = d_{i-1}}^{d_i - 1} R(d)$.
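The dyadic choice of subsequence in the proof above is easy to implement. The following sketch (our own code, not from the paper) picks $d_i$ minimal with $R(d_i) \le 2^{-i}$ and compares the resulting sum in (2.1) against the $\sqrt{D}$ arising from the one-step choice $(d_0, d_1) = (0, D)$.

```python
import numpy as np

def dyadic_subsequence(R):
    """Choose d_i minimal with R[d_i] <= 2**-i (as in the proof of Theorem 1).
    `R` is the array [R(0), ..., R(D)]; returns 0 = d_0 < d_1 < ... < d_k = D."""
    D = len(R) - 1
    ds, i = [0], 1
    while ds[-1] < D:
        cand = int(np.argmax(R <= 2.0 ** -i))   # first index with R(d) <= 2^-i
        nxt = cand if R[cand] <= 2.0 ** -i else D
        nxt = max(nxt, ds[-1] + 1)              # keep strictly increasing
        ds.append(min(nxt, D))
        i += 1
    return ds

def theorem2_sum(R, ds):
    """sum_i R(d_{i-1}) * sqrt(d_i - d_{i-1}) appearing in (2.1)."""
    return sum(R[a] * np.sqrt(b - a) for a, b in zip(ds[:-1], ds[1:]))

# With exponentially decaying R(d) the sum stays O(1) in the depth D,
# whereas the one-step choice gives the sqrt(D) factor of [GRS20].
R = 0.7 ** np.arange(0, 41)                     # R(0)=1, geometric decay, D=40
ds = dyadic_subsequence(R)
print(ds, theorem2_sum(R, ds), np.sqrt(len(R) - 1))
```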
References

[AB99] Martin Anthony and Peter L. Bartlett. Neural Network Learning: Theoretical Foundations, volume 9. Cambridge University Press, Cambridge, 1999.

[AGNZ18] Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. In International Conference on Machine Learning, pages 254-263. PMLR, 2018.

[BFT17] Peter L. Bartlett, Dylan J. Foster, and Matus J. Telgarsky. Spectrally-normalized margin bounds for neural networks. Advances in Neural Information Processing Systems, 30, 2017.

[BLM13] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.

[BM02] Peter L. Bartlett and Shahar Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463-482, 2002.

[CLZ20] Minshuo Chen, Xingguo Li, and Tuo Zhao. On generalization bounds of a family of recurrent neural networks. In International Conference on Artificial Intelligence and Statistics, pages 1233-1243. PMLR, 2020.

[GJJ20] Vikas Garg, Stefanie Jegelka, and Tommi Jaakkola. Generalization and representational limits of graph neural networks. In International Conference on Machine Learning, pages 3419-3430. PMLR, 2020.

[GRS20] Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-independent sample complexity of neural networks. Information and Inference: A Journal of the IMA, 9(2):473-504, 2020.

[LS20] Philip M. Long and Hanie Sedghi. Generalization bounds for deep convolutional neural networks. In International Conference on Learning Representations, 2020.

[LT91] Michel Ledoux and Michel Talagrand. Probability in Banach Spaces: Isoperimetry and Processes, volume 23. Springer Science & Business Media, 1991.
[NBS18] Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. In International Conference on Learning Representations, 2018.

[NTS15] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. Norm-based capacity control in neural networks. In Conference on Learning Theory, pages 1376-1401. PMLR, 2015.

[SSBD14] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.

[WM19] Colin Wei and Tengyu Ma. Data-dependent sample complexity of deep neural networks via Lipschitz augmentation. Advances in Neural Information Processing Systems, 32, 2019.
[ "First-principles molten salt phase diagrams through thermodynamic integration", "First-principles molten salt phase diagrams through thermodynamic integration" ]
[ "Tanooj Shah \nDepartment of Materials Science and Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA\n", "Kamron Fazel \nDepartment of Electrical\nComputer and Systems Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA\n", "Jie Lian \nDepartment of Mechanical\nAerospace and Nuclear Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA\n", "Liping Huang \nDepartment of Materials Science and Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA\n", "Yunfeng Shi \nDepartment of Materials Science and Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA\n", "Ravishankar Sundararaman \nDepartment of Materials Science and Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA\n" ]
[ "Department of Materials Science and Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA", "Department of Electrical\nComputer and Systems Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA", "Department of Mechanical\nAerospace and Nuclear Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA", "Department of Materials Science and Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA", "Department of Materials Science and Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA", "Department of Materials Science and Engineering\nRensselaer Polytechnic Institute\n12180TroyNYUSA" ]
[]
Precise prediction of phase diagrams in molecular dynamics (MD) simulations is challenging due to the simultaneous need for long time and large length scales and accurate interatomic potentials. We show that thermodynamic integration (TI) from low-cost force fields to neural network potentials (NNPs) trained using density-functional theory (DFT) enables rapid first-principles prediction of the solid-liquid phase boundary in the model salt NaCl. We use this technique to compare the accuracy of several DFT exchange-correlation functionals for predicting the NaCl phase boundary, and find that the inclusion of dispersion interactions is critical to obtain good agreement with experiment. Importantly, our approach introduces a method to predict solid-liquid phase boundaries for any material at an ab-initio level of accuracy, with the majority of the computational cost at the level of classical potentials.
null
[ "https://export.arxiv.org/pdf/2306.02406v1.pdf" ]
259,076,056
2306.02406
6f9bf090e5999d77e0359fcddf968f0ad3eaf05d
First-principles molten salt phase diagrams through thermodynamic integration

Tanooj Shah,1 Kamron Fazel,2 Jie Lian,3 Liping Huang,1 Yunfeng Shi,1 and Ravishankar Sundararaman1

1 Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
2 Department of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
3 Department of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA

a) These authors contributed equally.
b) Electronic mail: [email protected]

arXiv:2306.02406 [cond-mat.mtrl-sci]

Abstract: Precise prediction of phase diagrams in molecular dynamics (MD) simulations is challenging due to the simultaneous need for long time and large length scales and accurate interatomic potentials. We show that thermodynamic integration (TI) from low-cost force fields to neural network potentials (NNPs) trained using density-functional theory (DFT) enables rapid first-principles prediction of the solid-liquid phase boundary in the model salt NaCl. We use this technique to compare the accuracy of several DFT exchange-correlation functionals for predicting the NaCl phase boundary, and find that the inclusion of dispersion interactions is critical to obtain good agreement with experiment. Importantly, our approach introduces a method to predict solid-liquid phase boundaries for any material at an ab-initio level of accuracy, with the majority of the computational cost at the level of classical potentials.

I. INTRODUCTION

Molten salts are a class of high-temperature ionic fluids that have recently attracted renewed interest due to their potential applications in modular nuclear reactors [1] and thermal energy-storage systems [2]. A molten alkali halide salt such as LiF, or a mixture such as FLiNaK, may be used as a coolant instead of highly pressurized water in nuclear reactors; these salts can also act as the medium in which fuel and fission products are dissolved [3]. Accurate knowledge of the salt's phase diagram is critical for the design of such reactors. Experimental results serve as the ultimate benchmark of these properties, and methods such as CALPHAD can be used in the design process with parameters fit to experimental inputs [4]. However, such methods often use empirical functional forms for the relevant thermodynamic quantities needed to predict phase coexistence. A more accurate method would obtain the relevant quantities (specifically, free energies) from a direct description of the interactions between the constituent atoms.

Predictions of thermodynamic properties of condensed phases from atomistic simulations can employ either Monte Carlo (MC) [5] or molecular dynamics (MD) [6, 7] approaches. Each method's accuracy depends upon the treatment of the interatomic potential energy function. This potential energy function can be calculated from first principles using Kohn-Sham electronic density-functional theory (DFT) [8, 9] in ab initio molecular dynamics (AIMD) simulations, or approximated by classical force fields such as additive pairwise potentials.
MD simulations for predicting bulk phase equilibria accurately typically require system sizes containing at least 500-1000 atoms, while AIMD simulations are typically limited by computational costs to 100-200 atoms and time scales of 10-100 ps. Consequently, MD predictions of phase equilibria have typically employed classical force fields [10-12]. However, this requires explicit parameterization of the empirical force fields for new materials, and even for single-component systems, the resulting accuracy is limited to narrower temperature and pressure ranges than a first-principles method. Machine-learned interatomic potentials promise to bridge this gap between AIMD and classical MD by using highly flexible functional forms such as neural-network potentials (NNPs), which can reproduce the potential energy surface from ab initio results better than simpler classical force fields [13]. Several families of NNPs are finding increasing usage for MD simulations [14-16], and essentially serve to extrapolate DFT-level predictions from smaller AIMD simulations to larger-scale MD simulations. For molten salts, NNPs have been used to predict structure, diffusivity [17-19], shear viscosity [20], equations of state, heat capacity, thermal conductivity and phase coexistence at individual state points [21]. However, to our knowledge, systematic mapping of the solid-liquid phase boundary of an alkali halide such as NaCl in pressure-temperature space, using either AIMD or machine-learned potentials, has not yet been performed.

The most common way to estimate phase equilibria in MD is to carry out direct simulations of coexistence of the two phases in a large interface calculation. At a given state point, the interface will typically move to expand the thermodynamically favorable phase at the expense of the less favorable phase [22]. This requires simulating large interfaces with at least 10^4 atoms over long time scales (typically nanoseconds) and must be repeated over several state points to pinpoint the coexistence point. However, at state points close to the true coexistence point, the velocity of the interface will typically be too low to reliably capture in simulations of tractable length.

A more accurate approach with better resolution involves calculating the free energy difference between the phases at various state points. One can use thermodynamic integration [23] (TI) to obtain these relevant free energies in molecular simulation. A reversible "pseudosupercritical" pathway that directly transforms the liquid to the solid phase can be employed, where the interatomic potential U(λ) is varied continuously as a function of an introduced path parameter λ, so as to establish a reversible transformation between the solid and liquid phases at a particular state point (P, T). The resulting free energy difference between phases is calculated by integrating ∫ dλ ⟨∂U/∂λ⟩ along the pathway. In particular, such a method avoids the problem of interfaces between two separate phases, and requires much smaller system sizes (on the order of 500-1000 atoms) than the aforementioned interface coexistence technique. TI simulations have recently been used with NNPs to assess solid-liquid coexistence for uranium, and for solvation free energy predictions [24-26].
However, to our knowledge, no similar study has yet been performed using NNPs to compare the effects of different DFT approximations on the phase boundary of an alkali halide such as NaCl. Here, we introduce an approach using TI to combine low-cost classical force fields and more complex NNPs trained to electronic-structure data, in order to combine the computational-cost and accuracy advantages of each kind of interatomic potential. Briefly, our method performs most of the complex transformations along the pseudosupercritical pathway using a cheap additive pairwise potential, together with an additional bulk transformation from the classical potential to the NNP in each phase to obtain the NNP melting point. From this initially determined melting point, we extend the phase boundary in (P, T) space using the Clausius-Clapeyron equation.

The remainder of this paper is organized as follows. In section II we specify the classical interatomic potential and NNP parameterization details. In section III, we detail the phase equilibrium approach, starting from prediction of a single coexistence point using TI, and then using the Clausius-Clapeyron equation to extend the phase boundary in (P, T) space. Finally, in section IV we show results of our method for the NaCl solid-liquid phase boundary, for NNPs trained to AIMD data with different choices of the exchange-correlation (XC) functional. We find that the predicted phase boundaries are highly sensitive to the choice of XC functional, and that functionals that explicitly build in a treatment of dispersion interactions agree with experiment over a much wider range of temperatures and pressures than the others.

II. METHODS

A. Fumi-Tosi Potential

We perform classical MD simulations in LAMMPS [27] using the standard Fumi-Tosi (FT) parameters [28] for NaCl. This model is also referred to as the rigid ion model (RIM) within the molten salts literature. Table I lists the FT parameters used to model NaCl in this study, which enter the pair interaction

$$U(r_{ij}) = A\, e^{(\sigma - r_{ij})/\rho} - \frac{C}{r_{ij}^6} + \frac{D}{r_{ij}^8} + \frac{k\, q_i q_j}{r_{ij}}. \tag{1}$$

TABLE I. Fumi-Tosi parameters used for classical MD simulations of NaCl [28].

Pair    A (eV)   ρ (Å)   σ (Å)   C (eV·Å^6)   D (eV·Å^8)
Na-Na   0.2637   0.317   2.340   1.0486       -0.4993
Na-Cl   0.2110   0.317   2.755   6.9906       -8.6758
Cl-Cl   0.1582   0.317   3.170   72.4022      -145.4285

In the present work, the long-range Coulomb part of the FT potential is treated using the damped shifted force model [29], which allows faster computation than the Ewald and particle-particle particle-mesh (PPPM) methods. While this simple functional form allows for rapid computation of interatomic forces, it limits the accuracy of predicted properties to the specific chemical environments and thermodynamic conditions used to parameterize the model [30].
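As an illustration, the following minimal Python sketch evaluates Eq. (1) with the Table I parameters. It uses a bare Coulomb term with k = e²/(4πε₀) ≈ 14.4 eV·Å (an assumption of ours), whereas the production simulations use the damped shifted force treatment; the function and variable names are also ours.

```python
import numpy as np

K_E = 14.399645  # Coulomb constant e^2/(4*pi*eps0) in eV*Angstrom (assumed)
FT_PARAMS = {    # pair -> (A [eV], rho [A], sigma [A], C [eV A^6], D [eV A^8])
    frozenset(["Na"]): (0.2637, 0.317, 2.340, 1.0486, -0.4993),
    frozenset(["Na", "Cl"]): (0.2110, 0.317, 2.755, 6.9906, -8.6758),
    frozenset(["Cl"]): (0.1582, 0.317, 3.170, 72.4022, -145.4285),
}
CHARGES = {"Na": +1.0, "Cl": -1.0}

def fumi_tosi_energy(r, s1, s2):
    """Pair energy of Eq. (1) at separation r (Angstrom), with a bare Coulomb
    term; the paper instead treats the long-range part with the damped
    shifted force model."""
    A, rho, sigma, C, D = FT_PARAMS[frozenset((s1, s2))]
    return (A * np.exp((sigma - r) / rho) - C / r**6 + D / r**8
            + K_E * CHARGES[s1] * CHARGES[s2] / r)

print(fumi_tosi_energy(2.82, "Na", "Cl"))  # near the nearest-neighbor distance
```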
B. Neural Network Potentials

Neural networks are universal function approximators [31] that have proven increasingly useful for describing the complex multidimensional potential energy surfaces of atomistic systems; in an NNP, the input to the neural network is a representation of the atomic coordinates, and the output is an energy.

1. Featurization and neural network formulation

All approaches to NNPs require some means to map the local neighbor configuration of each atom into the input features for the neural network. It is particularly important for these features to account for rotational, translational and atomic-permutation symmetries in order for the neural network to represent the potential energy landscape of the system with practical AIMD training data sets. Prevalent current approaches for generating these features (or descriptors) from atomic configurations include atom-centered symmetry functions [32], smooth overlap of atomic positions (SOAP) [33], the neighbor density bispectrum [34], Coulomb matrices [35], and atomic cluster expansions (ACE) [36], amongst many others.

Here, we use the SimpleNN code [15] for training and evaluating neural network potentials, which implements the atom-centered symmetry function approach. In this approach, the total energy is written as a sum of atomic energies, each expressed as a neural network of several symmetry functions evaluated on the local atomic configuration. These include radial functions that effectively measure the radial density of each atom type in a finite basis, and angular functions that similarly measure the angular distribution of pairs of atom types in a finite basis surrounding each atom [32]. We use the default set of radial G2 and angular G4 symmetry functions (70 total) implemented in the SimpleNN package with a cutoff of 6 Å.

To train neural networks using the SimpleNN code, we use the built-in principal component preprocessing to mitigate linear dependence of symmetry functions and thereby accelerate the training [15]. We also use adaptive sampling of the local atomic configurations based on Gaussian density functions, which increases the weight of infrequently encountered configurations in the loss function for training and improves the transferability of the resulting potential [37]. Finally, we find that a standard feed-forward neural network architecture with two hidden layers of 30 nodes each (30-30) proves sufficient, with negligible reduction of training errors for deeper or wider networks.
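For concreteness, a simplified sketch of a radial G2 symmetry function follows. The η values and the omission of the radial shift R_s are illustrative choices of ours, not the SimpleNN defaults.

```python
import numpy as np

def g2_features(r_center, r_neighbors, etas, r_cut=6.0):
    """Behler-Parrinello-style radial G2 symmetry functions (simplified):
    G2(eta) = sum_j exp(-eta * r_ij**2) * fc(r_ij), with the usual cosine
    cutoff fc. The full form also includes a radial shift R_s."""
    r = np.linalg.norm(r_neighbors - r_center, axis=1)
    r = r[r < r_cut]
    fc = 0.5 * (np.cos(np.pi * r / r_cut) + 1.0)      # smooth cutoff function
    return np.array([(np.exp(-eta * r**2) * fc).sum() for eta in etas])

rng = np.random.default_rng(1)
neighbors = rng.uniform(-5.0, 5.0, size=(40, 3))      # mock neighbor shell
print(g2_features(np.zeros(3), neighbors, etas=[0.05, 0.5, 4.0]))
```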
2. AIMD Training Data

An appropriate dataset of AIMD calculations is critical to reliably fit the parameters in the neural networks. For the present use case, we need to ensure that the NNP is able to accurately model both solid and molten NaCl across a wide range of temperatures and pressures. Table II lists the training AIMD simulations we use. We include solid configurations in the stable rocksalt and metastable cesium chloride and zincblende structures near the melting temperature, and liquid configurations at ambient and high pressures spanning a range of temperatures above the melting point. High-pressure simulations are necessary to ensure stability of the NNPs, essentially by sampling more of the repulsive regime of the potential energy surface [21].

TABLE II. Structure, thermodynamic state points, and number of configurations sampled (10 fs apart) for each NaCl AIMD simulation used to train the NNPs utilized in this study.

Structure     T (K)   P (bar)               # configs
Rocksalt      1000    1                     201
CsCl          1000    1                     201
Zincblende    1100    1                     201
Liquid        1300    1                     201
Liquid        1100    1                     201
Liquid        1500    1                     201
Liquid        1700    10^5                  201
Liquid        1500    5 × 10^4              201
Liquid        1500    2 × 10^4 to 3 × 10^6  108

In order to compare the effects of different functionals, we repeat the AIMD simulations, NNP training and all subsequent calculations for four different XC functionals, starting with the most frequently used Perdew-Burke-Ernzerhof (PBE) generalized-gradient approximation (GGA) [38]. Since PBE often underbinds solids, leading to larger lattice constants, we also use its version reparameterized for solids, PBEsol [39]. To analyze the impact of long-range dispersion interactions, we consider two variants of dispersion corrections to PBE, namely PBE D2 [40] and PBE D3 [41], which have been shown to be important for accurate structure prediction in molten salts [42]. For the remainder of this paper, we refer to these four NNPs trained to different XC functionals as NNP-PBE, NNP-PBEsol, NNP-PBE D2 and NNP-PBE D3, respectively.

Each AIMD simulation is started from an equilibrated configuration of 64 atoms using the Fumi-Tosi potential, with this size sufficient to capture atomic environments extending to the symmetry-function cutoff and to require no Brillouin zone sampling in the DFT. The AIMD simulations are performed using the open-source JDFTx software [43], using a Nose-Hoover thermostat and barostat and a time step of 1 fs, with configurations extracted for the data set every 10 fs; this frequency balances obtaining sufficient training data against tractable computation time for each individual simulation. Each AIMD simulation is run for 2 ps in the NPT ensemble, except for the high-pressure sweep, which consists of four snapshots chosen along a classical MD compression simulation and then simulated in AIMD in the NVE ensemble for 0.2 ps each. We use a plane-wave basis with kinetic energy cutoffs of 20 and 100 Hartrees for wavefunctions and charge densities, respectively, as recommended for use with the GBRV ultrasoft pseudopotential set [44], and converge the wavefunctions to an energy threshold of 10^-7 Hartrees at each time step. All subsequent simulations using the NNPs are performed in LAMMPS [27].

3. Benchmarks

We validate the trained NNPs in two ways: by comparing forces and energies to those generated via AIMD, and by comparing radial distribution functions (RDFs) to those generated by the FT potential at three different state points. This allows us to check whether the forces and energies learned by the neural network are sufficient to capture the relevant structures of each phase needed for the later phase-boundary calculations. Figure 1(a) shows the correlation and errors between the NNP and AIMD (DFT) energies and forces. The energy errors are all within 26 meV/atom, which is the thermal energy k_B T at ambient temperature. The resulting structures predicted by the NNPs are checked by running larger calculations with each potential on 4096-atom simulation cells of the solid and liquid phases at the state points displayed in Figure 1(b). In the rocksalt phase, NNP-PBE predicts a less dense phase than the other potentials (RDF peaks shifted slightly to the right), which is expected due to that functional's well-known underbinding in solids; the RDFs for the other NNPs overlap with FT. In the liquid phase, the RDFs for all four NNPs overlap with FT at both ambient and high pressures, indicating that each potential has learned the appropriate structure.

FIG. 1. (a) Correlation plots (above) and error distributions (below) between AIMD data and NNP for energies (left) and forces (right panels). (b) Comparison of partial radial pair correlation functions between Fumi-Tosi and NNPs trained to AIMD with each of the four XC functionals considered here, for solid (top), liquid at ambient pressure (middle) and high pressure (bottom panels). The predicted RDFs are overall very similar, except for overstructuring in the solid RDF from the Fumi-Tosi potential compared to the NNPs, and slight shifts of the NNP-PBE RDF peaks to the right relative to the others due to its underbinding. High pressure predominantly affects the liquid RDFs from the second coordination shell onwards.
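A minimal sketch of the RDF computation underlying this benchmark follows (brute force, single species, cubic periodic box; our own illustrative code rather than the analysis scripts used in this work).

```python
import numpy as np

def rdf(positions, box, r_max=8.0, nbins=160):
    """Radial distribution function g(r) for N particles of one species in a
    cubic periodic box of side `box` (same units as positions). Brute-force
    O(N^2) sketch; production analyses use neighbor lists and partial RDFs."""
    n = len(positions)
    d = positions[:, None, :] - positions[None, :, :]
    d -= box * np.round(d / box)                      # minimum-image convention
    r = np.linalg.norm(d, axis=-1)[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, r_max))
    mid = 0.5 * (edges[:-1] + edges[1:])
    shell = 4.0 * np.pi * mid**2 * np.diff(edges)     # spherical shell volumes
    rho = n / box**3                                  # number density
    return mid, hist / (0.5 * n * rho * shell)        # normalize by ideal counts

rng = np.random.default_rng(2)
r_mid, g = rdf(rng.uniform(0.0, 20.0, size=(500, 3)), box=20.0)
print(g[:5])  # ~1 everywhere for an uncorrelated (ideal-gas) configuration
```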
4. Cross-validation strategy

We also implement a 3-fold cross-validation strategy to assess the impact of the training errors of each NNP on the final phase-boundary results presented later in this paper. For each exchange-correlation functional, we fit three more NNPs to 2/3 of the AIMD training data, excluding a different 1/3 each time, and restart training with different randomly initialized weights in the network. The different subsets are selected to evenly span the range of configurations in the training data; random sampling is not used to select the subsets, so that no subset loses information in the training data needed to maintain a stable potential. All subsequent calculations for the phase boundaries use four NNPs for each functional: the three NNPs using 2/3 of the training data, and one NNP using all the training data. Subsequent estimates along the phase boundary are reported as the mean of the predictions made by these four NNPs, and error bars are reported as the standard deviation of these predictions.

III. PHASE COEXISTENCE APPROACH

A. Direct Interface Coexistence

The most commonly used technique for estimating solid-liquid coexistence in MD simulations is to directly set up an interface between a crystalline configuration and a liquid configuration and allow the system to equilibrate at a specified state point [45]. If the state point is far from the phase boundary, the thermodynamically favorable phase grows at the expense of the less favorable one, whereas close to the phase boundary, the interface does not move appreciably.

We test the direct interface coexistence method as an initial benchmark for the ambient-pressure melting temperature of the FT potential and the NNP trained to the PBE functional (hereafter referred to as NNP-PBE). For each direct interface simulation, we start with a large orthorhombic supercell of NaCl on the order of 10^4 atoms and convert the middle region to a liquid by running a high-temperature (3000 K) NVT simulation for 30 ps, while the remainder of the system is excluded from time-integration during this initial melt. We subsequently reset the velocities of the entire system, and run NPT simulations to equilibrate the system at each candidate temperature at ambient pressure. We monitor the fraction of atoms in the crystalline phase as a function of time with Steinhardt's q6 order parameter [46], and repeat this at several candidate melting temperatures. Figure 2(a) displays a representative snapshot of the section of the system containing the interface, and the fraction of atoms in the solid phase as a function of time at different temperatures for the FT potential.

FIG. 2. (a) Left: a snapshot of a representative interface coexistence simulation, here with the interface set up along the (100) plane. Right: the fraction of atoms in the rocksalt phase as a function of simulation time for a simulation set up along the (100) interface using the Fumi-Tosi potential. (b) The rate of change of the fraction of atoms in the rocksalt phase as a function of temperature along the (100), (110) and (111) interfaces respectively, for the Fumi-Tosi potential. (c) The rate of change of the fraction of atoms in the rocksalt phase for the NNP-PBE potential. A coexistence temperature can be coarsely estimated from the zero-crossing of such curves; however, the resulting estimate is inherently a feature of the surface set up in the simulation.

An inherent drawback of this method is that any assessment of a coexisting state point depends upon the orientation of the interface set up between the solid phase and the liquid phase. We repeat these simulations with solid surfaces oriented along the (100), (110) and (111) facets, and track the rate of change of the solid fraction as a function of temperature for each, displayed in Figures 2(b) and (c) for the Fumi-Tosi and NNP-PBE potentials, respectively. While rough values of T_m can be estimated at 1070 ± 30 K and 875 ± 40 K for the FT and NNP-PBE potentials respectively, we see in Figure 2(b) that there is a qualitative difference in behaviour for each facet, and hence that such a method has limitations for estimating coexistence of the bulk of a material with high accuracy. Moreover, the velocity of the interface is essentially a product of a mobility term and a driving force, and this driving force is proportional to |T − T_m|. Close to the melting temperature, the velocity of the interface may be too slow to observe in a tractable simulation on the order of nanoseconds [22].
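The coarse T_m estimate from these interface simulations amounts to locating the zero-crossing of the growth rate versus temperature; the following is a minimal sketch with mock data (all numbers illustrative, not from this work).

```python
import numpy as np

def tm_from_growth_rates(T, dfdt):
    """Estimate a coexistence temperature from the zero-crossing of the
    solid-fraction growth rate d(f_solid)/dt vs temperature, via a linear
    fit. In practice d(f)/dt comes from fitting the q6-based solid fraction
    over each NPT trajectory."""
    slope, intercept = np.polyfit(T, dfdt, 1)
    return -intercept / slope

T = np.array([1000.0, 1040.0, 1080.0, 1120.0])       # candidate temperatures
dfdt = np.array([0.012, 0.005, -0.002, -0.010])      # mock growth rates (1/ns)
print(tm_from_growth_rates(T, dfdt))                 # ~1070 K for this mock data
```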
B. Free Energies Through Thermodynamic Integration

Predicting phase coexistence from the free energy difference between the phases as a function of thermodynamic variables is a more robust method, and can be achieved with significantly smaller systems than those necessary in the direct interface coexistence simulations above [47]. Most commonly, the free energy difference between two systems (or states) with different interaction potentials U can be obtained in molecular dynamics via thermodynamic integration (TI) [23], which involves the construction of a reversible pathway between the two states. In the canonical ensemble, we can obtain the Helmholtz free energy difference as

$$\Delta A_{TI} = \int_0^1 d\lambda \left\langle \frac{\partial U}{\partial \lambda} \right\rangle. \tag{2}$$

Here, λ is a parameter that continuously changes the interaction potential from U(λ = 0) at the starting point to U(λ = 1) at the endpoint of the pathway, and the average ⟨∂U/∂λ⟩ is calculated in a canonical ensemble corresponding to each intermediate λ. The key requirement is that the path is reversible, so that the ensemble averages are continuous with respect to λ; reversibility is maintained by ensuring that the system is in equilibrium at each λ point. The resulting ensemble averages ⟨∂U/∂λ⟩ depend upon the way λ is introduced into the interatomic potential energy function. Since the free energy is a state variable, it does not matter whether the pathway or the intermediate states are physically realistic; only the reversibility of the path is important. Such simulations are often referred to as alchemical simulations in the community.

We can employ a "pseudosupercritical" pathway [10] to obtain the free energy difference between the solid and liquid phase at a single state point (P, T) for a given interatomic potential. We can also use TI to transform from relatively expensive NNPs to cheaper additive pairwise potentials such as the FT potential at the start and end of said pathway. This allows us to obtain final results that depend only on the NNP, even though most of the computation over the pathway is performed with pair potentials, as long as the path remains reversible. Once an initial coexistence point is found, the phase boundary can be extended in (P, T) space by integration of the Clausius-Clapeyron equation [48] using data from simulations, for which numerous techniques are available in the literature. Our overall scheme for predicting the first-principles solid-liquid phase boundary of NaCl using NNPs (trained to DFT) can thus be broken down into three major steps: (1) determine the melting point of the Fumi-Tosi potential via a pseudosupercritical TI pathway; (2) convert it to the NNP melting point through a thermodynamic cycle connecting the two potentials in each phase; and (3) extend the phase boundary in (P, T) space using the Clausius-Clapeyron equation. We detail each of these steps in the following sections, and note that the second and third steps are repeated for NNPs trained to different DFT functionals, and with different training sets for the cross-validation and error-estimation strategy discussed in Section II B 4.
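Numerically, Eq. (2) reduces to a quadrature over the sampled ensemble averages; a minimal sketch assuming trapezoidal integration over evenly spaced λ points (mock data, our own code):

```python
import numpy as np

def ti_free_energy(lams, dudl_means):
    """Helmholtz free-energy difference from Eq. (2) by trapezoidal quadrature
    over ensemble averages <dU/dlambda> collected at each lambda point."""
    return np.trapz(dudl_means, lams)

# Mock data: <dU/dlambda> sampled at 11 evenly spaced lambda points,
# e.g. from equilibrated NVT runs at each lambda.
lams = np.linspace(0.0, 1.0, 11)
dudl = 2.0 - 1.5 * lams
print(ti_free_energy(lams, dudl))  # integral of (2 - 1.5*lam) over [0,1] = 1.25
```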
1. Pseudosupercritical Pathway for Fumi-Tosi T_m

The melting temperature of NaCl at ambient pressure has been predicted previously for the FT potential using different TI approaches, including the pseudosupercritical pathway proposed by Eike et al., which yielded T_m^FT = (1089 ± 8) K [10], and an approach proposed by Anwar et al., involving separate pathways connecting the solid to a harmonic crystal and the liquid to an ideal gas, which yielded T_m^FT = (1064 ± 14) K [11]. The former approach originally computed the solid-liquid free energy difference ∆G_sl at a single guessed melting point, and used analytical corrections based on the solid and liquid equations of state to obtain a correction factor to predict the melting point [10]. Since all of our subsequent, more expensive NNP steps depend on it, we adapt this pathway and make it more robust by running it independently at multiple temperatures to obtain ∆G_sl(T), extracting T_m from its zero-crossing; this reduces the possibility of systematic errors in the analytic corrections and of convergence/ergodicity issues in individual calculations.

The reversible pathway from liquid to solid at a single state point consists of 4 steps, displayed in Figure 3(a):

1. Deform the liquid from its equilibrium volume to the equilibrium volume of the solid at the same (P, T), with free energy $\Delta A_{deform} = -\int_{V_L}^{V_S} P\, dV$.

2. Scale down the Fumi-Tosi interaction potential U_FT to ηU_FT, with free energy given by Eq. (2) applied to U(λ) = (1 − λ)U_FT + λ(ηU_FT). Using η = 0.1, this transforms the ionic liquid to a weakly interacting liquid, amenable to the next step of transformation to the solid's structure.

3. Switch on a tethering potential U_tether, consisting of an attractive Gaussian potential −A exp(−Br²) with A = 2.0 eV and B = 1.1 Å⁻² at each crystal site, interacting with the corresponding species of atoms. This path has net potential U(λ) = (1 − λ)(ηU_FT) + λ(ηU_FT + U_tether), with free energy given by Eq. (2), and transforms the weakly interacting liquid to an Einstein solid.

4. Restore the original Fumi-Tosi potential and simultaneously switch off the tethering potential. This path has net potential U(λ) = (1 − λ)(ηU_FT + U_tether) + λU_FT, with free energy given by Eq. (2), and transforms the Einstein solid to the ionic crystal.

FIG. 3. (a) Thermodynamic integration pathway to compute the solid-liquid free energy difference at a single state point. In the first step, an equilibrium ionic liquid is compressed to the corresponding equilibrium volume occupied by the crystal; in the second step, the ionic interactions are scaled down; in the third step, a tethering potential is switched on at each of the crystal lattice sites; in the fourth step, the tethering potential is switched off and the ionic interactions are scaled back to their full values. (b) Representative Helmholtz free-energy curves for each of the four steps along the pathway, for a single run at T = 1140 K. (c) Gibbs free energy difference ∆G_sl(T) obtained from repeating this pathway for several temperatures. To avoid assuming a specific polynomial form, we fit an ensemble of kernel ridge regression models to the data with resampling, and find the zero-crossing to get the melting point with a 95% confidence interval, T_m^FT = 1060.8 ± 5.5 K.

Adding the free energies from these four steps yields the net Helmholtz free energy difference between the solid and the liquid, ∆A_sl. We can then calculate the Gibbs free energy difference ∆G_sl = ∆A_sl + P∆V_sl, where ∆V_sl is the corresponding change in volume at this state point (P, T). We perform each of these steps in a cell with 256 Na-Cl ion pairs, in the NVT ensemble using the Nose-Hoover thermostat in LAMMPS. The simulations use a time step of 1 fs, and converged within 25 ps for each of 50 λ values in steps 2 and 4, and within 50 ps for each λ in step 3 above. The final configuration at each λ point is used as the initial configuration for the simulation at the next λ point, to ensure a smooth transformation along the pathway.
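For the linear λ-mixing used in steps 2-4, the TI integrand simplifies to an energy difference, and ∆G_sl is assembled from the per-step free energies; a minimal sketch with mock numbers (our own notation) follows.

```python
import numpy as np

def ti_dudl_linear_mix(U0_samples, U1_samples):
    """For a linear mixing U(lam) = (1-lam)*U0 + lam*U1 (steps 2-4 above),
    dU/dlam = U1 - U0, so the TI integrand at a given lambda is just the
    ensemble average <U1 - U0> over snapshots equilibrated at that lambda."""
    return float(np.mean(np.asarray(U1_samples) - np.asarray(U0_samples)))

def gibbs_difference(dA_steps, P, dV_sl):
    """Assemble dG_sl = sum of step Helmholtz free energies + P * dV_sl;
    all quantities must be in consistent units."""
    return sum(dA_steps) + P * dV_sl

# Mock per-cell numbers: four step free energies (eV), P ~ 1 bar expressed
# in eV/A^3 (1 bar ~ 6.24e-7 eV/A^3), and dV_sl in A^3.
print(gibbs_difference([-0.40, 1.30, -2.10, 1.05], P=6.24e-7, dV_sl=500.0))
```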
We repeat this entire process to compute ∆G_sl(T) for several temperatures ranging from 1030 K to 1140 K, and fit an ensemble of kernel ridge models to different resamplings of the data to extract the zero-crossing with an error estimate, as displayed in Figure 3(c). We thereby estimate the Fumi-Tosi melting point, T_m^FT = 1060.8 ± 5.5 K, which is consistent with the previous estimate from Ref. 11, but slightly lower than the one from Ref. 10. We use this ambient-pressure T_m^FT as the starting point to determine the NNP melting temperatures.

2. Thermodynamic Cycle to Obtain NNP T_m

Once we have an estimate for the Fumi-Tosi melting point, we can use TI to obtain an estimate for the NNP melting point. We could adapt the approach of the previous section by simply adding a bulk transformation between the NNP and the Fumi-Tosi potential at the start and end of the pseudosupercritical pathway (Figure 4(a)) to obtain ∆G_sl^NNP(T); however, this would involve running multiple equilibrium simulations with the NNP at intermediate λ points at every temperature. Typical NNP simulations are on the order of 100 times slower than FT simulations, so we propose an alternative approach which converges to T_m^NNP more rapidly. Starting from an initial guessed value for T_m^NNP, we run the following thermodynamic cycle separately for each phase:

- Convert the NNP to the Fumi-Tosi potential through TI, with ∆A computed using Eq. (2) applied to U(λ) = (1 − λ)U_NNP + λU_FT. At fixed volume, this changes the pressure from P at NNP equilibrium to P′ at FT equilibrium. Consequently, this step yields ∆G = ∆A + V(P′ − P).

- Change the pressure (at fixed T) of the Fumi-Tosi solid/liquid from P′ back to P.

- Change the temperature (at fixed P) of the Fumi-Tosi solid/liquid to T_m^FT, where ∆G_sl^FT = 0 by definition.

See Figure 4(b) for an illustration of this cycle. Adding these three steps together, we obtain ∆G_sl for the NNP at the guessed temperature. We can subsequently calculate a correction factor to update the guess for T_m^NNP using

$$\Delta T_m = -\frac{\Delta G_{sl}^{NNP}}{\partial \Delta G_{sl}^{NNP} / \partial T_m} \approx \frac{T_m\, \Delta G_{sl}^{NNP}}{\Delta H_{sl}^{NNP}}, \tag{3}$$

because ∂G/∂T = −S and ∆S_sl = ∆H_sl/T_m at the true melting point, and the latter remains approximately true close to the melting point. We find that for the NNPs trained in this study, this procedure converges to a 4 K tolerance within at most 5 such steps. Figure 4(c) shows a representative dU/dλ curve obtained for the NNP-to-FT connection with ten λ points. Note the near-perfect linearity of dU/dλ with λ, indicating that this TI can be performed with very few λ points, possibly even with just three λ points at 0, 0.5 and 1. Once again, this indicates the power of the present approach to keep most of the computation at the cheaper classical-potential level, requiring very few calculations with the NNPs. The main requirement is that the classical potential is just accurate enough to predict liquid and solid phases that remain stable through the TI paths shown above.
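A minimal sketch of the iterative update of Eq. (3), with a mock ∆G_sl(T) model standing in for the thermodynamic cycle of Figure 4(b) (callables, names and numbers are ours):

```python
def converge_tm(tm_guess, dG_sl, dH_sl, tol=4.0, max_iter=5):
    """Iterate the update of Eq. (3): dT = T * dG_sl / dH_sl.
    `dG_sl(T)` and `dH_sl(T)` are callables returning the solid-liquid Gibbs
    free energy and enthalpy differences (same units) at temperature T;
    in the full workflow they would wrap the cycle of Fig. 4(b)."""
    T = tm_guess
    for _ in range(max_iter):
        dT = T * dG_sl(T) / dH_sl(T)
        T += dT
        if abs(dT) < tol:
            break
    return T

# Mock model: dG_sl vanishes at 900 K with dH_sl ~ 28 kJ/mol; the loop
# converges to ~900 K within the 5-iteration budget quoted above.
print(converge_tm(1060.0,
                  lambda T: 28.0 * (900.0 - T) / 900.0,
                  lambda T: 28.0))
```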
3. Extension of Phase Boundary using the Clausius-Clapeyron Equation

Once we have an initial point of solid-liquid coexistence (P, T), we can numerically integrate the Clausius-Clapeyron equation [48],

$$\frac{dP}{dT} = \frac{\Delta H_{sl}}{T\, \Delta V_{sl}}, \tag{4}$$

to find the entire coexistence line in P-T space for any interatomic potential. Here, the solid-liquid differences in enthalpy ∆H_sl and molar volume ∆V_sl can be obtained directly from NPT simulations of both phases at a known coexisting state point (P, T). This allows an initial estimate for the melting temperature at pressure P′ = P + ∆P of T′ = T + ∆P/(dP/dT) (we use a ∆P of 1000 bars in the present work). We then converge from this straight-line approximation to find the point where ∆G_sl = 0, using a method very similar to the calculation of the NNP melting point in the previous section: we run a compression step from P → P′ at constant T, and then iteratively run heating steps from T → T′ at constant P′ until convergence, using Equation (3) at each iteration to obtain the correction factors. Note that these are essentially the last two steps of the thermodynamic cycle in Figure 4(b). We find that this approach converges within 3 iterations for each step in pressure, with a convergence criterion of |∆T| < 4 K, for all the interaction potentials used in this study. This approach is closely related to the coexistence-line free-energy-difference integration method [12], but distinct from Gibbs-Duhem integration [49]; the present approach keeps each molecular dynamics simulation as an NVT or NPT simulation at a single state point, for robustness and ease of applicability to both classical potentials and NNPs.

FIG. 4. (a) The NNP melting point can in principle be found by performing a bulk transformation to and from an additive pairwise potential at the endpoints of the pseudosupercritical pathway. (b) The thermodynamic cycle we use to iteratively converge upon T_m^NNP: we make a guess for T_m^NNP, use this cycle to compute ∆G_sl^NNP at that state point, and subsequently update the guess using Equation (3). This is a faster way to obtain T_m^NNP than the method in (a), since it converges within a maximum of 5 iterations, whereas the pathway above to obtain ∆G_sl^NNP(T) directly would involve running multiple expensive NNP simulations at various λ points at every scanned temperature. (c) Representative dU/dλ variation for the TI step involving bulk transformation from the NNP to the Fumi-Tosi potential; the nearly linear variation indicates that very few λ points are needed to converge the free energy change.
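A minimal sketch of marching the coexistence line with Eq. (4); the callables and constant mock values stand in for the NPT-simulation inputs, and the Eq. (3) refinement of each step is omitted.

```python
def trace_boundary(P0, T0, dH_sl, dV_sl, dP=1.0e8, n_steps=5):
    """March the coexistence line by Eq. (4): at each known point (P, T),
    take T' = T + dP / (dP/dT) with dP/dT = dH_sl / (T * dV_sl); in the full
    workflow each T' is then refined with the Eq. (3) iterations.
    `dH_sl`/`dV_sl` are callables returning the transition enthalpy and
    volume change at (P, T); SI units are assumed here (J/mol, m^3/mol, Pa)."""
    pts = [(P0, T0)]
    P, T = P0, T0
    for _ in range(n_steps):
        slope = dH_sl(P, T) / (T * dV_sl(P, T))   # dP/dT from Eq. (4)
        P, T = P + dP, T + dP / slope
        pts.append((P, T))
    return pts

# Mock constants: dH_sl = 28 kJ/mol, dV_sl = 5.6e-6 m^3/mol, dP = 1000 bar.
for P, T in trace_boundary(1.0e5, 1074.0,
                           lambda P, T: 2.8e4,
                           lambda P, T: 5.6e-6):
    print(f"{P / 1e5:9.0f} bar  {T:7.1f} K")
```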
IV. PHASE BOUNDARY RESULTS

The techniques developed in the previous section allow mapping of the P-T solid-liquid coexistence curve for any interatomic potential, including classical potentials such as the Fumi-Tosi potential for NaCl and machine-learned potentials, including NNPs. Figure 5 compares the NaCl phase boundaries predicted by NNPs trained to four different DFT XC functionals against experimental measurements and the Fumi-Tosi classical-potential predictions.

First note that while the Fumi-Tosi potential is accurate for the melting point at ambient pressure compared to experiment, it deviates from experiment at higher pressures, consistent with previous classical potential simulations [10]. Even the slope dP/dT of the coexistence line is incorrect near ambient pressure, indicating that the error stems from either the predicted enthalpy difference or the molar volume difference between the phases, as indicated by the Clausius-Clapeyron equation. Table III shows that the Fumi-Tosi potential is reasonably accurate for the enthalpy difference and solid volume, but overestimates the liquid volume and thereby results in a smaller dP/dT than experiment.

The NNP-PBE potential leads to a consistently lower melting point than experiment for all pressures, as seen in Figure 5(a). This is expected given the tendency of PBE to underestimate binding in solids generally. Specifically, for NaCl, PBE predicts an almost 2% larger lattice constant for the crystal than experiment, and underestimates the atomization energy by 6% (Table III), leading to the ~15% underestimation of the melting point. The PBEsol functional is a reparameterization of PBE which restores the correct gradient expansion of the correlation energy, generally improving performance for solids [39]. This fixes the lattice constant of the crystal (< 0.2% error), but the atomization energy is still underestimated by 3%. Correspondingly, the NNP-PBEsol melting point predictions shown in Figure 5(b) are slightly improved compared to the PBE case, but are still substantially lower than experiment at all pressures.

Only NNPs trained to DFT that includes dispersion corrections predict melting points that agree reasonably with experiment, shown in Figures 5(c) and (d). This was also recently pointed out for the ambient-pressure melting point in a previous work [21]. The dispersion-corrected PBE D2 variant [40] has the lowest error for this ambient-pressure melting point, but the PBE D3 variant [41] exhibits better accuracy overall for the entire range of pressures considered in this study.

Table III indicates that both PBE D2 and PBE D3 are actually less accurate than PBEsol for the lattice constant at lower temperatures; they are only more accurate for the atomization energy. However, both these functionals are more accurate for the solid and liquid volumes near the melting point, so the relative underbinding of these dispersion-corrected functionals for the perfect crystal becomes less important for solids at higher temperatures and for the liquid phase. PBE D3 has both the closest molar volumes and enthalpy difference compared to experiment amongst all the functionals considered here (including the Fumi-Tosi potential), correlating with its best accuracy for the phase boundary across the pressure-temperature space.

TABLE III. Comparison of lattice constant a and atomization energy Ea of NaCl crystals at ambient temperature, as well as molar volumes and solid-liquid enthalpy difference at the respective melting points, predicted by different DFT exchange-correlation functionals and classical potentials against the experimental values [50, 51]. Values for dP/dT are reported at ambient pressure.

            a (Å)   Ea (eV)   Vs (L/mol)   Vl (L/mol)   ∆H_sl (kJ/mol)   dP/dT (bar/K)
Experiment  5.60    6.68      32.0 [52]    37.6 [52]    28.0 [53]        4689
Fumi-Tosi   5.62    6.53*     31.8         41.2         28.5             3043
PBE         5.70    6.28      32.8         44.4         28.5             2438
PBEsol      5.61    6.47      30.2         41.2         28.2             2550
PBE D2      5.66    6.75      30.3         36.9         33.0             4944
PBE D3      5.66    6.66      31.7         37.9         28.8             4653

*Predicted energy of splitting the crystal into ions, combined with the experimental Na ionization energy and Cl electron affinity, since this classical potential can only describe ions and not atoms.

FIG. 5. Predicted NaCl solid-liquid phase boundaries for NNPs trained to four different DFT functionals, compared to experimental results and Fumi-Tosi predictions. Error bars shown for the NNP predictions are from cross-validation using NNPs trained to different subsets of the DFT data for each case. The NNPs trained to PBE and PBEsol without explicit dispersion corrections strongly underestimate the melting temperature at all pressures, while the PBE D2 and PBE D3 dispersion-corrected results agree better with experiment than the empirical Fumi-Tosi potential.

V. CONCLUSION

We introduced a computational approach to efficiently predict ab-initio-level solid-liquid phase boundaries in molten salts using a combination of machine-learned potentials and thermodynamic integration. We used NNPs trained to DFT with different exchange-correlation functionals in order to compare the accuracy of different DFT methods for the thermodynamics of molten salts, with error bars on all predictions using ensembles of NNPs trained to different ab initio MD data. Most importantly, we tailored the thermodynamic integration approach to carry out most of the simulations using low-cost classical potentials, with NNPs used only in the final connection. Critically, once this approach is converged, the final result depends only on the NNP interaction potential, even though we used the lower-level classical potential in all intermediate steps of the path connecting the solid and liquid. Overall, this approach makes it much more tractable to explore molten salt equilibria with accuracy ultimately limited by the first-principles methods underlying the NNPs.

Specifically, for the melting of NaCl, we show that treatment of long-range dispersion interactions in the DFT exchange-correlation functional is critical, with PBE D3 yielding the overall highest accuracy for solid-liquid coexistence across a wide range of pressures. We show that the atomization energy of the crystal is the best proxy for the accuracy of melting-point predictions, while estimates of under/overbinding based on lattice constants do not correlate as well: PBEsol yields the best lattice constant, but significantly underestimates melting points at all pressures.

The overall approach described here was prototyped using NaCl as a model system, but is applicable for any single-component system with an interatomic potential that is accurate enough to exhibit a stable solid and liquid phase in the relevant temperature range. Importantly, the thermodynamic integration approach removes dependence of the final results on this potential and allows prediction via NNPs using any underlying first-principles method. Future work can extend such free-energy methods combining NNPs and thermodynamic integration to predict phase diagrams of binary systems and solubility limits from first principles.
ACKNOWLEDGEMENTS

This work was supported by funding from the DOE Office of Nuclear Energy's Nuclear Energy University Program (NEUP) under Award # DE-NE0008946.

REFERENCES

1. C. Le Brun, Journal of Nuclear Materials 360, 1 (2007).
2. H. Zhang, J. Baeyens, J. Degreve, and G. Caceres, Renewable and Sustainable Energy Reviews 22 (2013).
3. W. Grimes, Nuclear Applications and Technology 8, 137 (1970).
4. A. Kroupa, Computational Materials Science 66, 3 (2013).
5. N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller, Journal of Chemical Physics 21 (1953).
6. Alder and Wainwright, Journal of Chemical Physics 27 (1957).
7. L. Verlet, Physical Review 159 (1967).
8. P. Hohenberg and W. Kohn, Physical Review 136, B864 (1964).
9. W. Kohn and L. J. Sham, Physical Review 140, A1133 (1965).
10. D. M. Eike, J. F. Brennecke, and E. J. Maginn, The Journal of Chemical Physics 122, 014115 (2005).
11. J. Anwar, D. Frenkel, and M. G. Noro, The Journal of Chemical Physics 118, 728 (2003).
12. E. J. Meijer and F. El Azhar, The Journal of Chemical Physics 106, 4678 (1997).
13. J. Behler, The Journal of Chemical Physics 145, 170901 (2016).
14. R. Lot, F. Pellegrini, Y. Shaidu, and E. Kucukbenli, Computer Physics Communications 256, 107402 (2020).
15. K. Lee, D. Yoo, W. Jeong, and S. Han, Computer Physics Communications 242, 95 (2019).
16. H. Wang, L. Zhang, J. Han, and W. E, Computer Physics Communications 228, 178 (2018).
17. S.-C. Lee, Y. Zhai, Z. Li, N. P. Walter, M. Rose, B. J. Heuser, and Y. Z., The Journal of Physical Chemistry B, doi:10.1021/acs.jpcb.1c05608 (2021).
[18] W. Liang, G. Lu, and J. Yu, ACS Applied Materials & Interfaces 13, 4034 (2021).
[19] W. Liang, G. Lu, and J. Yu, Journal of Materials Science & Technology 75, 78 (2021).
[20] T. Xu, X. Li, L. Guo, F. Wang, and Z. Tang, Solar Energy 209, 568 (2020).
[21] Q.-J. Li, E. Küçükbenli, S. Lam, B. Khaykovich, E. Kaxiras, and J. Li, Cell Reports Physical Science 2, 100359 (2021).
[22] D. Frenkel and B. Smit, Understanding Molecular Simulation: From Algorithms to Applications, 2nd ed. (Academic Press, 2002).
[23] J. G. Kirkwood, The Journal of Chemical Physics 3, 300 (1935).
[24] I. A. Kruglov, A. Yanilkin, A. R. Oganov, and P. Korotaev, Physical Review B 100, 174104 (2019).
[25] R. Jinnouchi, F. Karsai, and G. Kresse, Physical Review B 101, 060201 (2020).
[26] S. Fukushima, E. Ushijima, H. Kumazoe, A. Koura, F. Shimojo, K. Shimamura, M. Misawa, R. K. Kalia, A. Nakano, and P. Vashishta, Physical Review B 100, 214108 (2019).
[27] S. Plimpton, Journal of Computational Physics 117, 1 (1995).
[28] F. G. Fumi and M. P. Tosi, Journal of Physics and Chemistry of Solids 25, 31 (1964).
[29] C. J. Fennell and J. D. Gezelter, The Journal of Chemical Physics 124, 234104 (2006).
[30] J. Lu, S. Yang, G. Pan, J. Ding, S. Liu, and W. Wang, Energies 14, 746 (2021).
[31] D. Rumelhart, G. Hinton, and R. Williams, Nature 323, 533 (1986).
[32] J. Behler, The Journal of Chemical Physics 134, 074106 (2011).
[33] A. Bartok, R. Kondor, and G. Csanyi, Physical Review B 87 (2013).
[34] A. Bartok, M. Payne, R. Kondor, and G. Csanyi, Physical Review Letters 104 (2010).
[35] M. Rupp, A. Tkatchenko, K.-R. Muller, and O. A. von Lilienfeld, Physical Review Letters 108 (2012).
[36] R. Drautz, Physical Review B 99 (2019).
[37] W. Jeong, K. Lee, D. Yoo, D. Lee, and S. Han, The Journal of Physical Chemistry C 122, 22790 (2018).
[38] J. P. Perdew, K. Burke, and M. Ernzerhof, Physical Review Letters 77, 3865 (1996).
[39] J. P. Perdew, A. Ruzsinszky, G. I. Csonka, O. A. Vydrov, G. E. Scuseria, L. A. Constantin, X. Zhou, and K. Burke, Physical Review Letters 100, 136406 (2008).
[40] S. Grimme, Journal of Computational Chemistry 27, 1787 (2006).
[41] S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, The Journal of Chemical Physics 132, 154104 (2010).
[42] S. Roy, M. Brehm, S. Sharma, F. Wu, D. S. Maltsev, P. Halstenberg, L. C. Gallington, S. M. Mahurin, S. Dai, A. S. Ivanov, C. J. Margulis, and V. S. Bryantsev, The Journal of Physical Chemistry B 125, 5971 (2021).
[43] R. Sundararaman, K. Letchworth-Weaver, K. A. Schwarz, D. Gunceler, Y. Ozhabes, and T. A. Arias, SoftwareX 6, 278 (2017).
[44] K. F. Garrity, J. W. Bennett, K. M. Rabe, and D. Vanderbilt, Computational Materials Science 81, 446 (2014).
[45] Allen and Tildesley, Computer Simulations of Liquids, 2nd ed. (Oxford University Press, 2017).
[46] P. J. Steinhardt, D. R. Nelson, and M. Ronchetti, Physical Review B 28, 784 (1983).
[47] C. Chipot and A. Pohorille, Free Energy Calculations: Theory and Applications in Chemistry and Biology, Springer Series in Chemical Physics, Vol. 86 (Springer, 2007).
[48] R. Clausius, Annalen der Physik 155, 500 (1850).
[49] D. Kofke, Molecular Physics 78, 1331 (1992).
[50] D. E. Gray, ed., American Institute of Physics Handbook, 3rd ed. (McGraw-Hill, New York, 1972).
[51] P. Brown, A level Born Haber Cycle Calculations: sodium chloride, magnesium chloride, magnesium oxide, sodium oxide enthalpy level diagrams, KS5 GCE chemistry revision notes (2000).
[52] A. Kirshenbaum, J. Cahill, P. McGonigal, and A. Grosse, Journal of Inorganic and Nuclear Chemistry 24, 1287 (1962).
[53] F. Ullmann, W. Gerhartz, Y. S. Yamamoto, F. T. Campbell, R. Pfefferkorn, and J. F. Rounsaville, Ullmann's Encyclopedia of Industrial Chemistry, 5th ed. (VCH, Weinheim, Federal Republic of Germany, 1985).
[]
[ "Evaluating robustness of support vector machines with the Lagrangian dual approach", "Evaluating robustness of support vector machines with the Lagrangian dual approach" ]
[ "Yuting Liu \nFaculty of Electronic Information and Electrical Engineering\nDalian University of Technology\n2 Linggong Road116024Dalian, LiaoningChina\n", "Hong Gu \nFaculty of Electronic Information and Electrical Engineering\nDalian University of Technology\n2 Linggong Road116024Dalian, LiaoningChina\n", "Pan Qin \nFaculty of Electronic Information and Electrical Engineering\nDalian University of Technology\n2 Linggong Road116024Dalian, LiaoningChina\n" ]
[ "Faculty of Electronic Information and Electrical Engineering\nDalian University of Technology\n2 Linggong Road116024Dalian, LiaoningChina", "Faculty of Electronic Information and Electrical Engineering\nDalian University of Technology\n2 Linggong Road116024Dalian, LiaoningChina", "Faculty of Electronic Information and Electrical Engineering\nDalian University of Technology\n2 Linggong Road116024Dalian, LiaoningChina" ]
[]
Adversarial examples bring a considerable security threat to support vector machines (SVMs), especially those used in safety-critical applications. Thus, robustness verification is an essential issue for SVMs, which can provide provable robustness against various kinds of adversarial attacks. The evaluation results obtained through robustness verification can provide a safety guarantee for the use of SVMs. The existing verification method often does not perform well in verifying SVMs with nonlinear kernels. To this end, we propose a method to improve the verification performance for SVMs with nonlinear kernels. We first formalize the adversarial robustness evaluation of SVMs as an optimization problem. Then a lower bound of the original problem is obtained by solving the Lagrangian dual problem of the original problem. Finally, the adversarial robustness of SVMs is evaluated with respect to the lower bound. We evaluate the adversarial robustness of SVMs with linear and nonlinear kernels on the MNIST and Fashion-MNIST datasets. The experimental results show that the percentage of provable robustness obtained by our method on the test set is better than that of the state-of-the-art.
null
[ "https://export.arxiv.org/pdf/2306.02639v1.pdf" ]
259,075,300
2306.02639
cb0eaf77a4a437e252972f6b888e56a6c9d141d3
Evaluating robustness of support vector machines with the Lagrangian dual approach

Yuting Liu, Hong Gu, Pan Qin
Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, 2 Linggong Road, Dalian, Liaoning 116024, China

5 Jun 2023

Keywords: Support vector machines; Adversarial robustness; Robustness verification; Lagrangian duality; Subgradient method

Abstract: Adversarial examples bring a considerable security threat to support vector machines (SVMs), especially those used in safety-critical applications. Thus, robustness verification is an essential issue for SVMs, which can provide provable robustness against various kinds of adversarial attacks. The evaluation results obtained through robustness verification can provide a safety guarantee for the use of SVMs. The existing verification method often does not perform well in verifying SVMs with nonlinear kernels. To this end, we propose a method to improve the verification performance for SVMs with nonlinear kernels. We first formalize the adversarial robustness evaluation of SVMs as an optimization problem. Then a lower bound of the original problem is obtained by solving the Lagrangian dual problem of the original problem. Finally, the adversarial robustness of SVMs is evaluated with respect to the lower bound. We evaluate the adversarial robustness of SVMs with linear and nonlinear kernels on the MNIST and Fashion-MNIST datasets. The experimental results show that the percentage of provable robustness obtained by our method on the test set is better than that of the state-of-the-art.

1 Introduction

Machine learning has been widely used in safety-critical fields such as autonomous driving [1], medical care [2], and network security [3]. Adversarial examples, designed by adding subtle perturbations to the input to fool the model, bring a considerable security threat to well-trained models [4]. Adversarial examples degrade the model's prediction performance because the assumption that the training and test data are independently and identically distributed is broken [5]. In this case, prediction performance criteria on test datasets, such as accuracy, precision, and F1-score, cannot prove robustness for models. Thus, it is necessary to develop methods to evaluate the robustness of models to subtle perturbations of the input.

So far, many studies have evaluated the adversarial robustness of machine learning models, most of which are designed for artificial neural networks (ANNs). The concept of adversarial examples was initially proposed by Szegedy et al. while exploring the interpretability of ANNs [4]. Szegedy et al. found that adding specific perturbed image samples could easily fool deep neural networks. This finding has spurred research into evaluating the adversarial robustness of ANNs. There are two approaches to adversarial robustness evaluation. One approach focuses on adversarial attacks, such as the gradient-based methods [6-10], the score-based methods [11-14], and the decision-based methods [15-18]. Those studies use adversarial attacks to obtain the minimal perturbation added to sample features to evaluate the robustness of ANNs.
However, the research in [19, 20] has shown that robustness evaluation results obtained under a specific adversarial attack cannot usually provide robustness guarantees against other attacks. The other approach is robustness verification methods, which are developed from mixed integer linear programming [21-25], satisfiability modulo theories [26-30], duality in optimization theory [5, 31, 32], abstract interpretation [33-37], and bounding the local Lipschitz constant [38-40]. A robustness verification method investigates how the outputs of models change under a given range of perturbations. For this reason, robustness verification methods can provide provable robustness for ANNs against various kinds of adversarial attacks.

Note that the lack of adversarial robustness is not a problem unique to ANNs. Compared with ANNs [41], support vector machines (SVMs) have a simpler structure and a solid mathematical foundation, and are widely used in safety-critical fields [42]. At present, there are relatively few studies on evaluating the adversarial robustness of SVMs, and most of these studies focus on adversarial attacks, such as [43-45]. Thus, it is necessary to develop robustness verification methods to evaluate the adversarial robustness of SVMs. A robustness verification method named SAVer has been proposed by Ranzato et al. based on abstract interpretation [46], which can provide provable robustness for SVMs against various kinds of adversarial attacks. Specifically, SAVer uses an abstraction that combines interval domains [47] and reduced affine form domains [46] to provide provable robustness for SVMs. However, for evaluating the robustness of SVMs with nonlinear kernels, the abstract nonlinear operations applied by SAVer on the interval domain and the reduced affine form domain lead to a loss of computational accuracy. The experimental results in [48] show that SAVer often does not perform well in verifying SVMs with nonlinear kernels.

To this end, we propose a robustness verification method called SVM dual verifier (SDVer), based on the Lagrangian duality. We first formulate the optimization problems of SVMs with various kernels as feedforward neural network representations. Then, considering that the solution of the original problem is generally NP-hard, we transform the object to be solved from the original problem into its Lagrangian dual problem. By using the subgradient method to solve the Lagrangian dual problem, a lower bound of the original optimization problem is obtained. The adversarial robustness of SVMs is evaluated according to this lower bound. In this way, we summarize the adversarial robustness evaluation of SVMs as an optimization problem. Finally, we compare our method with the state-of-the-art SVM robustness verifier SAVer on the MNIST [49] and Fashion-MNIST [50] datasets. The results indicate that when the kernel function is linear, the percentage of provable robustness obtained by our method is consistent with that obtained by SAVer; when the kernel function is nonlinear, the percentage of provable robustness obtained by our method is better than that of SAVer.

Our main contributions are as follows:
- We propose SDVer to evaluate the adversarial robustness of SVMs. The method is based on the Lagrangian duality, taking into account the kernels used in SVMs.
- The method is built on the idea of robustness verification, which can provide provable robustness for SVMs against various kinds of adversarial attacks.
- With the proposed SDVer, we significantly improve the verification performance for SVMs with nonlinear kernels in experiments.

The rest of the paper is organized as follows. Section 2 introduces some preliminaries. Section 3 gives a detailed introduction to our proposed method for the adversarial robustness evaluation of SVMs. In Section 4, we conduct the experimental evaluation and analysis. Finally, Section 5 concludes the paper.

2 Preliminaries

2.1 Support vector machines

To elaborate on the robustness verification method for SVMs, we first give a brief introduction to SVMs. Given a training dataset D = {(x_i, y_i) | i = 1, ..., N} ⊆ R^n × {−1, +1}, training an SVM is to find a hyperplane equation f(x) that divides the training samples into different classes in the sample space. We denote the trained SVM binary classifier by ŷ = C_{+1,−1}(x), with ŷ being the prediction of y:

  ŷ = C_{+1,−1}(x) = sign(f(x)),   (1)
  f(x) = \sum_{i=1}^{m} α_i y_i κ(x, x_i) + b,   (2)

where α_i ∈ R and b ∈ R are the parameters obtained after training, {(x_i, y_i) | i = 1, 2, ..., m} ⊆ R^n × {−1, +1} are the support vectors, and κ(x_i, x_j): R^n × R^n → R is a kernel function. This paper mainly analyzes the adversarial robustness of SVMs for the most commonly used kernel functions, listed in Table 1.

Table 1 Most commonly used kernel functions

  Name                    Expression
  linear kernel           κ(x_i, x_j) = (x_i)^T x_j
  polynomial kernel       κ(x_i, x_j) = ((x_i)^T x_j + c)^d,  d ≥ 1
  sigmoid kernel          κ(x_i, x_j) = tanh(β (x_i)^T x_j + θ),  β > 0, θ < 0
  Gaussian kernel (RBF)   κ(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²)),  σ > 0

2.2 Robustness verification goal of SVMs

We first define the adversarial region to describe the robustness verification goal of SVMs. The adversarial region is often defined by an l∞-norm ball. Formally, taking the input x as the center of the ball and the perturbation δ > 0 as the radius, the adversarial region is denoted by P^∞_δ(x) = {x' ∈ R^n | ‖x − x'‖∞ ≤ δ} = {x' ∈ R^n | ∀i, x'_i ∈ [x_i − δ, x_i + δ]}, where x_i is the i-th element of x.

Then we define the robustness verification goal of SVMs in the adversarial region. Given a trained SVM model C_{+1,−1}(·), a test example (x, y), and a perturbation δ, if we can verify that C_{+1,−1}(x') = C_{+1,−1}(x) holds for all x' ∈ P^∞_δ(x), then the sample x is considered robust in its adversarial region. We formulate the robustness evaluation of SVMs as the optimization problem (3). If the optimal value of (3) is larger than 0, then the sample is robust in its adversarial region; otherwise, it is not.

  min_{x'} ŷ · f(x'),   (3a)
  s.t. f(x') = \sum_{i=1}^{m} α_i y_i κ(x', x_i) + b,   (3b)
       x' ∈ P^∞_δ(x).   (3c)

3 Methodology

3.1 Feedforward neural network representation of SVMs

Our method is developed under the framework of the verification method proposed in [5]. Thus, we formulate the optimization problem (3) as a feedforward neural network representation. Let h^l(x) denote the vector activation function for layer l, and h^l_k(x) denote the k-th element of h^l(x). When the kernel function is linear, (3) can be expressed as follows:

  min_{x'} ŷ · f(x'),   (4a)
  s.t. z^0 = (W^0)^T x',   (4b)
       x^1 = h^0(z^0),   (4c)
       f(x') = (w^1)^T x^1 + b^1,   (4d)
       x' ∈ P^∞_δ(x),   (4e)

where W^0 = (x_1, ..., x_m) ∈ R^{n×m}, h^0(x) = x ∈ R^m is the vector identity function, w^1 = (α_1 y_1, ..., α_m y_m)^T ∈ R^m, and b^1 = b ∈ R.
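As a quick consistency check, the single-hidden-layer form (4) can be compared against the direct kernel evaluation (2). In the sketch below, the support vectors, coefficients, and test point are randomly generated stand-ins used only for illustration.

```python
# Check that the network form (4) reproduces the linear-kernel decision function (2).
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 4
sv = rng.normal(size=(m, n))       # support vectors x_1, ..., x_m as rows
alpha_y = rng.normal(size=m)       # the products alpha_i * y_i
b = 0.3
x = rng.normal(size=n)             # a test point

# direct evaluation of Eq. (2) with the linear kernel k(x, x_i) = x_i^T x
f_direct = alpha_y @ (sv @ x) + b

# Eq. (4): z0 = W0^T x', identity activation, then f = w1^T x1 + b1
W0 = sv.T                          # W0 = (x_1, ..., x_m), shape (n, m)
z0 = W0.T @ x
x1 = z0                            # h0 is the identity
f_net = alpha_y @ x1 + b
assert np.isclose(f_direct, f_net)
```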
According to the classical formulation of ANNs [51], (4b)-(4d) can be viewed as a single-hidden-layer neural network with h^0(x) = x as its activation function. When the kernel function is polynomial, (3) can be expressed by (5):

  min_{x'} ŷ · f(x'),   (5a)
  s.t. z^0 = (W^0)^T x' + b^0,   (5b)
       x^1 = h^0(z^0),   (5c)
       f(x') = (w^1)^T x^1 + b^1,   (5d)
       x' ∈ P^∞_δ(x),   (5e)

where W^0 = (x_1, ..., x_m) ∈ R^{n×m}, b^0 = (c, ..., c)^T ∈ R^m, h^0(x) = x^d ∈ R^m (applied elementwise), w^1 = (α_1 y_1, ..., α_m y_m)^T ∈ R^m, and b^1 = b ∈ R. Similar to the linear kernel, (5b)-(5d) can be viewed as a single-hidden-layer neural network with h^0(x) = x^d as its activation function.

When the kernel function is sigmoid, (3) can be expressed by (6):

  min_{x'} ŷ · f(x'),   (6a)
  s.t. z^0 = (W^0)^T x' + b^0,   (6b)
       x^1 = h^0(z^0),   (6c)
       f(x') = (w^1)^T x^1 + b^1,   (6d)
       x' ∈ P^∞_δ(x),   (6e)

where W^0 = (βx_1, ..., βx_m) ∈ R^{n×m}, b^0 = (θ, ..., θ)^T ∈ R^m, h^0(x) = tanh(x) ∈ R^m, w^1 = (α_1 y_1, ..., α_m y_m)^T ∈ R^m, and b^1 = b ∈ R. Similar to the linear and polynomial kernels, (6b)-(6d) can be viewed as a single-hidden-layer neural network with h^0(x) = tanh(x) as its activation function.

When the kernel function is RBF, (3) can be represented as (7):

  min_{x'} ŷ · f(x'),   (7a)
  s.t. z^0 = (W^0)^T x' + b^0,   (7b)
       x^1 = h^0(z^0),   (7c)
       z^1 = (W^1)^T x^1,   (7d)
       x^2 = h^1(z^1),   (7e)
       f(x') = (w^2)^T x^2 + b^2,   (7f)
       x' ∈ P^∞_δ(x),   (7g)

where W^0 = (I_n, ..., I_n) ∈ R^{n×mn} with I_n ∈ R^{n×n} an identity matrix, b^0 = −(x_1^T, ..., x_m^T)^T ∈ R^{mn}, h^0(x) = x² ∈ R^{mn} (elementwise), h^1(x) = e^{−γx} ∈ R^m (elementwise), w^2 = (α_1 y_1, ..., α_m y_m)^T ∈ R^m, b^2 = b ∈ R, and W^1 ∈ R^{mn×m} is the block matrix

  W^1 = ( 1_n  0_n  ...  0_n
          0_n  1_n  ...  0_n
          ...  ...  ...  ...
          0_n  ...  0_n  1_n ),

with 1_n ∈ R^n the n-dimensional all-ones vector and 0_n ∈ R^n the n-dimensional all-zeros vector. Thus (7b)-(7f) can be viewed as a two-hidden-layer neural network, where h^0(x) = x² and h^1(x) = e^{−γx} are the activation functions of the first and second layers, respectively.

To obtain a more general feedforward neural network representation of (3), we set z^L = f(x') and x^0 = x', and denote the optimal value of the optimization problem by p*, which gives

  p* = min_{x^0} ŷ z^L,   (8a)
  s.t. x^{l+1} = h^l(z^l),  l = 0, 1, ..., L − 1,   (8b)
       z^l = W^l x^l + b^l,  l = 0, 1, ..., L,   (8c)
       x^0 ∈ P^∞_δ(x).   (8d)
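The same kind of check works for the two-hidden-layer RBF form (7). In the sketch below, the support vectors, coefficients, and the value of γ are arbitrary stand-ins for illustration; γ corresponds to 1/(2σ²) in Table 1.

```python
# Check that the two-hidden-layer form (7) reproduces the RBF decision function.
import numpy as np

rng = np.random.default_rng(1)
m, n, gamma = 3, 4, 0.7
sv = rng.normal(size=(m, n))                  # support vectors x_1, ..., x_m
alpha_y = rng.normal(size=m)
b = -0.2
x = rng.normal(size=n)

# direct evaluation with k(x, x_i) = exp(-gamma * ||x - x_i||^2)
f_direct = alpha_y @ np.exp(-gamma * ((x - sv) ** 2).sum(axis=1)) + b

# Eq. (7): z0 stacks the m difference vectors (W0 = (I_n, ..., I_n), b0 = -(x_1, ..., x_m)),
# h0 squares elementwise, W1 sums each block of n entries, h1 applies exp(-gamma * .)
z0 = np.concatenate([x - xi for xi in sv])    # shape (m*n,)
x1 = z0 ** 2                                  # h0
W1 = np.kron(np.eye(m), np.ones(n))           # shape (m, m*n), block row sums
z1 = W1 @ x1                                  # z1_k = ||x - x_k||^2
x2 = np.exp(-gamma * z1)                      # h1
f_net = alpha_y @ x2 + b
assert np.isclose(f_direct, f_net)
```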
3.2 Robustness verification based on the Lagrangian duality for SVMs

A general feedforward neural network representation of (3) was obtained in Section 3.1, as shown in (8). When the kernel function is sigmoid, the corresponding optimization problem is a type of sigmoid programming problem, and [52] has proved that solving sigmoid programming problems is NP-hard. Motivated by [5], we therefore develop a robustness verification method based on the Lagrangian duality to solve (8), as follows. The Lagrangian multipliers µ^l and λ^l are introduced to relax the equality constraints (8b) and (8c), which yields the Lagrangian dual problem of the original problem:

  L* = max_{µ,λ} L(µ, λ),
  L(µ, λ) = min_{z^l, x^l} ŷ(W^L x^L + b^L) + \sum_{l=0}^{L−1} (µ^l)^T (z^l − W^l x^l − b^l) + \sum_{l=0}^{L−1} (λ^l)^T (x^{l+1} − h^l(z^l)),
  s.t. \underline{z}^l ≤ z^l ≤ \overline{z}^l,  l = 0, 1, ..., L − 1,
       \underline{x}^l ≤ x^l ≤ \overline{x}^l,  l = 0, 1, ..., L,
       x^0 ∈ P^∞_δ(x),   (9)

where \underline{z}^l and \overline{z}^l are lower and upper bounds on z^l, and \underline{x}^l and \overline{x}^l are lower and upper bounds on x^l. The values of \underline{z}^l and \overline{z}^l can be calculated by (10a) and (10b), respectively:

  \underline{z}^l = W^l_+ \underline{x}^l + W^l_- \overline{x}^l + b^l,   (10a)
  \overline{z}^l = W^l_+ \overline{x}^l + W^l_- \underline{x}^l + b^l,   (10b)

where W^l_+ = max(W^l, 0) and W^l_- = min(W^l, 0). The values of \underline{x}^l and \overline{x}^l are calculated according to the specific expression of h, which depends on the choice of kernel in Table 1. When h is monotonically increasing, the bounds on each dimension of x are computed as in (11a) and (11b):

  \overline{x}^{l+1}_k = h^l_k(\overline{z}^l_k),   (11a)
  \underline{x}^{l+1}_k = h^l_k(\underline{z}^l_k).   (11b)

When h is monotonically decreasing, the bounds are calculated as in (12a) and (12b):

  \overline{x}^{l+1}_k = h^l_k(\underline{z}^l_k),   (12a)
  \underline{x}^{l+1}_k = h^l_k(\overline{z}^l_k).   (12b)

When h is nonmonotonic, the bounds on each dimension of x are computed by (13a) and (13b):

  \underline{x}^{l+1}_k = 0 if \underline{z}^l_k ≤ 0 ≤ \overline{z}^l_k, and min(h^l_k(\underline{z}^l_k), h^l_k(\overline{z}^l_k)) otherwise,   (13a)
  \overline{x}^{l+1}_k = max(h^l_k(\underline{z}^l_k), h^l_k(\overline{z}^l_k)).   (13b)

According to [53], the following inequality holds:

  L(µ, λ) ≤ L* ≤ p* ≤ ŷ z^L.   (14)

Under the definition of the robustness verification goal in Section 2.2, if p* is larger than 0, the SVM is robust in the adversarial region P^∞_δ(x). If there exist µ and λ with L(µ, λ) > 0, then p* must be positive by (14). Under the condition of fixed µ and λ, L(µ, λ) can be decomposed into the following three optimization problems:

  f_l(µ^l, λ^{l−1}) = min_{x^l ∈ [\underline{x}^l, \overline{x}^l]} (λ^{l−1} − W^l µ^l)^T x^l − (b^l)^T µ^l,  l = 1, ..., L,   (15a)
  f_l(µ^l, λ^l) = min_{z^l ∈ [\underline{z}^l, \overline{z}^l]} (µ^l)^T z^l − (λ^l)^T h^l(z^l),  l = 0, ..., L − 1,   (15b)
  f_0(µ^0) = min_{x^0 ∈ P^∞_δ(x)} −(W^0 µ^0)^T x^0 − (b^0)^T µ^0.   (15c)

Here (15a) can be solved in closed form by (16a); (15b) is essentially a collection of one-dimensional optimization problems, as shown in (16b), whose optimal solutions are easily obtained from the specific form of h; and (15c) is solved in the same way as (16a):

  f_l(µ^l, λ^{l−1}) = (λ^{l−1} − W^l µ^l)_+^T \underline{x}^l + (λ^{l−1} − W^l µ^l)_-^T \overline{x}^l − (b^l)^T µ^l,   (16a)
  f_{l,k}(µ^l_k, λ^l_k) = min_{z^l_k ∈ [\underline{z}^l_k, \overline{z}^l_k]} µ^l_k z^l_k − λ^l_k h^l_k(z^l_k).   (16b)

After solving (15a), (15b), and (15c), we use the subgradient method [54] to solve (17) and thus approximate the optimal value L* gradually:

  L(µ, λ) = \sum_{l=0}^{L−1} \sum_{k=1}^{n_l} f_{l,k}(µ^l_k, λ^l_k) + \sum_{l=1}^{L} f_l(µ^l, λ^{l−1}) + f_0(µ^0),   (17)

where n_l is the size of layer l. We combine the original subgradient method with the Adam algorithm [55] to improve convergence. The details of the algorithm are provided as Algorithm 1, where α, β_1, β_2, and ε are the hyperparameters of the Adam algorithm: α is the step size of the subgradient update, β_1 and β_2 are the exponential decay rates, and ε avoids the divisor becoming zero; m and v are the first and second moment vectors of the Adam algorithm; θ and K are the stopping parameters, with θ the minimum error threshold and K the total number of iterations; (x, y) is the test example to be verified and f is the trained classification hyperplane equation.

Algorithm 1 SDVer
Input: x, f, α, β_1, β_2, ε, θ, K
Output: L(µ^(k), λ^(k))
  Initialize k = 0, µ^{l(0)} = 0, m_{µ^{l(0)}} = 0, v_{µ^{l(0)}} = 0, λ^{l(0)} = 0, m_{λ^{l(0)}} = 0, v_{λ^{l(0)}} = 0
  while k < K do
    Calculate x^{l(k)}, z^{l(k)} by minimizing L(µ^(k), λ^(k)) in (15)
    if L(µ^(k), λ^(k)) > 0 or |L(µ^(k), λ^(k)) − ŷ · f(x^{0(k)})| < θ then
      return L(µ^(k), λ^(k))
    else
      Calculate the subgradient of L(µ, λ) at λ^{l(k)}:  g_{λ^{l(k)}} = x^{l+1(k)} − h^l(z^{l(k)})
      Calculate the subgradient of L(µ, λ) at µ^{l(k)}:  g_{µ^{l(k)}} = z^{l(k)} − W^l x^{l(k)} − b^l
      if g_{λ^{l(k)}} ≠ 0 or g_{µ^{l(k)}} ≠ 0 then
        Update λ^{l(k)}:
          m_{λ^{l(k+1)}} ← β_1 · m_{λ^{l(k)}} + (1 − β_1) · g_{λ^{l(k)}}
          v_{λ^{l(k+1)}} ← β_2 · v_{λ^{l(k)}} + (1 − β_2) · g²_{λ^{l(k)}}
          m̂_{λ^{l(k+1)}} ← m_{λ^{l(k+1)}} / (1 − β_1^{k+1})
          v̂_{λ^{l(k+1)}} ← v_{λ^{l(k+1)}} / (1 − β_2^{k+1})
          λ^{l(k+1)} ← λ^{l(k)} − α · m̂_{λ^{l(k+1)}} / (sqrt(v̂_{λ^{l(k+1)}}) + ε)
        Update µ^{l(k)}:
          m_{µ^{l(k+1)}} ← β_1 · m_{µ^{l(k)}} + (1 − β_1) · g_{µ^{l(k)}}
          v_{µ^{l(k+1)}} ← β_2 · v_{µ^{l(k)}} + (1 − β_2) · g²_{µ^{l(k)}}
          m̂_{µ^{l(k+1)}} ← m_{µ^{l(k+1)}} / (1 − β_1^{k+1})
          v̂_{µ^{l(k+1)}} ← v_{µ^{l(k+1)}} / (1 − β_2^{k+1})
          µ^{l(k+1)} ← µ^{l(k)} − α · m̂_{µ^{l(k+1)}} / (sqrt(v̂_{µ^{l(k+1)}}) + ε)
      end if
    end if
    k ← k + 1
  end while
  return L(µ^(k), λ^(k))
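To make the pieces concrete, the sketch below assembles the interval bounds (10)-(11), the decomposition (15), and an Adam-driven subgradient loop for a tiny single-hidden-layer tanh model (the sigmoid-kernel case (6)) in PyTorch, letting autograd supply the subgradients used in Algorithm 1. All sizes, weights, and the test point are random stand-ins, and the one-dimensional inner problem (16b) is approximated on a grid here rather than solved exactly, so this is an illustration of the technique rather than the authors' implementation.

```python
import torch

torch.manual_seed(0)
m, n = 6, 4                        # m support vectors in R^n
W0 = torch.randn(m, n)             # rows play the role of beta * x_i in (6)
b0 = torch.full((m,), -0.5)        # theta < 0
w1 = torch.randn(m)                # alpha_i * y_i
b1 = 0.1
x0 = torch.randn(n)                # test point
delta, y_hat = 0.1, 1.0

x_lo, x_hi = x0 - delta, x0 + delta
# interval bounds on z0 = W0 x + b0, as in (10a)-(10b)
Wp, Wm = W0.clamp(min=0), W0.clamp(max=0)
z_lo = Wp @ x_lo + Wm @ x_hi + b0
z_hi = Wp @ x_hi + Wm @ x_lo + b0
x1_lo, x1_hi = torch.tanh(z_lo), torch.tanh(z_hi)   # tanh is increasing, cf. (11)

def box_min(c, lo, hi):
    # exact minimum of c^T v over the box [lo, hi]
    return (c * torch.where(c > 0, lo, hi)).sum()

grid = torch.linspace(0.0, 1.0, 201)                 # grid for the 1-D problems (16b)

def dual(mu, lam):
    # L(mu, lam) for: min y_hat*(w1^T x1 + b1) s.t. z0 = W0 x + b0, x1 = tanh(z0)
    val = y_hat * b1 - mu @ b0
    val = val + box_min(-(W0.t() @ mu), x_lo, x_hi)       # f_0(mu), cf. (15c)
    val = val + box_min(y_hat * w1 + lam, x1_lo, x1_hi)   # cf. (15a)
    z = z_lo[:, None] + (z_hi - z_lo)[:, None] * grid[None, :]
    val = val + (mu[:, None] * z - lam[:, None] * torch.tanh(z)).min(dim=1).values.sum()
    return val

mu = torch.zeros(m, requires_grad=True)
lam = torch.zeros(m, requires_grad=True)
opt = torch.optim.Adam([mu, lam], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    (-dual(mu, lam)).backward()    # ascend the concave dual; autograd gives subgradients
    opt.step()
print("lower bound on y_hat * f(x'):", dual(mu, lam).item())
```

On this toy instance the printed value acts as a certificate whenever it is positive, which is exactly the stopping test in Algorithm 1.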
4 Experiments and analysis

4.1 Datasets

The experimental evaluation of SDVer is conducted on the MNIST [49] and Fashion-MNIST (F-MNIST) [50] datasets. MNIST is a widespread and standard dataset in the field of adversarial robustness evaluation. It consists of grayscale images of the handwritten digits 0 to 9 with a pixel size of 28×28, including 60000 training samples and 10000 test samples. F-MNIST is more challenging than MNIST for benchmarking machine learning algorithms. It consists of 10 categories of clothing images, with the same image size and the same numbers of training and test samples as MNIST.

4.2 Experimental settings

We compare SDVer with SAVer [48], a state-of-the-art robustness verification method for SVMs based on abstract interpretation. SAVer evaluated the adversarial robustness of one-versus-one (OVO) multi-class SVMs on the MNIST and F-MNIST datasets; an OVO multi-class SVM integrates 45 binary-class SVMs. We focus on the adversarial robustness evaluation of binary-class SVMs. Binary classifiers trained on dissimilar and on similar classes are chosen as robustness evaluation objects, covering two extreme scenarios. In MNIST, we choose binary classifiers for the dissimilar handwritten digits 0 and 1 and binary classifiers for the similar handwritten digits 4 and 9 as robustness evaluation objects. In F-MNIST, we choose binary classifiers for the dissimilar classes ankle-boot and bag and binary classifiers for the similar classes shirt and coat as robustness evaluation objects. The parameters of the trained SVMs are consistent with those of the SVMs verified in SAVer. The robustness of the trained SVMs is evaluated on the first 100 images of each test dataset. Table 2 shows the test accuracy of the trained SVMs on the test set.

Table 2 Test accuracy of the trained SVMs with different kernels on the first 100 images of each test dataset

                        MNIST                 F-MNIST
  binary classifiers    0 and 1   4 and 9     ankle-boot and bag   shirt and coat
  linear kernel         100%      100%        100%                 86%
  2-polynomial kernel   100%      100%        100%                 93%
  3-polynomial kernel   100%      100%        100%                 92%
  RBF kernel            100%      100%        100%                 92%

The proposed SDVer is implemented with PyTorch and runs on an NVIDIA GeForce RTX 3090 with 64 GB of memory. The subgradient method with the Adam optimizer is used to solve the optimization problem (17). The step size for updating the parameters is decreased linearly between the initial learning rate α_1 and the final learning rate α_2; the values of α_1 and α_2 differ across kernels, with α_1 ∈ {10^−5, 10^−4, 10^−3, 10^−2} and α_2 ∈ {10^−10, 10^−9, 10^−8, 10^−7, 10^−6}. The maximum value of K is 10350000. We set the other hyperparameters to β_1 = 0.9, β_2 = 0.999, ε = 10^−8, and θ = 0.001 for all tasks.
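Before turning to the results, it may help to see what the per-sample certification loop looks like in the simplest setting. For a linear kernel, problem (3) has the closed-form optimum ŷ(w^T x + b) − δ‖w‖_1, so the "fraction of the test set that is provably robust" can be computed exactly; the weight vector, bias, images, and δ grid below are synthetic stand-ins used only for illustration.

```python
# Exact certification for a linear-kernel SVM: over ||x' - x||_inf <= delta,
#   min y_hat * (w^T x' + b) = y_hat * (w^T x + b) - delta * ||w||_1.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=784), 0.05                  # stand-in linear decision function
images = rng.uniform(0.0, 1.0, size=(100, 784))    # stand-in for 100 test images
y_hat = np.sign(images @ w + b)                    # the classifier's own predictions

def provably_robust(x, yh, delta):
    return yh * (w @ x + b) - delta * np.abs(w).sum() > 0

for delta in (0.001, 0.005, 0.01):
    frac = np.mean([provably_robust(x, yh, delta) for x, yh in zip(images, y_hat)])
    print(f"delta = {delta}: certified robust on {frac:.0%} of the test set")
```

The same loop structure applies to the nonlinear kernels, with the closed form replaced by the dual lower bound computed by Algorithm 1.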
4.3 Results and analysis

We are interested in the percentage of provable robustness, that is, the fraction of the test set that is robust. Figures 1-4 present the experimental results on the MNIST dataset: the robustness evaluation objects in Figs. 1 and 2 are the classifiers for handwritten digits 0 and 1, and those in Figs. 3 and 4 are the classifiers for handwritten digits 4 and 9. Figures 5-8 present the experimental results on the F-MNIST dataset: the robustness evaluation objects in Figs. 5 and 6 are the classifiers for ankle-boot and bag, and those in Figs. 7 and 8 are the classifiers for shirt and coat.

Figure 1 shows the provable robustness of the handwritten digits 0 and 1 classifiers with different kernels, evaluated with SDVer and SAVer. Figure 2 shows a box-plot of the provable robustness differences from SDVer to SAVer in Fig. 1. Figures 1(a) and 2 show that our method obtains the same percentage of provable robustness as SAVer for SVMs with linear kernels. Figures 1(b) and 2 show that the evaluation result of our method is slightly better than that of SAVer for SVMs with 2-polynomial kernels. Figures 1(c) and 2 show that our method obtains the same percentage of provable robustness as SAVer for SVMs with 3-polynomial kernels. Figures 1(d) and 2 show that our method demonstrates a significantly better percentage of provable robustness than SAVer for SVMs with RBF kernels.

Figure 3 shows the provable robustness of the handwritten digits 4 and 9 classifiers with different kernels, evaluated with SDVer and SAVer. Figure 4 shows a box-plot of the provable robustness differences from SDVer to SAVer in Fig. 3. Figures 3(a) and 4 show that our method obtains the same percentage of provable robustness as SAVer for SVMs with linear kernels. Figures 3(b), 3(c), and 4 show that our method achieves a slightly better percentage of provable robustness than SAVer for SVMs with polynomial kernels. Figures 3(d) and 4 show that our method demonstrates a significantly better percentage of provable robustness than SAVer for SVMs with RBF kernels. Note that the percentage of provable robustness for the handwritten digits 4 and 9 classifiers is lower than that of the handwritten digits 0 and 1 classifiers in the same adversarial region. The reason is that handwritten digits 4 and 9 are very similar, so the 4 and 9 classifiers are more susceptible to adversarial perturbations.

Figure 5 shows the provable robustness of the ankle-boot and bag classifiers with different kernels, evaluated with SDVer and SAVer. Figure 6 shows a box-plot of the provable robustness differences from SDVer to SAVer in Fig. 5. The conclusions drawn from Figs. 5 and 6 are consistent with those drawn from Figs. 3 and 4. Note that there is a clear difference between ankle-boot and bag, and the ankle-boot and bag classifiers achieve 100% accuracy on the test set. Nevertheless, the percentage of provable robustness of the ankle-boot and bag classifiers is lower than that of the handwritten digits 0 and 1 classifiers. The reason for this may be that ankle-boot and bag images carry more information than handwritten digit 0 and 1 images, making them more susceptible to adversarial perturbations.

Figure 7 shows the provable robustness of the shirt and coat classifiers with different kernels, evaluated with SDVer and SAVer. Figure 8 shows a box-plot of the provable robustness differences from SDVer to SAVer in Fig. 7. Figures 7(a), 7(c), and 8 show that our method obtains the same percentage of provable robustness as SAVer for SVMs with linear and polynomial kernels. Figures 7(d) and 8 show that our method demonstrates a higher percentage of provable robustness than SAVer for SVMs with RBF kernels. The accuracy of the shirt and coat classifiers on the test set is not very high, and the difference between a shirt and a coat is not obvious; these reasons may lead to the lower percentage of provable robustness of the shirt and coat classifiers.

5 Conclusion

In this paper, SDVer is proposed for evaluating the adversarial robustness of SVMs. The method is based on the Lagrangian duality, taking into account the kernels used in SVMs. The proposed SDVer is a robustness verification method that provides provable robustness for SVMs against various adversarial attacks. The state-of-the-art method for robustness verification of SVMs is SAVer, which uses an abstraction that combines interval domains and reduced affine form domains. When robustness evaluations are performed on SVMs with nonlinear kernels, the abstract nonlinear operations applied by SAVer on the interval domain and the reduced affine form domain lead to a loss of computational accuracy.
Unlike SAVer, our method directly solves the verification problem using optimization techniques, which improves the percentage of provable robustness for SVMs with nonlinear kernels. We conducted experiments on the MNIST and F-MNIST datasets. The proposed method obtained the same robustness evaluation results as SAVer in the case of linear kernels, and achieved better robustness evaluation results than SAVer for nonlinear kernels. Our method can be expected to be applied in safety-critical fields such as malware detection, intrusion detection, and spam filtering. For now, the time complexity of our proposed method is sublinear, while the time complexity of SAVer is linear, and our method has no advantage in terms of time consumption; how to reduce the time complexity of our proposed method is a challenging direction. Our proposed method is suitable for evaluating the adversarial robustness of binary-class SVMs. Multi-class SVMs are also widely used in safety-critical fields, and in the future we will extend our robustness verification method from binary-class SVMs to multi-class SVMs.

Fig. 1 Provable robustness of handwritten digits 0 and 1 classifiers with different kernels
Fig. 2 Provable robustness differences from SDVer to SAVer in Fig. 1
Fig. 3 Provable robustness of handwritten digits 4 and 9 classifiers with different kernels
Fig. 4 Provable robustness differences from SDVer to SAVer in Fig. 3
Fig. 5 Provable robustness of ankle-boot and bag classifiers with different kernels
Fig. 6 Provable robustness differences from SDVer to SAVer in Fig. 5
Fig. 7 Provable robustness of shirt and coat classifiers with different kernels
Fig. 8 Provable robustness differences from SDVer to SAVer in Fig. 7
(In Figs. 1, 3, 5, and 7, panels (a)-(d) correspond to the linear, 2-polynomial, 3-polynomial, and RBF kernels, with perturbation magnitude on the horizontal axis, provable robustness on the vertical axis, and one curve each for SDVer and SAVer.)

Declarations
• Conflict of interest: The authors declare that they have no conflict of interest.
• Availability of data and materials: The data used in this paper are all from public datasets.

References

[1] Rais, M.S., Zouaidia, K., Boudour, R.: Enhanced decision making in multi-scenarios for autonomous vehicles using alternative bidirectional Q network. Neural Comput. Appl. 34(18), 15981-15996 (2022)
[2] Cui, M.: Big data medical behavior analysis based on machine learning and wireless sensors. Neural Comput. Appl. 34(12), 9413-9427 (2022)
[3] Rajadurai, H., Gandhi, U.D.: A stacked ensemble learning model for intrusion detection in wireless network. Neural Comput. Appl. 34, 15387-15395 (2022)
[4] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. In: 2nd International Conference on Learning Representations, Banff, AB, Canada (2014)
[5] Dvijotham, K., Stanforth, R., Gowal, S., Mann, T.A., Kohli, P.: A dual approach to scalable verification of deep networks. In: Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, pp. 550-559. AUAI Press, Monterey, CA, USA (2018)
[6] Xiao, Y., Pun, C.-M.: Improving adversarial attacks on deep neural networks via constricted gradient-based perturbations. Inf. Sci. 571, 104-132 (2021)
[7] Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. In: 5th International Conference on Learning Representations, Toulon, France (2017)
[8] Wang, X., Yang, Y., Deng, Y., He, K.: Adversarial training with fast gradient projection method against synonym substitution based text attacks. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 13997-14005. AAAI Press (2021)
[9] Gupta, K., Ajanthan, T.: Improved gradient-based adversarial attacks for quantized networks. Preprint at https://arxiv.org/abs/2003.13511 (2022)
[10] Wong, E., Kolter, J.Z.: Learning perturbation sets for robust machine learning. Preprint at https://arxiv.org/abs/2007.08450 (2020)
[11] Zhang, S., Gao, H., Shu, C., Cao, X., Zhou, Y., He, J.: Black-box bayesian adversarial attack with transferable priors. Mach. Learn., 1-18 (2022)
[12] Chen, C., Huang, T.: Camdar-adv: generating adversarial patches on 3D object. Int. J. Intell. Syst. 36(3), 1441-1453 (2021)
[13] Wang, L., Zhang, H., Yi, J., Hsieh, C.-J., Jiang, Y.: Spanning attack: reinforce black-box attacks with unlabeled data. Mach. Learn. 109(12), 2349-2368 (2020)
[14] Andriushchenko, M., Croce, F., Flammarion, N., Hein, M.: Square attack: a query-efficient black-box adversarial attack via random search. In: Computer Vision - ECCV 2020, LNCS vol. 12368, pp. 484-501. Springer, Glasgow, UK (2020)
[15] Kim, B.C., Yu, Y., Ro, Y.M.: Robust decision-based black-box adversarial attack via coarse-to-fine random search. In: 2021 IEEE International Conference on Image Processing, pp. 3048-3052. IEEE, Anchorage, AK, USA (2021)
[16] Li, X.-C., Zhang, X.-Y., Yin, F., Liu, C.-L.: Decision-based adversarial attack with frequency mixup. IEEE Trans. Inf. Forensics Secur. 17, 1038-1052 (2022)
[17] Chen, J., Jordan, M.I., Wainwright, M.J.: HopSkipJumpAttack: a query-efficient decision-based attack. In: 2020 IEEE Symposium on Security and Privacy, San Francisco, CA, USA (2020)
[18] Guo, C., Frank, J.S., Weinberger, K.Q.: Low frequency adversarial perturbation. In: Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence, Tel Aviv, Israel (2019)
[19] Athalye, A., Carlini, N., Wagner, D.: Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In: International Conference on Machine Learning, PMLR vol. 80, pp. 274-283. Stockholm, Sweden (2018)
[20] Uesato, J., O'Donoghue, B., Kohli, P., Oord, A.: Adversarial risk and the dangers of evaluating against weak attacks. In: International Conference on Machine Learning, PMLR vol. 80, pp. 5025-5034. Stockholm, Sweden (2018)
[21] Zhu, Y., Wang, F., Wan, W., Zhang, M.: Attack-guided efficient robustness verification of ReLU neural networks. In: 2021 International Joint Conference on Neural Networks, pp. 1-8. IEEE (2021)
[22] Liao, H.-C., Cheng, C.-H., Kneissl, M., Knoll, A.: Robustness verification for attention networks using mixed integer programming. Preprint at https://arxiv.org/abs/2202.03932 (2022)
[23] Xue, H., Zeng, X., Lin, W., Yang, Z., Peng, C., Zeng, Z.: An RNN-based framework for the MILP problem in robustness verification of neural networks. In: Proceedings of the Asian Conference on Computer Vision, Macao, China, pp. 1842-1857 (2022)
[24] Tsay, C., Kronqvist, J., Thebelt, A., Misener, R.: Partition-based formulations for mixed-integer optimization of trained ReLU neural networks. In: Advances in Neural Information Processing Systems, pp. 3068-3080 (2021)
[25] Tjeng, V., Xiao, K.Y., Tedrake, R.: Evaluating robustness of neural networks with mixed integer programming. In: 7th International Conference on Learning Representations, New Orleans, LA, USA (2019)
[26] Jia, K., Rinard, M.: Efficient exact verification of binarized neural networks. Advances in Neural Information Processing Systems 33, 1782-1795 (2020)
[27] Henzinger, T.A., Lechner, M., Žikelić, Đ.: Scalable verification of quantized neural networks. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 3787-3795 (2021)
[28] Song, X., Manino, E., Sena, L., Alves, E., Bessa, I., Lujan, M., Cordeiro, L., et al.: QNNVerifier: a tool for verifying neural networks using SMT-based model checking. Preprint at https://arxiv.org/abs/2111.13110 (2021)
[29] Katz, G., Huang, D.A., Ibeling, D., Julian, K., Lazarus, C., Lim, R., Shah, P., Thakoor, S., Wu, H., Zeljić, A., et al.: The Marabou framework for verification and analysis of deep neural networks. In: International Conference on Computer Aided Verification, LNCS vol. 11561, pp. 443-452. Springer, New York City, USA (2019)
[30] Amir, G., Wu, H., Barrett, C., Katz, G.: An SMT-based approach for verifying binarized neural networks. In: International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pp. 203-222. Cham (2021)
[31] Wong, E., Kolter, Z.: Provable defenses against adversarial examples via the convex outer adversarial polytope. In: International Conference on Machine Learning, PMLR vol. 80, pp. 5283-5292. Stockholm, Sweden (2018)
[32] Raghunathan, A., Steinhardt, J., Liang, P.: Certified defenses against adversarial examples. In: 6th International Conference on Learning Representations, Vancouver, BC, Canada (2018)
[33] Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., Vechev, M.: AI2: safety and robustness certification of neural networks with abstract interpretation. In: 2018 IEEE Symposium on Security and Privacy, San Francisco, CA, USA (2018)
[34] Liu, Y., Peng, J., Chen, L., Zheng, Z.: Abstract interpretation based robustness certification for graph convolutional networks. In: ECAI 2020, Santiago de Compostela, Spain, pp. 1309-1315 (2020)
[35] Singh, G., Gehr, T., Püschel, M., Vechev, M.: An abstract domain for certifying neural networks. Proc. ACM Program. Lang. 3(POPL), 1-30 (2019)
[36] Li, J., Liu, J., Yang, P., Chen, L., Huang, X., Zhang, L.: Analyzing deep neural networks with symbolic propagation: towards higher precision and faster verification. In: International Static Analysis Symposium, LNCS vol. 11822, pp. 296-319. Springer, Porto, Portugal (2019)
[37] Urban, C., Christakis, M., Wüstholz, V., Zhang, F.: Perfectly parallel fairness certification of neural networks. Proc. ACM Program. Lang. 4(OOPSLA), 1-30 (2020)
[38] Ruan, W., Huang, X., Kwiatkowska, M.: Reachability analysis of deep neural networks with provable guarantees. In: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, pp. 2651-2659. Stockholm, Sweden (2018)
[39] Weng, L., Zhang, H., Chen, H., Song, Z., Hsieh, C.-J., Daniel, L., Boning, D., Dhillon, I.: Towards fast computation of certified robustness for ReLU networks. In: International Conference on Machine Learning, PMLR vol. 80, pp. 5273-5282. Stockholm, Sweden (2018)
[40] Latorre, F., Rolland, P., Cevher, V.: Lipschitz constant estimation of neural networks via sparse polynomial optimization. In: 8th International Conference on Learning Representations, Addis Ababa, Ethiopia (2020)
[41] Cervantes, J., Garcia-Lamont, F., Rodríguez-Mazahua, L., Lopez, A.: A comprehensive survey on support vector machine classification: applications, challenges and trends. Neurocomputing 408, 189-215 (2020). https://doi.org/10.1016/j.neucom.2019.10.118
[42] Biggio, B., Corona, I., Nelson, B., Rubinstein, B.I.P., Maiorca, D., Fumera, G., Giacinto, G., Roli, F.: Security Evaluation of Support Vector Machines in Adversarial Environments. Springer, Cham (2014)
[43] Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., Giacinto, G., Roli, F.: Evasion attacks against machine learning at test time. In: Machine Learning and Knowledge Discovery in Databases - European Conference, LNCS vol. 8190, pp. 387-402. Springer, Prague, Czech Republic (2013)
[44] Zhang, F., Chan, P.P., Biggio, B., Yeung, D.S., Roli, F.: Adversarial feature selection against evasion attacks. IEEE Trans. Cybern. 46(3), 766-777 (2016)
[45] Weerasinghe, S., Alpcan, T., Erfani, S.M., Leckie, C.: Defending support vector machines against data poisoning attacks. IEEE Transactions on Information Forensics and Security 16, 2566-2578 (2021)
[46] Ranzato, F., Zanella, M.: Robustness verification of support vector machines. In: International Static Analysis Symposium, LNCS vol. 11822, pp. 271-295. Springer, Porto, Portugal (2019)
[47] Cousot, P., Cousot, R.: Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: Proceedings of the 4th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages (1977)
[48] Ranzato, F., Zanella, M.: SAVer GitHub repository. https://github.com/svm-abstract-verifier (2019)
[49] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278-2324 (1998)
[50] Xiao, H., Rasul, K., Vollgraf, R.: Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. Preprint at http://arxiv.org/abs/1708.07747 (2017)
[51] Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge, MA, USA (2016)
[52] Udell, M., Boyd, S.: Maximizing a sum of sigmoids. Optim. Eng., 1-25 (2013)
[53] Ahuja, R.K., Magnanti, T.L., Orlin, J.B.: Network Flows. Massachusetts Institute of Technology, Operations Research Center (1988)
S Boyd, L Xiao, A Mutapcic, Stanford UniversityBoyd, S., Xiao, L., Mutapcic, A.: Subgradient methods. lecture notes of EE392o, Stanford University, Autumn Quarter 2004, 2004-2005 (2004) Adam: A method for stochastic optimization. D P Kingma, J Ba, the 3rd International Conference on Learning Representations. San Diego, USAPaper presented atKingma, D.P., Ba, J.: Adam: A method for stochastic optimization. Paper presented at the 3rd International Conference on Learning Representa- tions, San Diego, USA, May 7-9 2015 (2015)
VON NEUMANN DIMENSIONS AND TRACE FORMULAS I: LIMIT MULTIPLICITIES

Jun Yang

Harvard University, Cambridge, MA 02138, USA
Given a connected semisimple Lie group G and an arithmetic subgroup Γ, it is well known that each irreducible representation π of G occurs in the discrete spectrum L²_disc(Γ\G) of L²(Γ\G) with at most finite multiplicity m_Γ(π). While m_Γ(π) is unknown in general, we are interested in its limit as Γ ranges over a tower of lattices Γ₁ ⊃ Γ₂ ⊃ ⋯. For a bounded measurable subset X of the unitary dual Ĝ, we let m_{Γ_n}(X) be the sum of the multiplicities m_{Γ_n}(π) over all π in X. Let H_X be the direct integral of the irreducible representations in X with respect to the Plancherel measure of G, which is also a module over the group von Neumann algebra LΓ_n. We prove

lim_{n→∞} m_{Γ_n}(X) / dim_{LΓ_n} H_X = 1

for any bounded subset X of Ĝ, when i) the Γ_n are cocompact, or ii) G = SL(n, R) and {Γ_n} are principal congruence subgroups.
arXiv:2306.02999 (5 Jun 2023)
1. Introduction: an example on SL(2, R)

In this section we introduce a multiplicity problem for square-integrable irreducible representations of G = SL(2, R) on L²_cusp(Γ\G) for certain arithmetic subgroups Γ; it is one of the motivations of this article. Let Γ = SL(2, Z) and let Γ(N) be the principal congruence subgroup of level N, defined by

Γ(N) = { (a b; c d) ∈ SL(2, Z) : a, d ≡ 1 (mod N), b, c ≡ 0 (mod N) }.

Consider the right quasi-regular representation of G on L²(Γ(N)\G) given by (R(g)φ)(x) = φ(xg) for φ ∈ L²(Γ(N)\G) and g ∈ G. It is well known (see [28]) to be reducible, and it decomposes as

L²(Γ(N)\G) = L²_cusp(Γ(N)\G) ⊕ L²_cont(Γ(N)\G) ⊕ C.

Here L²_cusp(Γ(N)\G) is the cuspidal part, which is a direct sum of irreducible representations with finite multiplicities, i.e., L²_cusp(Γ(N)\G) = ⊕ m_{Γ(N)}(π) · π with m_{Γ(N)}(π) < ∞ for each π, and L²_cont(Γ(N)\G) is a direct integral of irreducible representations given by the Eisenstein series.

The multiplicities m_{Γ(N)}(π) are still unknown in general, except for some special families of irreducible representations, including the discrete series of SL(2, R) (see [26] for an introduction to discrete series). Let S_k(Γ) be the space of cusp forms of weight k for a Fuchsian group Γ. We have the following result (see [22], Theorem 2.10).

Lemma 1.1. For the discrete series π_k, we have m_{Γ(N)}(π_k) = dim S_k(Γ(N)).

By applying the dimension formulas for cusp forms (see [14], Chapter 3.9), we obtain

(1)   m_{Γ(N)}(π_k) = ((k − 1)/24 − 1/(4N)) · N³ ∏_{p|N} (1 − 1/p²)   for all N > 2.

On the other hand, let H_k be the underlying Hilbert space of the discrete series π_k. As H_k is a module over the group Γ(N), one can further show that it is also a module over the group von Neumann algebra L(Γ(N)) (see Section 4.1 for the definition). Hence H_k has a von Neumann dimension dim_{L(Γ(N))} H_k over L(Γ(N)). Indeed, if a discrete group Γ is ICC (infinite conjugacy class; see also Section 4.1), this dimension completely determines the equivalence class of an L(Γ)-module, i.e., dim_{L(Γ)} H₁ = dim_{L(Γ)} H₂ if and only if H₁ and H₂ are isomorphic as L(Γ)-modules.

We consider a lattice Γ in a Lie group G. Suppose (π, H) is a discrete series representation of G and let d(π) be the formal dimension of π (see [33], Chapter 16). We have:

Lemma 1.2 (Goodman-de la Harpe-Jones [23]). dim_{L(Γ)} H = vol(Γ\G) · d(π).

By Example 3.3.4 in [23], we know dim_{L(PSL(2,Z))} H_k = (k − 1)/12. Since SL(2, Z) is a double cover of PSL(2, Z), we have dim_{L(SL(2,Z))} H_k = (k − 1)/24. Since [Γ : Γ(N)] = N³ ∏_{p|N} (1 − 1/p²), we conclude

(2)   dim_{L(Γ(N))} H_k = ((k − 1)/24) · N³ ∏_{p|N} (1 − 1/p²).

Thus we obtain:

Corollary 1.3. For a discrete series (π_k, H_k) of SL(2, R), we have

lim_{N→∞} m_{Γ(N)}(π_k) / dim_{L(Γ(N))} H_k = 1.

Proof: Comparing Equations (1) and (2), we obtain m_{Γ(N)}(π_k) / dim_{L(Γ(N))} H_k = (k − 1 − 6/N)/(k − 1), and then take the limit.
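Equations (1) and (2) make the ratio in Corollary 1.3 easy to check numerically. The following minimal sketch is our own illustration, not code from the paper (the helper names are ours; sympy is used only for prime factorization); it prints the ratio (k − 1 − 6/N)/(k − 1) for a few levels N:

```python
from sympy import primefactors

def index_gamma_n(N):
    # [SL(2,Z) : Gamma(N)] = N^3 * prod_{p | N} (1 - 1/p^2)
    out = N**3
    for p in primefactors(N):
        out *= 1 - 1 / p**2
    return out

def mult(k, N):
    # Equation (1): multiplicity of pi_k in L^2_cusp(Gamma(N)\G), valid for N > 2
    return ((k - 1) / 24 - 1 / (4 * N)) * index_gamma_n(N)

def vn_dim(k, N):
    # Equation (2): von Neumann dimension of H_k over L(Gamma(N))
    return ((k - 1) / 24) * index_gamma_n(N)

for N in (4, 16, 64, 256):
    print(N, mult(12, N) / vn_dim(12, N))  # tends to 1 as N grows
```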
While the explicit multiplicities of most irreducible representations are still unknown, limit multiplicities have been studied since the 1970s. In the case of towers of uniform lattices, DeGeorge and Wallach obtained the first results for discrete series of Lie groups [11] and later for bounded sets of irreducible representations in rank-one groups [12]. Delorme [13] finally solved the problem for bounded sets of irreducible representations in all Lie groups; see also [1] for a recent approach. For non-uniform lattices (or most arithmetic subgroups), Savin [36] first obtained the results on discrete series in his thesis, building on the work of Rohlfs and Speh [34]. Then Deitmar and Hoffmann proved results for certain towers of arithmetic subgroups in rank-one groups. Recently, Finis and Lapid solved the case of congruence subgroups in SL(n, R) [20, 17], based on their study of the spectral side of Arthur's trace formula [18, 16].

The goal of this paper is to extend Corollary 1.3 to some general settings. In the rest of this paper, we generalize this result mainly in the following aspects:
(1) from a single discrete series representation to any bounded subset of the unitary dual Ĝ of G;
(2) from SL(2, R) to towers of uniform lattices in an arbitrary semisimple Lie group;
(3) from SL(2, R) to SL(n, R) with its principal congruence subgroups.

Finally, we are able to prove:

Theorem 1.4 (The Main Theorem). Let G be a semisimple simply-connected Lie group. Let X be a bounded subset of the unitary dual of G and let H_X be the direct integral of the irreducible representations of G in X. We have

lim_{n→∞} m_{Γ_n}(X) / dim_{LΓ_n} H_X = 1,

when i) the Γ_n are cocompact, or ii) G = SL(n, R) and {Γ_n} are principal congruence subgroups.

2. The trace formulas and dominant terms

We give a brief review of the Arthur-Selberg trace formulas and identify the dominant terms in these formulas. We mainly follow [3, 4, 19]. Let G be a reductive group over Q. The group G(A) acts naturally on L²(G(Q)\G(A)) by (R(g)φ)(x) = φ(xg) for φ ∈ L²(G(Q)\G(A)) and g ∈ G(A). Let C∞_c(G(A)) be the complex algebra of smooth, compactly supported functions on G(A). Given f ∈ C∞_c(G(A)), we may define

(R(f)φ)(x) = ∫_{G(A)} f(g)(R(g)φ)(x) dg = ∫_{G(A)} f(g)φ(xg) dg.

If we define the kernel K(x, y) = K_f(x, y) := Σ_{γ∈G(Q)} f(x⁻¹γy), we have

(R(f)φ)(x) = ∫_{G(Q)\G(A)} K(x, y)φ(y) dy.

2.1. The Selberg trace formula. We first assume G is anisotropic, and hence the quotient space G(Q)\G(A) is compact. Let O be the set of conjugacy classes in G(Q) and let o ∈ O be a conjugacy class. We may define K_o(x, y) = Σ_{γ∈o} f(x⁻¹γy) and obtain K(x, y) = Σ_{o∈O} K_o(x, y). On the other hand, the representation R decomposes into a direct sum of irreducible representations with finite multiplicities, i.e., L²(G(Q)\G(A)) = ⊕_{χ∈X} L²(G(Q)\G(A))_χ. Here L²(G(Q)\G(A))_χ = m(χ) · χ, which is m(χ) copies of the irreducible representation χ. Assume B_χ is an orthonormal basis of L²(G(Q)\G(A))_χ. Then

K_χ(x, y) = K_{f,χ}(x, y) := Σ_{φ∈B_χ} (R(f)φ)(x) · φ(y)

converges. Now we let

(1) k_χ(x, f) = K_χ(x, x) and J_χ(f) = ∫_{G(Q)\G(A)} k_χ(x, f) dx,
(2) k_o(x, f) = K_o(x, x) and J_o(f) = ∫_{G(Q)\G(A)} k_o(x, f) dx.

If we let γ be a representative of o ∈ O and H_γ = {h ∈ H | hγh⁻¹ = γ} for a group H containing γ, we get

J_o(f) = vol(G(Q)_γ\G(A)_γ) ∫_{G(A)_γ\G(A)} f(x⁻¹γx) dx.

Theorem 2.1. Assuming G(Q)\G(A) is compact, we have

(3)   tr R(f) = Σ_{o∈O} J_o(f) = Σ_{χ∈X} J_χ(f)

for any f ∈ C∞_c(G(A)).
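Before the adelic machinery, the identity (3) can already be seen in a toy case: take G a finite group and the full quotient, so that only the identity class contributes to the geometric side, which collapses to |G| f(1), while by the Peter-Weyl theorem each irreducible π occurs in L²(G) with multiplicity dim π. The following numerical check for G = S₃ is our own illustration and is not from the paper:

```python
import numpy as np

# Character table of S3; columns are the classes of e, (12), (123),
# whose sizes are 1, 3, 2. Rows: trivial, sign, standard.
sizes = np.array([1, 3, 2])
chars = np.array([[1, 1, 1], [1, -1, 1], [2, 0, -1]])
order = sizes.sum()  # |G| = 6

f = np.random.default_rng(0).normal(size=3)  # a class function on G

geometric = order * f[0]  # vol * f(1): only the identity class contributes
# spectral side: sum over pi of dim(pi) * tr pi(f), where
# tr pi(f) = sum over gamma of f(gamma) * chi_pi(gamma)
spectral = sum(chi[0] * (sizes * f * chi).sum() for chi in chars)
print(np.isclose(geometric, spectral))  # True
```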
For the classical setting, we start with a real Lie group G and a lattice Γ ⊂ G. Consider the representation R_Γ of G on L²(Γ\G) given by (R_Γ(g)φ)(x) = φ(xg) for x, g ∈ G. Let C∞_c(G) be the space of smooth functions on G with compact support. For f ∈ C∞_c(G) and a representation (π, H) of G, we let π(f)v = ∫_G f(g)π(g)v dg. If π is irreducible, π(f) is a trace-class operator and we let θ_π(f) = tr π(f). Note that for the representation R_Γ we have

(R_Γ(f)φ)(x) = ∫_G f(g)(R_Γ(g)φ)(x) dg = ∫_G f(g)φ(xg) dg.

It is known that Γ\G is compact if and only if the reductive part of G is anisotropic (see [32] Theorem 4.12). In this case, L²(Γ\G) can be decomposed into a direct sum of irreducible representations of G, each with finite multiplicity, i.e., L²(Γ\G) = ⊕ m_Γ(π) · π with m_Γ(π) = dim Hom_G(π, L²(Γ\G)) < ∞ for each π. By taking the test function in Theorem 2.1 to be f ⊗ 1_K for a maximal compact subgroup K of G(A_fin), with f ∈ C∞_c(G) (see Section 3.1), we get the following result for a lattice Γ in the real Lie group G.

Corollary 2.2 (The Selberg trace formula). If Γ\G is compact, then R_Γ(f) is of trace class and

(4)   tr R_Γ(f) = Σ_{π∈Ĝ} m_Γ(π) θ_π(f) = Σ_{γ∈[Γ]} vol(Γ_γ\G_γ) ∫_{G_γ\G} f(x⁻¹γx) dx.

2.2. The Arthur trace formula. We now allow G to be not necessarily anisotropic, so that G(Q)\G(A) may not be compact. Assume B is a Borel subgroup defined over Q, M₀ is a Levi factor of B defined over Q, P is a standard parabolic subgroup defined over Q (i.e., P₀ = B ⊂ P), N_P = R_u(P) is the unipotent radical of P, and M_P is the unique Levi component of P such that M₀ ⊂ M_P. We also assume A_P is the split component of the center of M_P, Z = A_G, and Δ₀ = Δ_B is a base for a root system. We will mostly use the notations of [3, 4, 6] and [21], as follows:

• a_P = Hom(X(M_P)_Q, R), where X(M_P)_Q is the group of Q-characters of M_P; a*_P = X(M_P)_Q ⊗ R and a⁺_P = {H ∈ a_P | α(H) > 0 for all α ∈ Δ_P}.
• γ = γ_s γ_u is the decomposition of γ with γ_s semisimple and γ_u unipotent.
• O is the set of semisimple conjugacy classes of G(Q) (γ ∼ β if γ_s and β_s are G(Q)-conjugate).
• o ∈ O is a conjugacy class in G(Q).
• X is the set of equivalence classes of pairs (M, ρ), where M is a Levi subgroup of G and ρ is an irreducible unitary representation of M(A)¹ ((M, ρ) ∼ (M′, ρ′) if there is s ∈ Ω(a, a′) such that the representation (sρ)(m′) = ρ(w_s⁻¹ m′ w_s) is unitarily equivalent to ρ′).
• For a pair of parabolic subgroups P₁ ⊂ P₂, Δ^{P₂}_{P₁} is the set of simple roots of (M_{P₂} ∩ P₁, A_{P₁}), and Δ̂^{P₂}_{P₁} = {ϖ_α | α ∈ Δ^{P₂}_{P₁}} is the dual basis for Δ^{P₂}_{P₁}.
• τ̂_P is the characteristic function on a₀ of {H ∈ a₀ | ϖ(H) > 0 for all ϖ ∈ Δ̂^G_P}.
• For m = ∏_v m_v ∈ M(A), H_M(m) ∈ a_P is given by e^{⟨H_M(m), χ⟩} = |χ(m)| = ∏_v |χ(m_v)|_v for all χ ∈ X(M)_Q.
• x = nmak ∈ G(A) with n ∈ N(A), m ∈ M(A)¹, a ∈ A(R)⁰ and k ∈ K.
• H(x) = H_M(ma) = H_M(a) ∈ a_P.

Let T ∈ a⁺₀ be suitably regular, i.e., α(T) is sufficiently large for all α ∈ Δ₀. For a parabolic subgroup P there are kernels

K_{P,o}(x, y) = Σ_{γ∈M_P(Q)∩o} ∫_{N_P(A)} f(x⁻¹γny) dn

and K_{P,χ} (see [3], p.923 and p.935, for the precise definitions). Then Arthur is able to define truncated kernels and distributions J^T_o, J^T_χ as follows:

(1) k^T_o(x, f) = Σ_P (−1)^{dim(A_P/Z)} Σ_{δ∈P(Q)\G(Q)} K_{P,o}(δx, δx) · τ̂_P(H(δx) − T).
(2) k^T_χ(x, f) = Σ_P (−1)^{dim(A_P/Z)} Σ_{δ∈P(Q)\G(Q)} K_{P,χ}(δx, δx) · τ̂_P(H(δx) − T).
(3) J^T_o(f) = ∫_{G(Q)\G(A)¹} k^T_o(x, f) dx.
(4) J^T_χ(f) = ∫_{G(Q)\G(A)¹} k^T_χ(x, f) dx.

Let X(G) = {(M, ρ) ∈ X | M = G}. We reach a coarse trace formula, first given in [4], Chapter 5.

Theorem 2.3 (The Arthur trace formula). For any f ∈ C∞_c(G(A)¹) and any suitably regular T ∈ a⁺₀, we have

(5)   Σ_{o∈O} J^T_o(f) = Σ_{χ∈X} J^T_χ(f).

Moreover, the trace formula for R_cusp(f) is given by

tr R_cusp(f) = Σ_{o∈O} J^T_o(f) − Σ_{χ∈X\X(G)} J^T_χ(f).

2.3. The dominant term on the geometric side. We consider the adelic case first.
Let F be a number field and let V, V_∞ and V_f be the sets of places, Archimedean places and non-Archimedean places of F, respectively. Let A be the adele ring of F and A_fin ⊂ A the restricted product over the finite places. Suppose S ⊂ V is a finite set containing V_∞. Let F_S = ∏_{v∈S} F_v and A^S = ∏′_{v∈V\S} F_v, so that A = F_S × A^S. We define

(1) G(F_S)¹ = ∩_{χ∈Hom(G(F_S),F^×)} ker(|χ| : G(F_S) → R₊),
(2) G(A)¹ = ∩_{χ∈Hom(G(A),F^×)} ker(|χ| : G(A) → R₊),

where |·| is the product of the valuations on F_S and on A, respectively. We will consider the representation of G(F_S)¹ on L²(G(F)\G(A)¹/K) for an open compact subgroup K of G(A^S). In particular, it reduces to the representation of G(F_∞) on L²(Γ_K\G(F_∞)) if we take S = {∞} and Γ_K = G(F) ∩ K. Let J(f) be the distribution defined by Equation (3) or (5) in Section 2 for f ∈ C∞_c(G(F_S)), depending on whether G(F)\G(A)¹ is compact or not. The goal of this subsection is to prove

lim_{n→∞} vol(G(F)\G(A)¹) f(1) / J(f ⊗ 1_{K_n}) = 1

for certain towers of open compact subgroups {K_n}_{n≥1} of G(A^S).

Let us first assume Γ is a uniform lattice in the semisimple Lie group G. We add a subscript, as in R_Γ and J_Γ, for the representation of G on L²(Γ\G) and the corresponding trace formula, to emphasize the lattice Γ. Since Γ\G is compact, J_Γ(f) is the trace tr R_Γ(f) and we obtain

J_Γ(f) = tr R_Γ(f) = Σ_{π∈Ĝ} J_{π,Γ}(f) = Σ_{o∈O} J_{o,Γ}(f).

Let J_{{1},Γ}(f) = vol(Γ\G) f(1), the contribution of the identity to the geometric side of the trace formula. We take a tower of uniform lattices {Γ_n}_{n≥1} such that Γ_n ⊴ Γ₁, [Γ₁ : Γ_n] < ∞ and ∩_{n≥1} Γ_n = {1}.

Proposition 2.4. With the assumptions on the uniform lattices {Γ_n} above, we have

lim_{n→∞} J_{{1},Γ_n}(f) / J_{Γ_n}(f) = 1.

Proof: Following Equation (2) in [10], we obtain

tr R_{Γ_n}(φ) = J_{{1},Γ_n}(φ) + Σ_{γ≠1} s_n(γ) vol(Γ_n\G) vol(Γ_γ\G_γ) ∫_{G_γ\G} φ(x⁻¹γx) dx,

where 0 ≤ s_n(γ) ≤ vol(Γ\G)⁻¹. As ∩_{n≥1} Γ_n = {1}, we have lim_{n→∞} s_n(γ) = 0 for all γ ≠ 1. By [10] Theorem 2, we have lim_{n→∞} vol(Γ_n\G)⁻¹ · tr R_{Γ_n}(φ) = φ(1). Hence lim_{n→∞} J_{{1},Γ_n}(φ)/J_{Γ_n}(φ) = lim_{n→∞} J_{{1},Γ_n}(φ)/tr R_{Γ_n}(φ) = 1.

Now we let G be a reductive group over a number field F. Let K = K_∞ K_fin be a maximal compact subgroup of G(A) = G(A_F). By fixing a faithful F-rational representation ρ : G(F) → GL(m, F) for some m > 0, we let Λ ⊂ F^m be an O_F-lattice such that the stabilizer of Λ̂ = Ô_F ⊗_{O_F} Λ in G(A_fin) is K_fin. For a non-trivial ideal I of O_F, we let

K(I) = {g ∈ G(A_fin) | ρ(g)v ≡ v (mod I·Λ̂) for all v ∈ Λ̂}

be the principal congruence subgroup of level I. We also denote the ideal norm of I by N(I) = [O_F : I]. Consider a descending tower of ideals I₁ ⊋ I₂ ⊋ I₃ ⊋ ⋯ such that each I_k is prime to (the prime ideals in) S. We obtain the corresponding tower of principal congruence subgroups K₁ ⊋ K₂ ⊋ K₃ ⊋ ⋯, where K_n = K(I_n). By factoring into prime ideals, the family {I_n}_{n≥1} satisfies one of the following properties:

(1) there exists a prime ideal p such that each power p^k is eventually contained in the tower, i.e., for any k ≥ 1 there is N_k > 0 such that p^k ⊂ I_n for all n ≥ N_k; or
(2) there exist infinitely many prime ideals {p_k}_{k≥1} such that for each k there exists M_k > 0 with p_k ⊂ I_n for all n ≥ M_k.

In either of these two cases, we have:

Lemma 2.5. ∩_{n≥1} I_n = {0} and ∩_{n≥1} K_n = {1}.
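For G = SL(2) over Q with the standard lattice, the reduction map SL(2, Z) → SL(2, Z/NZ) is surjective, so the index [SL(2, Z) : Γ(N)] equals |SL(2, Z/NZ)| = N³ ∏_{p|N}(1 − 1/p²), the quantity used in Section 1. The brute-force check below is our own illustration (helper names are ours):

```python
from itertools import product

def sl2_order(N):
    # |SL(2, Z/NZ)| by direct enumeration of 2x2 matrices mod N
    return sum(1 for a, b, c, d in product(range(N), repeat=4)
               if (a * d - b * c) % N == 1)

def index_formula(N):
    # N^3 * product over primes p dividing N of (1 - 1/p^2)
    primes = {p for p in range(2, N + 1)
              if N % p == 0 and all(p % q for q in range(2, p))}
    out = N**3
    for p in primes:
        out *= 1 - 1 / p**2
    return round(out)

for N in (2, 3, 4, 5, 6):
    print(N, sl2_order(N), index_formula(N))  # the two columns agree
```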
Recall the equivalence class of unipotent elements in G(F): these are the elements γ = γ_s γ_u whose semisimple component is γ_s = 1 (see [5] p.1240). Let J^T_unip(f), f ∈ C∞_c(G(A)¹), be the contribution of this equivalence class to the geometric side of the trace formula (5). We will consider functions of the form f = h_S ⊗ 1_{K_n} with h_S ∈ C∞_c(G(F_S)¹).

Lemma 2.6. For h_S ∈ C∞_c(G(F_S)¹), lim_{n→∞} J(h_S ⊗ 1_{K_n}) = lim_{n→∞} J_unip(h_S ⊗ 1_{K_n}).

Proof: Let D_h = supp(h_S) ⊂ G(F_S)¹ be the compact support of h_S. Then supp(h_S ⊗ 1_{K_n}) = D_h K_n is compact, and hence it intersects finitely many semisimple conjugacy classes o ∈ O. Considering the trace formula and Equation (5), only the classes o (and their G(A)-conjugates) which intersect infinitely many of the sets D_h K_n contribute a non-trivial J_o(h_S ⊗ 1_{K_n}) to the limit lim_{n→∞} J(h_S ⊗ 1_{K_n}). Suppose the G(A)-conjugacy class of the elements in o intersects D_h K_n for infinitely many n, i.e., {gγg⁻¹ | g ∈ G(A), γ ∈ o} ∩ D_h K_n ≠ ∅ for infinitely many n. Take some γ ∈ o. Fixing a faithful F-representation ρ : G(F) → GL(m), let p(x) ∈ F[x] be the characteristic polynomial of ρ(γ) − 1 (an m-by-m matrix over F). Suppose p(x) = x^m + a_{m−1}x^{m−1} + ⋯ + a₀ with all a_i ∈ F. By Lemma 2.5, each a_i belongs to infinitely many I_n, or, equivalently, a_i = 0. Hence p(x) = x^m and γ is unipotent.

The unipotent contribution J_unip(h_S ⊗ 1_{K_n}) can be further reduced to the contribution of the identity as follows. We let I_S be a product of prime ideals at the places of S and K_{S−S_∞}(I_S) the (S − S_∞)-component of the compact group K(I_S). We also let C∞_Ω(G(F_S)¹) be the set of smooth functions with compact support contained in a compact subset Ω of G(F_S)¹. For each k ≥ 0, we let B_k be the k-th component of the universal enveloping algebra U(g_C), where g_C is the complexified Lie algebra of the Lie group G(F_∞). We set

‖h‖_k = Σ_{X∈B_k} ‖X ∘ h‖_{L¹(G(A)¹)}

for h ∈ C∞_Ω(G(F_S)¹). The following result is a special case of Proposition 3.1 in [20], whose proof is mainly based on Theorem 3.1 and Theorem 4.2 in [5].

Proposition 2.7 (Finis-Lapid-Müller). There exists an integer k ≥ 0 such that for any compact subset Ω of G(F_S)¹ there is a constant C_Ω > 0 with

|J_unip(h_S ⊗ 1_{K_n}) − vol(G(F)\G(A)¹) h_S(1)| ≤ C_Ω (1 + log N(I_S I))^{d₀} / N(I) · ‖h_S‖_k

for any bi-K_{S−S_∞}(I_S)-invariant function h_S ∈ C∞_Ω(G(F_S)¹).

Then, combining Lemma 2.6, we obtain:

Corollary 2.8. For h_S ∈ C∞_c(G(F_S)¹), we have lim_{n→∞} vol(G(F)\G(A)¹) h_S(1) / J(h_S ⊗ 1_{K_n}) = 1.

3. The multiplicities problem

This section is devoted to the multiplicity of bounded subsets of the unitary dual, instead of a single irreducible representation.

3.1. The multiplicities in L²(Γ\G). Let G = G(R)⁰, the connected component of the real group obtained from an almost simple group G over Q. Fixing a faithful Q-embedding ρ : G → GL_n, we have an arithmetic group Γ commensurable with G ∩ GL_n(Z). Let Ĝ be the unitary dual of G and Ĝ_temp ⊂ Ĝ the tempered dual. Let us consider the following two cases.

1. Γ\G is compact. As introduced in Section 2.1, L²(Γ\G) can be decomposed into a direct sum of irreducible representations of G, each with finite multiplicity, i.e., L²(Γ\G) = ⊕ m_Γ(π) · π with m_Γ(π) := dim Hom_G(π, L²(Γ\G)) < ∞ for each π ∈ Ĝ.

2. Γ\G is not compact. If G is semisimple, Γ\G has finite (Haar) measure (see [31] Theorem 4.13). The regular representation has both discrete and continuous spectra: L²(Γ\G) = L²_disc(Γ\G) ⊕ L²_cont(Γ\G). The discrete spectrum can be written as the direct sum of the cuspidal and residual subspaces, L²_disc(Γ\G) = L²_cusp(Γ\G) ⊕ L²_res(Γ\G), and it decomposes further into a direct sum of irreducible representations with finite multiplicities, i.e., L²_disc(Γ\G) = ⊕ m_Γ(π) · π, where m_Γ(π) := dim Hom_G(π, L²_disc(Γ\G)) = dim Hom_G(π, L²(Γ\G)) is finite for each π ∈ Ĝ.

We say X ⊂ Ĝ is bounded if it is relatively compact in the Fell topology.

Definition 3.1 (The multiplicity for X ⊂ Ĝ).
For a bounded X ⊂ Ĝ, we define the multiplicity of X to be the sum of the multiplicities of the irreducible representations in X, i.e.,

m_Γ(X) := Σ_{π∈X} m_Γ(π).

Borel and Garland proved the finiteness of m_Γ(X) by considering the spectrum of a certain Laplacian (see [9] Theorem 3, Theorem 4.6, and also [24]).

Theorem 3.1 (Borel-Garland). Let G = G(R)⁰ for a connected semisimple group G over Q and let X ⊂ Ĝ be bounded. We have m_Γ(X) < ∞.

We call a subset X of the unitary dual of G(F_S)¹ bounded if it is relatively compact in the Fell topology (see [35]).

Definition 3.2 (The multiplicity for G(F_S)¹). Suppose K is a compact open subgroup of G(A^S). Let σ be an irreducible representation of G(F_S)¹ and let X be a bounded subset of the unitary dual of G(F_S)¹.
(1) The multiplicity of σ with respect to K is defined as m_K(σ) := dim Hom_{G(F_S)¹}(σ, L²(G(Q)\G(A)¹/K)).
(2) The multiplicity of X with respect to K is defined as m_K(X) := Σ_{σ∈X} m_K(σ).

For an irreducible representation π of G(A)¹, we write π = π_S ⊗ π^S, where π_S and π^S denote the components of π at the places in S and away from S, respectively. As shown in Theorem 3.1, m_K(X) is finite and hence well-defined. If we regard L²(G(Q)\G(A)¹/K) as the subspace of right-K-invariant functions in L²(G(Q)\G(A)¹), we have

m_K(σ) = Σ_{π, π_S=σ} dim Hom_{G(A)¹}(π, L²(G(Q)\G(A)¹)) · dim(π^S)^K,

where π runs over the unitary dual of G(A)¹. If we take S = V_∞, assume G is semisimple, simply connected, and without any F-simple factors H such that H(F_∞) is compact, and let K be an open compact subgroup of G(A_fin), then Γ_K = G(F) ∩ K is a lattice in the semisimple Lie group G(F_∞).

Lemma 3.2. With the assumptions above, we have m_{Γ_K}(π) = m_K(π) for any π in the unitary dual of G(F_∞)¹, and m_{Γ_K}(X) = m_K(X) for any bounded X in this dual.

Proof: This follows from the fact that G(Q)\G(A)/K can be identified with Γ_K\G(F_∞), which yields a G(F_∞)-isomorphism L²(Γ_K\G(F_∞)) ≅ L²(G(Q)\G(A)¹/K) (see [27] Chapter 6 and [32] Chapter 7.4).

For a finite set S and a function φ on the unitary dual of G(F_S)¹, we define m_K(φ) := ∫ φ(π) dm_K(π), its integral with respect to the measure given by the multiplicities above. If 1_X is the characteristic function of X, i.e., 1_X(π) = 1 if π ∈ X and 0 otherwise, then m_K(1_X) = m_K(X). For f ∈ C∞_c(G(F_S)¹), we let f̂(π) = tr π(f), the distribution character of π. Let R_disc denote the action of G(A)¹ on the discrete subspace of L²(G(Q)\G(A)¹).

Proposition 3.3. For f ∈ C∞_c(G(F_S)¹), we have tr R_disc(f ⊗ (1_K/vol(K))) = m_K(f̂).

Proof: Observe that for the component π^S of a representation of G(A^S), we have

π^S(1_K) = ∫_{G(A^S)} 1_K(x) π^S(x⁻¹) dμ^S(x) = ∫_K π^S(x⁻¹) dμ^S(x),

which is vol(K) times the projection onto the K-fixed vectors, since ∫_K σ(x) dμ^S(x) = 0 for any non-trivial irreducible representation σ of K; hence tr π^S(1_K) = vol(K) · dim(π^S)^K. We then obtain

tr R_disc(f ⊗ 1_K/vol(K)) = (1/vol(K)) Σ_π m(π) tr π(f ⊗ 1_K) = (1/vol(K)) Σ_π m(π) tr π_S(f) tr π^S(1_K) = Σ_π m(π) tr π_S(f) dim(π^S)^K = Σ_σ m_K(σ) tr σ(f) = m_K(f̂),

where π runs over the unitary dual of G(A)¹ and σ over the unitary dual of G(F_S)¹.

We also record the following result, which connects the trace formulas for adelic groups and Lie groups.

Corollary 3.4. Let Γ_K = G(F) ∩ K for an open compact subgroup K of G(A_fin). We have tr R_disc(f ⊗ (1_K/vol(K))) = tr R_{Γ_K}(f) for all f ∈ C∞_c(G(F_∞)¹).

Proof: This follows from m_K(f̂) = m_{Γ_K}(f̂) (Lemma 3.2), m_{Γ_K}(f̂) = tr R_{Γ_K}(f), and Proposition 3.3.

3.2. Sauvageot's density theorems. We give a brief review of the results in [35]; see also [37] for an alternative approach and corrections. For an open compact subgroup K of G(A^S), we define a measure on the unitary dual of G(F_S)¹ by

ν_K(X) := (vol(K)/vol(G(Q)\G(A)¹)) · m_K(X)

for any bounded subset X, where m_K is the multiplicity defined in Section 3.1.
Let K₁ ⊃ K₂ ⊃ ⋯ be a sequence of open compact subgroups of G(A^S). Given a bounded subset X of the unitary dual of G(F_S)¹ and C ≥ 0, we write lim_{n→∞} ν_{K_n}(X) = C if for any ε > 0 there exists N = N(ε) > 0 such that |ν_{K_n}(X) − C| < ε for all n ≥ N. Let H(G(F_S)¹) be the complex algebra of smooth, compactly supported, bi-K_S-finite functions on G(F_S)¹.

Lemma 3.5 ([35]). For ε > 0 and any bounded X contained in the complement of the tempered dual of G(F_S)¹, there is Ψ ∈ H(G(F_S)¹) such that Ψ̂ ≥ 0 on the unitary dual of G(F_S)¹, ν(Ψ̂) < ε, and Ψ̂|_X ≥ 1.

Given a function f defined on the tempered dual of G(F_S)¹, we also denote by f the function on the full unitary dual obtained by extending by 0 on the untempered part.

Lemma 3.6 ([35]). For ε > 0 and any ν-integrable function f on the tempered dual of G(F_S)¹, there exist φ, ψ ∈ H(G(F_S)¹) such that |f(π) − φ̂(π)| ≤ ψ̂(π) and ν(ψ̂) < ε.

Here we state one of the main results of [35]; we also provide a proof for completeness.

Theorem 3.7 (Sauvageot). Suppose lim_{n→∞} ν_{K_n}(φ̂) = φ(1) for all φ ∈ H(G(F_S)¹). Then lim_{n→∞} ν_{K_n}(X) = ν(X) for all bounded subsets X of the unitary dual of G(F_S)¹.

Proof: First, we show that the contribution of the untempered part is negligible in the limit. For a bounded subset X₀ of the untempered part of the dual and ε > 0, let Ψ ∈ H(G(F_S)¹) satisfy Lemma 3.5 with respect to X₀. Since Ψ(1) = ν(Ψ̂) < ε by Plancherel inversion, we have

ν_{K_n}(X₀) ≤ ν_{K_n}(Ψ̂) ≤ |ν_{K_n}(Ψ̂) − Ψ(1)| + Ψ(1) < 2ε

for all n ≥ N₁ with some N₁ ≥ 0.

For the tempered part, we fix a bounded subset X₁ of the tempered dual and the same ε as above. Let φ, ψ ∈ H(G(F_S)¹) satisfy Lemma 3.6 with respect to the function f = 1_{X₁} on the tempered dual and ε. By assumption, we have |ν_{K_n}(φ̂) − φ(1)| < ε and |ν_{K_n}(ψ̂) − ψ(1)| < ε for all n ≥ N₂ with some N₂ ≥ 0. Hence, for n ≥ N₂, we obtain

|ν_{K_n}(X₁) − ν(X₁)| ≤ |ν_{K_n}(X₁) − ν_{K_n}(φ̂)| + |ν_{K_n}(φ̂) − φ(1)| + |φ(1) − ν(X₁)| ≤ |ν_{K_n}(φ̂) − φ(1)| + ν_{K_n}(ψ̂) + ψ(1) ≤ |ν_{K_n}(φ̂) − φ(1)| + |ν_{K_n}(ψ̂) − ψ(1)| + 2ψ(1) < 4ε.

Hence, for a bounded set X of the unitary dual of G(F_S)¹, let X = X₀ ⊔ X₁ be its decomposition into untempered and tempered parts. Since the Plancherel measure ν is supported on the tempered dual, ν(X) = ν(X₁), and we have

|ν_{K_n}(X) − ν(X)| ≤ ν_{K_n}(X₀) + |ν_{K_n}(X₁) − ν(X₁)| ≤ 2ε + 4ε = 6ε

for all n ≥ max{N₁, N₂}.

4. The von Neumann dimensions of direct integrals

4.1. The group von Neumann algebra and the trace. Let Γ be a countable group with the counting measure. Let {δ_γ}_{γ∈Γ} be the usual orthonormal basis of ℓ²(Γ). We also let λ and ρ be the left and right regular representations of Γ on ℓ²(Γ), respectively. For all γ, γ′ ∈ Γ, we have λ(γ′)δ_γ = δ_{γ′γ} and ρ(γ′)δ_γ = δ_{γγ′⁻¹}. Let L(Γ) be the strong operator closure of the complex linear span of the λ(γ) (or, equivalently, the ρ(γ)). This is the group von Neumann algebra of Γ. There is a canonical faithful normal tracial state τ_Γ, or simply τ, on L(Γ), given by

τ(x) = ⟨xδ_e, δ_e⟩_{ℓ²(Γ)},   x ∈ L(Γ).

Hence L(Γ) is a finite von Neumann algebra (which must be of type I or II₁). More generally, for a tracial von Neumann algebra M with trace τ, we consider the GNS representation of M on the Hilbert space constructed from the completion of M with respect to the inner product ⟨x, y⟩_τ = τ(xy*). The underlying space is denoted by L²(M, τ), or simply L²(M).
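The operators λ, ρ and the trace τ can be made concrete for a finite group, where L(Γ) is simply the group algebra acting on ℓ²(Γ). The following toy computation for Γ = Z/5Z is our own illustration, not from the paper:

```python
import numpy as np

n = 5  # Gamma = Z/5Z with identity e = 0

def lam(g):  # left regular representation: delta_h -> delta_{g+h}
    L = np.zeros((n, n))
    for h in range(n):
        L[(g + h) % n, h] = 1
    return L

def rho(g):  # right regular representation: delta_h -> delta_{h-g}
    R = np.zeros((n, n))
    for h in range(n):
        R[(h - g) % n, h] = 1
    return R

tau = lambda x: x[0, 0]  # tau(x) = <x delta_e, delta_e>

# lambda and rho commute, so each lies in the commutant of the other
assert all(np.allclose(lam(g) @ rho(h), rho(h) @ lam(g))
           for g in range(n) for h in range(n))
print([tau(lam(g)) for g in range(n)])  # 1.0 at g = e, else 0.0
```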
Consider a normal unital representation π : M → B(H) with both M and H separable. There exists an isometry u : H → L²(M) ⊗ ℓ²(N) which commutes with the actions of M, i.e., u ∘ π(x) = (λ(x) ⊗ id_{ℓ²(N)}) ∘ u for all x ∈ M, where λ : M → B(L²(M)) denotes the left action. Then p = uu* is a projection in B(L²(M) ⊗ ℓ²(N)) such that H ≅ p(L²(M) ⊗ ℓ²(N)). We have the following result (see [2] Proposition 8.2.3).

Proposition 4.1. The correspondence H ↦ p above defines a bijection between the set of equivalence classes of left M-modules and the set of equivalence classes of projections in (M′ ∩ B(L²(M))) ⊗ B(ℓ²(N)).

The von Neumann dimension of the M-module H is defined to be (τ ⊗ Tr)(p) and denoted by dim_M(H); it takes values in [0, ∞]. We have:

(1) dim_M(⊕_i H_i) = Σ_i dim_M(H_i).
(2) dim_M(L²(M)) = 1.

Note that dim_M(H) depends on the trace τ. If M is a finite factor, i.e., Z(M) ≅ C, there is a unique normal tracial state (see [25, 29]) and we further have:

(3) dim_M(H) = dim_M(H′) if and only if H and H′ are isomorphic as M-modules (provided M is a factor).

When M is not a factor, there is a Z(M)-valued trace which determines the isomorphism class of an M-module (see [8]). In the following sections, we consider the group von Neumann algebra L(Γ) with the canonical trace tr(x) = ⟨xδ_e, δ_e⟩; hence the von Neumann dimension over L(Γ) is the one uniquely determined by this trace. Recall that a discrete group Γ is called an infinite conjugacy class (ICC) group if every nontrivial conjugacy class C_γ = {gγg⁻¹ | g ∈ Γ}, γ ≠ e, is infinite. It is well known that L(Γ) is a II₁ factor if and only if Γ is a nontrivial ICC group.

Now we consider the case where Γ is a discrete subgroup of a locally compact unimodular type I group G. Let μ be a Haar measure on G. A measurable set D ⊂ G is called a fundamental domain for Γ if D satisfies μ(G \ ∪_{γ∈Γ} γD) = 0 and μ(γ₁D ∩ γ₂D) = 0 whenever γ₁ ≠ γ₂ in Γ. In this section, we always assume Γ is a lattice, i.e., μ(D) < ∞. The measure μ(D) is called the covolume of Γ and is denoted by covol(Γ); note that the covolume depends on the Haar measure μ (see Remark 4.3). There is a natural isomorphism L²(G) ≅ ℓ²(Γ) ⊗ L²(D, μ) given by φ ↦ Σ_{γ∈Γ} δ_γ ⊗ φ_γ with φ_γ(z) = φ(γ·z), where z ∈ D and γ ∈ Γ. The restriction λ_G|_Γ of the left regular representation to Γ is the tensor product of λ_Γ on ℓ²(Γ) and the identity operator on L²(D, μ). Hence we obtain the von Neumann algebra λ_G(Γ)″ ≅ L(Γ) ⊗ C = L(Γ), which will be denoted by M throughout this section. Note that L²(M) = ℓ²(Γ).

4.2. A theorem on von Neumann dimension. Suppose X is a measurable subset of Ĝ with Plancherel measure ν(X) < ∞. Define H_X = ∫^⊕_X H_π dν(π), the direct integral of the spaces H_π with π ∈ X. It is a module over G, over its lattice Γ, and also over the group von Neumann algebra L(Γ). We state a result on the von Neumann dimension of direct integrals; one may refer to [38] Section 4 for the proof.

Theorem 4.2. Let G be a locally compact unimodular type I group with Haar measure μ. Let ν be the Plancherel measure on the unitary dual Ĝ of G. Suppose Γ is a lattice in G and L(Γ) is the group von Neumann algebra of Γ. Let X ⊂ Ĝ be such that ν(X) < ∞ and H_X = ∫^⊕_X H_π dν(π). We have

dim_{L(Γ)}(H_X) = covol(Γ) · ν(X).

Remark 4.3. (1) If μ′ = k·μ is another Haar measure on G for some k > 0, the covolumes are related by covol′(Γ) = μ′(G/Γ) = k·μ(G/Γ) = k·covol(Γ). But the induced Plancherel measure is ν′ = k⁻¹·ν, so the two dependencies cancel in the formula above.
(2) There is a related approach by H. D. Petersen and A. Valette [31], who study von Neumann dimensions over locally compact groups. There, the group von Neumann algebra is equipped with a semifinite tracial weight instead of the tracial state available for a discrete group. It is motivated by the study of L²-Betti numbers of locally compact groups [30].

If π is an atom in Ĝ, i.e., ν({π}) > 0, then the irreducible representation π is a discrete series and ν({π}) is just the formal dimension d_π of π [15, 33]. Under this assumption, if G is a real Lie group that has discrete series and Γ is an ICC group, the theorem reduces to the special case of a single representation (see [23] Theorem 3.3.2):

dim_{L(Γ)}(H_π) = covol(Γ) · d_π.

This is motivated by the geometric construction of the discrete series of Lie groups by M. Atiyah and W. Schmid [7].
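As a worked consequence of Theorem 4.2 (assuming Γ′ ≤ Γ is a sublattice of finite index, so that covol(Γ′) = [Γ : Γ′] covol(Γ), while ν(X) does not depend on the lattice), the von Neumann dimension scales with the index. This is exactly the scaling used to pass from Lemma 1.2 to Equation (2) in the introduction:

```latex
\dim_{L(\Gamma')} H_X
  = \operatorname{covol}(\Gamma')\,\nu(X)
  = [\Gamma : \Gamma']\,\operatorname{covol}(\Gamma)\,\nu(X)
  = [\Gamma : \Gamma']\,\dim_{L(\Gamma)} H_X .
```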
4.3. The proof of the main theorem. We now prove the main theorem, first for a tower of uniform lattices.

Theorem 4.4 (a tower of uniform lattices). Let Γ₁ ⊵ Γ₂ ⊵ ⋯ be a normal tower of cocompact lattices in a semisimple real Lie group G such that ∩_{n≥1} Γ_n = {1}. For any bounded subset X of Ĝ, we have

lim_{n→∞} m(X, Γ_n) / dim_{LΓ_n} H_X = 1.

Proof: Recall that m_{Γ_K}(X) = vol(Γ_K\G(F_∞)) ν_K(X) by definition and dim_{LΓ_n} H_X = vol(Γ_K\G(F_∞)) ν(X) by Theorem 4.2. We need to show lim_{n→∞} ν_{K_n}(X) = ν(X), which, by Theorem 3.7, reduces to lim_{n→∞} ν_{K_n}(φ̂) = φ(1) for all φ ∈ C∞_c(G(F_∞)¹). From Proposition 3.3, we know

tr R_disc(φ ⊗ (1_K/vol(K))) = m_K(φ̂) = (vol(G(Q)\G(A)¹)/vol(K)) · ν_K(φ̂),

which is to say tr R_disc(φ ⊗ 1_K) = vol(G(Q)\G(A)¹) · ν_K(φ̂). By Proposition 2.4, we have lim_{n→∞} tr R_disc(φ ⊗ 1_{K_n}) = vol(G(Q)\G(A)¹) · φ(1). Hence lim_{n→∞} ν_{K_n}(φ̂) = φ(1).

For the non-uniform case, the distribution J(f) in Equation (5) is no longer the trace of R_disc(f), which is the main difficulty for most arithmetic subgroups. Fortunately, Finis-Lapid-Müller proved the following result on the limit of the spectral side of Equation (5) (see [20] Corollary 7.8).

Theorem 4.5 (Finis-Lapid-Müller). Suppose G = SL(n). Let {I_n} be a family of descending integral ideals in O_F prime to S and let K_n = K(I_n) be the compact subgroups of G(A^S) given by the I_n. We have

lim_{n→∞} J(h_S ⊗ 1_{K_n}) = lim_{n→∞} tr R_disc(h_S ⊗ 1_{K_n})

for any h_S ∈ C∞_c(G(F_S)¹).

Then we are able to prove:
Corollary 4.6 (principal congruence subgroups in SL(n, R)). Let Γ₁ ⊃ Γ₂ ⊃ ⋯ be a tower of principal congruence subgroups in G = SL(n, R). For any bounded subset X of Ĝ, we have

lim_{n→∞} m(X, Γ_n) / dim_{LΓ_n} H_X = 1.

Proof: As shown in the proof of Theorem 4.4, it suffices to prove lim_{n→∞} ν_{K_n}(φ̂) = φ(1) for all φ ∈ C∞_c(G(F_∞)¹). By Proposition 3.3, Theorem 4.5, and Corollary 2.8, we know

lim_{n→∞} vol(G(Q)\G(A)¹) · ν_{K_n}(φ̂) = lim_{n→∞} tr R_disc(φ ⊗ 1_{K_n}) = lim_{n→∞} J(φ ⊗ 1_{K_n}) = vol(G(Q)\G(A)¹) · φ(1).

Hence lim_{n→∞} ν_{K_n}(φ̂) = φ(1).

References

[1] M. Abert, N. Bergeron, I. Biringer, T. Gelander, N. Nikolov, J. Raimbault, and I. Samet. On the growth of L²-invariants for sequences of lattices in Lie groups. Ann. of Math. (2), 185(3):711-790, 2017.
[2] C. Anantharaman and S. Popa. An introduction to II₁ factors. Preprint, 2017.
[3] J. Arthur. A trace formula for reductive groups. I. Terms associated to classes in G(Q). Duke Math. J., 45(4):911-952, 1978.
[4] J. Arthur. A trace formula for reductive groups. II. Applications of a truncation operator. Compositio Math., 40(1):87-121, 1980.
[5] J. Arthur. A measure on the unipotent variety. Canad. J. Math., 37(6):1237-1274, 1985.
[6] J. Arthur. An introduction to the trace formula. In Harmonic analysis, the trace formula, and Shimura varieties, volume 4 of Clay Math. Proc., pages 1-263. Amer. Math. Soc., Providence, RI, 2005.
[7] M. Atiyah and W. Schmid. A geometric construction of the discrete series for semisimple Lie groups. Invent. Math., 42:1-62, 1977.
[8] B. Bekka. Square integrable representations, von Neumann algebras and an application to Gabor analysis. J. Fourier Anal. Appl., 10(4):325-349, 2004.
[9] A. Borel and H. Garland. Laplacian and the discrete spectrum of an arithmetic group. Amer. J. Math., 105(2):309-335, 1983.
[10] L. Corwin. The Plancherel measure in nilpotent Lie groups as a limit of point measures. Math. Z., 155(2):151-162, 1977.
[11] D. L. deGeorge and N. R. Wallach. Limit formulas for multiplicities in L²(Γ\G). Ann. of Math. (2), 107(1):133-150, 1978.
[12] D. L. DeGeorge and N. R. Wallach. Limit formulas for multiplicities in L²(Γ\G). II. The tempered spectrum. Ann. of Math. (2), 109(3):477-495, 1979.
[13] P. Delorme. Formules limites et formules asymptotiques pour les multiplicités dans L²(G/Γ). Duke Math. J., 53(3):691-731, 1986.
[14] F. Diamond and J. Shurman. A first course in modular forms, volume 228 of Graduate Texts in Mathematics. Springer-Verlag, New York, 2005.
[15] J. Dixmier. C*-algebras. North-Holland Mathematical Library, Vol. 15. North-Holland Publishing Co., Amsterdam-New York-Oxford, 1977. Translated from the French by Francis Jellett.
[16] T. Finis and E. Lapid. On the spectral side of Arthur's trace formula: combinatorial setup. Ann. of Math. (2), 174(1):197-223, 2011.
[17] T. Finis and E. Lapid. An approximation principle for congruence subgroups II: application to the limit multiplicity problem. Math. Z., 289(3-4):1357-1380, 2018.
[18] T. Finis, E. Lapid, and W. Müller. On the spectral side of Arthur's trace formula: absolute convergence. Ann. of Math. (2), 174(1):173-195, 2011.
[19] T. Finis, E. Lapid, and W. Müller. On the spectral side of Arthur's trace formula: absolute convergence. Ann. of Math. (2), 174(1):173-195, 2011.
[20] T. Finis, E. Lapid, and W. Müller. Limit multiplicities for principal congruence subgroups of GL(n) and SL(n). J. Inst. Math. Jussieu, 14(3):589-638, 2015.
[21] S. Gelbart. Lectures on the Arthur-Selberg trace formula, volume 9 of University Lecture Series. American Mathematical Society, Providence, RI, 1996.
[22] S. S. Gelbart. Automorphic forms on adèle groups. Annals of Mathematics Studies, No. 83. Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1975.
[23] F. M. Goodman, P. de la Harpe, and V. F. R. Jones. Coxeter graphs and towers of algebras, volume 14 of Mathematical Sciences Research Institute Publications. Springer-Verlag, New York, 1989.
[24] L. Ji. The trace class conjecture for arithmetic groups. J. Differential Geom., 48(1):165-203, 1998.
[25] V. F. R. Jones. Index for subfactors. Invent. Math., 72(1):1-25, 1983.
[26] A. W. Knapp. Representation theory of semisimple groups: An overview based on examples, volume 36 of Princeton Mathematical Series. Princeton University Press, Princeton, NJ, 1986.
[27] A. W. Knapp. Introduction to the Langlands program. In Representation theory and automorphic forms (Edinburgh, 1996), volume 61 of Proc. Sympos. Pure Math., pages 245-302. Amer. Math. Soc., Providence, RI, 1997.
[28] A. W. Knapp. Theoretical aspects of the trace formula for GL(2). In Representation theory and automorphic forms (Edinburgh, 1996), volume 61 of Proc. Sympos. Pure Math., pages 355-405. Amer. Math. Soc., Providence, RI, 1997.
[29] F. J. Murray and J. von Neumann. On rings of operators. Ann. of Math. (2), 37(1):116-229, 1936.
[30] H. D. Petersen. L²-Betti numbers of locally compact groups. C. R. Math. Acad. Sci. Paris, 351(9-10):339-342, 2013.
[31] H. D. Petersen and A. Valette. L²-Betti numbers and Plancherel measure. J. Funct. Anal., 266(5):3156-3169, 2014.
[32] V. Platonov and A. Rapinchuk. Algebraic groups and number theory, volume 139 of Pure and Applied Mathematics. Academic Press, Inc., Boston, MA, 1994. Translated from the 1991 Russian original by Rachel Rowen.
[33] A. Robert. Introduction to the representation theory of compact and locally compact groups, volume 80 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge-New York, 1983.
[34] J. Rohlfs and B. Speh. On limit multiplicities of representations with cohomology in the cuspidal spectrum. Duke Math. J., 55(1):199-211, 1987.
[35] F. Sauvageot. Principe de densité pour les groupes réductifs. Compositio Math., 108(2):151-184, 1997.
[36] G. Savin. Limit multiplicities of cusp forms. Invent. Math., 95(1):149-159, 1989.
[37] S. W. Shin. Automorphic Plancherel density theorem. Israel J. Math., 192(1):83-120, 2012.
[38] J. Yang. Plancherel measures of reductive adelic groups and von Neumann dimensions. Preprint, arXiv:2203.07974, 2022.
Graph Mover's Distance: An Efficiently Computable Distance Measure for Geometric Graphs

Sushovan Majhi
Many applications in pattern recognition represent patterns as a geometric graph. The geometric graph distance (GGD) has recently been studied in [13] as a meaningful measure of similarity between two geometric graphs. Since computing the GGD is known to be NP-hard, the distance measure proves an impractical choice for applications. As a computationally tractable alternative, we propose in this paper the graph mover's distance (GMD), which is formulated as an instance of the earth mover's distance. The computation of the GMD between two geometric graphs with at most n vertices takes only O(n³) time. Alongside studying the metric properties of the GMD, we investigate the stability of the GGD and GMD. The GMD also demonstrates extremely promising empirical evidence at recognizing letter drawings from the LETTER dataset [18].
arXiv:2306.02133 (3 Jun 2023)
Introduction

Graphs have been a widely accepted object for providing structural representations of patterns involving relational properties. While hierarchical patterns are commonly reduced to a string [7] or a tree representation [6], non-hierarchical patterns generally require a graph representation. The problem of pattern recognition in such a representation then requires quantifying the (dis-)similarity between a query graph and a model or prototype graph. Defining a relevant distance measure for a class of graphs has been studied for almost five decades now and has a myriad of applications, including chemical structure matching [21], fingerprint matching [16], face identification [11], and symbol recognition [12]. Depending on the class of graphs of interest and the area of application, several methods have been proposed. Graph isomorphisms [5] or subgraph isomorphisms can be considered. These, however, cannot cope with (sometimes minor) local and structural deformations of the two graphs. To address this issue, several alternative distance measures have been studied; we particularly mention the edit distance [20, 9] and the inexact matching distance [3]. Although these distance measures have been battle-proven for attributed graphs (i.e., combinatorial graphs with finite label sets), the formulations seem inadequate for providing meaningful similarity measures for geometric graphs.

A geometric graph belongs to a special class of attributed graphs having an embedding into a Euclidean space R^d, where the vertex labels are inferred from the Euclidean locations of the vertices and the edge labels are the Euclidean lengths of the edges. In the last decade, there has been a gain in practical applications involving the comparison of geometric graphs, such as road-network or map comparison [1] and the detection of chemical structures using their spatial bonding geometry. In addition, large datasets like [18] are being curated by the pattern recognition and machine learning communities.

Related Work and Our Contribution

We are inspired by the recently developed geometric graph distance (GGD) [4, 13]. Although the GGD succeeds as a relevant distance measure for geometric graphs, its computation, unfortunately, is known to be NP-hard. Our motivation stems from applications that demand an efficiently computable measure of similarity for geometric graphs. The formulation of our graph mover's distance is based on the theoretical underpinning of the GGD.
The GMD provides a meaningful yet computationally efficient similarity measure between two geometric graphs. In Section 2, we revisit the definition of the GGD and investigate its stability under Hausdorff perturbation. Section 3 is devoted to the study of the GMD, which is shown to render a pseudo-metric on the class of (ordered) geometric graphs. Finally, we apply the GMD to classify letter drawings in Section 4. Our experiment involves matching each of 2250 test drawings, modeled as geometric graphs, to 15 prototype letters from the English alphabet. For the drawings with LOW distortion, the correct letter is found among the top 3 matches at a rate of 98.93%, where the benchmark accuracy is 99.6%, obtained using a k-nearest neighbor classifier (k-NN) with the graph edit distance [3].

Geometric Graph Distance (GGD)

We first formally define a geometric graph. Throughout the paper, the dimension of the ambient Euclidean space is denoted by d ≥ 1. We also assume that the cost coefficients C_V and C_E are positive constants.

Definition 2.1 (Geometric Graph). A geometric graph of R^d is a (finite) combinatorial graph G = (V_G, E_G) with vertex set V_G ⊂ R^d, such that the Euclidean straight-line segments {ab | (a, b) ∈ E_G} intersect (possibly) only at their endpoints.

We denote the set of all geometric graphs of R^d by G(R^d). Two geometric graphs G = (V_G, E_G) and H = (V_H, E_H) are said to be equal, written G = H, if and only if V_G = V_H and E_G = E_H. We make no distinction between a geometric graph G = (V_G, E_G) and its geometric realization as a subset of R^d; an edge (u, v) ∈ E_G can be identified with the line segment uv in R^d, and its length with the Euclidean length |uv|.

Following the style of [13], we now revisit the definition of the GGD. The definition uses the notion of an inexact matching. In order to denote a deleted vertex and a deleted edge, we introduce the dummy vertex ε_V and the dummy edge ε_E, respectively.

Definition 2.2 (Inexact Matching). Let G, H ∈ G(R^d) be two geometric graphs. A relation π ⊆ (V_G ∪ {ε_V}) × (V_H ∪ {ε_V}) is called an (inexact) matching if for any u ∈ V_G (resp. v ∈ V_H) there is exactly one v ∈ V_H ∪ {ε_V} (resp. u ∈ V_G ∪ {ε_V}) such that (u, v) ∈ π. The set of all matchings between the graphs G, H is denoted by Π(G, H).

Intuitively, a matching π is a relation that covers the vertex sets V_G, V_H exactly once. As a result, when restricted to V_G (resp. V_H), a matching π can be expressed as a map π : V_G → V_H ∪ {ε_V} (resp. π⁻¹ : V_H → V_G ∪ {ε_V}). In other words, when (u, v) ∈ π and u ≠ ε_V (resp. v ≠ ε_V), it is justified to write π(u) = v (resp. π⁻¹(v) = u). It is evident from the definition that the induced map π : {u ∈ V_G | π(u) ≠ ε_V} → {v ∈ V_H | π⁻¹(v) ≠ ε_V} is a bijection. For edges e = (u₁, u₂) ∈ E_G and f = (v₁, v₂) ∈ E_H, we introduce the shorthand π(e) := (π(u₁), π(u₂)) and π⁻¹(f) := (π⁻¹(v₁), π⁻¹(v₂)).

Another perspective on π is to view it as a matching between portions of G and H, (possibly) after applying some edits to the two graphs. For example, π(u) = ε_V (resp. π⁻¹(v) = ε_V) encodes the deletion of the vertex u from G (resp. v from H), whereas π(e) = ε_E (resp. π⁻¹(f) = ε_E) encodes the deletion of the edge e from G (resp. f from H). Once these deletion operations have been performed on the graphs, the resulting subgraphs of G and H become isomorphic, and they are finally matched by translating each remaining vertex u to π(u). The cost of the matching π is defined as the total cost of all of these operations:
Definition 2.3 (Cost of a Matching). Let G, H ∈ G(R^d) be geometric graphs and π ∈ Π(G, H) an inexact matching. The cost of π is

(1)   Cost(π) = C_V Σ_{u∈V_G, π(u)≠ε_V} |u − π(u)|   [vertex translations]
            + C_E Σ_{e∈E_G, π(e)≠ε_E} | |e| − |π(e)| |   [edge translations]
            + C_E Σ_{e∈E_G, π(e)=ε_E} |e| + C_E Σ_{f∈E_H, π⁻¹(f)=ε_E} |f|   [edge deletions].

Definition 2.4 (GGD). For geometric graphs G, H ∈ G(R^d), their geometric graph distance is

GGD(G, H) := min_{π∈Π(G,H)} Cost(π).

Stability of GGD

A distance measure is said to be stable if it does not change much when the inputs are perturbed only slightly. Usually, the change is expected to be bounded above by the amount of perturbation inflicted on the inputs, where the perturbation is measured by a suitable choice of metric. In the context of geometric graphs, it is natural to wonder whether the GGD is stable under the Hausdorff distance between two graphs. To our disappointment, we can easily see for the graphs shown in Fig. 1 that the GGD is positive, whereas the Hausdorff distance between their realizations is zero. So, the Hausdorff distance between the graphs cannot bound their GGD from above.

Figure 1: The graphs G (top) and H (bottom) are embedded in the real line; the distance between consecutive ticks is 1 unit. The Hausdorff distance between G and H is zero; however, GGD(G, H) = C_V + C_E is non-zero. The optimal matching is given by π(u₁) = v₁, π(u₂) = v₂, and π(u₃) = ε_V.

One might think that the GGD is stable when the Hausdorff distance only between the vertices is considered. However, the graphs in Fig. 2 indicate otherwise.

Figure 2: For the graphs G, H ∈ G(R²), the Hausdorff distance between the vertex sets is zero; however, GGD(G, H) = 4C_E is non-zero. The optimal matching is given by π(u₁) = v₁, π(u₃) = v₃, π(u₂) = ε_V, and π⁻¹(v₂) = ε_V.

Under strong requirements, however, it is not difficult to prove the following stability result for the GGD under the Hausdorff distance.

Theorem 1 (Hausdorff Stability of GGD). Let G, H ∈ G(R^d) be geometric graphs with a graph isomorphism π : V_G → V_H. If δ > 0 is such that |u − π(u)| ≤ δ for all u ∈ V_G, then GGD(G, H) ≤ C_V |V_G| δ.

Proof. The given graph isomorphism π is a bijective mapping between the vertices of G and H, so π ∈ Π(G, H), i.e., it defines an inexact matching. Since π is a graph isomorphism, it does not delete any vertex or edge. More formally, for all u ∈ V_G and v ∈ V_H we have π(u) ≠ ε_V and π⁻¹(v) ≠ ε_V, respectively; also, for all e ∈ E_G and f ∈ E_H we have π(e) ≠ ε_E and π⁻¹(f) ≠ ε_E, respectively. From (1), the cost is Cost(π) = Σ_{u∈V_G} C_V |u − π(u)| ≤ C_V |V_G| δ. So, GGD(G, H) ≤ Cost(π) ≤ C_V |V_G| δ.
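The cost function of Definition 2.3 is straightforward to evaluate for a given matching. The following sketch is our own illustration of Equation (1), not code from the paper: a matching is a dict from the vertex ids of G to those of H, with None playing the role of ε_V, and π(e) = ε_E exactly when the image of e is not an edge of H.

```python
import numpy as np

def edge_len(V, e):
    return float(np.linalg.norm(np.asarray(V[e[0]]) - np.asarray(V[e[1]])))

def matching_cost(VG, EG, VH, EH, pi, CV=1.0, CE=1.0):
    # vertex translations
    c = sum(CV * float(np.linalg.norm(np.asarray(VG[u]) - np.asarray(VH[pi[u]])))
            for u in VG if pi[u] is not None)
    EH_set = {frozenset(f) for f in EH}
    matched = set()
    for e in EG:
        a, b = pi[e[0]], pi[e[1]]
        if a is not None and b is not None and frozenset((a, b)) in EH_set:
            c += CE * abs(edge_len(VG, e) - edge_len(VH, (a, b)))  # translation
            matched.add(frozenset((a, b)))
        else:
            c += CE * edge_len(VG, e)  # edge deleted from G
    for f in EH:  # edges of H that are not matched are deleted from H
        if frozenset(f) not in matched:
            c += CE * edge_len(VH, f)
    return c

# Example: G a unit segment, H the same segment with one endpoint moved
VG, EG = {1: (0, 0), 2: (1, 0)}, [(1, 2)]
VH, EH = {1: (0, 0), 2: (2, 0)}, [(1, 2)]
print(matching_cost(VG, EG, VH, EH, {1: 1, 2: 2}))  # C_V*1 + C_E*|1-2| = 2.0
```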
Graph Mover's Distance (GMD)

We define the graph mover's distance for two ordered geometric graphs. A geometric graph is called ordered if its vertices are ordered, or indexed; in that case, we denote the vertex set as a (finite) sequence V_G = {u_i}^m_{i=1}. We denote by G_O(R^d) the set of all ordered geometric graphs of R^d. The formulation of the GMD uses the framework known as the earth mover's distance (EMD).

Earth Mover's Distance (EMD)

The EMD is a well-studied distance measure between weighted point sets, with many successful applications in a variety of domains; for example, see [8, 10, 17, 19]. The idea of the EMD was first conceived by Monge [14] in 1781, in the context of transportation theory. The name "earth mover's distance" was coined only recently, and it is well justified by the following analogy. The first weighted point set can be thought of as piles of earth (dirt) lying on the point sites, with the weight of a site indicating the amount of earth, whereas the other point set is viewed as pits of volumes given by the corresponding weights. Given that the total amount of earth in the piles equals the total volume of the pits, the EMD computes the least (cumulative) cost needed to fill all the pits with earth. Here, a unit of cost corresponds to moving a unit of earth by a unit of "ground distance" between the pile and the pit. The EMD can be cast as a transportation problem on a bipartite graph, which has several efficient implementations, e.g., the network simplex algorithm [2, 15].

Let the weighted point sets P = {(p_i, w_{p_i})}^m_{i=1} and Q = {(q_j, w_{q_j})}^n_{j=1} be a set of suppliers and a set of consumers, respectively. The weight w_{p_i} denotes the total supply of the supplier p_i, and w_{q_j} the total demand of the consumer q_j. The matrix [d_{i,j}] is the matrix of ground distances, where d_{i,j} denotes the cost of transporting a unit of supply from p_i to q_j. We also assume the feasibility condition that the total supply equals the total demand:

(2)   Σ^m_{i=1} w_{p_i} = Σ^n_{j=1} w_{q_j}.

A flow of supply is given by a matrix [f_{i,j}], with f_{i,j} denoting the units of supply transported from p_i to q_j. We want to find a flow that minimizes the overall cost

Σ^m_{i=1} Σ^n_{j=1} f_{i,j} d_{i,j}

subject to:

(3)   f_{i,j} ≥ 0 for any i = 1, …, m and j = 1, …, n;
(4)   Σ^n_{j=1} f_{i,j} = w_{p_i} for any i = 1, …, m;
(5)   Σ^m_{i=1} f_{i,j} = w_{q_j} for any j = 1, …, n.

Constraint (3) ensures a flow of units from P to Q, and not vice versa; constraint (4) dictates that a supplier must send all its supply, no more and no less; constraint (5) guarantees that the demand of every consumer is exactly fulfilled. The earth mover's distance (EMD) is then defined as the cost of the optimal flow. A solution always exists, provided condition (2) is satisfied. The weights and the ground distances can be chosen to be any non-negative numbers; however, we choose them appropriately in order to solve our graph matching problem.

Defining the GMD

Let G, H ∈ G_O(R^d) be two ordered geometric graphs of R^d with V_G = {u_i}^m_{i=1} and V_H = {v_j}^n_{j=1}. For each i = 1, …, m, let E^G_i denote the (row) m-vector containing the lengths of the (ordered) edges incident to the vertex u_i of G. More precisely, the k-th element of E^G_i is |e^G_{i,k}| if e^G_{i,k} := (u_i, u_k) ∈ E_G, and 0 otherwise. Similarly, for each j = 1, …, n, we define E^H_j to be the (row) n-vector whose k-th element is |e^H_{j,k}| if e^H_{j,k} := (v_j, v_k) ∈ E_H, and 0 otherwise.

In order to formulate the desired instance of the EMD, we take the point sets to be P = {u_i}^{m+1}_{i=1} and Q = {v_j}^{n+1}_{j=1}. Here, u_{m+1} and v_{n+1} are a dummy supplier and a dummy consumer, respectively, introduced to incorporate vertex deletion into the GMD framework. The weights on the sites are defined as follows: w_{u_i} = 1 for i = 1, …, m and w_{u_{m+1}} = n; and w_{v_j} = 1 for j = 1, …, n and w_{v_{n+1}} = m. We note that the feasibility condition (2) is satisfied: m + n is the total weight for both P and Q. An instance of the transportation problem is depicted in Fig. 3.

Figure 3: The bipartite network used by the GMD is shown for two ordered graphs G, H with vertex sets V_G = {u₁, u₂, u₃} and V_H = {v₁, v₂}, respectively. The dummy nodes u₄ for G and v₃ for H are shown in gray. Below each node, the corresponding weight is shown. A particular flow is also depicted: the gray edges do not transport anything, while a red edge has a non-zero flow with the transported units shown on it.

Finally, the ground distance from u_i to v_j is defined by:

d_{i,j} = C_V |u_i − v_j| + C_E ‖E^G_i D_{m×p} − E^H_j D_{n×p}‖₁,   if 1 ≤ i ≤ m and 1 ≤ j ≤ n;
d_{i,j} = C_E ‖E^H_j‖₁,   if i = m + 1 and 1 ≤ j ≤ n;
d_{i,j} = C_E ‖E^G_i‖₁,   if 1 ≤ i ≤ m and j = n + 1;
d_{i,j} = 0,   otherwise.

Here, p = min{m, n}, ‖·‖₁ denotes the 1-norm of a row vector, and D_{m×p} (resp. D_{n×p}) denotes the m×p (resp. n×p) diagonal matrix with all diagonal entries equal to 1.
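The construction above is a plain transportation problem. The following sketch is our own illustration, not the paper's implementation: it assembles the ground distances d_{i,j} and solves the LP with scipy's linprog (the authors' code uses the network simplex from networkx instead); all function names are ours.

```python
import numpy as np
from scipy.optimize import linprog

def length_vectors(V, E):
    # Row i holds |u_i u_k| for each edge (i, k) of the graph, 0 otherwise
    A = np.zeros((len(V), len(V)))
    for i, k in E:
        A[i, k] = A[k, i] = np.linalg.norm(V[i] - V[k])
    return A

def gmd(VG, EG, VH, EH, CV=1.0, CE=1.0):
    VG, VH = np.asarray(VG, float), np.asarray(VH, float)
    m, n = len(VG), len(VH)
    p = min(m, n)
    AG, AH = length_vectors(VG, EG), length_vectors(VH, EH)
    # Ground distances on (m+1) suppliers x (n+1) consumers;
    # index m (resp. n) is the dummy supplier (resp. consumer).
    d = np.zeros((m + 1, n + 1))
    for i in range(m):
        for j in range(n):
            d[i, j] = CV * np.linalg.norm(VG[i] - VH[j]) \
                      + CE * np.abs(AG[i, :p] - AH[j, :p]).sum()
        d[i, n] = CE * AG[i].sum()   # u_i sent to the dummy consumer
    for j in range(n):
        d[m, j] = CE * AH[j].sum()   # v_j served by the dummy supplier
    supply = np.r_[np.ones(m), [n]]  # w_{u_i} = 1, dummy supplies n
    demand = np.r_[np.ones(n), [m]]  # w_{v_j} = 1, dummy demands m
    # Transportation LP over the flow matrix f (flattened row-major)
    N = (m + 1) * (n + 1)
    A_eq = np.zeros((m + n + 2, N))
    for i in range(m + 1):
        A_eq[i, i * (n + 1):(i + 1) * (n + 1)] = 1  # row sums = supplies
    for j in range(n + 1):
        A_eq[m + 1 + j, j::n + 1] = 1               # column sums = demands
    res = linprog(d.ravel(), A_eq=A_eq, b_eq=np.r_[supply, demand],
                  bounds=(0, None))
    return res.fun

# Two ordered unit segments, one shifted vertically by 1: GMD = 2*C_V
print(gmd([(0, 0), (1, 0)], [(0, 1)], [(0, 1), (1, 1)], [(0, 1)]))
```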
Metric Properties
We can see that the GMD induces a pseudo-metric on the space of ordered geometric graphs $\mathcal{G}_O(\mathbb{R}^d)$. Non-negativity, symmetry, and the triangle inequality follow from those of the cost matrix $[d_{i,j}]$ defined in the GMD. In addition, we note that $G = H$ (as ordered graphs) implies that $d_{i,j} = 0$ whenever $i = j$. The trivial flow, where each $u_i$ sends its full supply to $v_i$, has zero cost. So, $\mathrm{GMD}(G, H) = 0$.
The GMD does not, however, satisfy the separability condition on $\mathcal{G}_O(\mathbb{R}^d)$. For the graphs G, H shown in Fig. 4, we have $\mathrm{GMD}(G, H) = 0$. We note that G, H have the following adjacency length matrices $[E^G_i]_i$ and $[E^H_j]_j$, respectively:
$$\begin{pmatrix} 0 & 0 & 0 & 2 & \sqrt{2} \\ 0 & 0 & 2 & 0 & \sqrt{2} \\ 0 & 2 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 \\ \sqrt{2} & \sqrt{2} & 0 & 0 & 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & 0 & 2 & 0 & \sqrt{2} \\ 0 & 0 & 0 & 2 & \sqrt{2} \\ 2 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ \sqrt{2} & \sqrt{2} & 0 & 0 & 0 \end{pmatrix}.$$
It can be easily checked that the flow that transports a unit of supply from $u_1 \to v_2$, $u_2 \to v_1$, $u_3 \to v_4$, $u_4 \to v_3$, $u_5 \to v_5$, and five units from $u_6 \to v_6$ has total cost zero. So, $\mathrm{GMD}(G, H) = 0$. However, the graphs G and H are not the same geometric graph. The fact that $\mathrm{GGD}(G, H) \neq 0$ while $\mathrm{GMD}(G, H) = 0$ also implies that the GGD is not stable under the GMD. One can easily find even simpler configurations of two distinct geometric graphs with a zero GMD if the graphs are allowed to have multiple connected components.
Figure 4: For the geometric graphs $G, H \in \mathcal{G}_O(\mathbb{R}^2)$ shown, the GMD is zero. The optimal flow is given by the matching $\pi(u_1) = v_2$, $\pi(u_2) = v_1$, $\pi(u_3) = v_4$, $\pi(u_4) = v_3$, and $\pi(u_5) = v_5$.
We conclude this section by stating a stability result for the GMD under the Hausdorff distance. We omit the proof, since it uses an argument similar to the one presented in Theorem 1.
Theorem 2 (Hausdorff Stability of GMD). Let $G, H \in \mathcal{G}_O(\mathbb{R}^d)$ be ordered geometric graphs with a bijection $\pi : V_G \to V_H$ such that $e^G_{i,j} = e^H_{\pi(i),\pi(j)}$ for all i, j. If $\delta > 0$ is such that $|u_i - \pi(u_i)| \le \delta$ for all $u_i \in V_G$, then $\mathrm{GMD}(G, H) \le C_V |V_G|\,\delta$.
Computing the GMD
As pointed out earlier, the GMD can be computed as an instance of the transportation problem, using, for example, the network simplex algorithm. If the graphs have at most n vertices, computing the ground cost matrix $[d_{i,j}]$ takes $O(n^3)$ time. Since the bipartite network has $O(n)$ vertices and $O(n^2)$ edges, the simplex algorithm runs with a time complexity of $O(n^3)$, with a pretty good constant. Overall, the time complexity of the GMD is $O(n^3)$.
Experimental Results
We have implemented the GMD in Python, using the network simplex algorithm from the networkx package. We ran a pattern retrieval experiment on letter drawings from the IAM Graph Database [18]. The repository provides an extensive collection of graphs, both geometric and labeled. In particular, we performed our experiment on the LETTER database from the repository. The graphs in the database represent distorted letter drawings. The database considers only 15 uppercase letters from the English alphabet: A, E, F, H, I, K, L, M, N, T, V, W, X, Y, and Z. For each letter, a prototype line drawing has been manually constructed. Distortions of three different levels of strength (LOW, MED, and HIGH) are applied to the prototypes, producing 2250 letter graphs for each level. Each test letter drawing is a graph with straight-line edges; each node is labeled with its two-dimensional coordinates. Since some of the graphs in the dataset were not embedded, we had to compute the intersections of the intersecting edges and label them as nodes. This preprocessing guaranteed that all the considered graphs were geometric; a prototype and a distorted graph are shown in Fig. 5.
Figure 5: The prototype geometric graph of the letter A is shown on the left. On the right, a (MED) distorted letter A is shown.
We devised a classifier for these letter drawings using the GMD. For this application, we chose $C_V = 4.5$ and $C_E = 1$. For a test letter, we computed its GMD from the 15 prototypes, then sorted the prototypes in increasing order of their distance to the test graph. We then checked whether the letter generating the test graph is among the first k prototypes. For each level of distortion and various values of k, we report the rate at which the correct letter has been found in the first k models.
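The retrieval step itself is straightforward to express. The sketch below is our own illustration, with hypothetical names topk_hit and gmd_fn; it ranks the prototypes by GMD and tests membership of the true letter among the k nearest.

```python
import numpy as np

# Sketch (ours) of the retrieval experiment: rank the 15 prototypes by
# GMD to a test drawing and check whether the true letter is among the
# k nearest prototypes. `gmd_fn` computes the GMD of two graphs.
def topk_hit(test_graph, prototypes, true_letter, k, gmd_fn):
    letters = sorted(prototypes)                      # e.g. ['A', 'E', ...]
    dists = [gmd_fn(test_graph, prototypes[l]) for l in letters]
    ranked = [letters[i] for i in np.argsort(dists)]
    return true_letter in ranked[:k]
```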
The summary of the empirical results is shown in Table 1. Although the graph edit distance based k-NN classifier still outperforms the GMD by a very small margin, our results have been extremely satisfactory. One possible reason why the GMD might fail to correctly classify some of the graphs is that it lacks the separability property as a metric.

Table 1: Empirical results on the LETTER dataset (correct letter in the first k models, %).

Distortion   k = 1     k = 3     k = 5
LOW          96.66%    98.93%    99.37%
MED          66.66%    85.37%    91.15%
HIGH         73.73%    90.48%    95.51%

Discussions
We have successfully introduced an efficiently computable and meaningful similarity measure for geometric graphs. However, the GMD lacks some of the desirable properties, like separability and stability. The currently presented stability results for the GGD and GMD have a factor that depends on the size of the input graphs. The question remains whether the distance measures are in fact stable under much weaker conditions, possibly with constant factors on the right-hand side. It will also be interesting to study the exact class of geometric graphs for which the GMD is, in fact, a metric.

References
[1] M. Ahmed, S. Karagiorgou, D. Pfoser, and C. Wenk. Map Construction Algorithms. Springer International Publishing, first edition, 2015.
[2] R. Ahuja, T. Magnanti, and J. Orlin. Network Flows: Theory, Algorithms, and Applications. Pearson, 2013.
[3] H. Bunke and G. Allermann. Inexact graph matching for structural pattern recognition. Pattern Recognition Letters, 1(4):245-253, May 1983.
[4] O. Cheong, J. Gudmundsson, H.-S. Kim, D. Schymura, and F. Stehn. Measuring the Similarity of Geometric Graphs. In J. Vahrenhold, editor, Experimental Algorithms, volume 5526, pages 101-112. Springer, 2009.
[5] D. G. Corneil and C. C. Gotlieb. An efficient algorithm for graph isomorphism. Journal of the ACM, 17(1):51-64, 1970.
[6] K.-S. Fu and B. Bhargava. Tree systems for syntactic pattern recognition. IEEE Transactions on Computers, C-22(12):1087-1099, 1973.
[7] K.-S. Fu and P. Swain. On syntactic pattern recognition. In J. T. Tou, editor, Computer and Information Sciences - 1969, volume 2 of SEN Report Series Software Engineering, pages 155-182. Elsevier, 1971.
[8] C. J. Hargreaves, M. S. Dyer, M. W. Gaultois, V. A. Kurlin, and M. J. Rosseinsky. The Earth Mover's Distance as a Metric for the Space of Inorganic Compositions. Chemistry of Materials, 32(24):10610-10620, Dec. 2020.
[9] D. Justice and A. Hero. A binary linear programming formulation of the graph edit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(8):1200-1214, Aug. 2006.
[10] M. Kusner, Y. Sun, N. Kolkin, and K. Weinberger. From Word Embeddings To Document Distances. In Proceedings of the 32nd International Conference on Machine Learning, pages 957-966. PMLR, June 2015.
[11] J. Liu and Y. T. Lee. Graph-based method for face identification from a single 2d line drawing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10):1106-1119, 2001.
[12] J. Llados, E. Marti, and J. Villanueva. Symbol recognition by error-tolerant subgraph matching between region adjacency graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(10):1137-1143, 2001.
[13] S. Majhi and C. Wenk. Distance measures for geometric graphs. arXiv preprint arXiv:2209.12869, 2022.
[14] G. Monge. Mémoire sur la théorie des déblais et des remblais. Imprimerie royale, 1781.
[15] O. Pele and M. Werman. A Linear Time Histogram Metric for Improved SIFT Matching. In D. Forsyth, P. Torr, and A. Zisserman, editors, Computer Vision - ECCV 2008, Lecture Notes in Computer Science, pages 495-508. Springer, 2008.
[16] J. W. Raymond and P. Willett. Effectiveness of graph-based and fingerprint-based similarity measures for virtual screening of 2D chemical structure databases. Journal of Computer-Aided Molecular Design, 16(1):59-71, 2002.
[17] Z. Ren, J. Yuan, and Z. Zhang. Robust hand gesture recognition based on finger-earth mover's distance with a commodity depth camera. In Proceedings of the 19th ACM International Conference on Multimedia (MM '11), pages 1093-1096. ACM, Nov. 2011.
[18] K. Riesen and H. Bunke. IAM Graph Database Repository for Graph Based Pattern Recognition and Machine Learning. In Structural, Syntactic, and Statistical Pattern Recognition, volume 5342, pages 287-297. Springer, 2008.
[19] Y. Rubner, C. Tomasi, and L. J. Guibas. The Earth Mover's Distance as a Metric for Image Retrieval. International Journal of Computer Vision, 40(2):99-121, Nov. 2000.
[20] A. Sanfeliu and K.-S. Fu. A distance measure between attributed relational graphs for pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, SMC-13(3):353-362, May 1983.
[21] P. Willett. Similarity Searching in Databases of Three-Dimensional Chemical Structures. In H.-H. Bock, W. Lenski, and M. M. Richter, editors, Information Systems and Data Analysis, pages 280-293. Springer, 1994.
[]
[ "Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting", "Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting" ]
[ "Yuchen Liu ", "† Chen Chen ", "Lingjuan Lyu ", "Fangzhao Wu ", "Sai Wu ", "Gang Chen " ]
[]
[]
Federated learning has exhibited vulnerabilities to Byzantine attacks, where the Byzantine attackers can send arbitrary gradients to a central server to destroy the convergence and performance of the global model. A wealth of robust AGgregation Rules (AGRs) have been proposed to defend against Byzantine attacks. However, Byzantine clients can still circumvent robust AGRs when data is non-Identically and Independently Distributed (non-IID). In this paper, we first reveal the root causes of performance degradation of current robust AGRs in non-IID settings: the curse of dimensionality and gradient heterogeneity. In order to address this issue, we propose GAS, a GrAdient Splitting approach that can successfully adapt existing robust AGRs to non-IID settings. We also provide a detailed convergence analysis when the existing robust AGRs are combined with GAS. Experiments on various real-world datasets verify the efficacy of our proposed GAS. The implementation code is provided in https://github.com/YuchenLiu-a/byzantine-gas.
null
[ "https://export.arxiv.org/pdf/2302.06079v2.pdf" ]
259,075,734
2302.06079
5c208a4565c9ce2935a518d06bf5f1e59de1eaed
Byzantine-Robust Learning on Heterogeneous Data via Gradient Splitting
Yuchen Liu, Chen Chen, Lingjuan Lyu, Fangzhao Wu, Sai Wu, Gang Chen
Federated learning has exhibited vulnerabilities to Byzantine attacks, where the Byzantine attackers can send arbitrary gradients to a central server to destroy the convergence and performance of the global model. A wealth of robust AGgregation Rules (AGRs) have been proposed to defend against Byzantine attacks. However, Byzantine clients can still circumvent robust AGRs when data is non-Identically and Independently Distributed (non-IID). In this paper, we first reveal the root causes of performance degradation of current robust AGRs in non-IID settings: the curse of dimensionality and gradient heterogeneity. In order to address this issue, we propose GAS, a GrAdient Splitting approach that can successfully adapt existing robust AGRs to non-IID settings. We also provide a detailed convergence analysis when the existing robust AGRs are combined with GAS. Experiments on various real-world datasets verify the efficacy of our proposed GAS. The implementation code is provided in https://github.com/YuchenLiu-a/byzantine-gas.
Introduction
Federated Learning (FL) (McMahan et al., 2017; Lyu et al., 2020; Zhao et al., 2020) provides a privacy-aware and distributed machine learning paradigm. It has recently attracted widespread attention as a result of emerging data silos and growing privacy awareness. In this paradigm, data owners (clients) repeatedly use their private data to compute local gradients and send them to a central server for aggregation. In this way, clients can collaborate to train a model without exposing their private data. However, the distributed property of FL also makes it vulnerable to Byzantine attacks (Blanchard et al., 2017; Guerraoui et al., 2018; Chen et al., 2020), in which Byzantine clients can send arbitrary messages to the central server to bias the global model. Moreover, it is challenging for the server to identify the Byzantine clients, since the server can neither access clients' training data nor monitor their local training process.
In order to defend against Byzantine attacks, the community has proposed a wealth of defenses (Blanchard et al., 2017; Guerraoui et al., 2018; Yin et al., 2018). Most defenses abandon the averaging step adopted by conventional FL frameworks, e.g., FedAvg (McMahan et al., 2017). Instead, they use robust AGgregation Rules (AGRs) to aggregate local gradients and compute the global gradient. Most existing robust AGRs assume that the data distribution on different clients is Identically and Independently Distributed (IID) (Bernstein et al., 2018; Ghosh et al., 2019). In fact, the data is usually heterogeneous, i.e., non-IID, in real-world FL applications (McMahan et al., 2017; Kairouz et al., 2021; Lyu et al., 2022; Zhang et al., 2023; Chen et al., 2022a). In this paper, we focus on defending against Byzantine attacks in the more realistic non-IID setting.
In the non-IID setting, defending against Byzantine attacks becomes more challenging (Karimireddy et al., 2022; Acharya et al., 2022). Robust AGRs that try to include all the honest gradients in aggregation (Blanchard et al., 2017; Shejwalkar & Houmansadr, 2021) fail to handle the curse of dimensionality (Guerraoui et al., 2018). Byzantine clients can take advantage of the high dimension of gradients and participate in aggregation.
As a result, the global gradient is manipulated away from the optimal gradient, i.e., the average of honest gradients. Other robust AGRs (Guerraoui et al., 2018; Yin et al., 2018) aggregate fewer gradients to ensure that only honest gradients participate in aggregation. However, the global gradient is still of limited utility due to gradient heterogeneity (Li et al., 2020; Karimireddy et al., 2020) in the non-IID setting. In summary, most existing AGRs fail to address both the curse of dimensionality (Guerraoui et al., 2018) and gradient heterogeneity (Karimireddy et al., 2022) at the same time. Consequently, they fail to achieve satisfactory performance in the non-IID setting.
Motivated by the above observations, we propose a GrAdient Splitting based approach called GAS for Byzantine robustness in non-IID settings. In particular, to address the curse of dimensionality, GAS splits each high-dimensional gradient into low-dimensional sub-vectors and detects Byzantine gradients with the sub-vectors. To handle the gradient heterogeneity, GAS aggregates all the identified honest gradients. Our contributions in this work are summarized below.
• We reveal the root causes of the difficulty of defending against Byzantine attacks in the non-IID setting: gradient heterogeneity and the curse of dimensionality. Gradient heterogeneity makes it hard for Byzantine defenses to obtain a global gradient close to the optimal one. The curse of dimensionality enables the Byzantine gradients to circumvent defenses that aggregate more gradients. To the best of our knowledge, no existing defense can address both issues at the same time.
• We propose a novel and compatible approach called GAS which consists of three steps: 1. splitting the high-dimensional gradients into low-dimensional sub-vectors; 2. penalizing each gradient by a score computed with a robust AGR on the split low-dimensional sub-vectors, to circumvent the curse of dimensionality; 3. identifying the gradients with low scores as honest ones and aggregating all the identified honest gradients to tackle the gradient heterogeneity issue. In step 2, GAS can apply any robust AGR to the low-dimensional sub-vectors for identification, offering great compatibility.
• We provide a convergence analysis for our proposed GAS. Extensive experiments on four real-world datasets across various non-IID settings empirically validate the effectiveness and superiority of our GAS.
Related Works
IID defenses. Blanchard et al. (2017) first introduce Byzantine robust learning and propose a distance-based AGR called Multi-Krum. Yin et al. (2018) theoretically analyze the statistical optimality of Median and Trimmed Mean. Guerraoui et al. (2018) propose Bulyan, which applies a variant of Trimmed Mean as a post-processing method to handle the curse of dimensionality. Pillutla et al. (2019) discuss the Byzantine robustness of Geometric Median and propose a computationally efficient approximation of Geometric Median. Shejwalkar & Houmansadr (2021) propose to perform dimensionality reduction using random sampling, followed by spectral-based outlier removal. These defenses assume the data is IID. Their efficacy is therefore limited in more realistic FL applications where the data is non-IID.
Non-IID defenses. Recent works have also explored defenses applicable to the non-IID setting. Park et al. (2021) can only achieve Byzantine robustness when the server has a validation set, which compromises the privacy principle of FL (McMahan et al., 2017).
Data & Diggavi (2021) adapt a robust mean estimation approach to FL in order to combat the Byzantine attack in the non-IID setting. However, it requires Ω(d 2 ) time (d is the number of model parameters), which is unacceptable due to the high dimensionality of model parameters. El-Mhamdi et al. (2021) consider Byzantine robustness in the asynchronous communication and unconstrained topologies settings. Acharya et al. (2022) propose to apply geometric median only to the sparsified gradients to save computation cost. Karimireddy et al. (2022) perform a bucketing process before aggregation to reduce the gradient heterogeneity. However, most of these methods ignore the curse of dimensionality (Guerraoui et al., 2018), which becomes intractable in the non-IID setting (refer to Section 4 for more discussion). As a result, they fail to achieve satisfactory performance in the non-IID setting. Notations and Preliminaries Notations. For any positive integer n ∈ N + , we denote the set {1, . . . , n} by [n]. The cardinality of a set S is denoted by |S|. We denote the ℓ 2 norm of vector x by ∥x∥. We use [x] j to represent the j-th component of vector x. The sub-vector of vector x indexed by index set J is denoted by [x] J = ([x] j1 , . . . , [x] j k ), where J = {j 1 , . . . , j k }, and k = |J | is the number of indices. For a random variable X, we use E[X] and Var[X] to denote the expectation and variance of X, respectively. Federated learning. We consider the Federated Learning (FL) system with a central server and n clients following (Blanchard et al., 2017;Yin et al., 2018;Chen et al., 2022b). Then the objective is to minimize loss L(w) defined as follows. L(w) = 1 n n i=1 L i (w),(1)where L i (w) = E ξi [L(w; ξ i )], i ∈ [n],(2) where w is the model parameter, L i is the loss function on the i-th client, ξ i is the data distribution on the i-th client, and L(w; ξ) is the loss function. In the t-th communication round, the server distributes the parameter w t to the clients. Each client i conducts several epochs of local training on local data to obtain the updated local parameter w t i . Then, client i computes the local gradient g t i as follows and sends it to the server. g t i = w t − w t i .(3) Finally, the server collects the local gradients and uses the average gradient to update the global model. (Guerraoui et al., 2018)). The process is repeated until the number of communication rounds reaches the set value T . w t+1 = w t − g t , g t = 1 n n i=1 g t i .(4) Byzantine threat model. In real-world applications, not all clients in FL systems are honest. In other words, there may exist Byzantine clients in FL systems (Blanchard et al., 2017). Suppose that among total n clients, f clients are Byzantine. Let B ⊆ [n] denote set of Byzantine clients and H = [n] \ B denote the set of honest clients. In the presence of Byzantine clients, the uploaded message of client i in the t-th communication round is g t i = w t − w t+1 i , i ∈ H, * , i ∈ B,(5) where * represents an arbitrary value. Robust AGRs. Most existing Byzantine defenses replace the averaging step with a robust AGR to defend against Byzantine attacks. More specifically, the server aggregates the gradients and updates the global model as follows. w t+1 = w t −ĝ t ,ĝ t = A(g t 1 , . . . , g t n ),(6) whereĝ t is the aggregated gradient, and A is a robust AGR, e.g., Multi-Krum (Blanchard et al., 2017) and Bulyan (Guerraoui et al., 2018). 
The Challenges of Byzantine Robustness in Non-IID Setting
Most robust AGRs focus on Byzantine robustness in the IID setting (Blanchard et al., 2017; Guerraoui et al., 2018). When the data is non-IID (Kairouz et al., 2021; Zhang et al., 2022), the performance of these robust AGRs drops drastically (Shejwalkar & Houmansadr, 2021; Karimireddy et al., 2022). In order to understand the root cause of this performance drop, we perform an experimental study on various robust AGRs. In particular, we examine their behaviors under the attack of 20% Byzantine clients in both IID and non-IID settings on CIFAR-10 (Krizhevsky et al., 2009) in Figure 1. More detailed setups are covered in Appendix A.
Figure 1: The experiments are conducted under the attack of 20% Byzantine clients on the CIFAR-10 (Krizhevsky et al., 2009) dataset in both IID and non-IID settings. More detailed setups are covered in Appendix A.
Some robust AGRs try to include all honest gradients in aggregation (the number of aggregated gradients is no less than n − f, i.e., the number of honest clients) (Blanchard et al., 2017; Shejwalkar & Houmansadr, 2021; Pillutla et al., 2019). However, they fail to address the curse of dimensionality (Guerraoui et al., 2018) on heterogeneous data. Byzantine clients can take advantage of the high dimension of gradients and easily circumvent these defenses. As shown in Figure 1a, these defenses include significantly more Byzantine gradients in aggregation in the non-IID setting. As a result, the global gradient is manipulated away from the optimal gradient, which leads to an ineffectual global model in the non-IID setting, as shown in Figure 1b.
Other robust AGRs aggregate fewer gradients (fewer than n − f) to get rid of Byzantine clients (Guerraoui et al., 2018; Yin et al., 2018; El-Mhamdi et al., 2021). The results in Figure 1c imply that they can exclude Byzantine clients from the aggregation in both IID and non-IID settings. However, their performance still degrades in the non-IID setting, as shown in Figure 1d. In fact, this degradation comes from the gradient heterogeneity (Li et al., 2020; Karimireddy et al., 2020) in the non-IID setting. As a price for removing Byzantine gradients, these robust AGRs exclude a proportion of honest gradients from the aggregation. Since the honest gradients are heterogeneous, such exclusion causes the aggregated gradient to deviate far from the optimal gradient, i.e., the average of honest gradients. The deviation further leads to an ineffectual global model. Therefore, they fail to achieve satisfactory performance in the non-IID setting.
In summary, no existing robust AGR is capable of handling both the curse of dimensionality and gradient heterogeneity at the same time. A new strategy is needed to tackle both challenges in the non-IID setting.
Gradient Splitting Based Approach
Our observations in Section 4 clearly motivate the need for a more robust defense that tackles both the curse of dimensionality and gradient heterogeneity to defeat Byzantine attacks in the non-IID setting. Inspired by these observations, we propose a novel GrAdient Splitting based approach called GAS, which consists of the following three steps.
Splitting. First, GAS splits the gradients to mitigate the curse of dimensionality for the subsequent identification step. The splitting is specified by a partition of the set [d], where d is the dimension of the gradients. In particular, we randomly partition [d] into p subsets, with each subset having no more than ⌈d/p⌉ dimensions. Let $\{\mathcal{J}_1, \ldots, \mathcal{J}_p\}$ denote the partition. Each gradient $g_i$ is correspondingly split into p sub-vectors as follows.
$$g_i^{(q)} = [g_i]_{\mathcal{J}_q}, \quad i \in [n],\ q \in [p], \tag{7}$$
where $g_i^{(q)}$ is the q-th sub-vector of gradient $g_i$.
Identification. Then, GAS applies the robust AGR $\mathcal{A}$ to each group of sub-vectors corresponding to $\mathcal{J}_q$:
$$\hat g^{(q)} = \mathcal{A}\big(g_1^{(q)}, \ldots, g_n^{(q)}\big), \quad q \in [p], \tag{8}$$
where $\hat g^{(q)}$ is the aggregation result of group q. By performing aggregation on each group of low-dimensional sub-vectors separately, GAS can circumvent the curse of dimensionality and get rid of Byzantine gradients. Note that $\hat g^{(q)}$ may still deviate from the optimal gradient due to the gradient heterogeneity (Karimireddy et al., 2022), as illustrated in Section 4. Therefore, it is inappropriate to directly use the aggregation results $\{\hat g^{(q)},\ q \in [p]\}$ as the final output. Instead, we use $\hat g^{(q)}$ as an honest reference to compute identification scores for each client:
$$s_i^{(q)} = \big\|g_i^{(q)} - \hat g^{(q)}\big\|, \quad i \in [n],\ q \in [p]. \tag{9}$$
Since the group-wise aggregation result $\hat g^{(q)}$ can get rid of Byzantine gradients, the identification score $s_i^{(q)}$ can provably characterize the potential of $g_i^{(q)}$ being a sub-vector of a Byzantine gradient. Then, GAS collects the identification scores from all groups and computes the final aggregation result. In particular, the final identification score $s_i$ of each client is the sum of its identification scores received from all groups:
$$s_i = \sum_{q=1}^{p} s_i^{(q)}, \quad i \in [n]. \tag{10}$$
Aggregation. To handle the gradient heterogeneity issue, GAS selects in total n − f gradients with the lowest identification scores for aggregation. Let $\mathcal{I}$ denote the index set of selected gradients, where $|\mathcal{I}| = n - f$. Then the average of the selected gradients is output as the final aggregation result:
$$\hat g = \frac{1}{n-f}\sum_{i \in \mathcal{I}} g_i. \tag{11}$$
Note that in the second step (Identification) of GAS, $\mathcal{A}$ can be any (f, λ)-resilient AGR (Definition 1). The key difference lies in that all the existing robust AGRs (Multi-Krum, Bulyan, etc.) directly operate on the original gradients; instead, we propose to apply robust AGRs on the split gradients, followed by identification before aggregation. In this way, we can enhance the ability of current robust AGRs that satisfy the (f, λ)-resilient property (Definition 1) to handle both the curse of dimensionality and gradient heterogeneity in the non-IID setting. We also analyze the computation cost of our proposed GAS in Appendix B. Moreover, our GAS is a compatible approach that can be combined with most existing robust AGRs, e.g., Multi-Krum (Blanchard et al., 2017) and Bulyan (Guerraoui et al., 2018).
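A compact sketch of the three steps in Eqs. (7)-(11) is given below. This is our own illustration rather than the released implementation; base_agr stands for any robust AGR that maps an (n, k) array of sub-vectors to a k-dimensional aggregate, and the random-generator argument is an assumption made for reproducibility.

```python
import numpy as np

# Sketch (ours) of GAS on top of an arbitrary base AGR.
def gas(grads, f, p, base_agr, rng=np.random.default_rng(0)):
    n, d = grads.shape
    perm = rng.permutation(d)
    groups = np.array_split(perm, p)               # random partition of [d], Eq. (7)
    scores = np.zeros(n)
    for J in groups:                               # identification, Eqs. (8)-(10)
        sub = grads[:, J]                          # sub-vectors g_i^{(q)}
        ref = base_agr(sub)                        # \hat{g}^{(q)} = A(g_1^{(q)}, ...)
        scores += np.linalg.norm(sub - ref, axis=1)
    keep = np.argsort(scores)[: n - f]             # n - f lowest identification scores
    return grads[keep].mean(axis=0)                # aggregation, Eq. (11)
```

For instance, gas(grads, f, p=10, base_agr=lambda x: np.median(x, axis=0)) would run GAS on top of coordinate-wise Median.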
Theoretical Analysis
In this section, we provide a convergence analysis for our GAS approach. We analyze a popular FL model widely considered by Karimireddy et al. (2021; 2022) and Acharya et al. (2022). In particular, each local gradient is computed by SGD as
$$g_i^t = \eta\nabla L(w^t;\xi_i^t), \quad i \in \mathcal{H}, \tag{12}$$
where η is the learning rate, $\xi_i^t$ represents a minibatch uniformly sampled from the local data distribution $\xi_i$ in the t-th communication round, and $\nabla L(w^t,\xi_i^t)$ represents the gradient of the loss over the minibatch $\xi_i^t$. We make the following assumptions, which are standard in FL (Karimireddy et al., 2021; Acharya et al., 2022).
Assumption 1 (Unbiased Estimator). The stochastic gradients sampled from any local data distribution are unbiased estimators of the local gradients over $\mathbb{R}^d$ for all honest clients, i.e.,
$$\mathbb{E}_{\xi_i^t}[\nabla L_i(w;\xi_i^t)] = \nabla L_i(w), \quad \forall w \in \mathbb{R}^d,\ i \in \mathcal{H},\ t \in \mathbb{N}^+. \tag{13}$$
Assumption 2 (Bounded Variance). The variance of the stochastic gradients sampled from any local data distribution is uniformly bounded over $\mathbb{R}^d$ for all honest clients, i.e., there exists $\sigma \ge 0$ such that
$$\mathbb{E}\|\nabla L_i(w;\xi_i^t) - \nabla L_i(w)\|^2 \le \sigma^2, \quad \forall w \in \mathbb{R}^d,\ i \in \mathcal{H},\ t \in \mathbb{N}^+. \tag{14}$$
Assumption 3 (Gradient Dissimilarity). The difference between the local gradients and the global gradient is uniformly bounded over $\mathbb{R}^d$ for all honest clients, i.e., there exists $\kappa \ge 0$ such that
$$\|\nabla L_i(w) - \nabla L(w)\|^2 \le \kappa^2, \quad \forall w \in \mathbb{R}^d,\ i \in \mathcal{H}. \tag{15}$$
We consider an arbitrary non-convex loss function L(·) that satisfies the following Lipschitz condition. This condition is widely applied in the convergence analysis of Byzantine-robust federated learning (Karimireddy et al., 2022; Allen-Zhu et al., 2020; El-Mhamdi et al., 2021).
Assumption 4 (Lipschitz Smoothness). The loss function is L-Lipschitz smooth over $\mathbb{R}^d$, i.e.,
$$\|\nabla L(w) - \nabla L(w')\| \le L\|w - w'\|, \quad \forall w, w' \in \mathbb{R}^d. \tag{16}$$
We consider robust AGRs that satisfy the following robustness criterion (Definition 1) introduced by Farhadkhani et al. (2022). A wide class of state-of-the-art robust AGRs satisfy this criterion (Farhadkhani et al., 2022).
Definition 1 ((f, λ)-resilient). For an integer f < n/2 and a real value λ > 0, an AGR $\mathcal{A}$ is called (f, λ)-resilient if for any input $\{x_1, \ldots, x_n\}$ and any set $S \subseteq [n]$ of size n − f, the output of $\mathcal{A}$ satisfies
$$\|\mathcal{A}(x_1, \ldots, x_n) - \bar x_S\| \le \lambda \max_{i,i' \in S}\|x_i - x_{i'}\|, \tag{17}$$
where $\bar x_S = \sum_{i \in S} x_i / |S|$.
We show that given any (f, λ)-resilient base AGR $\mathcal{A}$, our GAS can help the global model reach a better parameter.
Proposition 1. Suppose Assumptions 1 to 4 hold, and let the learning rate η = 1/2L. Given any (f, λ)-resilient robust AGR $\mathcal{A}$, if we start from $w^0$ and run GAS for T communication rounds, then
$$L(w^0) \ge \frac{3}{16L}\sum_{t=1}^{T}\big(\|\nabla L(w^t)\|^2 - e^2\big), \tag{18}$$
where
$$e^2 = O\Big((\kappa^2+\sigma^2)\Big(1+\frac{n-f+1}{p}\Big)\Big(1+\lambda^2+\frac{1}{n-f}\Big)\frac{f^2}{(n-f)^2}\Big). \tag{19, 20}$$
Please refer to Appendix C.1 for the proof. Proposition 1 provides an upper bound for the sum of gradient norms in the presence of Byzantine gradients. Equation (18) indicates that as the number of communication rounds increases, we can find an approximately optimal parameter w such that $\|\nabla L(w)\|$ can be arbitrarily close to e. The terms $\kappa^2$ and $\sigma^2$ in Equation (19) are positively related to the gradient dimension d (Guerraoui et al., 2018). Therefore, the convergence error e grows larger when d increases. As the number of sub-vectors p increases, the approximation becomes better, i.e., $e^2$ decreases, which validates the efficacy of our approach.
From another aspect, Proposition 1 also characterizes the fundamental difficulties of Byzantine-robust federated learning in the non-IID setting. The negative term $-e^2$ on the RHS of Equation (18) implies that FL may never converge to an optimal parameter. Instead, the global model may wander among sub-optimal points. What's more, even after reaching a convergence point, the global model may step into another sub-optimal point in the next communication round. This aligns with the previous lower bound in (Karimireddy et al., 2022). A detailed comparison of the convergence results between our approach and recent works is presented in Appendix C.2.
Experiments
Our experiments are conducted on four real-world datasets: CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), a subset of ImageNet (Russakovsky et al., 2015) referred to as ImageNet-12 (Li et al., 2021b), and FEMNIST (Caldas et al., 2018).
Data distribution. For CIFAR-10, CIFAR-100, and ImageNet-12, we use the Dirichlet distribution to generate non-IID data by following Yurochkin et al. (2019); Li et al. (2021a). We follow Li et al. (2021a) and set the number of clients n = 50 and the concentration parameter of the Dirichlet distribution β = 0.5 as default. FEMNIST is a dataset with a natural non-IID partition. In particular, the data is partitioned into 3,597 clients based on the writer of the digit/character. For each client, we randomly sample a 0.9 portion of the data as training data and let the remaining 0.1 portion be test data, following Caldas et al. (2018).
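For reference, the Dirichlet partition described above can be sketched as follows. This is our own illustration, assuming integer class labels; the function and variable names are hypothetical.

```python
import numpy as np

# Sketch (ours) of the Dirichlet non-IID split: for each label y,
# proportions p_i^y ~ Dir(beta) decide how the samples of label y are
# spread over the n clients (see Appendix D for the formal description).
def dirichlet_partition(labels, n_clients=50, beta=0.5,
                        rng=np.random.default_rng(0)):
    client_idx = [[] for _ in range(n_clients)]
    for y in np.unique(labels):
        idx = rng.permutation(np.where(labels == y)[0])
        props = rng.dirichlet(beta * np.ones(n_clients))   # p_i^y ~ Dir(beta)
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for c, part in enumerate(np.split(idx, cuts)):
            client_idx[c].extend(part.tolist())
    return client_idx
```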
Evaluated attacks. We consider six representative attacks: BitFlip (Allen-Zhu et al., 2020), LabelFlip (Allen-Zhu et al., 2020), LIE (Baruch et al., 2019), Min-Max (Shejwalkar & Houmansadr, 2021), Min-Sum (Shejwalkar & Houmansadr, 2021), and IPM (Xie et al., 2020). The detailed hyperparameter settings of the attacks are shown in Table 9 in Appendix D.
Baselines. We consider six representative robust AGRs: Multi-Krum (Blanchard et al., 2017), Bulyan (Guerraoui et al., 2018), Median (Yin et al., 2018), RFA (Pillutla et al., 2019), DnC (Shejwalkar & Houmansadr, 2021), and RBTM (El-Mhamdi et al., 2021). We compare each AGR with its GAS variant and name them GAS (Multi-Krum), GAS (Bulyan), GAS (Median), GAS (RFA), GAS (DnC), and GAS (RBTM), respectively. The detailed hyperparameter settings of the robust AGRs are listed in Table 10 in Appendix D. We also compare our GAS against Bucketing (Karimireddy et al., 2022).
Evaluation. We use top-1 accuracy, i.e., the proportion of correctly predicted testing samples to total testing samples, to evaluate the performance of the global models. We run each experiment five times and report the mean and standard deviation of the highest accuracy during the training process.
Other settings. We utilize AlexNet (Krizhevsky et al., 2017), SqueezeNet (Iandola et al., 2016), ResNet-18 (He et al., 2016), and a four-layer CNN (Caldas et al., 2018) for CIFAR-10, CIFAR-100, ImageNet-12, and FEMNIST, respectively. The number of Byzantine clients for all datasets is set to f = 0.2 · n. We also consider up to f = 0.3 · n Byzantine clients in the ablation study. Please refer to Table 8 in Appendix D for more details.
Experiment Results
Main results. Table 1 illustrates the results of different defenses against popular attacks on CIFAR-10, CIFAR-100, ImageNet-12, and FEMNIST. From these tables, we observe that: (1) Integrating current robust AGRs into our GAS generally outperforms all their original versions on all datasets, which verifies the efficacy of our proposed GAS. For example, GAS improves the accuracy of Median by 15.93% under the Min-Sum attack on CIFAR-10. (2) The improvement of GAS (DnC) over DnC is relatively mild on CIFAR-10. Our interpretation is that when the dataset is relatively small and simple, DnC is capable of obtaining a rational gradient estimation. Nevertheless, on larger and more complex datasets, i.e., FEMNIST and ImageNet-12, DnC fails to achieve satisfactory performance under Byzantine attacks. (3) Although RFA collapses on FEMNIST, combining it with our GAS can still improve it to satisfactory performance. Our interpretation is that although the aggregated gradient of RFA deviates from the optimal gradient, it can still assist in identifying honest gradients when combined with GAS. As a result, GAS (RFA) is still effective on FEMNIST.
GAS vs. Bucketing. We also compare our GAS method against Bucketing (Karimireddy et al., 2022) on CIFAR-10. For each robust AGR, we combine it with GAS or Bucketing separately and compare their performance. The results are posted in Table 2.
Ablation on the number of sub-vectors p. The results are shown in Table 3. As shown in Table 3, when p increases, the accuracy of GAS first increases, then slightly drops. Compared to GAS (Multi-Krum), GAS (Bulyan) demonstrates its best performance at a larger p and declines more slowly as p continues to increase. These results imply that: (1) GAS with a moderate p is more likely to achieve better performance; (2) the best p differs across base AGRs.
Performance of GAS when the number of Byzantine clients f is unknown. We run additional experiments to evaluate the performance of GAS when the number of Byzantine clients f is unknown to the server. In this case, GAS removes a fixed fraction δ of sampled clients in each communication round, where δ ∈ [0, 0.5) is the estimated ratio of Byzantine clients.
We test δ = 0.1 and δ = 0.3 when there are 20% Byzantine clients under the LIE attack on CIFAR-10. From the results in Table 4, we can summarize that: (1) When the server knows the number of Byzantine clients (i.e., δ is N/A), GAS achieves the best performance. (2) When the number of Byzantine clients is unknown, the performance degradation of GAS is relatively mild. (3) Compared to excluding more clients (δ = 0.3) from aggregation, the performance of GAS is generally better when excluding fewer clients (δ = 0.1). We hypothesize this is because gradient heterogeneity is more impactful than the LIE attack; therefore, excluding honest gradients (δ = 0.3) is more harmful to the performance of GAS than including Byzantine gradients (δ = 0.1).
Results on different levels of non-IID. We discuss the impact of the non-IID level of the data distributions. We modify the concentration parameter β to change the non-IID level; a smaller β implies a higher non-IID level. As shown in Table 5, all the existing AGRs achieve better performance than their original versions when combined with GAS, which validates the efficacy of our GAS under different non-IID levels. Moreover, when the level of non-IID is higher, the improvement brought by GAS is more pronounced.
Results on different numbers of Byzantine clients. We also conduct experiments across different numbers of Byzantine clients (the total number of clients n is fixed). Other setups follow the default setup of the main experiments in Section 7.1 and Appendix D. Table 6 demonstrates the results of different defenses under the LIE attack across f = {5, 15} Byzantine clients on the CIFAR-10 dataset. As shown in Table 6, our GAS outperforms the corresponding baselines across all Byzantine client numbers.
Results on different numbers of clients. We further analyze the efficacy of our GAS under different numbers of clients. We test the performance of different defenses under the LIE attack across n = {75, 100} clients on the CIFAR-10 dataset. The number of Byzantine clients is set to f = 0.2 · n correspondingly. Other setups follow the default setup of the main experiments in Section 7.1 and Appendix D. These results demonstrate that all the robust AGRs consistently outperform their original versions when combined with our GAS, which validates that our GAS can effectively defend against Byzantine attacks across different numbers of clients.
Conclusion and Discussion
In this work, we identify two main challenges of Byzantine robustness in the non-IID setting: the curse of dimensionality and gradient heterogeneity. Robust AGRs that try to include all honest gradients in aggregation suffer from the curse of dimensionality. Other robust AGRs that aggregate fewer gradients to get rid of Byzantine clients fail due to gradient heterogeneity. Motivated by the above discoveries, we propose a novel GrAdient Splitting (GAS) based approach that is compatible with most existing robust AGRs and overcomes both the high dimensionality and the gradient heterogeneity. GAS splits each high-dimensional gradient into low-dimensional sub-vectors and detects Byzantine gradients with the sub-vectors to address the curse of dimensionality. Then, GAS aggregates all the identified honest gradients to alleviate the gradient heterogeneity issue. We also provide a detailed convergence analysis of our proposed GAS. Empirical studies on four real-world datasets justify the efficacy of GAS.
Discussion. In the first step of GAS, we use an equal splitting mechanism. In fact, there are many other possible mechanisms, e.g., splitting gradients by layer.
A future research direction is to discover more effective splitting mechanisms for GAS. Note that our GAS can also be combined with adaptive client selection strategies (Wan et al., 2022) to achieve better Byzantine robustness. We will discuss this more in our future work.
A. Setups for Experiments in Section 4
The experiments are conducted on CIFAR-10 (Krizhevsky et al., 2009). For both IID and non-IID settings, the number of clients is set to n = 50. For the IID data distribution, all 50,000 samples are randomly partitioned into 50 clients, each containing 1,000 samples. For the non-IID data distribution, the samples are partitioned in a Dirichlet manner with concentration parameter β = 0.5. Please refer to Section 7.1 for the details of the Dirichlet partition. The number of Byzantine clients is set to f = 10. The LIE (Baruch et al., 2019) attack with z = 1.5 is considered. We use AlexNet (Krizhevsky et al., 2017) as the model architecture. The number of communication rounds is set to 500. In each communication round, all clients participate in the training. For local training, the number of local epochs is set to 1, the batch size is set to 64, and the optimizer is SGD. For the SGD optimizer, the learning rate is set to 0.1, the momentum is set to 0.5, and the weight decay coefficient is set to 0.0001. We also adopt gradient clipping with clipping norm 2. Six robust AGRs are considered: Multi-Krum (Blanchard et al., 2017), Bulyan (Guerraoui et al., 2018), Median (Yin et al., 2018), RFA (Pillutla et al., 2019), DnC (Shejwalkar & Houmansadr, 2021), and RBTM (El-Mhamdi et al., 2021).
B. Computation Cost of GAS
We first give the computation cost of the proposed GAS method. The computation cost of GAS is closely related to the computation cost of the base robust AGR $\mathcal{A}$. We use $\mathrm{cost}_{\mathcal{A}}(d, n)$ to denote the computation cost of the base AGR $\mathcal{A}$ given n gradients of dimensionality d. Our GAS method has three steps: splitting, identification, and aggregation.
Splitting. The splitting step is of complexity $O(d)$.
Identification. The identification step consists of two parts: applying the AGR $\mathcal{A}$ to the sub-vectors, which costs $O(p\,\mathrm{cost}_{\mathcal{A}}(d/p, n))$, and computing the identification scores, which costs $O(nd + np)$.
Aggregation. The complexity of the aggregation step is $O(n\log(n-f) + (n-f)d)$.
In summary, the overall complexity of GAS is
$$O\big(d + p\,\mathrm{cost}_{\mathcal{A}}(d/p, n) + nd + np + n\log(n-f) + (n-f)d\big) = O\big(n(d + \log(n-f)) + p\,\mathrm{cost}_{\mathcal{A}}(d/p, n)\big).$$
We now analyze this cost $O(n(d + \log(n-f)) + p\,\mathrm{cost}_{\mathcal{A}}(d/p, n))$. Since n ≪ d [7], the first term satisfies $O(n(d + \log(n-f))) \approx O(d) \ll \Omega(d^2)$. The second term $O(p\,\mathrm{cost}_{\mathcal{A}}(d/p, n))$ relies on $\mathrm{cost}_{\mathcal{A}}(d/p, n)$, the cost of the base AGR $\mathcal{A}$. The computation cost of popular AGRs is usually $O(d/p)$ under the assumption n ≪ d [7], e.g., Krum ($O(n^2 d/p)$) and Bulyan ($O(n^2 d/p)$). Therefore, the second term usually satisfies $O(p\,\mathrm{cost}_{\mathcal{A}}(d/p, n)) \approx O(d) \ll \Omega(d^2)$. In summary, the computation cost of GAS is generally $O(d)$ (considering only d and omitting n), which is much smaller than $\Omega(d^2)$.
C. Convergence Analysis
In this section, we provide the proof of our convergence results in Proposition 1 and the comparison of our convergence results with recent works. We first restate the assumptions, the definition, and the proposition for the integrity of this section.
Assumption 1 (Unbiased Estimator). The stochastic gradients sampled from any local data distribution are unbiased estimators of the local gradients over $\mathbb{R}^d$ for all honest clients, i.e.,
$$\mathbb{E}_{\xi_i^t}[\nabla L_i(w;\xi_i^t)] = \nabla L_i(w), \quad \forall w \in \mathbb{R}^d,\ i \in \mathcal{H},\ t \in \mathbb{N}^+. \tag{13}$$
Assumption 2 (Bounded Variance).
The variance of the stochastic gradients sampled from any local data distribution is uniformly bounded over $\mathbb{R}^d$ for all honest clients, i.e., there exists $\sigma \ge 0$ such that
$$\mathbb{E}\|\nabla L_i(w;\xi_i^t) - \nabla L_i(w)\|^2 \le \sigma^2, \quad \forall w \in \mathbb{R}^d,\ i \in \mathcal{H},\ t \in \mathbb{N}^+. \tag{14}$$
Assumption 3 (Gradient Dissimilarity). The difference between the local gradients and the global gradient is uniformly bounded over $\mathbb{R}^d$ for all honest clients, i.e., there exists $\kappa \ge 0$ such that
$$\|\nabla L_i(w) - \nabla L(w)\|^2 \le \kappa^2, \quad \forall w \in \mathbb{R}^d,\ i \in \mathcal{H}. \tag{15}$$
Assumption 4 (Lipschitz Smoothness). The loss function is L-Lipschitz smooth over $\mathbb{R}^d$, i.e.,
$$\|\nabla L(w) - \nabla L(w')\| \le L\|w - w'\|, \quad \forall w, w' \in \mathbb{R}^d. \tag{16}$$
Definition 1 ((f, λ)-resilient). For an integer f < n/2 and a real value λ > 0, an AGR $\mathcal{A}$ is called (f, λ)-resilient if for any input $\{x_1, \ldots, x_n\}$ and any set $S \subseteq [n]$ of size n − f, the output of $\mathcal{A}$ satisfies
$$\|\mathcal{A}(x_1, \ldots, x_n) - \bar x_S\| \le \lambda \max_{i,i' \in S}\|x_i - x_{i'}\|, \tag{17}$$
where $\bar x_S = \sum_{i \in S} x_i / |S|$.
Proposition 1. Suppose Assumptions 1 to 4 hold, and let the learning rate η = 1/2L. Given any (f, λ)-resilient robust AGR $\mathcal{A}$, if we start from $w^0$ and run GAS for T communication rounds, then
$$L(w^0) \ge \frac{3}{16L}\sum_{t=1}^{T}\big(\|\nabla L(w^t)\|^2 - e^2\big), \tag{18}$$
where
$$e^2 = O\Big((\kappa^2+\sigma^2)\Big(1+\frac{n-f+1}{p}\Big)\Big(1+\lambda^2+\frac{1}{n-f}\Big)\frac{f^2}{(n-f)^2}\Big). \tag{19, 20}$$
C.1. Proof for Proposition 1
Lemma 1. For positive integers n, f ≤ n and a real value λ, let the AGR $\mathcal{A}$ be (f, λ)-resilient. Then for any set of random variables $\{x_1, \ldots, x_n\}$ and $S \subseteq [n]$ of size n − f that satisfies
$$\mathbb{E}[\|x_i - x_{i'}\|^2] \le \rho^2, \quad \forall i, i' \in S, \tag{21}$$
we have
$$\mathbb{E}[\|\mathcal{A}(x_1, \ldots, x_n) - \bar x_S\|^2] \le 4\lambda^2 \cdot \frac{(n-f-1)^2}{n-f} \cdot \rho^2, \tag{22}$$
where $\bar x_S = \sum_{i \in S} x_i / |S|$.
Proof. Since $\mathcal{A}$ is (f, λ)-resilient, we have
$$\mathbb{E}[\|\mathcal{A}(x_1, \ldots, x_n) - \bar x_S\|^2] \le \lambda^2\, \mathbb{E}\big[\max_{i,i' \in S}\|x_i - x_{i'}\|^2\big]. \tag{23}$$
We then bound $\max_{i,i' \in S}\|x_i - x_{i'}\|^2$:
$$\max_{i,i'\in S}\|x_i - x_{i'}\|^2 \le \max_{i,i'\in S} 2\big(\|x_i - \bar x_S\|^2 + \|\bar x_S - x_{i'}\|^2\big) \le 4\max_{i\in S}\|x_i - \bar x_S\|^2 \le 4\sum_{i\in S}\|x_i - \bar x_S\|^2, \tag{24-27}$$
where the first inequality comes from the Cauchy inequality. We further bound $\|x_i - \bar x_S\|^2$ for all $i \in S$:
$$\|x_i - \bar x_S\|^2 = \frac{1}{(n-f)^2}\Big\|\sum_{i'\in S\setminus\{i\}}(x_i - x_{i'})\Big\|^2 \le \frac{n-f-1}{(n-f)^2}\sum_{i'\in S\setminus\{i\}}\|x_i - x_{i'}\|^2, \tag{28-30}$$
again by the Cauchy inequality. Combining (23), (27), and (30),
$$\mathbb{E}[\|\mathcal{A}(x_1, \ldots, x_n) - \bar x_S\|^2] \le 4\lambda^2\cdot\frac{n-f-1}{(n-f)^2}\sum_{i,i'\in S,\, i\neq i'}\mathbb{E}\|x_i - x_{i'}\|^2 \le 4\lambda^2\cdot\frac{(n-f-1)^2}{n-f}\cdot\rho^2. \tag{31-36}$$
We state and prove the following lemma for the proof of Lemma 3.
Lemma 2. For any random vector X, we have
$$\operatorname{Var}[\|X\|] \le \mathbb{E}\|X - \mathbb{E}X\|^2. \tag{37}$$
Proof. From the definition of variance, we have
$$\operatorname{Var}[\|X\|] = \mathbb{E}(\|X\| - \mathbb{E}\|X\|)^2 = \mathbb{E}(\|X\| - \|\mathbb{E}X\|)^2 - (\|\mathbb{E}X\| - \mathbb{E}\|X\|)^2 \le \mathbb{E}(\|X\| - \|\mathbb{E}X\|)^2 \le \mathbb{E}\|X - \mathbb{E}X\|^2. \tag{38-41}$$
The last inequality comes from the triangle inequality.
Lemma 3 (Aggregation error). Suppose Assumptions 1 to 3 hold. Given an (f, λ)-resilient robust AGR $\mathcal{A}$, for any t > 0 it satisfies
$$\mathbb{E}[\|\hat g - \bar g\|^2] \le O\Big((\kappa^2+\sigma^2)\Big(1+\frac{n-f+1}{p}\Big)\Big(1+\lambda^2+\frac{1}{n-f}\Big)\frac{f^2}{(n-f)^2}\Big). \tag{42}$$
Proof. We rewrite $\hat g$ as
$$\hat g = \frac{1}{n-f}\sum_{i\in\mathcal{I}} g_i = \frac{1}{n-f}\Big(\sum_{h\in\tilde{\mathcal{H}}} g_h + \sum_{b\in\tilde{\mathcal{B}}} g_b\Big) = \frac{|\tilde{\mathcal{H}}|}{n-f}\, g_{\tilde{\mathcal{H}}} + \frac{|\tilde{\mathcal{B}}|}{n-f}\, g_{\tilde{\mathcal{B}}}, \tag{43}$$
where $\tilde{\mathcal{H}} = \mathcal{H}\cap\mathcal{I}$, $\tilde{\mathcal{B}} = \mathcal{B}\cap\mathcal{I}$, and $g_S = \sum_{i\in S} g_i/|S|$ for all $S\subseteq[n]$. Then we can bound $\mathbb{E}\|\hat g - \bar g\|^2$ as
$$\mathbb{E}\|\hat g - \bar g\|^2 \le \frac{2|\tilde{\mathcal{H}}|^2}{(n-f)^2}\,\mathbb{E}\|g_{\tilde{\mathcal{H}}} - \bar g\|^2 + \frac{2|\tilde{\mathcal{B}}|^2}{(n-f)^2}\,\mathbb{E}\|g_{\tilde{\mathcal{B}}} - \bar g\|^2. \tag{44-46}$$
We bound $\mathbb{E}\|g_{\tilde{\mathcal{H}}} - \bar g\|^2$ by
$$\mathbb{E}\|g_{\tilde{\mathcal{H}}} - \bar g\|^2 = \mathbb{E}\|g_{\tilde{\mathcal{H}}} - \bar g_{\tilde{\mathcal{H}}}\|^2 + \|\bar g_{\tilde{\mathcal{H}}} - \bar g\|^2 \le \frac{\sigma^2}{|\tilde{\mathcal{H}}|} + \kappa^2, \tag{47-49}$$
where $\bar g_{\tilde{\mathcal{H}}} = \mathbb{E}[g_{\tilde{\mathcal{H}}}]$. For $\mathbb{E}\|g_{\tilde{\mathcal{B}}} - \bar g\|^2$, by the law of total expectation,
$$\mathbb{E}\|g_{\tilde{\mathcal{B}}} - \bar g\|^2 = \sum_{\tilde f=0}^{f}\mathbb{E}\big[\|g_{\tilde{\mathcal{B}}} - \bar g\|^2 \,\big|\, |\tilde{\mathcal{B}}| = \tilde f\big]\Pr(|\tilde{\mathcal{B}}| = \tilde f). \tag{50}$$
For every parameter group $q\in[p]$ and honest clients $i, j \in \mathcal{H}$,
$$\mathbb{E}\|g_i^{(q)} - g_j^{(q)}\|^2 \le 2\sigma^2 + 4\kappa^2, \tag{51-54}$$
which follows from the independence of $g_i^{(q)}$ and $g_j^{(q)}$, the Cauchy inequality, and Assumptions 2 and 3. Then, according to Lemma 1,
$$\mathbb{E}\|\hat g^{(q)} - g^{(q)}\|^2 \le c^2 \max_{i,j\in\mathcal{H}}\mathbb{E}\|g_i^{(q)} - g_j^{(q)}\|^2 \le c^2(2\sigma^2 + 4\kappa^2), \tag{55}$$
where $c^2 = 4\lambda^2(n-f-1)^2/(n-f)$.
For an honest client h, the expectation of the abnormal score $s_h^{(q)}$ from group q can be bounded as
$$\mathbb{E}[s_h^{(q)}] = \mathbb{E}\|g_h^{(q)} - \hat g^{(q)}\| \le \Big(1+\frac{1}{\sqrt{n-f}}\Big)\sigma + \kappa + c\sqrt{2\sigma^2 + 4\kappa^2}, \tag{56-60}$$
using the triangle inequality, the Cauchy inequality, Equation (55), and Assumptions 2 and 3. Its variance satisfies
$$\operatorname{Var}[s_h^{(q)}] \le \mathbb{E}[(s_h^{(q)})^2] \le 4\,\mathbb{E}\big[\|g_h^{(q)} - \bar g_h^{(q)}\|^2 + \|\bar g_h^{(q)} - \bar g^{(q)}\|^2 + \|\bar g^{(q)} - g^{(q)}\|^2 + \|g^{(q)} - \hat g^{(q)}\|^2\big], \tag{61-65}$$
by the Cauchy inequality, where
$$\mathbb{E}\|\bar g^{(q)} - g^{(q)}\|^2 = \frac{1}{(n-f)^2}\sum_{i\in\mathcal{H}}\mathbb{E}\|\bar g_i^{(q)} - g_i^{(q)}\|^2 \le \frac{\sigma^2}{n-f}, \tag{66-69}$$
using the independence of minibatch sampling across clients and Assumption 2. Applying Assumptions 2 and 3 and Equations (55) and (69) to Inequality (65), we obtain
$$\operatorname{Var}[s_h^{(q)}] \le \Big(4 + 8c^2 + \frac{4}{n-f}\Big)\sigma^2 + (4 + 16c^2)\kappa^2. \tag{70-71}$$
According to Inequality (60) and Equation (71), the expectation and variance of the total abnormal score $s_h$ of an honest client h satisfy
$$\mathbb{E}[s_h] = \mathbb{E}\Big[\sum_{q=1}^p s_h^{(q)}\Big] \le p\big(\sigma + \kappa + c\sqrt{2\sigma^2 + 4\kappa^2}\big) := A, \tag{72}$$
$$\operatorname{Var}[s_h] = \sum_{q=1}^p \operatorname{Var}[s_h^{(q)}] \le p\Big(\big(4 + 8c^2 + \tfrac{4}{n-f}\big)\sigma^2 + (4 + 16c^2)\kappa^2\Big) := B. \tag{73}$$
Here the additive property of the variance results from the independence of the group abnormal scores $\{s_h^{(q)} \mid q\in[p]\}$, which comes from the independence of the components in a gradient (Yang & Schoenholz, 2017). From Chebyshev's inequality, for any $\Delta_h > 0$ and honest client $h\in[n]\setminus\mathcal{B}$, we have
$$\Pr(s_h < \mathbb{E}[s_h] + \Delta_h) \ge 1 - \frac{\operatorname{Var}[s_h]}{\Delta_h^2}. \tag{74}$$
Consider the expectation of the abnormal score $s_b^{(q)}$ from group q for a Byzantine client $b\in\mathcal{B}$:
$$\mathbb{E}[s_b^{(q)}] = \mathbb{E}\|g_b^{(q)} - \hat g^{(q)}\| \ge \delta_b - c\sqrt{2\sigma^2 + 4\kappa^2} - \frac{\sigma}{\sqrt{n-f}}, \tag{75-80}$$
where $\delta_b = \|g_b^{(q)} - \bar g^{(q)}\|$ is the expected deviation of Byzantine client b from the average of the honest gradients; the chain uses the triangle inequality, the Cauchy inequality, and Equations (55) and (69). The variance of the abnormal score $s_b^{(q)}$ can be bounded as
$$\operatorname{Var}[s_b^{(q)}] \le 2\,\mathbb{E}\|g_b^{(q)} - \mathbb{E}g_b^{(q)}\|^2 + 2\,\mathbb{E}\|\hat g^{(q)} - \mathbb{E}\hat g^{(q)}\|^2, \tag{81-85}$$
where the first step follows from Lemma 2 and the second from the Cauchy inequality. We bound $\mathbb{E}\|\hat g^{(q)} - \mathbb{E}\hat g^{(q)}\|^2$ as
$$\mathbb{E}\|\hat g^{(q)} - \mathbb{E}\hat g^{(q)}\|^2 \le 6\,\mathbb{E}\|\hat g^{(q)} - g^{(q)}\|^2 + 3\,\mathbb{E}\|g^{(q)} - \mathbb{E}g^{(q)}\|^2 \le \Big(48 + \frac{3}{n-f}\Big)\sigma^2 + 48\kappa^2. \tag{86-90}$$
Applying Equation (90) to Equation (85), we have
$$\operatorname{Var}[s_b^{(q)}] \le 2\sigma_b^2 + \Big(96 + \frac{6}{n-f}\Big)\sigma^2 + 96\kappa^2, \tag{91}$$
where $\sigma_b^2 = \mathbb{E}\|g_b^{(q)} - \mathbb{E}g_b^{(q)}\|^2$ is the variance of the Byzantine sub-vector. Similarly to Equations (72) and (73), we utilize Equations (80) and (91) to bound the expectation and variance of the total abnormal score $s_b$ of a Byzantine client b:
$$\mathbb{E}[s_b] = \mathbb{E}\Big[\sum_{q=1}^p s_b^{(q)}\Big] \ge p\Big(\delta_b - 2\sqrt{2}\,c\sqrt{2\sigma^2 + 4\kappa^2} - \frac{\sigma}{\sqrt{n-f}}\Big) := C, \tag{92}$$
$$\operatorname{Var}[s_b] = \sum_{q=1}^p \operatorname{Var}[s_b^{(q)}] \le p\Big(2\,\mathrm{const} + \big(96 + \tfrac{6}{n-f}\big)\sigma^2 + 96\kappa^2\Big) := D, \tag{93}$$
where $\delta_b = \mathbb{E}\|g_b - \bar g\|$. According to Shejwalkar & Houmansadr (2021), $\sigma_b^2$ is bounded, i.e., $\sigma_b^2 \le \mathrm{const}$. Similarly, we apply Chebyshev's inequality to the abnormal score of a Byzantine client $b\in\mathcal{B}$:
$$\Pr(s_b \ge \mathbb{E}[s_b] - \Delta_b) \ge 1 - \frac{\operatorname{Var}[s_b]}{\Delta_b^2}, \quad b\in\mathcal{B}. \tag{94}$$
Combining Equations (72) to (74) and taking $\Delta_h = (C-A)/(1+\sqrt{D/B})$, we have
$$\Pr\Big(s_h < \frac{\sqrt{D}A + \sqrt{B}C}{\sqrt{B} + \sqrt{D}}\Big) \ge 1 - \frac{(\sqrt{B}+\sqrt{D})^2}{(C-A)^2}. \tag{95-99}$$
Combining Equations (92) to (94) and taking $\Delta_b = (C-A)/(1+\sqrt{B/D})$, we have
$$\Pr\Big(s_b \ge \frac{\sqrt{D}A + \sqrt{B}C}{\sqrt{B} + \sqrt{D}}\Big) \ge 1 - \frac{(\sqrt{B}+\sqrt{D})^2}{(C-A)^2}. \tag{100-104}$$
Then, for the probability that a Byzantine client b is selected,
$$\Pr(b\in\tilde{\mathcal{B}}) \le \sum_{h\in\mathcal{H}}\Pr\Big(s_h \ge \frac{\sqrt{D}A + \sqrt{B}C}{\sqrt{B}+\sqrt{D}}\Big) + \Pr\Big(s_b < \frac{\sqrt{D}A + \sqrt{B}C}{\sqrt{B}+\sqrt{D}}\Big) \le (n-f+1)\cdot\frac{(\sqrt{B}+\sqrt{D})^2}{(C-A)^2}. \tag{105-109}$$
Solving $(n-f+1)(\sqrt{B}+\sqrt{D})^2/(C-A)^2 \le \varepsilon$, we obtain
$$\mathbb{E}\|g_b - \bar g\| \ge \Big(1+\frac{1}{\sqrt{n-f}}\Big)\sigma + \kappa + 2c\sqrt{2\sigma^2+4\kappa^2} + \sqrt{\frac{n-f+1}{p\varepsilon}}\Big(\sqrt{\big(4+16c^2+\tfrac{4}{n-f}\big)\sigma^2 + (4+8c^2)\kappa^2} + \sqrt{2\,\mathrm{const} + \big(96+\tfrac{6}{n-f}\big)\sigma^2 + 96\kappa^2}\Big), \tag{110}$$
which implies that Byzantine gradients that deviate far from the optimal gradient are filtered out by GAS. Therefore, for all $b\in\tilde{\mathcal{B}}$,
$$\mathbb{E}\|g_{\tilde{\mathcal{B}}} - \bar g\|^2 \le O\Big((\kappa^2+\sigma^2)\big(1+\lambda^2+\tfrac{1}{n-f}\big)\big(1+\tfrac{n-f+1}{p}\big)\Big) := C_1^2. \tag{111}$$
The elimination of $\varepsilon$ is due to the sub-Gaussian property of $g_{\tilde{\mathcal{B}}} - \bar g$, which comes from the Gaussian property of benign gradients. Combining Equations (49) and (111), $\mathbb{E}\|\hat g - \bar g\|^2$ is finally bounded as
$$\mathbb{E}\|\hat g - \bar g\|^2 \le \frac{(n-2f)^2}{(n-f)^2}\Big(\frac{\sigma^2}{n-2f} + \kappa^2\Big) + \frac{f^2}{(n-f)^2}C_1^2 = O\Big((\kappa^2+\sigma^2)\big(1+\tfrac{n-f+1}{p}\big)\big(1+\lambda^2+\tfrac{1}{n-f}\big)\frac{f^2}{(n-f)^2}\Big), \tag{112-115}$$
which completes the proof.
C.1.1. Proof for the Main Proposition
Proof. According to the Lipschitz property of the loss function L, we have
$$L(w^t) - L(w^{t+1}) \ge \langle\nabla L(w^t),\, w^t - w^{t+1}\rangle - \frac{L}{2}\|w^t - w^{t+1}\|^2. \tag{116}$$
Decomposing $w^t - w^{t+1}$ around $\nabla L(w^t)$ as $\nabla L(w^t) + (\hat g^t - \nabla L(w^t))$ (up to the learning-rate scaling), Equation (116) can be written as
$$L(w^t) - L(w^{t+1}) \ge \Big(\eta - \frac{L}{2}\eta^2\Big)\|\nabla L(w^t)\|^2 + \Big(\eta - \frac{L}{2}\eta^2\Big)\langle\nabla L(w^t),\, \hat g^t - \nabla L(w^t)\rangle - \frac{L}{2}\eta^2\|\hat g^t - \nabla L(w^t)\|^2. \tag{117}$$
We then bound the inner product term:
$$|\langle\nabla L(w^t),\, \hat g^t - \nabla L(w^t)\rangle| \le \|\nabla L(w^t)\|\cdot\|\hat g^t - \nabla L(w^t)\| \le \frac{1}{2}\|\nabla L(w^t)\|^2 + 2\|\hat g^t - \nabla L(w^t)\|^2. \tag{118-119}$$
Combining Equations (117) and (119),
$$L(w^t) - L(w^{t+1}) \ge \Big(\frac{1}{2}\eta - \frac{L}{4}\eta^2\Big)\|\nabla L(w^t)\|^2 - \Big(2\eta - \frac{L}{2}\eta^2\Big)\|\hat g^t - \nabla L(w^t)\|^2. \tag{120-121}$$
Taking the expectation on both sides of Equation (121),
$$\mathbb{E}[L(w^t) - L(w^{t+1})] \ge \Big(\frac{1}{2}\eta - \frac{L}{4}\eta^2\Big)\mathbb{E}\|\nabla L(w^t)\|^2 - \Big(2\eta - \frac{L}{2}\eta^2\Big)\mathbb{E}\|\hat g^t - \nabla L(w^t)\|^2. \tag{122}$$
Applying Lemma 3 to Inequality (122) and summing over $t = 0, 1, \ldots, T-1$, we have
$$\mathbb{E}[L(w^0) - L(w^T)] \ge \Big(\frac{1}{2}\eta - \frac{L}{4}\eta^2\Big)\sum_{t}\mathbb{E}\|\nabla L(w^t)\|^2 - T\Big(2\eta - \frac{L}{2}\eta^2\Big)C^2, \tag{123}$$
where $C^2 = O\big((\kappa^2+\sigma^2)(1+\frac{n-f+1}{p})(1+\lambda^2+\frac{1}{n-f})\frac{f^2}{(n-f)^2}\big)$. Taking η = 1/2L, and noting that the loss function is generally non-negative (e.g., cross-entropy loss, $\ell_2$ loss),
$$\mathbb{E}[L(w^0)] \ge \frac{3}{16L}\sum_{t}\Big(\mathbb{E}\|\nabla L(w^t)\|^2 - \frac{2}{3}C^2\Big), \tag{124}$$
which completes the proof.
C.2. Comparison of Our Convergence Results with Recent Works
Recent works (Karimireddy et al., 2022; Yu & Kar, 2022; El-Mhamdi et al., 2021; Allen-Zhu et al., 2020) also analyze the convergence of Byzantine-robust FL in the non-IID setting.
We compare our convergence results with them.

Similarities. We all guarantee that we can reach an approximately optimal point after a certain number of communication rounds. Moreover, we all admit that convergence in the presence of Byzantine clients may be impossible due to non-IID data, i.e., $\|\nabla L(w)\|$ may never decrease to zero.

The difference from Karimireddy et al. (2022). Our result is orthogonal to the one in Karimireddy et al. (2022), since our GAS method is orthogonal to the Bucketing scheme proposed there: we focus on how gradient splitting can alleviate the curse of dimensionality and gradient heterogeneity at the same time, while Karimireddy et al. (2022) consider how partitioning gradients into buckets can help with non-IID data. In fact, we can obtain a better convergence result by combining our method with the Bucketing scheme (Karimireddy et al., 2022). The result would enjoy the strengths of both our GAS method and the Bucketing scheme: (1) freedom from the curse of dimensionality; (2) the ability to handle the gradient heterogeneity that comes from non-IID data; (3) a variance term that diminishes when there is no Byzantine client.

The difference from El-Mhamdi et al. (2021). Technically, our result is orthogonal to the one in El-Mhamdi et al. (2021). El-Mhamdi et al. (2021) consider how to improve robust AGRs to achieve optimal Byzantine resilience, whereas we focus on how to handle the high-dimensional nature of gradients. Moreover, El-Mhamdi et al. (2021) focus on decentralized FL without a server and provide an order-optimal upper bound; however, this strong result requires a Byzantine ratio lower than 1/3. By contrast, we consider a centralized FL setting and only assume the Byzantine ratio to be lower than 1/2.

The difference from Peng et al. (2022). Peng et al. (2022) consider how client variance reduction and robust AGRs can jointly improve Byzantine resilience, while we concentrate more on gradient dimensions. Moreover, Peng et al. (2022) consider an ideal case where the objective function is strongly convex, while we consider a more general non-convex case.

The difference from Yu & Kar (2022). We consider different settings: we consider standard federated learning with a central server, whereas Yu & Kar (2022) consider distributed optimization without a central server. Besides, the convergence analyses are based on different assumptions:
• Yu & Kar (2022) assume strong convexity of the loss function (their Assumption 3) while we do not. This assumption is restrictive, since global models are neural networks in practical settings.
• Yu & Kar (2022) do not assume uniformly bounded gradient differences; instead, they assume a common minimizer among different agents (clients).
Due to the different settings and assumptions, our convergence results differ: Yu & Kar (2022) guarantee almost sure convergence, while we ensure that we can approach an approximately optimal parameter. Note that our upper bound matches the lower bound in (Karimireddy et al., 2022).

D. Experiment Setup

D.1. Setup for Main Experiments in Section 7

Data distribution. For CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009) and ImageNet-12 (Li et al., 2021b), we use the Dirichlet distribution to generate non-IID data, following Yurochkin et al. (2019) and Li et al. (2021a). In particular, for each client $i$, we sample $p_i^y \sim \mathrm{Dir}(\beta)$ and allocate a $p_i^y$ proportion of the data of label $y$ to client $i$, where $\mathrm{Dir}(\beta)$ represents the Dirichlet distribution with concentration parameter $\beta$. We follow Li et al. (2021a) and set the number of clients $n = 50$ and the concentration parameter $\beta = 0.5$ by default; a minimal sketch of this partitioning is given at the end of this subsection.

Other setups. The setups for the datasets FEMNIST (Caldas et al., 2018), CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009) and ImageNet-12 (Russakovsky et al., 2015) are listed in Table 8 below. The hyperparameters of the six attacks, BitFlip (Allen-Zhu et al., 2020), LabelFlip (Allen-Zhu et al., 2020), LIE (Baruch et al., 2019), Min-Max (Shejwalkar & Houmansadr, 2021), Min-Sum (Shejwalkar & Houmansadr, 2021) and IPM (Xie et al., 2020), are listed in Table 9 below.

Table 9: The hyperparameters of the six attacks. N/A indicates that the attack has no hyperparameters that need to be set.

Attack     Hyperparameters
BitFlip    N/A
LabelFlip  N/A
LIE        z = 1.5
Min-Max    γ_init = 10, τ = 1 × 10⁻⁵, δ: coordinate-wise standard deviation
Min-Sum    γ_init = 10, τ = 1 × 10⁻⁵, δ: coordinate-wise standard deviation
IPM        # eval = 2

The hyperparameters of the six robust AGRs, Multi-Krum (Blanchard et al., 2017), Bulyan (Guerraoui et al., 2018), Median (Yin et al., 2018), RFA (Pillutla et al., 2019), DnC (Shejwalkar & Houmansadr, 2021) and RBTM (El-Mhamdi et al., 2021), are listed in Table 10 below.
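To make the Dirichlet-based partitioning above concrete, the following is a minimal NumPy sketch; the function name and the toy label array are illustrative and not part of the paper's released code.

```python
import numpy as np

def dirichlet_partition(labels, n_clients=50, beta=0.5, seed=0):
    """Allocate a Dir(beta)-distributed proportion of each label's samples
    to every client, as described in the data-distribution paragraph above."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for y in np.unique(labels):
        idx_y = rng.permutation(np.flatnonzero(labels == y))
        proportions = rng.dirichlet(np.full(n_clients, beta))  # p_i^y ~ Dir(beta)
        cuts = (np.cumsum(proportions)[:-1] * len(idx_y)).astype(int)
        for i, part in enumerate(np.split(idx_y, cuts)):
            client_indices[i].extend(part.tolist())
    return client_indices

# Toy usage: 1000 samples over 10 classes with the default n = 50, beta = 0.5.
parts = dirichlet_partition(np.random.randint(0, 10, size=1000))
```

A smaller beta concentrates each label on fewer clients, which is why smaller beta corresponds to a higher non-IID level in Table 5.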
E. GAS mitigates the deviation of aggregated gradients

In Section 6, we claim that our GAS approach can reduce the deviation of the aggregated gradient $\hat g$ from the average of honest gradients $\bar g$. To verify this, we compare the deviation of the aggregated gradient of different defenses and their GAS variants in Figure 2. In particular, we use $\|\hat g - \bar g\|$, the distance between the aggregated gradient $\hat g$ and the average of honest gradients $\bar g$, to measure the degree of deviation. As shown in Figure 2, the gradient deviation of the GAS-enhanced defenses is much lower than that of their original versions, as expected, which validates that our GAS can mitigate the gradient deviation.

Figure 1: The experiments are conducted under the attack of 20% Byzantine clients on the CIFAR-10 dataset (Krizhevsky et al., 2009) in both IID and non-IID settings. More detailed setups are covered in Appendix A.

Figure 2: The gradient deviation $\|\hat g - \bar g\|$ of six different defenses with and without GAS under the LIE attack on CIFAR-10. The lower the better.
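For concreteness, here is a minimal PyTorch sketch of the group-wise splitting and abnormal scoring analysed in Appendix C; the function name, the generic `robust_agg` callback and the default selection size are illustrative assumptions, not the released GAS implementation.

```python
import torch

def gas_select(grads, robust_agg, p=10, n_keep=None):
    """Split each client gradient into p sub-vectors, robustly aggregate each
    group, accumulate per-group distances as abnormal scores s_i, and keep the
    clients with the smallest total scores (a majority is kept when the number
    of Byzantine clients f is unknown and n_keep is not given)."""
    n, _ = grads.shape
    scores = torch.zeros(n)
    for g in torch.chunk(grads, p, dim=1):                    # g: (n, ~d/p)
        g_hat = robust_agg(g)                                 # group aggregate
        scores += torch.linalg.vector_norm(g - g_hat, dim=1)  # s_i^(q)
    k = n_keep if n_keep is not None else n // 2 + 1
    keep = torch.topk(scores, k, largest=False).indices       # smallest scores
    return grads[keep].mean(dim=0), keep

# Example with coordinate-wise median as the base robust aggregator:
coord_median = lambda g: g.median(dim=0).values
g_hat, kept = gas_select(torch.randn(50, 1000), coord_median)
```

The deviation plotted in Figure 2 then amounts to `torch.linalg.vector_norm(g_hat - honest_grads.mean(dim=0))` over the honest subset.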
Table 1: Accuracy (mean ± std) of different defenses under six attacks on CIFAR-10, CIFAR-100, FEMNIST, and ImageNet-12.

CIFAR-10
Defense           BitFlip       LabelFlip     LIE           Min-Max       Min-Sum       IPM
Multi-Krum        43.19 ± 0.38  43.90 ± 0.03  37.03 ± 1.62  39.06 ± 0.07  23.68 ± 0.18  36.47 ± 0.22
GAS (Multi-Krum)  59.23 ± 0.55  61.47 ± 0.26  55.66 ± 0.93  49.19 ± 0.72  53.59 ± 0.96  56.94 ± 3.60
Bulyan            54.10 ± 0.19  55.12 ± 0.14  30.58 ± 0.75  29.03 ± 1.10  46.19 ± 0.92  33.88 ± 0.61
GAS (Bulyan)      59.14 ± 0.01  61.21 ± 0.60  48.90 ± 0.83  48.35 ± 1.58  53.74 ± 0.71  56.53 ± 1.51
Median            45.41 ± 0.44  51.88 ± 0.62  28.75 ± 0.35  32.72 ± 0.81  37.39 ± 0.90  43.21 ± 0.47
GAS (Median)      59.28 ± 0.24  61.24 ± 1.34  46.60 ± 0.13  49.37 ± 1.13  53.32 ± 1.90  56.33 ± 0.82
RFA               49.61 ± 0.31  44.35 ± 0.31  15.39 ± 0.37  16.62 ± 0.83  18.22 ± 0.43  45.92 ± 0.13
GAS (RFA)         53.35 ± 0.30  62.25 ± 0.56  52.69 ± 0.89  52.64 ± 1.48  56.16 ± 0.91  62.26 ± 1.27
DnC               58.63 ± 1.29  60.82 ± 1.56  61.07 ± 0.72  60.42 ± 0.59  53.71 ± 0.96  59.99 ± 0.82
GAS (DnC)         58.96 ± 0.60  61.02 ± 0.27  61.87 ± 0.51  61.04 ± 1.18  54.36 ± 1.12  57.92 ± 1.71
RBTM              54.27 ± 1.63  59.60 ± 1.76  47.67 ± 2.51  49.02 ± 0.31  50.74 ± 0.06  55.27 ± 1.60
GAS (RBTM)        59.41 ± 0.20  60.75 ± 0.19  52.10 ± 1.28  49.60 ± 0.17  53.63 ± 0.58  56.65 ± 1.52

CIFAR-100
Defense           BitFlip       LabelFlip     LIE           Min-Max       Min-Sum       IPM
Multi-Krum        34.27 ± 0.28  35.57 ± 0.94  17.17 ± 0.08  16.77 ± 0.78  22.89 ± 0.61  15.93 ± 2.00
GAS (Multi-Krum)  42.41 ± 0.58  42.55 ± 0.12  27.81 ± 0.32  31.18 ± 1.48  41.33 ± 0.50  42.62 ± 1.53
Bulyan            35.77 ± 0.18  42.60 ± 0.07  35.41 ± 0.40  35.53 ± 1.38  39.13 ± 0.12  40.27 ± 1.64
GAS (Bulyan)      42.28 ± 1.61  43.77 ± 0.46  38.39 ± 0.19  36.33 ± 1.51  40.73 ± 0.39  42.88 ± 0.14
Median            36.62 ± 0.12  41.64 ± 0.76  22.75 ± 0.04  23.21 ± 0.71  30.68 ± 0.26  40.98 ± 0.38
GAS (Median)      42.41 ± 0.66  42.62 ± 0.09  35.16 ± 1.08  36.46 ± 0.10  41.08 ± 0.04  43.63 ± 2.85
RFA               21.32 ± 0.84  28.76 ± 1.33  25.63 ± 0.20  26.46 ± 1.83  28.33 ± 0.93  21.36 ± 0.54
GAS (RFA)         42.64 ± 0.44  42.42 ± 0.25  26.30 ± 1.08  30.30 ± 0.12  41.09 ± 0.66  43.45 ± 0.52
DnC               41.77 ± 0.62  42.93 ± 0.07  42.95 ± 1.03  40.15 ± 0.70  40.02 ± 1.07  41.23 ± 2.29
GAS (DnC)         43.35 ± 0.41  43.57 ± 1.11  43.64 ± 0.11  41.66 ± 0.78  41.02 ± 1.39  43.25 ± 0.43
RBTM              36.35 ± 0.17  42.67 ± 1.55  24.06 ± 0.09  26.24 ± 1.04  36.51 ± 0.40  43.12 ± 1.12
GAS (RBTM)        43.44 ± 0.81  43.19 ± 2.65  33.14 ± 0.58  34.35 ± 0.76  41.51 ± 0.93  43.20 ± 0.76

FEMNIST
Defense           BitFlip       LabelFlip     LIE           Min-Max       Min-Sum       IPM
Multi-Krum        67.65 ± 0.23  57.43 ± 1.25  44.58 ± 0.07  28.32 ± 0.31  29.98 ± 0.45  12.26 ± 1.34
GAS (Multi-Krum)  84.29 ± 1.76  85.45 ± 0.40  74.76 ± 1.74  57.46 ± 0.33  70.65 ± 1.35  81.46 ± 0.18
Bulyan            77.58 ± 1.30  79.39 ± 2.14  56.43 ± 0.45  35.10 ± 0.69  44.83 ± 1.40  5.91 ± 0.17
GAS (Bulyan)      84.90 ± 0.69  83.68 ± 0.76  71.43 ± 1.07  66.22 ± 0.47  71.76 ± 0.99  82.97 ± 1.04
Median            80.25 ± 0.06  76.86 ± 1.96  64.88 ± 0.23  50.67 ± 0.37  61.33 ± 0.13  71.98 ± 0.77
GAS (Median)      84.59 ± 0.14  85.67 ± 0.48  76.19 ± 0.43  65.84 ± 0.41  70.84 ± 0.86  82.18 ± 0.40
RFA               5.46 ± 0.06   5.46 ± 0.01   5.46 ± 0.05   5.46 ± 0.03   5.46 ± 0.02   5.59 ± 0.09
GAS (RFA)         84.86 ± 0.78  84.59 ± 0.20  69.82 ± 0.33  69.18 ± 0.09  77.67 ± 1.31  86.08 ± 2.51
DnC               8.90 ± 0.31   77.71 ± 0.03  78.52 ± 0.28  8.29 ± 0.37   74.18 ± 0.03  74.70 ± 1.57
GAS (DnC)         84.71 ± 0.39  85.39 ± 0.64  82.54 ± 0.26  74.37 ± 0.50  75.41 ± 0.22  82.73 ± 1.22
RBTM              82.57 ± 0.34  81.57 ± 1.12  59.93 ± 0.20  65.20 ± 0.60  71.82 ± 0.73  76.88 ± 1.75
GAS (RBTM)        84.89 ± 1.94  85.44 ± 0.20  73.38 ± 0.31  66.24 ± 0.94  75.50 ± 1.13  82.58 ± 1.85

ImageNet-12
Defense           BitFlip       LabelFlip     LIE           Min-Max       Min-Sum       IPM
Multi-Krum        44.36 ± 1.52  34.04 ± 1.69  45.38 ± 1.04  48.72 ± 0.16  57.69 ± 0.30  33.14 ± 0.86
GAS (Multi-Krum)  66.79 ± 1.08  63.04 ± 0.14  57.15 ± 0.19  59.94 ± 0.32  64.07 ± 1.38  61.92 ± 0.04
Bulyan            62.28 ± 0.84  59.84 ± 1.09  48.04 ± 2.22  48.97 ± 1.87  59.94 ± 0.51  60.67 ± 0.07
GAS (Bulyan)      66.76 ± 0.72  62.28 ± 0.32  57.44 ± 0.39  58.81 ± 0.05  65.00 ± 0.08  62.76 ± 0.14
Median            55.93 ± 0.55  58.14 ± 0.18  46.67 ± 1.01  49.07 ± 1.19  58.40 ± 0.03  43.62 ± 1.72
GAS (Median)      66.28 ± 0.41  62.34 ± 1.10  60.74 ± 1.24  59.26 ± 0.31  64.78 ± 2.10  62.24 ± 0.51
RFA               61.12 ± 1.26  61.31 ± 1.68  49.49 ± 1.33  53.04 ± 0.13  61.92 ± 0.67  63.97 ± 0.93
GAS (RFA)         66.92 ± 1.58  63.88 ± 0.94  61.41 ± 0.02  59.42 ± 0.64  67.02 ± 0.54  66.67 ± 0.38
DnC               54.94 ± 0.04  5.59 ± 0.06   58.01 ± 1.52  58.11 ± 0.41  60.42 ± 1.60  59.99 ± 0.50
GAS (DnC)         65.19 ± 1.63  63.01 ± 0.27  64.42 ± 0.19  65.03 ± 1.23  65.38 ± 1.68  65.03 ± 0.04
RBTM              60.06 ± 1.76  60.44 ± 0.37  55.77 ± 0.82  57.50 ± 0.10  63.91 ± 0.78  56.19 ± 1.05
GAS (RBTM)        66.99 ± 0.38  61.92 ± 1.22  59.87 ± 0.72  59.81 ± 1.34  64.94 ± 0.72  63.40 ± 0.97

Table 2: Accuracy of different robust AGRs combined with Bucketing or GAS under six attacks on CIFAR-10.

Attack                  BitFlip  LabelFlip  LIE    Min-Max  Min-Sum  IPM
Bucketing (Multi-Krum)  47.87    49.86      45.90  43.53    44.92    50.28
GAS (Multi-Krum)        59.23    61.47      55.66  49.19    53.59    56.94
Bucketing (Bulyan)      51.79    61.16      46.02  45.90    52.30    56.44
GAS (Bulyan)            59.14    61.21      48.90  48.35    53.74    56.53
Bucketing (Median)      53.17    59.50      47.13  47.93    51.52    52.69
GAS (Median)            59.28    61.24      46.60  49.37    53.32    56.33
Bucketing (RFA)         52.55    58.44      48.71  47.51    52.29    55.19
GAS (RFA)               53.35    62.25      52.69  52.64    56.16    62.26
Bucketing (DnC)         57.79    59.39      57.53  55.09    53.83    54.01
GAS (DnC)               58.96    61.02      61.87  61.04    54.36    57.92
Bucketing (RBTM)        53.25    60.10      51.87  49.32    53.56    53.77
GAS (RBTM)              59.41    60.75      52.10  49.60    53.63    56.65

Table 3: Accuracy of GAS with different numbers of sub-vectors p under the LIE attack on CIFAR-10. d represents the number of model parameters.

p                 100    1000   10000  100000  1000000  2472266 (d)
GAS (Multi-Krum)  55.07  63.23  63.86  60.16   58.31    57.70
GAS (Bulyan)      50.29  57.11  59.82  60.42   60.47    59.90

Table 4: The accuracy of GAS with δ = 0.1, 0.3 under 20% LIE attack on CIFAR-10.
N/A represents the case where the number of Byzantine clients f is known to the server and the server can exclude exactly f clients, i.e., δ is not needed.

δ    Multi-Krum  Bulyan  Median  RFA    DnC    RBTM
N/A  55.66       48.90   46.60   52.69  61.87  52.10
0.1  54.30       46.94   45.74   52.21  56.93  51.47
0.3  50.48       44.96   43.09   52.04  60.50  50.21

Table 5: Accuracy (mean ± std) of different defenses against the LIE attack under different non-IID levels on CIFAR-10. A smaller β implies a higher non-IID level.

β    Multi-Krum    GAS (Multi-Krum)  Bulyan        GAS (Bulyan)  Median        GAS (Median)
0.3  12.19 ± 1.04  52.80 ± 0.74      28.16 ± 0.44  42.81 ± 0.63  25.62 ± 0.83  40.97 ± 0.89
0.7  31.01 ± 0.54  55.64 ± 0.60      44.72 ± 1.43  51.29 ± 0.35  34.04 ± 0.29  53.34 ± 0.08

β    RFA           GAS (RFA)         DnC           GAS (DnC)     RBTM          GAS (RBTM)
0.3  20.08 ± 0.13  48.77 ± 0.84      59.99 ± 1.81  60.21 ± 0.62  37.67 ± 0.18  49.27 ± 0.05
0.7  18.11 ± 0.24  53.25 ± 1.41      62.15 ± 0.73  62.48 ± 0.52  48.43 ± 0.22  52.25 ± 1.16

As shown in Table 2, our GAS outperforms Bucketing in most cases; the only exception is the LIE attack, under which the test accuracy of GAS (Median) is slightly lower than that of Bucketing (Median).

Number of sub-vectors. We vary the number of sub-vectors p across {100, 1000, 10000, 100000, 1000000, 2472266 (d)} under the LIE attack on the heterogeneous CIFAR-10 dataset. Other setups align with the main experiments. The results are provided in Table 3.

Table 5 demonstrates that, as the non-IID level increases, the improvement brought by GAS on robust AGRs is more significant. The results further confirm that our GAS can overcome the failures aggravated under a higher non-IID level.

Table 6: Accuracy (mean ± std) of different defenses against the LIE attack with different numbers of Byzantine clients f = {5, 15} on CIFAR-10. The number of total clients is fixed to n = 50.

f   Multi-Krum    GAS (Multi-Krum)  Bulyan        GAS (Bulyan)  Median        GAS (Median)
5   41.65 ± 1.78  61.24 ± 0.01      56.28 ± 1.44  58.27 ± 0.17  46.91 ± 1.36  57.69 ± 1.81
15  10.00 ± 0.00  34.70 ± 0.28      10.00 ± 0.00  31.67 ± 0.19  18.85 ± 1.54  30.95 ± 0.42

f   RFA           GAS (RFA)         DnC           GAS (DnC)     RBTM          GAS (RBTM)
5   22.37 ± 1.00  58.06 ± 1.29      62.27 ± 0.04  63.14 ± 0.20  55.92 ± 0.10  59.72 ± 0.16
15  16.16 ± 0.14  40.37 ± 0.26      57.28 ± 1.37  60.14 ± 1.64  34.93 ± 1.36  35.78 ± 1.51

Table 7: Accuracy (mean ± std) of different defenses against the LIE attack under different client numbers on CIFAR-10.

n    Multi-Krum    GAS (Multi-Krum)  Bulyan        GAS (Bulyan)  Median        GAS (Median)
75   28.72 ± 0.71  54.89 ± 0.16      23.37 ± 1.22  51.11 ± 0.00  44.89 ± 2.98  52.22 ± 1.64
100  32.49 ± 1.22  56.51 ± 0.01      21.93 ± 0.55  46.49 ± 1.33  33.82 ± 0.21  46.12 ± 0.17

n    RFA           GAS (RFA)         DnC           GAS (DnC)     RBTM          GAS (RBTM)
75   16.89 ± 1.38  49.85 ± 0.06      59.31 ± 1.33  59.75 ± 0.42  45.06 ± 0.96  50.24 ± 0.31
100  14.01 ± 1.34  49.85 ± 1.97      58.88 ± 1.45  59.61 ± 1.19  40.38 ± 0.48  47.02 ± 0.03
Table 8: Default experimental settings for FEMNIST, CIFAR-10, CIFAR-100 and ImageNet-12 (listed in that order).

Architecture:            CNN (Caldas et al., 2018) / AlexNet (Krizhevsky et al., 2017) / SqueezeNet (Iandola et al., 2016) / ResNet-18 (He et al., 2016)
# Communication rounds:  1000 / 200 / 400 / 200
Client sample ratio:     0.005 / 0.1 / 0.1 / 0.1
# Local epochs:          1 / 5 / 1 / 1
Optimizer:               SGD / SGD / SGD / SGD
Batch size:              64 / 64 / 64 / 128
Learning rate:           0.5 / 0.1 / 0.1 / 0.1
Momentum:                0.5 / 0.5 / 0.5 / 0.9
Weight decay:            0.0001 / 0.0001 / 0.0001 / 0.0001
Learning rate decay:     No / No / No / Reduce to 0.01 after the 100th communication round
Gradient clipping:       Yes / Yes / Yes / Yes
Clipping norm:           2 / 2 / 2 / 2

Table 10: The default hyperparameters of the robust AGRs. N/A indicates that the robust AGR has no hyperparameters that need to be set.

AGR         Hyperparameters
Multi-Krum  N/A
Bulyan      N/A
Median      N/A
RFA         T = 3
DnC         c = 4, n_iters = 1, b = 10000
RBTM        N/A

Acknowledgements

This work is supported by the National Key R&D Program of China (No. 2022YFB3304100) and by the Zhejiang University-China Zheshang Bank Co., Ltd. Joint Research Center. This work is also sponsored by Sony AI.

References

Acharya, A., Hashemi, A., Jain, P., Sanghavi, S., Dhillon, I. S., and Topcu, U. Robust training in high dimensions via block coordinate geometric median descent. In International Conference on Artificial Intelligence and Statistics, pp. 11145-11168. PMLR, 2022.

Allen-Zhu, Z., Ebrahimianghazani, F., Li, J., and Alistarh, D. Byzantine-resilient non-convex stochastic gradient descent. In International Conference on Learning Representations, 2020.

Baruch, G., Baruch, M., and Goldberg, Y. A little is enough: Circumventing defenses for distributed learning. Advances in Neural Information Processing Systems, 32, 2019.

Bernstein, J., Wang, Y.-X., Azizzadenesheli, K., and Anandkumar, A. signSGD: Compressed optimisation for non-convex problems. In International Conference on Machine Learning, pp. 560-569. PMLR, 2018.

Blanchard, P., El Mhamdi, E. M., Guerraoui, R., and Stainer, J. Machine learning with adversaries: Byzantine tolerant gradient descent. Advances in Neural Information Processing Systems, 30, 2017.
Caldas, S., Duddu, S. M. K., Wu, P., Li, T., Konečnỳ, J., McMahan, H. B., Smith, V., and Talwalkar, A. Leaf: A benchmark for federated settings. arXiv preprint arXiv:1812.01097, 2018.

Chen, C., Zhang, J., Tung, A. K., Kankanhalli, M., and Chen, G. Robust federated recommendation system. arXiv preprint arXiv:2006.08259, 2020.

Chen, C., Liu, Y., Ma, X., and Lyu, L. Calfat: Calibrated federated adversarial training with label skewness. In NeurIPS, 2022a.

Chen, C., Lyu, L., Yu, H., and Chen, G. Practical attribute reconstruction attack against federated learning. IEEE Transactions on Big Data, 2022b.

Data, D. and Diggavi, S. Byzantine-resilient high-dimensional sgd with local iterations on heterogeneous data. In International Conference on Machine Learning, pp. 2478-2488. PMLR, 2021.

El-Mhamdi, E. M., Farhadkhani, S., Guerraoui, R., Guirguis, A., Hoang, L.-N., and Rouault, S. Collaborative learning in the jungle (decentralized, byzantine, heterogeneous, asynchronous and nonconvex learning). Advances in Neural Information Processing Systems, 34:25044-25057, 2021.

Farhadkhani, S., Guerraoui, R., Gupta, N., Pinot, R., and Stephan, J. Byzantine machine learning made easy by resilient averaging of momentums. In International Conference on Machine Learning, pp. 6246-6283. PMLR, 2022.

Ghosh, A., Hong, J., Yin, D., and Ramchandran, K. Robust federated learning in a heterogeneous environment. arXiv preprint arXiv:1906.06629, 2019.

Guerraoui, R., Rouault, S., et al. The hidden vulnerability of distributed learning in byzantium. In International Conference on Machine Learning, pp. 3521-3530. PMLR, 2018.

He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., and Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360, 2016.

Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., Bonawitz, K., Charles, Z., Cormode, G., Cummings, R., et al. Advances and open problems in federated learning. Foundations and Trends in Machine Learning, 14(1-2):1-210, 2021.

Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., and Suresh, A. T. Scaffold: Stochastic controlled averaging for federated learning. In International Conference on Machine Learning, pp. 5132-5143. PMLR, 2020.

Karimireddy, S. P., He, L., and Jaggi, M. Learning from history for byzantine robust optimization. In International Conference on Machine Learning, pp. 5311-5319. PMLR, 2021.

Karimireddy, S. P., He, L., and Jaggi, M. Byzantine-robust learning on heterogeneous datasets via bucketing. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=jXKKDEi5vJt.

Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84-90, 2017.

Krizhevsky, A. et al. Learning multiple layers of features from tiny images. 2009.

Li, Q., Diao, Y., Chen, Q., and He, B. Federated learning on non-iid data silos: An experimental study. arXiv preprint arXiv:2102.02079, 2021a.

Li, Y., Lyu, X., Koren, N., Lyu, L., Li, B., and Ma, X. Anti-backdoor learning: Training clean models on poisoned data. Advances in Neural Information Processing Systems, 34, 2021b.

Lyu, L., Yu, H., and Yang, Q. Threats to federated learning: A survey. arXiv preprint arXiv:2003.02133, 2020.

Lyu, L., Yu, H., Ma, X., Chen, C., Sun, L., Zhao, J., Yang, Q., and Philip, S. Y. Privacy and robustness in federated learning: Attacks and defenses. IEEE Transactions on Neural Networks and Learning Systems, 2022.

McMahan, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273-1282. PMLR, 2017.

Park, J., Han, D.-J., Choi, M., and Moon, J. Sageflow: Robust federated learning against both stragglers and adversaries. Advances in Neural Information Processing Systems, 34:840-851, 2021.

Peng, J., Wu, Z., Ling, Q., and Chen, T. Byzantine-robust variance-reduced federated learning over distributed non-iid data. Information Sciences, 616:367-391, 2022.

Pillutla, K., Kakade, S. M., and Harchaoui, Z. Robust aggregation for federated learning. arXiv preprint arXiv:1912.13445, 2019.

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.

Shejwalkar, V. and Houmansadr, A. Manipulating the byzantine: Optimizing model poisoning attacks and defenses for federated learning. In NDSS, 2021.

Wan, W., Hu, S., Lu, J., Zhang, L. Y., Jin, H., and He, Y. Shielding federated learning: Robust aggregation with adaptive client selection. In Raedt, L. D. (ed.), Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI), 2022.

Yurochkin, M., Agarwal, M., Ghosh, S., Greenewald, K., Hoang, N., and Khazaeni, Y. Bayesian nonparametric federated learning of neural networks. In International Conference on Machine Learning, pp. 7252-7261, 2019.

Zhang, J., Chen, C., Li, B., Lyu, L., Wu, S., Ding, S., Shen, C., and Wu, C. Dense: Data-free one-shot federated learning. Advances in Neural Information Processing Systems, 35:21414-21428, 2022.

Zhang, J., Li, B., Chen, C., Lyu, L., Wu, S., Ding, S., and Wu, C. Delving into the adversarial robustness of federated learning. arXiv preprint arXiv:2302.09479, 2023.

Zhao, Y., Zhao, J., Jiang, L., Tan, R., Niyato, D., Li, Z., Lyu, L., and Liu, Y. Privacy-preserving blockchain-based federated learning for iot devices. IEEE Internet of Things Journal, 8(3):1817-1829, 2020.
Microscopy image reconstruction with physics-informed denoising diffusion probabilistic model

Rui Li*, Gabriel Della Maggiora*, Vardan Andriasyan, Anthony Petkidis, Artsemi Yushkevich, Mikhail Kudryashev, Artur Yakimovich

Affiliations: Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany; Department of Molecular Life Sciences, University of Zurich, Zurich, Switzerland; Artificial Intelligence for Life Sciences CIC, Dorset, United Kingdom; Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Institute of Medical Physics and Biophysics, Charite-Universitätsmedizin, Berlin, Germany

Abstract. Light microscopy is a widespread and inexpensive imaging technique facilitating biomedical discovery and diagnostics. However, the light diffraction barrier and imperfections in the optics limit the level of detail of the acquired images. The details lost can be reconstructed, among other approaches, by deep learning models. Yet, deep learning models are prone to introduce artefacts and hallucinations into the reconstruction, and recent state-of-the-art image synthesis models like the denoising diffusion probabilistic models (DDPMs) are no exception to this. We propose to address this by incorporating the physical problem of microscopy image formation into the model's loss function. To overcome the lack of microscopy data, we train this model with synthetic data: we simulate the effects of the microscope optics through the theoretical point spread function and vary the noise levels. Furthermore, we incorporate the physical model of a light microscope into the reverse process of a conditioned DDPM, proposing a physics-informed DDPM (PI-DDPM). We show consistent improvements and artefact reductions when compared to model-based methods, deep-learning regression methods and regular conditioned DDPMs.

* Equal contribution. Preprint. Under review.

Introduction

Since its discovery, light microscopy (LM) remains an important and accessible way to explore the hidden biomedical world. The cost of simple LM equipment keeps dropping, making it available to classrooms for education [1-4] and to medical laboratories for applications like cytometry [1]. Furthermore, the contributions of advanced LM techniques like fluorescence microscopy [5], confocal microscopy [6] or superresolution microscopy [7] to a plethora of biomedical discoveries of the past century [8] are hard to overstate. Among the notable techniques widely used in the laboratories and classrooms of the world, one could name upright and inverted brightfield microscopy, widefield fluorescence microscopy (epifluorescence) and confocal microscopy [8, 6, 5]. In brightfield (transmission light) microscopy, the image is formed through the direct interaction of the white illumination light with the specimen.
In widefield fluorescence microscopy, image formation is a result of the excitation of molecular fluorophores, which emit photons contributing to the image. Finally, confocal microscopy builds upon widefield fluorescence microscopy by removing the emission light coming from outside of the immediate focal plane, thereby reducing blur in the image. This is achieved by introducing a pinhole in the light path of the microscope [8].

Yet, optical systems remain fundamentally limited owing to the principles of their design. With few exceptions, all LM systems obtain the image by collecting the light interacting with the specimen using a system of optical lenses. Passing through these components, light becomes scattered and distorted, causing imperfections and blur in the obtained image. The blur and imperfections of a point source (e.g. single-molecule fluorescence) can be expressed mathematically as a point spread function (PSF) [9]. The PSF describes the spread of light that occurs from scattering and diffraction as the light passes through the optical components of the microscope. The advent of digital microscopy and image processing allowed us to attempt to alleviate these limitations algorithmically, in a process referred to as deconvolution [10-12]. While these algorithms are capable of significant improvements in image quality, they are prone to introduce artefacts due to their simplicity. On the other side of the spectrum, recent advances in deep learning (DL) with trainable models like convolutional neural networks (CNNs) allow for data-driven image restoration [13-15]. However, conventional trainable solutions require large amounts of training data, massive learning models and long training times. To address these shortcomings, we introduce a physics-based diffusion model with a physics-informed term incorporated into the loss function. We train our model on ImageNet [16] images simulated to look like micrographs. We show that this approach not only provides a simpler and more principled model but also produces more natural-looking results.

The objective of image reconstruction in microscopy is to recover the high-frequency details that are lost due to the diffraction limit and the optical imperfections of a microscope. Mathematical methods used for image reconstruction in microscopy can be categorised as deconvolution methods [17, 18, 9], regularisation methods [19-21] and Bayesian methods [22]. Traditional methods, however, do not capture the complexity of the images, leading to reconstructions that show little improvement in resolution. Additionally, these methods can be susceptible to the noise and artefacts present when acquiring the images, leading to degraded performance in the reconstructed image. To address these issues, researchers have turned to DL models, which have shown promising results in different tasks, particularly image reconstruction. For example, Xu and co-authors [23] proposed to use CNNs to capture the characteristics of degradation, rather than modelling outliers perfectly. Specifically, the authors transformed a simple pseudo-inverse kernel for deconvolution into a CNN. Later, Ronneberger and colleagues incorporated deconvolution layers in their U-Net architecture [24]. More recently, the image restoration task has been attempted with generative adversarial networks [25, 26]. Another recent trend is a class of likelihood-based generative models called denoising diffusion probabilistic models (DDPMs) [27].
They have shown promise in several tasks such as superresolution [28], image colourisation, inpainting, uncropping and JPEG restoration [29]. Furthermore, diffusion models have desirable properties such as distribution coverage, a stationary optimisation objective, scalability and training stability, and have shown better sample quality than generative adversarial networks [30]. However, most of these approaches largely ignore the great body of knowledge gathered on microscopy systems throughout the centuries by optical physics. Furthermore, one common issue with DL methods, especially generative models, is the tendency to generate unrealistic structures that are not present in the real image, which is problematic in fields in which an accurate reconstruction is important, such as medical diagnostic imaging and microscopy [31, 32]. To circumvent this, in other domains where the physics of the process is well understood, researchers have recently proposed an approach called physics-informed neural networks [33]. In this approach, prior knowledge of the laws of physics may be employed as a regularisation for DL models. The method we propose here incorporates a physics-informed term into the loss function of DDPMs; in particular, we propose to incorporate the physical problem into the DDPM loss function using the technique shown in [30, 34].

Methods

In LM, the diffraction pattern generated in an ideal optical system is the impulse response, referred to as the point spread function (PSF) [9]. The PSF of LM varies depending on the specifics of the technique employed, e.g. widefield and confocal LM (Fig. 1a,b). In fluorescence microscopy, the illumination (excitation) and detection (emission) wavelengths are usually not the same, so the most suitable model of the PSF [35] can be expressed as

$h(x, y, z) = |u_{\lambda_{ex}}(x, y, z)|^2\,|u_{\lambda_{em}}(x, y, z)|^2$,

where $u_\lambda$ corresponds to the amplitude PSF for the respective emission or excitation wavelength $\lambda$. This model is known as the Airy diffraction pattern [36]. To add the effect of the pinhole used in confocal microscopy, we can convolve a disk function with the emission-side amplitude PSF. The disk function is usually modelled as

$T(x, y) = \mathbb{1}_{x^2 + y^2 \le R^2}$,

where $R$ is the radius of the pinhole. Using this notion, we can rewrite the PSF as

$h(x, y, z) = |T(x, y) * u_{\lambda_{em}}(x, y, z)|^2\,|u_{\lambda_{ex}}(x, y, z)|^2$.

To model $u$ we follow the Arnison-Sheppard approach [37], in which we model the optical transfer function (OTF). In the Arnison-Sheppard model, the OTF is expressed as the autocorrelation of the pupil function. Mathematically, the OTF $C(\vec{K})$ is expressed in k-space as

$C(\vec{K}) = \int Q\big(\vec{m} + \tfrac{1}{2}\vec{K}\big) \cdot Q^*\big(\vec{m} - \tfrac{1}{2}\vec{K}\big)\, d\vec{m}$,

where $Q$ corresponds to the complex vectorial pupil function [38]. The complex vectorial pupil function is a complex-valued function $Q(\vec{m})$ of the position vector $\vec{m} = (k_x, k_y, k_z)$ within the aperture of an optical system. This function can be expressed as

$Q(\vec{m}) = A(\vec{m})\,e^{i\phi(\vec{m})}$,

where $A$ is the amplitude transmission function with respect to the numerical aperture of the microscope and $\phi$ is the phase shift produced by aberrations and microscope imperfections. Finally, to obtain $u$, the inverse Fourier transform is applied to the OTF.
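As an illustration of the pupil-based PSF model above, here is a minimal 2D NumPy sketch for the in-focus plane. It assumes an aberration-free circular pupil (φ = 0) and illustrative grid and wavelength values; the full 3D vectorial Arnison-Sheppard computation used in the paper is more involved.

```python
import numpy as np

n, dx = 256, 0.05                        # grid size, pixel pitch (micrometres)
na = 0.9                                 # numerical aperture (illustrative)
fx = np.fft.fftfreq(n, d=dx)             # spatial frequencies (cycles/um)
kx, ky = np.meshgrid(fx, fx)

def intensity_psf(wavelength_um):
    """|IFFT(pupil)|^2 of a circular, aberration-free pupil: the in-focus
    Airy pattern for the given wavelength."""
    pupil = ((kx**2 + ky**2) <= (na / wavelength_um) ** 2).astype(complex)
    amplitude = np.fft.ifft2(pupil)      # coherent amplitude spread function
    psf = np.abs(np.fft.fftshift(amplitude)) ** 2
    return psf / psf.sum()               # normalise to unit energy

h_ex, h_em = intensity_psf(0.35), intensity_psf(0.50)  # excitation, emission
h_confocal = h_ex * h_em                 # ideal (point-like) pinhole limit
h_confocal /= h_confocal.sum()
```

The pointwise product corresponds to the zero-radius pinhole limit of the convolution with T above; a finite pinhole would convolve the emission amplitude with T before taking the modulus squared.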
Light Microscopy Image Acquisition Model

LM is an imaging technique in which an analogue-digital converter detects the electrical impulses generated by the light. As a result, the image statistics can be well represented by a Poisson process. If several acquisitions are made and averaged, then, according to the central limit theorem, the statistics of the image can be modelled by a Gaussian process. The mathematical model for optical systems assumes that the system is linear and shift-invariant. Therefore the image acquisition model is described by the equation

$I = \phi(h * x + b)$,

where the image $I$ is the result of the convolution between the object $x$ and the system PSF $h$, with the background signal $b$ added. The Poisson noise $\phi$ is applied afterwards over the true signal given by the previous equation.

Simulated Dataset Generation

We used the previously described diffraction model to simulate the effects of different microscopes on two sets of images: photographs obtained from ImageNet [16], and structured illumination microscopy images obtained from the BioSR dataset [26]. To mimic microscopy images using these datasets, each image was convolved with a PSF and then had Poisson noise applied to it (Fig. 1c). The PSF for each image was randomly generated with physically plausible parameters for microscopy systems. Specifically, we sampled the numerical aperture between 0.4 and 1.0, the excitation wavelength between 320 nm and 400 nm, and the emission wavelength between 450 nm and 550 nm. The pinhole size was sampled between 0.1 µm and 1000 µm. The focal plane was considered to be the centre of the volume object, and the refractive index was chosen as 1.33, which corresponds to the refractive index of water. Using these parameters, we generated 30000 different PSFs and randomly paired objects and PSFs for convolution to generate the training dataset.
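A minimal sketch of the acquisition model $I = \phi(h * x + b)$ used to generate this training data follows; the photon budget, background level and the stand-in Gaussian PSF are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def acquire(obj, psf, background=0.01, photons=200.0, seed=0):
    """Simulate I = phi(h * x + b), with phi a Poisson sampler whose noise
    level is set by the (illustrative) photon budget."""
    rng = np.random.default_rng(seed)
    blurred = fftconvolve(obj, psf, mode="same")      # h * x
    rate = np.clip(blurred + background, 0.0, None)   # non-negative intensity
    return rng.poisson(photons * rate) / photons      # shot-noise-corrupted image

# Point source blurred by a stand-in Gaussian PSF:
x = np.zeros((128, 128)); x[64, 64] = 1.0
yy, xx = np.mgrid[-8:9, -8:9]
psf = np.exp(-(xx**2 + yy**2) / 8.0); psf /= psf.sum()
img = acquire(x, psf)
```

Lowering `photons` increases the relative shot noise, which is how the varying noise levels mentioned in the abstract can be obtained.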
Conditioned Denoising Diffusion Probabilistic Models

Denoising diffusion probabilistic models (DDPMs) are latent variable models that use a sequence of latent variables to model the data [27] (see Supplementary Materials). In the conditioned case, samples are drawn from an unknown distribution $p(y|x)$. We speak of a distribution because conditioned image synthesis is by nature ill-posed; specifically, there are many possible solutions $y$ for any given input $x$. In the conditioned process we want to learn an approximation of this distribution. It is possible to condition DDPMs in two ways. The first way is to redefine the Markov chain. Given an image $y$ and some corruption process $p(x|y)$, we want to learn $p(y|x)$. To achieve this, the states of the diffusion Markov chain are concatenated with the respective conditioning image $x$ [29]. Specifically, the distributions of the states of the Markov chain are generated by the diffusion process $q(y_t|y_{t-1})$ and concatenated with the conditioning image $x$, where

$y_t = \sqrt{\bar\alpha_t}\,y_0 + \epsilon\sqrt{1 - \bar\alpha_t}, \qquad \epsilon \sim \mathcal{N}(\epsilon; 0, I)$.

To learn the reverse process, a reverse Markov chain is established, where $p(y_T) = \mathcal{N}(y_T; 0, I)$:

$p_\theta(y_{t-1}|y_t, x) = \mathcal{N}\big(y_{t-1};\,\mu_\theta(x, y_t, \bar\alpha_t),\,\sigma^2 I\big)$.

Using the same formulation as in regular DDPMs [29], we train the model to predict $\epsilon$ at each time step:

$\min_\theta L_{simple} := \mathbb{E}_{t,(x,y_0),\epsilon}\,\|\epsilon - \epsilon_\theta(y_t, x, \bar\alpha_t)\|_2^2$.

Finally, to obtain $y_0$, the same iterative denoising as in regular DDPMs is applied:

$y_{t-1} = \frac{1}{\sqrt{\alpha_t}}\Big(y_t - \frac{\beta_t}{\sqrt{1 - \bar\alpha_t}}\,\epsilon_\theta(x, y_t, \bar\alpha_t)\Big) + \sqrt{\beta_t}\,\epsilon_t$.

Model-guided Denoising Diffusion Probabilistic Models

The second way to obtain conditioned samples from a diffusion model is to condition an unconditioned reverse process [30]. Given an unconditional reverse process $p_\theta(y_{t-1}|y_t)$, to condition on a label $x$ we can factorise

$p_{\theta,\phi}(y_{t-1}|y_t, x) = Z\,p_\theta(y_{t-1}|y_t)\,p_\phi(x|y_{t-1})$,

where $Z$ is a normalising constant. This expression can be approximated as a perturbed Gaussian distribution. Since our unconditioned reverse process is a Gaussian, we have

$p_\theta(y_{t-1}|y_t) = \mathcal{N}(\mu, \Sigma)$,
$\log p_\theta(y_{t-1}|y_t) = -\frac{1}{2}(y_{t-1} - \mu)^T\Sigma^{-1}(y_{t-1} - \mu) + C$.

Since, in the limit of infinitely many steps, the distribution of the reverse process tends to a delta distribution, it is reasonable to approximate $p(x|y_t)$ by its Taylor expansion around the mean:

$\log p(x|y_t) \approx \log p(x|y_t)\big|_{y_t=\mu} + (y_t - \mu)\,\nabla_{y_t}\log p(x|y_t)\big|_{y_t=\mu} = (y_t - \mu)\,g + C_1$,

where $g = \nabla_{y_t}\log p(x|y_t)$. Finally, by replacing and rearranging, we get

$p(y_{t-1}|y_t)\,p(x|y_{t-1}) \approx \mathcal{N}(\mu + \Sigma g,\,\Sigma)$.

Thus, the reverse conditioned process approximates the unconditioned Gaussian transition with its mean shifted by $\Sigma g$.
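The conditioned training objective $L_{simple}$ can be sketched in PyTorch as follows; the network interface `eps_model(y_t, x, a_bar)` and the linear beta schedule are illustrative assumptions, not the paper's exact architecture.

```python
import torch

betas = torch.linspace(1e-4, 0.02, 1000)        # illustrative noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def l_simple(eps_model, y0, x):
    """Diffuse y0 to a random timestep and regress the injected noise,
    with the conditioning image x passed to the network."""
    b = y0.shape[0]
    t = torch.randint(0, alpha_bar.numel(), (b,), device=y0.device)
    a = alpha_bar.to(y0.device)[t].view(b, 1, 1, 1)
    eps = torch.randn_like(y0)
    y_t = a.sqrt() * y0 + (1.0 - a).sqrt() * eps  # forward diffusion q(y_t | y_0)
    return ((eps - eps_model(y_t, x, a)) ** 2).mean()

# Dummy usage with a placeholder network:
eps_model = lambda y, x, a: torch.zeros_like(y)
loss = l_simple(eps_model, torch.randn(2, 1, 32, 32), torch.randn(2, 1, 32, 32))
```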
This way, the diffusion model will predict the gradient of the physical problem alongside the ϵ of L simple The inference, in this case, is performed in the following way: y t−1 = 1 √ α t y t − β t √ 1 −ᾱ t ϵ θ (x, y t ,ᾱ t ) + ν t ∇ yt ||K * y t − x|| m m + β t ϵ t . This is analogous to learning the unconditioned model and guide during the inference scaled by a specific constant depending on the schedule. This expression highlights the usefulness of our approach since it enables the incorporation of information only available during training to guide the DDPM inference. Additionally, we can add a regulariser in the same fashion as the model-based methods to have a DDPM model guided by an interpretable physical prior. y t−1 = 1 √ α t y t − β t √ 1 −ᾱ t ϵ(x, y t ,ᾱ t )) + β t ϵ t −ν t β t √ 1 −ᾱ t ∇ yt ||K * y t −x|| m m −λ∇ yt R(y t ). Metrics To assess the performance of all the models in this study we used multi-scale structural similarity index measure (MS-SSIM) [39], normalised root mean square error NRMSE := √ MSE/y max − y min , and peak single-to-noise ratio (PSNR). MS-SSIM metric is defined as: MS-SSIM(x, y) := [l M (x, y)] α M M j=1 [c j (x, y)] βj [s j (x, y)] γj , where l j , c j , and s j are the measures of luminance, contrast, and structure corresponding to scale j. We used five scales and α j = β j = γ j for M j=1 γ j = 1 in accordance with the parameters reported in [39]. PSNR was defined as: PSNR := 20 log 10 (MAX I ) − 10 log 10 (MSE), where MAX I is the maximum pixel value of the image and MSE is the mean square error. Training Details DDPM and PI-DDPM were trained in a cluster environment employing a single Nvidia A100 with 40GB of vRAM. The batch size was 16. Pre-training continued for 1 million iterations using an ImageNet-derived dataset and fine-tuned for 800 000 iterations on the mixed dataset. The U-Net model was trained in a cluster environment using a single Nvidia V100 GPU equipped with 32GB of vRAM. Pre-training continued for 80 000 iterations, and fine-tuning for 10 000. Datasets Simulated Datasets To train our models on the image reconstruction task for microscopy, we required a dataset large enough to prevent overfitting. For this we have constructed a simulated dataset using photography images from ImageNet [16] containing 1.2 million training and 100 000 test images. To simulate micrographs, we have processed each image using a forward microscopy model, which we have termed the Image Acquisition Model (see Methods and Fig. 1c). We have then employed the processed ImageNet-derived dataset in accordance with the training and test holdouts defined in the original dataset and trained a U-Net [24], DDPM [27] models, as well as the PI-DDPM model proposed here on the task of reconstructing the high-resolution image from the simulated blurred micrograph. Next, to further increase the relevance of our training dataset, we have combined the ImageNet-derived simulated microscopy dataset with a simulated dataset obtained from the publicly available BioSR dataset [26]. BioSR dataset contains approximately 20 000 structured illumination microscopy (SIM) fluorescence images of subcellular structures like clathrin-coated pits (CCPs), endoplasmatic reticulum (ER), microtubules (MT) and F-actin imaged at varying levels of fluorescence. Specifically, from the BioSR containing pairs of low and high-resolution images we have taken all high-resolution images. We followed BioSR train-test split. 
Metrics

To assess the performance of all the models in this study we used the multi-scale structural similarity index measure (MS-SSIM) [39], the normalised root mean square error, $\mathrm{NRMSE} := \sqrt{\mathrm{MSE}}/(y_{max} - y_{min})$, and the peak signal-to-noise ratio (PSNR). The MS-SSIM metric is defined as

$\text{MS-SSIM}(x, y) := [l_M(x, y)]^{\alpha_M}\prod_{j=1}^{M}[c_j(x, y)]^{\beta_j}[s_j(x, y)]^{\gamma_j}$,

where $l_j$, $c_j$ and $s_j$ are the measures of luminance, contrast and structure corresponding to scale $j$. We used five scales and $\alpha_j = \beta_j = \gamma_j$ with $\sum_{j=1}^{M}\gamma_j = 1$, in accordance with the parameters reported in [39]. PSNR was defined as

$\mathrm{PSNR} := 20\log_{10}(\mathrm{MAX}_I) - 10\log_{10}(\mathrm{MSE})$,

where $\mathrm{MAX}_I$ is the maximum pixel value of the image and MSE is the mean square error.

Training Details

DDPM and PI-DDPM were trained in a cluster environment employing a single Nvidia A100 GPU with 40 GB of vRAM and a batch size of 16. Pre-training continued for 1 million iterations using the ImageNet-derived dataset, followed by fine-tuning for 800 000 iterations on the mixed dataset. The U-Net model was trained in a cluster environment using a single Nvidia V100 GPU equipped with 32 GB of vRAM; pre-training continued for 80 000 iterations, and fine-tuning for 10 000.

Datasets

Simulated Datasets. To train our models on the image reconstruction task for microscopy, we required a dataset large enough to prevent overfitting. For this, we constructed a simulated dataset using photography images from ImageNet [16], containing 1.2 million training and 100 000 test images. To simulate micrographs, we processed each image using a forward microscopy model, which we have termed the Image Acquisition Model (see Methods and Fig. 1c). We then employed the processed ImageNet-derived dataset, in accordance with the training and test holdouts defined in the original dataset, and trained a U-Net [24] model and a DDPM [27], as well as the PI-DDPM model proposed here, on the task of reconstructing the high-resolution image from the simulated blurred micrograph. Next, to further increase the relevance of our training dataset, we combined the ImageNet-derived simulated microscopy dataset with a simulated dataset obtained from the publicly available BioSR dataset [26]. The BioSR dataset contains approximately 20 000 structured illumination microscopy (SIM) fluorescence images of subcellular structures like clathrin-coated pits (CCPs), endoplasmic reticulum (ER), microtubules (MT) and F-actin, imaged at varying levels of fluorescence. Specifically, from the BioSR pairs of low- and high-resolution images we took all high-resolution images, following the BioSR train-test split. During training, the high-resolution images constituted the ground truth, while images processed with our Image Acquisition Model (see Methods) were used as the input. This combined dataset was then used for fine-tuning our models. At test time we used the BioSR low-resolution (widefield) images as input.

Direct Stochastic Optical Reconstruction Microscopy Dataset. To test how our method compares to state-of-the-art single-molecule localisation microscopy (SMLM), we employed a publicly available three-colour direct stochastic optical reconstruction microscopy (dSTORM) dataset [40]. In this dataset the authors provide a widefield and an SMLM-reconstructed high-resolution image containing a mid-zygotene nucleus immunostained for the SYCP3 (red), DMC1 (green) and RAD51 (blue) proteins. Images in this dataset were acquired with a Zeiss Elyra PS1 microscope using a 100x 1.46 NA oil immersion objective. Further imaging details are provided by the authors in [41].

Prospective Correlative Widefield-Confocal Microscopy Dataset. Finally, to test how our model performs on a prospectively acquired dataset, we obtained a correlated widefield-confocal microscopy dataset. For this, A549 lung carcinoma cell line cells were seeded in 96-well imaging plates the night prior to imaging, then fixed with 4% paraformaldehyde (Sigma) and stained for DNA with the Hoechst 33342 fluorescent dye (Sigma). The cell culture was maintained similarly to the procedures described in [42]. Next, the stained cell nuclei were imaged using an ImageXpress Confocal system (Molecular Devices) in either confocal or widefield mode, employing a Nikon 20X Plan Apo Lambda objective. To obtain 3D information, images in both modes were acquired as Z-stacks with 0.3 µm and 0.7 µm steps for the confocal and widefield modes, respectively. The confocal Z-stack was Nyquist-sampled. The excitation wavelength was 405 nm and the emission wavelength 452 nm. Using these settings we obtained 72 individual stacks for both modalities, with each stack covering 2048 by 2048 pixels, or 699 by 699 µm.

Experiments and Results

Training, Testing and Comparison of the Physics-informed Denoising Diffusion Probabilistic Model using the Simulated Microscopy Dataset

To train the physics-informed denoising diffusion probabilistic model (PI-DDPM) we propose, we employed the simulated microscopy dataset (see Datasets). For this, we processed ImageNet/BioSR-derived images using our image acquisition model (IAM, see Methods). The images used for training bore a strong resemblance to images obtained using LM; the changes that such processing inflicts on the images are demonstrated in Fig. 2a. To compare the performance of our PI-DDPM to other models, we trained a U-Net [24] and a DDPM [27] alongside PI-DDPM. Additionally, for comparison, we added the model-based Richardson-Lucy (RL) algorithm [10, 12, 43] (Fig. 2b,c). A visual comparison of the models' outputs suggests that DDPM and PI-DDPM show less noise and fewer processing artefacts than RL and U-Net. Furthermore, PI-DDPM preserved many more high-frequency details in both ImageNet- and BioSR-derived images. To obtain quantitative performance measurements, we computed performance metrics including PSNR, MS-SSIM and NRMSE (see Supplementary Materials). These metrics suggest that DDPM and PI-DDPM perform best among the algorithms we compared, with both models performing comparably.
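For reference, the scalar metrics used in this quantitative comparison reduce to a few lines of NumPy (a sketch; MS-SSIM is best taken from an existing implementation rather than re-derived):

```python
import numpy as np

def psnr(pred, target):
    """PSNR := 20 log10(MAX_I) - 10 log10(MSE), as defined in Metrics."""
    mse = np.mean((pred - target) ** 2)
    return 20.0 * np.log10(target.max()) - 10.0 * np.log10(mse)

def nrmse(pred, target):
    """NRMSE := sqrt(MSE) / (y_max - y_min)."""
    return np.sqrt(np.mean((pred - target) ** 2)) / (target.max() - target.min())

# Toy usage on a noisy copy of a random "ground truth" image:
t = np.random.rand(64, 64)
p = t + 0.01 * np.random.randn(64, 64)
print(psnr(p, t), nrmse(p, t))
```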
Next, to test how the models perform on the reconstruction task from real rather than simulated microscopy, we ran inference of U-Net, DDPM and PI-DDPM on widefield images of the BioSR test set (Tab. 1). Both DDPM and PI-DDPM outperformed U-Net in all three metrics. Remarkably, on this real microscopy set PI-DDPM outperformed DDPM in all three metrics.

Testing Physics-informed Denoising Diffusion Probabilistic Model using Direct Stochastic Optical Reconstruction Microscopy Dataset

To test how PI-DDPM would perform on a previously unseen microscopy dataset we employed a publicly available Direct Stochastic Optical Reconstruction Microscopy (dSTORM) dataset [40]. To circumvent pixel shift between the widefield image and the processed image we chose regions in the centre of the image where the shift was minimal. Remarkably, PI-DDPM produced results visibly consistent with the ground truth (Fig. 3a). In comparison to the U-Net, both DDPM and PI-DDPM produced visibly sharper reconstructions (Fig. 3b). Remarkably, PI-DDPM produces visibly fewer artefacts at low signal (red channel, upper row) compared to DDPM. Additionally, PI-DDPM preserved the continuity of the filament structures better (red channel, lowest row) compared to DDPM. Interestingly, DDPM seems to overemphasise the green channel (DMC1), possibly due to the high signal-to-noise ratio. Notably, in comparison to the dSTORM reconstruction, all models failed to capture the low signal-to-noise puncta in RAD51 (blue channel). We next quantified the performance of all the models for each channel and averaged the results (Tab. 2). The results show that both DDPM and PI-DDPM perform best in all three metrics. Remarkably, PI-DDPM outperformed DDPM in the PSNR and NRMSE metrics, and performed comparably to DDPM in MS-SSIM.

Finally, to test our image reconstruction model on a dataset acquired prospectively, we obtained a correlative widefield-confocal microscopy dataset of cell nuclei (Fig. 4). In this dataset, both confocal and widefield stacks of the same region were taken by automated microscopy. Since confocal microscopy (Fig. 4b) is known to have better resolution than widefield microscopy (Fig. 4a), it can serve as a guide on the correctness of image restoration in the absence of bona fide ground truth. Consistent with our previous observations, we noted that DDPM and PI-DDPM showed significantly lower blur in the reconstructions (Fig. 4a, top row). However, compared to PI-DDPM, the conventional DDPM had notable artefacts in chromatin structures distorting the resulting image. Furthermore, comparing correlated fields of view (Fig. 4a,b, white asterisk), PI-DDPM reconstructions show more consistency with the confocal image than the conventional DDPM or U-Net, irrespective of whether the input image comes from widefield or confocal microscopy. Furthermore, PI-DDPM showed more consistent output in the case of a low signal-to-noise ratio (Fig. 4b, bottom).

Limitations

The performance of the model suggested here is similar to or better than that of the benchmarks. However, the quality of the output can be further improved in several ways. Firstly, due to the lack of available microscopy data, our models are trained on a simulated dataset. While the simulation model employs a bona fide physical description of microscope image acquisition, it can be further improved by incorporating specific physical parameters such as lens aberrations.
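For reference, the PSNR and NRMSE values reported in the tables follow directly from the definitions given in the Metrics section; below is a direct transcription. MS-SSIM is more involved and is typically taken from a library such as pytorch-msssim — an assumption on our part, not a statement about the authors' implementation.

import numpy as np

def psnr(gt, pred):
    """PSNR := 20 log10(MAX_I) - 10 log10(MSE), with MAX_I the maximum
    pixel value of the image (here taken from the ground truth)."""
    mse = np.mean((gt - pred) ** 2)
    return 20.0 * np.log10(gt.max()) - 10.0 * np.log10(mse)

def nrmse(gt, pred):
    """NRMSE := sqrt(MSE) / (y_max - y_min)."""
    rmse = np.sqrt(np.mean((gt - pred) ** 2))
    return rmse / (gt.max() - gt.min())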
Secondly, the performance could be improved by training the model on a larger amount of real microscopy data. Finally, training of DDPM and PI-DDPM models is computationally expensive and requires high-performance computing infrastructure.

Discussion

Despite the immense progress in microscopy, the ability to visualise the microscopic world remains limited due to hardware imperfections and physical limits [8]. While recent advances in deep learning and generative models promise to assist in overcoming these barriers, these models come with their own set of limitations. Since their introduction, denoising diffusion probabilistic models (DDPMs) have shown great promise for generative modelling [27,44,45]. However, in the case of demanding applications like biomedical image restoration and superresolution microscopy, the absence of artefacts and hallucinations, as well as the correctness of produced structures, are of paramount importance. In this work, we show that the introduction of a physical prior into a DDPM model can improve stability and generate more realistic reconstruction results. Our approach builds on the recently introduced paradigm of physics-informed neural networks [33] and extends the applicability of these methods to microscopy. Furthermore, since our models learn the distribution from the data, the mean and variance of the obtained reconstructions can be estimated directly. This, in turn, may provide a convenient way to obtain confidence estimates for our reconstructions, which could facilitate broader adoption by the biomedical community.

The model is trained to predict ε at each timestep:

min_θ E_{t, x_0, ε} ‖ε − ε_θ(x_t, t)‖_2^2.

For the variance, it is suggested to use a constant such as β_t I or β̃_t I, which corresponds to the upper and lower bounds for the true variance of the reverse process [27].

Source Code

The source code for this work is available at https://github.com/casus/pi-ddpm. To train the model, place your data generated by the dataset_generation script (if you are generating simulated data) or the STORM script if you are generating the respective STORM dataset. In the train_ddpm or train_unet script, change the paths of the loading data to the ones that you generated. Next, choose a training modality, either widefield or confocal. Finally, run the script. To test the model, generate your testing dataset using the dataset_generation script. Change the paths corresponding to your data. Next, change the paths to the weights files that you wish to use. Finally, run the test script. Due to the size limitation of the submission, for the prospective dataset containing correlative widefield-confocal fluorescence microscopy we provide only a single stack as an example (data/teaser_c_w_test.npz). The full dataset will be made available under a CC-BY license upon acceptance of the manuscript.
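A minimal sketch of the ε-prediction objective stated above, following the standard simple objective of [27]; the noise-prediction network eps_model is a hypothetical placeholder, not part of the released code:

import torch

def ddpm_loss(eps_model, x0, alpha_bar):
    """Simple DDPM objective: min_theta E ||eps - eps_theta(x_t, t)||^2,
    using the closed-form forward noising x_t = sqrt(abar_t) x0
    + sqrt(1 - abar_t) eps."""
    b = x0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (b,))
    a = alpha_bar[t].view(b, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * eps
    return torch.mean((eps - eps_model(x_t, t)) ** 2)

# toy check with a dummy predictor
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
loss = ddpm_loss(lambda x, t: torch.zeros_like(x),
                 torch.randn(4, 1, 8, 8), alpha_bar)
print(loss.item())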
Supplementary Results

Simulated Microscopy Dataset

Supplementary Table 1 contains test-time performance using the multi-scale structural similarity index measure (MS-SSIM) [39], normalised root mean square error (NRMSE), and peak signal-to-noise ratio (PSNR) metrics.

Figure 1: Proposed model with physics-informed probabilistic denoising diffusion for microscopy image reconstruction. (a,b) simplified schematic depiction of widefield and confocal microscopy. (c) schematic depiction of synthetic dataset generation using our acquisition model. (d) schematic depiction of a U-Net architecture. (e) illustration of a denoising diffusion probabilistic model and the physics-informed version we propose.

Figure 2: Comparison of model performance on ImageNet- and BioSR-derived datasets. (a) Examples of processing performed using the Image Acquisition Model (IAM) on (left-to-right) the Shepp-Logan phantom, ImageNet and BioSR images. (b, c) Examples of input (Widefield) and reconstructed images using Richardson-Lucy (RL), U-Net, Denoising Diffusion Probabilistic Models (DDPM) and physics-informed DDPM (PI-DDPM) in ImageNet- and BioSR-derived images respectively. GT stands for ground truth. MT stands for microtubules. Scale bar in micrographs is 1.5 µm.

Figure 3: Model performance on an unseen superresolution dataset. (a) image containing a mid-zygotene nucleus immunostained for SYCP3 (red), DMC1 (green) and RAD51 (blue) proteins from [40]. (b) examples of input (Widefield) and reconstructed images using U-Net, Denoising Diffusion Probabilistic Models (DDPM) and physics-informed DDPM (PI-DDPM) in dSTORM images. The scale bar is 5 µm.

Figure 4: Model performance on prospective correlative widefield-confocal microscopy. (a) examples of widefield images of cell nuclei and their reconstructions using U-Net, Denoising Diffusion Probabilistic Models (DDPM) and physics-informed DDPM (PI-DDPM). (b) examples of confocal images and their reconstructions. The scale bar is 10 µm. Asterisk (*) marks correlated images of the same cell and focal plane.

Table 1: Performance on Widefield Microscopy using BioSR Test Set

Metric / Model   Original   U-Net    DDPM     PI-DDPM (ours)
PSNR             18.649     19.706   23.703   23.974
MS-SSIM          0.628      0.652    0.784    0.795
NRMSE            0.147      0.126    0.070    0.069

Table 2: Performance on dSTORM Test Set. Error stands for standard deviation.

Metric / Model   Original   U-Net    DDPM           PI-DDPM (ours)
PSNR             16.487     16.712   15.541±0.232   16.778±0.807
MS-SSIM          0.293      0.479    0.638±0.007    0.612±0.039
NRMSE            0.150      0.146    0.167±0.005    0.145±0.015

5.3 Testing Physics-informed Denoising Diffusion Probabilistic Model using Prospective Correlated Widefield-Confocal Microscopy Dataset

Table 3: [Supplementary] Performance on Simulated Microscopy using BioSR Test Set

Metric / Model   Original   Richardson-Lucy   U-Net    DDPM     PI-DDPM
PSNR             16.127     13.394            18.301   20.446   20.217
MS-SSIM          0.745      0.684             0.812    0.859    0.859
NRMSE            0.156      0.214             0.122    0.095    0.098

Acknowledgments

We thank Michael Hecht and his group for critical reading of this work.
This work was partially funded by the Center for Advanced Systems Understanding (CASUS) which is financed by Germany's Federal Ministry of Education and Research (BMBF) and by the Saxon Ministry for Science, Culture, and Tourism (SMWK) with tax funds on the basis of the budget approved by the Saxon State Parliament. MK was supported by the Heisenberg award from the DFG (KU 3222/2-1), as well as funding from the Helmholtz Association.

Supplementary Methods Description

Denoising Diffusion Probabilistic Models

Denoising Probabilistic Diffusion Models (DDPMs) are latent variable models that use a sequence of latent variables to model the data [27]. These models use a Markov chain in which noise is gradually used to diffuse the data sample signal. This process is usually called the forward process. Formally, let x_0 ∼ q(x_0) be the data distribution. We define a Markov chain q with states x_{1:T}, adding noise at each state according to a variance schedule β_{1:T}. The transitions of the Markov chain are defined as follows:

q(x_t | x_{t-1}) = N(x_t; √(1 − β_t) x_{t-1}, β_t I).

This definition allows computing q(x_t | x_0), which can be expressed as the following normal distribution (with α_t = 1 − β_t and ᾱ_t = ∏_{s=1}^{t} α_s):

q(x_t | x_0) = N(x_t; √ᾱ_t x_0, (1 − ᾱ_t) I).

Then we can find the posterior q(x_{t-1} | x_t, x_0) using the Bayes theorem. The posterior is:

q(x_{t-1} | x_t, x_0) = N(x_{t-1}; μ̃_t(x_t, x_0), β̃_t I),

where the mean μ̃_t is:

μ̃_t(x_t, x_0) = (√ᾱ_{t-1} β_t / (1 − ᾱ_t)) x_0 + (√α_t (1 − ᾱ_{t-1}) / (1 − ᾱ_t)) x_t,

and the variance is:

β̃_t = ((1 − ᾱ_{t-1}) / (1 − ᾱ_t)) β_t.

Then, to sample from q(x_0), we can sample from q(x_T), which under sensible parameter selections β_{1:T} and T approaches a normal distribution N(x_T; 0, I) [30], and then follow the reverse process q(x_{t-1} | x_t). However, since we do not know the distribution q(x_0), we train a neural network to approximate the reverse transition probability distribution. This function approximates a normal diagonal distribution [46] in which, as the number of steps approaches infinity, the covariance matrix norm approaches 0. Then we only need to predict each timestep's mean and variance. Formally, the learnable transition function is defined as follows:

p_θ(x_{t-1} | x_t) = N(x_{t-1}; μ_θ(x_t, t), Σ_θ(x_t, t)).

Lastly, we can solve a lower-bound variational problem to optimise the model. However, it is also possible to train the model by matching the mean of the posterior of the forward process and the mean of the reverse process:

min_θ E_{t, x_0, ε} ‖μ̃_t(x_t, x_0) − μ_θ(x_t, t)‖_2^2.
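A small numerical transcription of the forward marginal and posterior formulas above (function and variable names are illustrative only):

import numpy as np

def forward_marginal(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I)."""
    abar = np.cumprod(1.0 - betas)[t]
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * rng.standard_normal(x0.shape)

def posterior_mean_var(x0, xt, t, betas):
    """Mean and variance of q(x_{t-1} | x_t, x_0) from the formulas above."""
    alphas = 1.0 - betas
    abar = np.cumprod(alphas)
    abar_prev = abar[t - 1] if t > 0 else 1.0
    mu = (np.sqrt(abar_prev) * betas[t] * x0
          + np.sqrt(alphas[t]) * (1.0 - abar_prev) * xt) / (1.0 - abar[t])
    var = (1.0 - abar_prev) / (1.0 - abar[t]) * betas[t]
    return mu, var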
References

[1] Joe Knapper, Joel T Collins, Julian Stirling, Samuel McDermott, William Wadsworth, and Richard W Bowman. Fast, high-precision autofocus on a motorised microscope: Automating blood sample imaging on the OpenFlexure microscope. Journal of Microscopy, 285(1):29-39, 2022.
[2] Jesus Salido, Gloria Bueno, Jesus Ruiz-Santaquiteria, and Gabriel Cristobal. A review on low-cost microscopes for open science. Microscopy Research and Technique, 85(10):3270-3283, 2022.
[3] Andre Maia Chagas, Lucia L Prieto-Godino, Aristides B Arrenberg, and Tom Baden. The €100 lab: A 3d-printable open-source platform for fluorescence microscopy, optogenetics, and accurate temperature control during behaviour of zebrafish, drosophila, and caenorhabditis elegans. PLoS Biology, 15(7):e2002702, 2017.
[4] Tomas Aidukas, Regina Eckert, Andrew R Harvey, Laura Waller, and Pavan C Konda. Low-cost, sub-micron resolution, wide-field computational microscopy using opensource hardware. Scientific Reports, 9(1):7457, 2019.
[5] JA Conchello and JW Lichtman. Fluorescence microscopy. Nature Methods, 2(12):910-919, 2005.
[6] Marvin Minsky. Memoir on inventing the confocal scanning microscope. Scanning, 10(4):128-138, 1988.
[7] Lothar Schermelleh, Alexia Ferrand, Thomas Huser, Christian Eggeling, Markus Sauer, Oliver Biehlmaier, and Gregor PC Drummen. Super-resolution microscopy demystified. Nature Cell Biology, 21(1):72-84, 2019.
[8] Volodymyr Nechyporuk-Zloy. Principles of Light Microscopy: From Basic to Advanced. Springer Nature, 2022.
[9] Peter J Shaw and David J Rawlins. The point-spread function of a confocal microscope: its measurement and use in deconvolution of 3-d data. Journal of Microscopy, 163(2):151-165, 1991.
[10] William Hadley Richardson. Bayesian-based iterative method of image restoration. JOSA, 62(1):55-59, 1972.
[11] Michael J Nasse and Jörg C Woehl. Realistic modeling of the illumination point spread function in confocal scanning optical microscopy. JOSA A, 27(2):295-302, 2010.
[12] Leon B Lucy. An iterative technique for the rectification of observed distributions. The Astronomical Journal, 79:745, 1974.
[13] Martin Weigert, Uwe Schmidt, Tobias Boothe, Andreas Müller, Alexandr Dibrov, Akanksha Jain, Benjamin Wilhelm, Deborah Schmidt, Coleman Broaddus, Siân Culley, et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nature Methods, 15(12):1090-1097, 2018.
[14] Tim-Oliver Buchholz, Alexander Krull, Réza Shahidi, Gaia Pigino, Gáspár Jékely, and Florian Jug. Content-aware image restoration for electron microscopy. Methods in Cell Biology, 152:277-289, 2019.
[15] Hyeonwoo Noh, Seunghoon Hong, and Bohyung Han. Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1520-1528, 2015.
[16] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009.
[17] P. Sarder and A. Nehorai. Deconvolution methods for 3-D fluorescence microscopy images. IEEE Signal Processing Magazine, 23(3):32-45, 2006. doi: 10.1109/MSP.2006.1628876.
[18] J. G. McNally, T. Karpova, J. Cooper, and J. A. Conchello. Three-dimensional imaging by deconvolution microscopy. Methods, 19(3):373-385, 1999. doi: 10.1006/meth.1999.0873.
[19] Michael Lustig, David Donoho, and John M. Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magnetic Resonance in Medicine, 58(6):1182-1195, 2007. doi: 10.1002/mrm.21391.
[20] Joyita Dutta, Sangtae Ahn, Changqing Li, Simon R. Cherry, and Richard M. Leahy. Joint L1 and total variation regularization for fluorescence molecular tomography. Physics in Medicine & Biology, 57(6):1459, 2012. doi: 10.1088/0031-9155/57/6/1459.
[21] Jianyong Tang, Ronald N. Germain, and Meng Cui. Superpenetration optical microscopy by iterative multiphoton adaptive compensation technique. Proceedings of the National Academy of Sciences, 109(22):8434-8439, 2012. doi: 10.1073/pnas.1119590109.
[22] Seunghwan Yoo, Pablo Ruíz-Hernández, Xiang Huang, Kuan He, Xiaolei Wang, Itay Gdor, Alan Selewa, Matthew Daddysman, Nicola Ferrier, Mark Hereld, Norbert Scherer, Oliver Cossairt, and Aggelos Katsaggelos. Bayesian approach for automatic joint parameter estimation in 3D image reconstruction from multi-focus microscope. In 2018 IEEE International Conference on Image Processing (ICIP), 2018. doi: 10.1109/ICIP.2018.8451309.
[23] Li Xu, Jimmy S Ren, Ce Liu, and Jiaya Jia. Deep convolutional neural network for image deconvolution. Advances in Neural Information Processing Systems, 27, 2014.
[24] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), Munich, Germany, Proceedings, Part III, pages 234-241. Springer, 2015.
[25] Ali Syed Saqlain, Fang Fang, Li-Yun Wang, Tanvir Ahmad, and Zain Ul Abidin. DFGAN: Image deblurring through fusing light-weight attention and gradient-based filters. In 2022 IEEE 2nd International Conference on Information Communication and Software Engineering (ICICSE), pages 110-114. IEEE, 2022.
[26] Chang Qiao, Di Li, Yuting Guo, Chong Liu, Tao Jiang, Qionghai Dai, and Dong Li. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nature Methods, 18(2):194-202, 2021.
[27] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, volume 33, pages 6840-6851. Curran Associates, Inc., 2020.
[28] Haoying Li, Yifan Yang, Meng Chang, Huajun Feng, Zhihai Xu, Qi Li, and Yueting Chen. SRDiff: Single image super-resolution with diffusion probabilistic models. arXiv:2104.14951, 2021.
[29] Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim Salimans, David Fleet, and Mohammad Norouzi. Palette: Image-to-image diffusion models. In Special Interest Group on Computer Graphics (SIGGRAPH), pages 1-10. ACM, 2022. doi: 10.1145/3528233.3530757.
[30] Prafulla Dhariwal and Alex Nichol. Diffusion models beat GANs on image synthesis. arXiv:2105.05233, 2021.
[31] Sayantan Bhadra, Varun A. Kelkar, Frank J. Brooks, and Mark A. Anastasio. On hallucinations in tomographic image reconstruction. IEEE Transactions on Medical Imaging, 40(11):3249-3260, 2021. doi: 10.1109/TMI.2021.3077857.
[32] Hristina Uzunova, Matthias Wilms, Nils D. Forkert, Heinz Handels, and Jan Ehrhardt. A systematic comparison of generative models for medical images. International Journal of Computer Assisted Radiology and Surgery, 17(7):1213-1224, 2022. doi: 10.1007/s11548-022-02567-6.
[33] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part I): Data-driven solutions of nonlinear partial differential equations. arXiv:1711.10561, 2017.
[34] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J. Fleet. Video diffusion models. arXiv:2204.03458, 2022.
[35] James Pawley. Handbook of Biological Confocal Microscopy, volume 236. Springer Science & Business Media, 2006.
[36] Max Born and Emil Wolf. Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (7th Edition). Cambridge University Press, 1999.
[37] Matthew R Arnison and Colin JR Sheppard. A 3D vectorial optical transfer function suitable for arbitrary pupil functions. Optics Communications, 211(1-6):53-63, 2002.
[38] Colin Sheppard and Kieran Larkin. Vectorial pupil functions and vectorial transfer functions. Optik - International Journal for Light and Electron Optics, 107:79-87, 1997.
[39] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, pages 1398-1402. IEEE, 2003.
[40] Lieke Koornneef, Johan Slotman, and Willy Baarends. dSTORM imaging and analysis of recombination foci in mouse spread meiotic nuclei. https://www.ebi.ac.uk/biostudies/bioimages/studies/S-BIAD627, 2023.
[41] Lieke Koornneef, Johan A Slotman, Esther Sleddens-Linkels, Wiggert A Van Cappellen, Marco Barchi, Attila Tóth, Joost Gribnau, Adriaan B Houtsmuller, and Willy M Baarends. Multi-color dSTORM microscopy in Hormad1-/- spermatocytes reveals alterations in meiotic recombination intermediates and synaptonemal complex structure. PLoS Genetics, 18(7):e1010046, 2022.
[42] Artur Yakimovich, Vardan Andriasyan, Robert Witte, I-Hsuan Wang, Vibhu Prasad, Maarit Suomalainen, and Urs F Greber. Plaque2.0 - a high-throughput analysis framework to score virus-cell transmission and clonal cell expansion. PLoS One, 10(9):e0138760, 2015.
[43] Nicolas Dey, Laure Blanc-Feraud, Christophe Zimmer, Pascal Roux, Zvi Kam, Jean-Christophe Olivo-Marin, and Josiane Zerubia. Richardson-Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution. Microscopy Research and Technique, 69(4):260-266, 2006. doi: 10.1002/jemt.20294.
[44] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pages 8162-8171. PMLR, 2021.
[45] Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. RePaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11461-11471, 2022.
[46] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256-2265. PMLR, 2015.

Figure 5: [Supplementary] Comparison of model performance on the BioSR-derived dataset. Denoising Diffusion Probabilistic Models (DDPM) and physics-informed DDPM (PI-DDPM) in BioSR images. GT stands for ground truth. MT stands for microtubules. Scale bar in micrographs is 1.5 µm.

Similarly to the results shown in the main manuscript, these test results suggest that the performance of the DDPM and PI-DDPM models surpassed all the other models. However, upon closer inspection of the reconstructed structures, DDPM shows a significant amount of hallucinated structures (Sup. Fig. 5). Specifically, in the top row of zoomed-in insets a closing of the filament structure may be observed in DDPM, which is absent in both PI-DDPM and ground truth (GT). The examples shown in the lower row of insets demonstrate an opposite hallucination: a structure that is meant to be continuous according to the ground truth and PI-DDPM appears broken in the DDPM reconstruction.
[ "https://github.com/casus/pi-ddpm." ]
[ "Emergence of collective self-oscillations in minimal lattice models with feedback", "Emergence of collective self-oscillations in minimal lattice models with feedback" ]
[ "Dmitry Sinelschikov \nBiofisika Institutua (UPV/EHU\nCSIC) and Fundación Biofísica Bizkaia\nE-48940LeioaSpain\n\nHSE University\n34 Tallinskaya Street123458MoscowRussian Federation\n", "Anna Poggialini \nDipartimento di Fisica\nSapienza Università di Roma\nP.le A. Moro, 2I-00185RomeItaly\n\nEnrico Fermi' Research Center (CREF)\nVia Panisperna 89A00184-RomeItaly\n", "Maria Francesca Abbate \nLaboratoire de physique de l'École normale supérieure\nCNRS\nPSL University\nSorbonne Université\n\nUniversité de Paris\n24 rue Lhomond75005ParisFrance\n", "Daniele De Martino \nBiofisika Institutua (UPV/EHU\nCSIC) and Fundación Biofísica Bizkaia\nE-48940LeioaSpain\n\nIkerbasque Foundation\n48013BilbaoSpain\n" ]
[ "Biofisika Institutua (UPV/EHU\nCSIC) and Fundación Biofísica Bizkaia\nE-48940LeioaSpain", "HSE University\n34 Tallinskaya Street123458MoscowRussian Federation", "Dipartimento di Fisica\nSapienza Università di Roma\nP.le A. Moro, 2I-00185RomeItaly", "Enrico Fermi' Research Center (CREF)\nVia Panisperna 89A00184-RomeItaly", "Laboratoire de physique de l'École normale supérieure\nCNRS\nPSL University\nSorbonne Université", "Université de Paris\n24 rue Lhomond75005ParisFrance", "Biofisika Institutua (UPV/EHU\nCSIC) and Fundación Biofísica Bizkaia\nE-48940LeioaSpain", "Ikerbasque Foundation\n48013BilbaoSpain" ]
[]
The emergence of collective oscillations and synchronization is a widespread phenomenon in complex systems. While widely studied in the setting of dynamical systems, this phenomenon is not well understood in the context of out-of-equilibrium phase transitions in many-body systems. Here we consider three classical lattice models, namely the Ising, the Blume-Capel and the Potts models, provided with a feedback among the order and control parameters. With the help of the linear response theory we derive low-dimensional nonlinear dynamical systems for mean-field cases. These dynamical systems quantitatively reproduce many-body stochastic simulations. In general, we find that the usual equilibrium phase transitions are taken over by more complex bifurcations where nonlinear collective self-oscillations emerge, a behavior that we illustrate by the feedback Landau theory. For the case of the Ising model, we obtain that the bifurcation that takes over the critical point is non-trivial in finite dimensions. Namely, we provide numerical evidence that in two dimensions the most probable value of a cycle's amplitude follows the Onsager law for slow feedback. We illustrate multistability for the case of discontinuously emerging oscillations in the Blume-Capel model, whose tricritical point is substituted by the Bautin bifurcation. For the Potts model with q = 3 colors we highlight the appearance of two mirror stable limit cycles at a bifurcation line and characterize the onset of chaotic oscillations that emerge at low temperature through either the Feigenbaum cascade of period doubling or the Afraimovich-Shilnikov scenario of a torus destruction. We also show that entropy production singularities as a function of the temperature correspond to qualitative changes in the spectrum of Lyapunov exponents. Our results show that mean-field collective behaviour can be described by the bifurcation theory of low-dimensional dynamical systems, which paves the way for the definition of universality classes of collective oscillations.
null
[ "https://export.arxiv.org/pdf/2306.01823v1.pdf" ]
259,076,067
2306.01823
09bfeb44143b483d1f6272057aaab1e613d930db
Emergence of collective self-oscillations in minimal lattice models with feedback

Dmitry Sinelschikov (Biofisika Institutua (UPV/EHU, CSIC) and Fundación Biofísica Bizkaia, E-48940 Leioa, Spain; HSE University, 34 Tallinskaya Street, 123458 Moscow, Russian Federation)
Anna Poggialini (Dipartimento di Fisica, Sapienza Università di Roma, P.le A. Moro 2, I-00185 Rome, Italy; 'Enrico Fermi' Research Center (CREF), Via Panisperna 89A, 00184 Rome, Italy)
Maria Francesca Abbate (Laboratoire de physique de l'École normale supérieure, CNRS, PSL University, Sorbonne Université, Université de Paris, 24 rue Lhomond, 75005 Paris, France)
Daniele De Martino (Biofisika Institutua (UPV/EHU, CSIC) and Fundación Biofísica Bizkaia, E-48940 Leioa, Spain; Ikerbasque Foundation, 48013 Bilbao, Spain)

5. Digital Biologics Platform (DBxP) Site Lead France, Large Mol. Res. Platform at Sanofi

Abstract

The emergence of collective oscillations and synchronization is a widespread phenomenon in complex systems. While widely studied in the setting of dynamical systems, this phenomenon is not well understood in the context of out-of-equilibrium phase transitions in many-body systems. Here we consider three classical lattice models, namely the Ising, the Blume-Capel and the Potts models, provided with a feedback among the order and control parameters.
With the help of the linear response theory we derive low-dimensional nonlinear dynamical systems for mean-field cases. These dynamical systems quantitatively reproduce many-body stochastic simulations. In general, we find that the usual equilibrium phase transitions are taken over by more complex bifurcations where nonlinear collective self-oscillations emerge, a behavior that we illustrate by the feedback Landau theory. For the case of the Ising model, we obtain that the bifurcation that takes over the critical point is non-trivial in finite dimensions. Namely, we provide numerical evidence that in two dimensions the most probable value of a cycle's amplitude follows the Onsager law for slow feedback. We illustrate multistability for the case of discontinuously emerging oscillations in the Blume-Capel model, whose tricritical point is substituted by the Bautin bifurcation. For the Potts model with q = 3 colors we highlight the appearance of two mirror stable limit cycles at a bifurcation line and characterize the onset of chaotic oscillations that emerge at low temperature through either the Feigenbaum cascade of period doubling or the Afraimovich-Shilnikov scenario of a torus destruction. We also show that entropy production singularities as a function of the temperature correspond to qualitative changes in the spectrum of Lyapunov exponents. Our results show that mean-field collective behaviour can be described by the bifurcation theory of low-dimensional dynamical systems, which paves the way for the definition of universality classes of collective oscillations.

Introduction

The emergence of oscillations in complex systems is a widespread phenomenon that appears across various fields of science. For example, oscillatory behavior can arise in biological systems, chemical reactions and mechanical systems (see, e.g., [1][2][3]). These oscillations often manifest themselves as collective behavior, where individual components interact and synchronize their activities over time, and studying the underlying mechanisms is an important problem for the understanding of complex systems. One theoretical framework that could potentially shed light on the emergence of oscillations is that of non-equilibrium phase transitions in statistical physics. While a deep and powerful physical theory underlies the classification of equilibrium critical points, the same task has not yet been achieved for out-of-equilibrium phase transitions, in particular for the emergence of collective oscillations and synchronization phenomena. After the work of Landau and co-workers [4] it was realized that the plethora of experimental data on continuous phase transitions could be unified on the basis of the symmetries of the underlying interacting degrees of freedom. One of the paradigmatic examples is the equivalence of critical exponents for liquid-gas and paramagnetic-ferromagnetic phase transitions [4]. This led to the concept of universality classes [5], which is connected to field theories and the renormalization group [6]. Recent works try to establish general and universal concepts in out-of-equilibrium phenomena: examples range from the proposal of the directed percolation universality class, including reaction-diffusion and epidemic spreading [7], to the modeling of non-reciprocal phase transitions [8]. Moreover, the study of synchronization phenomena in many-body systems stands out as a very active area of research, given its importance for modeling complex systems [9][10][11].
One of the difficulties related to out-of-equilibrium systems is the lack of general variational principles and reference free energy landscapes. Consequently, a great majority of the studied models are variations of the Kuramoto model [12][13][14][15][16], where the interacting units are postulated as oscillators at the outset and their phase coherence is analyzed. Dynamical phase transitions in classical lattice models have been studied as well, by postulating oscillating control parameters (external fields) and considering the synchronization properties of the associated order parameters, including the Ising [17,18], the Blume-Capel [19] and the Potts models [20] and the Ginzburg-Landau theory [21]. Alternatively, in the very seminal works on this topic the emergent character of oscillations (thus called self-oscillations), without the need to postulate them, was remarked upon [1,2,22]. This area has been considerably developed during recent years, and the appearance and synchronization of periodic, quasiperiodic, chaotic and even hyperchaotic self-sustained oscillations in mechanical, physical and biological models, described by dynamical systems, has been demonstrated and studied [2,[23][24][25][26]. Moreover, the importance of bifurcations or their sequences leading to the onset of regular or chaotic self-oscillations has been demonstrated in numerous works (see, e.g., [2,[27][28][29][30][31]). Therefore, following this route, in this work we study scenarios of the emergence of collective oscillations in minimal spin lattice models [32] in the presence of a feedback between the control and the order parameter trying to enforce homeostasis in the symmetric phase. We show that the feedback can generically put these systems out of equilibrium, in particular triggering coherent collective oscillations. The static counterparts of these systems have a well-defined free energy landscape, and an important question is to what extent the latter can provide insights into the actual dynamics. We will illustrate this mechanism in the Landau theory with feedback, which gets mapped into nonlinear Van der Pol type oscillators. Then we will explore it concretely in the Ising, Blume-Capel and Potts models. We demonstrate the onset of collective periodic oscillations in the Ising model with feedback after the corresponding mean-field dynamical system undergoes the Andronov-Hopf bifurcation. We also obtain that for slow feedback the most probable value of a cycle's amplitude follows the Onsager law. For the Blume-Capel model we show that there is a Bautin bifurcation in the mean-field dynamical system that corresponds to the tricritical point in its collective counterpart. The existence of such a bifurcation naturally leads to the presence of multistability in both the mean-field and the full microscopic models, which is numerically illustrated. As far as the Potts model with three colors is concerned, we observe even more complex behaviour. For the regions with low feedback and high temperature we obtain the onset of self-oscillations in a way similar to the Blume-Capel model, again with a region of multistability in the parameter space. However, for larger feedback we demonstrate that the behaviour is much more complex, and quasiperiodic and chaotic oscillations emerge in both the mean-field and the microscopic models. Two bifurcation scenarios for the onset of chaotic oscillations are observed: the Feigenbaum cascade of period doubling and the Afraimovich-Shilnikov scenario of torus destruction.
These results manifest that limit cycle bifurcations, complex bifurcation scenarios and collective quasiperiodic and chaotic oscillations can be observed in full microscopic models. Furthermore, we demonstrate that the low-dimensional dynamical systems obtained via linear response provide an effective mean-field description of the complex behavior of the full system. On the whole, we believe that this work provides a way of understanding the onset of complex collective behaviour in out-of-equilibrium systems.

The rest of the article is organized as follows. In the next Section we present our main results on collective oscillations in feedback lattice models. In the first Subsection we briefly discuss the feedback Landau theory. Subsection 2.2 is devoted to the Ising model, where we deal with the dynamics of the system in a fully connected geometry with a general linear feedback and on a 2D square lattice. The subsequent Section 2.3 presents our results for the Blume-Capel model, which is an extension of the Ising model in presence of vacancies. Section 2.4 is focused on the Potts model with q = 3 colors with feedback, and we summarize and discuss our findings in the Conclusion.

Results

Feedback Landau theory

Here we generalize the results presented in [33] on the effect of the presence of feedback in the Landau theory with higher order relevant terms. Let us consider the expansion of the free energy density up to the 6th power of a scalar order parameter ϕ:

L(ϕ) = −hϕ − ((β − 1)/2) ϕ² + (a/4) ϕ⁴ + (b/6) ϕ⁶, (1)

where h is the external field, β is the inverse temperature and a, b are arbitrary real parameters. Upon considering a negative feedback between h and ϕ aiming at controlling the system into the symmetric phase, by linear response we have, upon neglecting noise in the thermodynamic limit, the dynamical system

ϕ̇ = −∂L/∂ϕ = h + (β − 1)ϕ − aϕ³ − bϕ⁵, ḣ = −cϕ. (2)

The system has an equilibrium state at (ϕ = 0, h = 0) whose stability is associated to the eigenvalues of the Jacobian of the linearized system,

λ± = (β − 1 ± √((β − 1)² − 4c)) / 2, (3)

so it is stable iff β < 1, being a stable focus if 4c > (β − 1)² and a stable node otherwise. Since the system is confined (as can be seen by inspecting the gradient at the boundary of a large enough square in the (ϕ, h) plane), at β_c = 1, where the eigenvalues are purely imaginary and their real part changes sign, the system undergoes the Andronov-Hopf bifurcation with emergent nonlinear oscillations. The character of the bifurcation can be assessed by calculating the first Lyapunov coefficient (see, e.g., [23,27]), which in this case is l₁ = −(3/8)a. Thus, at a = 0 the bifurcation of the system changes from supercritical to subcritical behavior, with a discontinuous onset of oscillations. This analysis seems to support an equivalence between the character of the Hopf bifurcation and that of the underlying equilibrium static transition. If that were the case, supercritical emergence would correspond to a second-order phase transition and subcritical emergence to a first-order phase transition, where by the n-th order we mean the order of the singularity of the free energy function, in particular independently of the feedback strength c. We will show that this Landau feedback theory captures the qualitative behavior of the Ising and Blume-Capel models while it fails to do so for the Potts model as soon as q = 3.
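A minimal numerical sketch of the feedback Landau dynamics (2), integrated to exhibit the limit cycle just past β_c = 1; the parameter values are illustrative only:

import numpy as np
from scipy.integrate import solve_ivp

def landau_feedback(t, y, beta, a, b, c):
    """Feedback Landau dynamics (2): phi' = h + (beta-1)phi - a phi^3 - b phi^5,
    h' = -c phi."""
    phi, h = y
    return [h + (beta - 1.0) * phi - a * phi**3 - b * phi**5, -c * phi]

# past beta_c = 1 with a > 0 a stable limit cycle emerges (supercritical Hopf)
sol = solve_ivp(landau_feedback, (0, 500), [0.01, 0.0],
                args=(1.1, 1.0, 1.0, 0.1), max_step=0.5)
phi_late = sol.y[0][sol.t > 400]
print("oscillation amplitude ~", 0.5 * (phi_late.max() - phi_late.min()))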
Feedback Ising model

We extend in this section the results of [33], analytically in the presence of a general linear control and numerically in finite dimension. We consider the Ising model, i.e. a model of Z₂ spin variables s_i = ±1 sitting on the nodes of a lattice graph, whose Hamiltonian is [34,35]

H = −J ∑_{⟨i,j⟩} s_i s_j − h ∑_i s_i, (4)

where the bracket in the first sum stands for the graph bonds and we indicate with J and h the interaction strength and the external magnetic field, respectively. Upon generalizing from [33], we apply a negative feedback between the instantaneous magnetization m = (1/N) ∑_i s_i and the external magnetic field h, with the aim of setting the former to a prescribed value m₀, |m₀| < 1, that is a parameter of the system.

We consider the mean-field Curie-Weiss approximation, where in the limit of large N the free energy of the system is

f = −(J/2) m² + (1/β) log(cosh(β(Jm + h))). (5)

This is equivalent to considering the system on a fully connected graph and rescaling the interaction energy J → J/N. The free energy is obtained from the partition function [32]

Z = ∑_{{s}} e^{−βH} = ∫ dm e^{−Nβf(m)}. (6)

As regards the dynamics, we will assume linear response, i.e. that the time derivative of the magnetization is proportional to the gradient of the free energy function [36]. Upon considering the feedback, the system is described by the equations

ṁ = −m + tanh(β(Jm + h)), ḣ = −c(m − m₀). (7)

This system admits the only stationary point

m_s = m₀, h_s = atanh(m₀)/β − Jm₀, (8)

whose linear stability can be assessed by studying the eigenvalues of the Jacobian matrix (with β₂ = βJ(1 − m₀²)),

λ± = (β₂ − 1 ± √((β₂ − 1)² − 4β₂c)) / 2. (9)

The equilibrium point is stable iff β < β_c = 1/(J(1 − m₀²)), and its character changes from a node to a focus (dynamical crossover), where the eigenvalues develop an imaginary part, if c > ((β/β_c − 1)² / (4β/β_c)) J. In the latter case the loss of stability at β = β_c (phase transition) implies an Andronov-Hopf bifurcation triggering self-oscillations. The character of the bifurcation, supercritical (continuous) or subcritical (discontinuous), can be assessed from the sign of the first Lyapunov coefficient l₁. For (7) at β = β_c it is

l₁ = −(c + J) / ((cJ)^{3/2} (1 − m₀²)). (10)

Consequently, we see that the Andronov-Hopf bifurcation is supercritical provided that |m₀| < 1. Beyond the critical point, an approximate analytical solution can be worked out for β ≳ β_c (harmonic oscillations) by a two-time expansion [2]: if we call ε = β − β_c, we have

m − m₀ ∼ √ε cos((1 + ε/2) √c t + φ₀). (11)

On the other hand, in the limit β → ∞, where the system performs relaxational oscillations, the equations are piecewise linear and the shape of the limit cycle can be found by matching the boundary conditions of solutions in half-spaces [1]. Notice that the two regimes differ qualitatively in that the quasi-harmonic oscillations are centered in m₀ and spend equal time above and below this value, while the relaxational ones are centered in 0 and spend uneven times at positive and negative values, such that their average is in the end m₀. These results are in quantitative agreement with numerical simulations of the lattice system in a fully connected geometry, as we show in Fig. 1, where we recapitulate the behavior in a phase diagram in the plane (m₀, β); the simulations are made via the Metropolis-Hastings method [34,35].
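A short numerical check of system (7) and of the √ε amplitude scaling (11) near the supercritical Hopf bifurcation; the proportionality constant in (11) is not fixed here, so only the scaling trend should be read off, and all parameter values are illustrative:

import numpy as np
from scipy.integrate import solve_ivp

def ising_feedback(t, y, beta, J, c, m0):
    """Mean-field Ising with feedback, Eq. (7):
    m' = -m + tanh(beta*(J*m + h)),  h' = -c*(m - m0)."""
    m, h = y
    return [-m + np.tanh(beta * (J * m + h)), -c * (m - m0)]

J, c, m0 = 1.0, 0.1, 0.0
beta_c = 1.0 / (J * (1.0 - m0**2))
for eps in (0.02, 0.08):
    sol = solve_ivp(ising_feedback, (0, 2000), [m0 + 1e-3, 0.0],
                    args=(beta_c + eps, J, c, m0), max_step=0.5)
    m_late = sol.y[0][sol.t > 1500]       # discard the transient
    amp = 0.5 * (m_late.max() - m_late.min())
    print(f"eps={eps:.2f}: amplitude={amp:.3f}, sqrt(eps)={np.sqrt(eps):.3f}")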
The mean-field approximation of the static Ising model is known to capture qualitatively the behavior of the system in finite-dimensional geometry, but it fails quantitatively in the surroundings of the critical point. The study of the Ising model in finite dimension is arguably one of the most important areas in statistical physics, touching upon issues related to field theories and renormalization. One of the most important and earliest results, due to Onsager [37] and achieved by combinatorial counting, is an analytical solution for the system on a 2D square lattice, where we have the formula for the spontaneous magnetization:

m = (1 − sinh⁻⁴(2β))^{1/8}, (12)

valid for

β > β_c = (1/2) log(1 + √2). (13)

The analogy with the static counterpart would suggest that for the case with feedback in finite dimension we could have a limit cycle emerging with a non-trivial exponent, and we explored this issue by numerically studying the Ising model with feedback on a 2D square lattice graph. The corresponding results are shown in Fig. 2. Simulations have been performed for a system with N = 10⁴ spins, for a neutral m₀ = 0 and slow c = 10⁻⁴ feedback. First of all, we provide evidence (see Fig. 2A and 2B) that for β > β_c a limit cycle emerges, whereas for β < β_c the dynamics relaxes into a fixed point, with β_c ≃ 0.44 consistent with the analytical value. We investigated this aspect in further detail by quantitatively comparing the limit cycle amplitude with the Onsager formula. This has been done by observing the trajectory of the system in the phase space (m, h) and extracting the radial coordinate. The distribution of the latter is shown in Fig. 2C, and for β > β_c it develops a second peak in correspondence with the formation of a limit cycle. The value at the peak has been evaluated against β in Fig. 2D and the trend is compared with the Onsager formula: we find a good agreement within errors, evaluated as the width at half maximum.
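An unoptimized sketch of this kind of simulation: Metropolis single-spin-flip dynamics on a periodic square lattice, with the field updated by the feedback rule after every sweep (an Euler step of ḣ = −c(m − m₀)). Lattice size, sweep count and rates below are illustrative, not the values used for Fig. 2.

import numpy as np

def metropolis_feedback(L=32, beta=0.5, c=1e-4, m0=0.0, sweeps=3000, seed=1):
    """2D Ising model (J = 1) with a feedback-controlled field h."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    h = 0.0
    traj = []
    for _ in range(sweeps):
        for _ in range(L * L):           # one Monte Carlo sweep
            i, j = rng.integers(0, L, 2)
            nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2.0 * s[i, j] * (nn + h)   # energy cost of flipping s[i, j]
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] = -s[i, j]
        m = s.mean()
        h -= c * (m - m0)                # feedback update of the field
        traj.append((m, h))
    return np.array(traj)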
Feedback Blume-Capel model

Here we consider a generalization of the Ising model in the presence of vacancies, i.e. the spin variables admit the null value, $s_i = 0, \pm 1$, and the Hamiltonian has an additional term counting the number of filled sites, whose average is fixed by the chemical potential $\Delta$:

$$H = -J \sum_{\langle i,j \rangle} s_i s_j - h \sum_i s_i + \Delta \sum_i s_i^2. \qquad (14)$$

The static free energy of the system in a fully connected geometry can be calculated analytically [38]:

$$f = -\frac{J}{2} m^2 + \frac{1}{\beta} \log\left(1 + 2\cosh(\beta(Jm + h))\, e^{\beta\Delta}\right). \qquad (15)$$

The system with feedback obeys the approximate equations

$$\dot{m} = -m + \frac{\sinh(\beta(Jm + h))}{\frac{e^{\beta\Delta}}{2} + \cosh(\beta(Jm + h))}, \qquad \dot{h} = -c\, m. \qquad (16)$$

In order to simplify the analysis of (16) we introduce the new variables $m' = \beta J m$ and $h' = \beta h$. We also redefine the parameters as follows: $c' = c/J > 0$, $\beta' = J\beta > 0$, $\Delta' = \Delta/J$ and $e^{\beta'\Delta'} = 2\nu > 0$. As a result, from (16) we obtain (primes omitted)

$$\dot{m} = -m + \frac{\beta \sinh(m + h)}{\nu + \cosh(m + h)}, \qquad \dot{h} = -c\, m. \qquad (17)$$

It is clear that the origin $O = (0, 0)$ is a fixed point of (17). We begin with the analysis of codimension-one bifurcations in (17), taking $\beta$ as the control parameter while $c$ and $\nu$ are held fixed. The Jacobian matrix of (17) at $O$ is

$$\mathcal{J} = \begin{pmatrix} \dfrac{\beta - \nu - 1}{\nu + 1} & \dfrac{\beta}{\nu + 1} \\ -c & 0 \end{pmatrix}. \qquad (18)$$

Consequently, the eigenvalues of (18) can be expressed via

$$\sigma = \operatorname{tr}\mathcal{J} = \frac{\beta - \nu - 1}{\nu + 1}, \qquad \delta = \det\mathcal{J} = \frac{\beta c}{\nu + 1}, \qquad (19)$$

as

$$\lambda_{1,2} = \frac{1}{2}\left(\sigma \pm \sqrt{\sigma^2 - 4\delta}\right). \qquad (20)$$

First, we are interested in the Andronov-Hopf bifurcation and hence assume that $4\delta - \sigma^2 > 0$, so that the eigenvalues are complex conjugate. This means that $\beta$ lies in the interval

$$\left(2c + 1 - 2\sqrt{c(c+1)}\right)(\nu + 1) < \beta < \left(2c + 1 + 2\sqrt{c(c+1)}\right)(\nu + 1). \qquad (21)$$

The first condition for the Andronov-Hopf bifurcation, $\sigma(\beta_0) = 0$, yields $\beta_0 = \nu + 1$. The second condition, $\delta(\beta_0) = c > 0$, holds automatically. The non-degeneracy conditions $\mu'(\beta_0) = \sigma'(\beta_0)/2 \neq 0$ and $l_1(\beta_0) \neq 0$ hold for $\nu \neq 2$. Indeed, $2\mu'(\beta_0) = 1/(\nu + 1) > 0$ and the first Lyapunov coefficient for (17) is given by

$$l_1(\beta_0) = \frac{(\nu - 2)(c + 1)}{4 c^{3/2} (\nu + 1)} \neq 0, \qquad \nu \neq 2. \qquad (22)$$

We see that $l_1 \neq 0$ except at $\nu = 2$, where the Andronov-Hopf bifurcation switches from supercritical to subcritical. At $\nu = 2$ the first Lyapunov coefficient vanishes, and $\beta = 3$, $\nu = 2$ is a Bautin bifurcation point (see, e.g., [27]). In order to check the non-degeneracy conditions at the Bautin point we compute the second Lyapunov coefficient for (17) at $\beta = 3$ and $\nu = 2$, which is $l_2 = -\sqrt{c}\,(c + 1)^2/18 < 0$ for $c > 0$. Consequently, the line $\beta = \nu + 1$ is the line of the Andronov-Hopf bifurcation, with the Bautin point at $(2, 3)$ separating its supercritical part from the subcritical one.

In Fig. 3 we show the bifurcation diagram of (17) at $c = 0.01$. The black line is the line of the Andronov-Hopf bifurcation, where the continuous part corresponds to the supercritical bifurcation and the broken part to the subcritical one. The green line is the line of saddle-node bifurcations of limit cycles, computed numerically with the help of MATCONT [39]. The star denotes the Bautin point. One can see that the parameter space is separated into three regions. The broken black and green lines bound a region of multistability, where a stable limit cycle coexists with a stable fixed point. Below the solid black and broken green lines the dynamics is governed by a stable fixed point. Above the black line there exists a limit cycle, corresponding to oscillations both in (17) and in the full stochastic microscopic model. Finally, let us remark that a Bautin bifurcation for a spin system has also been described in [40], which considers a dissipative term in the Curie-Weiss model in the presence of local random fields.
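The local bifurcation analysis above reduces to evaluating $\sigma$, $\delta$ and $l_1$. Below is a small numerical check of Eqs. (19), (20) and (22) along the Hopf line $\beta_0 = \nu + 1$; the parameter values are illustrative.

```python
# Classify the Andronov-Hopf bifurcation of system (17) along beta0 = nu + 1.
import numpy as np

def hopf_data(beta, c, nu):
    sigma = (beta - nu - 1.0) / (nu + 1.0)       # trace of (18), cf. (19)
    delta = beta * c / (nu + 1.0)                # determinant of (18), cf. (19)
    lam = (sigma + np.sqrt(complex(sigma**2 - 4.0 * delta))) / 2.0   # (20)
    l1 = (nu - 2.0) * (c + 1.0) / (4.0 * c**1.5 * (nu + 1.0))        # (22)
    return sigma, lam, l1

for nu in (1.0, 2.0, 3.0):
    sigma, lam, l1 = hopf_data(beta=nu + 1.0, c=0.01, nu=nu)
    kind = "supercritical" if l1 < 0 else ("subcritical" if l1 > 0 else "Bautin point")
    print(f"nu={nu:.0f}: Re(lambda)={lam.real:+.3f}, l1={l1:+.4f} -> {kind}")
# On the Hopf line Re(lambda) = 0, and the sign of l1 flips at nu = 2,
# reproducing the switch from supercritical to subcritical bifurcation.
```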
Feedback Potts model

In the Potts model the lattice variables can assume one of $q$ given states ("colors"), $s_i = 1, \ldots, q$. Equal colors on neighbouring sites lower the energy by a factor $J$, and we consider $q$ external fields $h_a$ fixing the relative color fractions $x_a = \frac{1}{N}\sum_i \delta_{s_i, a}$ in the system. The corresponding Hamiltonian is

$$H = -J \sum_{\langle i,j \rangle} \delta_{s_i, s_j} - \sum_a h_a \sum_i \delta_{s_i, a}. \qquad (23)$$

In the fully connected case the expression for the free energy is [41]:

$$A = -\frac{J}{2}\sum_a x_a^2 - \sum_a h_a x_a + T \sum_a x_a \log x_a + \lambda\left(\sum_a x_a - 1\right), \qquad (24)$$

where $x_a$ are the fractions of spins of color $a$ and the last term is a Lagrange multiplier enforcing normalization. Following the same scheme used for the Ising model, the dynamics approximately follows the set of equations

$$\dot{x}_a = -x_a + \frac{e^{\beta(J x_a + h_a)}}{\sum_b e^{\beta(J x_b + h_b)}}, \qquad \dot{h}_a = -c\left(x_a - \tfrac{1}{q}\right), \qquad a = 1, \ldots, q, \qquad (25)$$

which describes the motion in the linear-response approximation. System (25) possesses two conservation laws,

$$\sum_a x_a = 1 + C_0 e^{-t}, \qquad (26)$$

and

$$\sum_a h_a = C_1, \qquad (27)$$

where $C_0$ and $C_1$ are arbitrary constants. Since at $t = 0$ the sum of all $x_a$ equals $1$, we set $C_0 = 0$. Moreover, the transformation $h_a \to h_a + \text{const}$ only adds a constant term to the Hamiltonian (23) and hence does not affect the dynamics. Consequently, without loss of generality, we assume $C_1 = 0$ in (27).

Now we consider the case of three colors, i.e. $q = 3$. Taking into account (26) and (27) and rescaling variables as $h_a = J h'_a$, $\beta J = \beta'$, $c/J = c'$, from (25) at $q = 3$ we obtain (primes omitted)

$$\dot{x}_2 = -x_2 + \frac{e^{\beta(x_2 + h_2)}}{W}, \qquad (28)$$
$$\dot{h}_2 = -c\left(x_2 - \tfrac{1}{3}\right), \qquad (29)$$
$$\dot{x}_3 = -x_3 + \frac{e^{\beta(x_3 + h_3)}}{W}, \qquad (30)$$
$$\dot{h}_3 = -c\left(x_3 - \tfrac{1}{3}\right), \qquad (31)$$

with

$$W = e^{\beta(1 - x_2 - x_3 - h_2 - h_3)} + e^{\beta(x_2 + h_2)} + e^{\beta(x_3 + h_3)}. \qquad (32)$$

System (28) is symmetric with respect to swapping the indices ($2 \leftrightarrow 3$) and has one equilibrium point $A = (1/3, 0, 1/3, 0)$. The eigenvalues of the Jacobian matrix at this fixed point are

$$\lambda_{1,2,3,4} = \frac{\beta - 3}{6} \pm \frac{\sqrt{\beta^2 - 6(2c + 1)\beta + 9}}{6}. \qquad (33)$$

Suppose that

$$\beta \in \left(3\left[2c + 1 - 2\sqrt{c^2 + c}\right],\; 3\left[2c + 1 + 2\sqrt{c^2 + c}\right]\right); \qquad (34)$$

then the eigenvalues are complex conjugate. Passing through $\beta = 3$, the real parts of the eigenvalues cross the imaginary axis and hence the fixed point $A$ loses its stability. Due to the symmetry of (28), the Jacobian matrix at $A$ has multiple eigenvalues and we have a resonant double Hopf bifurcation (see, e.g., [42] and references therein). Consequently, the analytical treatment of the behaviour of (28) near $\beta = 3$ presents some difficulties. However, we numerically observe that the dynamics in the vicinity of the bifurcation line $\beta = 3$ is similar to that of the Blume-Capel system near the line of the Andronov-Hopf bifurcation (see Fig. 4 and cf. Fig. 3). From Fig. 4 we see that in the lower left part of the bifurcation line $\beta = 3$ there is a region of multistability (see Fig. 4E, D), where a fixed point coexists with a periodic orbit. This suggests that there exists some $c^*$ such that for $c < c^*$ the bifurcation at $\beta = 3$ is subcritical. Increasing the value of $c$, we observe that the bifurcation switches from subcritical to supercritical: there is no multistability and at $\beta = 3$ a stable small-amplitude limit cycle is born (see Fig. 4B, F). Let us also remark that, due to the symmetry of (28), there are always two coexisting orbits, as can be seen from Fig. 4, plates C and D.

In order to study the possible types of dynamics that appear in (28) away from the line $\beta = 3$, we compute the dependence of the Lyapunov spectrum on the parameters $\beta$ and $c$ in the region $(\beta, c) \in [3.5, 22] \times [0.3, 1]$. For the computation of the Lyapunov spectrum we use the standard algorithm by Benettin et al. [43]. We present the corresponding two-dimensional chart of Lyapunov exponents in Fig. 5. We color each point in this chart according to the signs of the two largest Lyapunov exponents as follows:

• $\lambda_1 = 0$, $0 > \lambda_2 > \lambda_3 > \lambda_4$: periodic regime (blue);
• $\lambda_1 = \lambda_2 = 0$, $0 > \lambda_3 > \lambda_4$: quasiperiodic regime (green);
• $\lambda_1 > 0$, $\lambda_2 = 0$, $0 > \lambda_3 > \lambda_4$: chaotic regime (red).

In Fig. 5 we also show phase portraits and Poincaré sections of typical attractors appearing in (28). Throughout this work the Poincaré map is constructed by considering intersections of the flow governed by (28) with the plane $x_2 = 1/2$, unless stated otherwise. The vast blue regions in Fig. 5 correspond to the existence of a stable periodic orbit. There are also two separate red regions of chaotic dynamics. The appearance of the chaotic attractors in these regions is governed by different scenarios. In the upper left region of Fig. 5 we see a thin green stripe of quasiperiodic dynamics adjacent to a thin region of periodic behaviour, with a chaotic one next to it. This suggests that chaotic attractors can appear through the Afraimovich-Shilnikov scenario of invariant torus destruction (see, e.g., [28,44,45]). In the region of lower feedback one can observe a big blue region of periodic oscillations next to a narrow red strip of chaotic ones. From Fig. 5F, E one can see that, as we approach this red region from the right, the period of the oscillations increases. Therefore, one can expect a cascade of period-doubling bifurcations, which we confirm below.
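A compact sketch of the Benettin et al. algorithm [43] as it would apply to system (28) is given below. The finite-difference propagation of the tangent vectors, the integration step and the run lengths are our choices, meant only to illustrate the procedure behind the chart in Fig. 5.

```python
# Benettin-style Lyapunov spectrum for system (28)-(32): evolve a reference
# orbit and four nearby orbits, QR-reorthonormalize, and average log-stretches.
import numpy as np

def rhs(y, beta, c):
    x2, h2, x3, h3 = y
    W = (np.exp(beta*(1 - x2 - x3 - h2 - h3))
         + np.exp(beta*(x2 + h2)) + np.exp(beta*(x3 + h3)))
    return np.array([-x2 + np.exp(beta*(x2 + h2))/W, -c*(x2 - 1/3),
                     -x3 + np.exp(beta*(x3 + h3))/W, -c*(x3 - 1/3)])

def rk4(y, f, dt):
    k1 = f(y); k2 = f(y + dt/2*k1); k3 = f(y + dt/2*k2); k4 = f(y + dt*k3)
    return y + dt/6*(k1 + 2*k2 + 2*k3 + k4)

def lyapunov_spectrum(beta, c, dt=0.01, n_steps=50_000, eps=1e-7):
    f = lambda y: rhs(y, beta, c)
    y = np.array([0.34, 0.0, 0.32, 0.01])
    for _ in range(10_000):                 # discard the transient
        y = rk4(y, f, dt)
    Q, sums = np.eye(4), np.zeros(4)
    for _ in range(n_steps):
        y_new = rk4(y, f, dt)
        # finite-difference propagation of 4 tangent vectors, then QR
        P = np.column_stack([(rk4(y + eps*Q[:, k], f, dt) - y_new) / eps
                             for k in range(4)])
        Q, R = np.linalg.qr(P)
        sums += np.log(np.abs(np.diag(R)))
        y = y_new
    return sums / (n_steps * dt)

print(lyapunov_spectrum(beta=9.1, c=0.9))   # lambda_1 > 0 in the chaotic region
```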
Let us begin with the onset of chaotic oscillations in the upper left part of the bifurcation diagram (see Fig. 5). Inside the blue region adjacent to the red one there is a narrow green stripe. The left border of this green stripe is the line of the supercritical Neimark-Sacker bifurcation, corresponding to the birth of a stable quasiperiodic regime, i.e. an invariant torus. The right border of the green region corresponds to the formation of resonant stable and unstable periodic orbits appearing through a saddle-node bifurcation. Then, if we move further to the right in the bifurcation diagram, the stable resonant orbit becomes chaotic and a so-called torus-chaos attractor appears. We demonstrate the realization of this scenario in (28) at $c = 0.75$ and $\beta \in [3.4, 22]$. In Fig. 6 we show the Lyapunov spectra and the bifurcation trees for this region. From Fig. 6B one can clearly see that, after one period-doubling bifurcation, the periodic attractor becomes a quasiperiodic one by undergoing a supercritical Neimark-Sacker bifurcation. We can observe a narrow but distinct region of quasiperiodic dynamics in Fig. 6B, C. This quasiperiodic orbit then becomes resonant and a long-periodic orbit is born. This resonant periodic orbit in turn becomes chaotic after undergoing a cascade of period-doubling bifurcations, and a chaotic attractor is born on the basis of the former quasiperiodic orbit. We show Poincaré sections of the attractors along the line of the Afraimovich-Shilnikov bifurcation scenario in Fig. 7.

We also show the dependence of the entropy production of the Potts model on the parameter $\beta$ and compare it with the Lyapunov spectrum of (28) (see Fig. 8). The average rate of entropy production can be calculated from the out-of-equilibrium definition of work for feedback-driven systems [46,47]; for the feedback Potts model we have the formula

$$\sigma = \sum_a \dot{h}_a x_a. \qquad (35)$$

From Fig. 8 one can see that the changes in the entropy production are in direct correlation with the bifurcations of the feedback model.

Chaotic attractors in (28) can also appear as a result of the Feigenbaum cascade of period doubling. This is typical for the lower part of the bifurcation chart in Fig. 5. We take $c = 0.55$ and $\beta \in [3.1, 8.6]$ and present the corresponding graphs of the largest Lyapunov exponents and the bifurcation trees in Fig. 9. One can observe a typical period-doubling cascade undergone by a periodic orbit. We show Poincaré sections of the attractors along the line of the Feigenbaum scenario in Fig. 10. One can see that the period doubling is observed both in the dynamical system (28) and in microscopic stochastic simulations of a system of $N = 10^6$ spins.
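As an illustration of how a curve like the one in Fig. 8 can be produced, the sketch below time-averages the entropy-production formula (35) along a numerical trajectory of (28), using the conservation laws (26)-(27) to recover $x_1$ and $\dot{h}_1$. The trajectory length and the initial condition are illustrative.

```python
# Time-averaged entropy production (35) for the feedback Potts model, q = 3.
import numpy as np
from scipy.integrate import solve_ivp

def potts_rhs(t, y, beta, c):
    x2, h2, x3, h3 = y
    W = (np.exp(beta*(1 - x2 - x3 - h2 - h3))
         + np.exp(beta*(x2 + h2)) + np.exp(beta*(x3 + h3)))
    return [-x2 + np.exp(beta*(x2 + h2))/W, -c*(x2 - 1/3),
            -x3 + np.exp(beta*(x3 + h3))/W, -c*(x3 - 1/3)]

def entropy_production(beta, c, T=2000.0):
    sol = solve_ivp(potts_rhs, (0.0, T), [0.34, 0.0, 0.32, 0.01],
                    args=(beta, c), max_step=0.05,
                    t_eval=np.linspace(T/2, T, 20000))   # skip the transient
    x2, h2, x3, h3 = sol.y
    x1 = 1.0 - x2 - x3                   # conservation law (26) with C0 = 0
    hdot2, hdot3 = -c*(x2 - 1/3), -c*(x3 - 1/3)
    hdot1 = -(hdot2 + hdot3)             # from the conservation law (27)
    return np.mean(hdot1*x1 + hdot2*x2 + hdot3*x3)

for beta in (3.6, 4.8, 5.1):   # periodic / quasiperiodic / chaotic at c = 0.75
    print(beta, entropy_production(beta, c=0.75))
```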
Conclusion

In this work we have analyzed classical spin lattice models, namely the Ising model, the Blume-Capel model and the Potts model, in the presence of a negative feedback between the order parameter and the external field(s). These models are representative of the universality classes of equilibrium phase transitions, and we have shown that their usual critical points and phase transition lines are transformed into more complex bifurcations, with the emergence of periodic, quasiperiodic and chaotic oscillatory patterns. At odds with the case of driven systems [17][18][19][20][21], these oscillations are genuine self-oscillations and the system is autonomous, with no explicit dependence on time. Our first general result is the derivation, from linear response theory, of simple lower-dimensional systems of differential equations that quantitatively reproduce the many-body stochastic simulations for fully connected models.

These systems of equations can then be analyzed with the classical tools of bifurcation theory. In some cases the system inherits the main features of its equilibrium counterpart, including analytical tractability. This is the case of the Ising model, where we have shown that in finite dimension, namely 2D, self-oscillations can emerge with a non-trivial exponent for the amplitude, $\beta = 1/8$,¹ in line with the celebrated Onsager solution for the static system. We have demonstrated that for the Blume-Capel model on a fully connected graph the usual tricritical point is transformed into a Bautin bifurcation point, and the second- and first-order phase transition lines are taken over by supercritical and subcritical Andronov-Hopf bifurcation lines, respectively. These results are independent of the feedback strength parameter $c$, as long as it does not break the validity of the continuous approximation, which holds in the thermodynamic limit; they are nicely summarized, qualitatively, by the Landau theory of a homogeneous self-compatible field in the presence of a feedback. On the other hand, we have shown that for the Potts model with feedback the dynamical picture is much more complex, due to the increased effective dimension in which the order and control parameters live (from two to four in going from the Ising model to the Potts model with $q = 3$ colors). The character of the bifurcation at which self-oscillations emerge, which replaces the equilibrium phase transition (at $\beta_c = q$), depends on the strength of the feedback $c$. For low enough $c$ it is discontinuous (subcritical), and there exists a region where a stable fixed point coexists with self-oscillations. This corresponds well with the static equilibrium transition, which is known to be first-order for the fully connected Potts model at $q = 3$. However, as one increases $c$, the amplitude of the emerging self-oscillations decreases, up to a certain critical point where the bifurcation becomes continuous, akin to the Bautin bifurcation. This time the character depends on the feedback strength, at odds with the underlying free-energy landscape. Furthermore, upon increasing $\beta$ (i.e. decreasing the temperature), we have found that the bifurcation diagram of the Potts model with feedback exhibits complex scenarios, with cascades of bifurcations leading to new limit cycles and quasiperiodic attractors and eventually to chaotic ones. The out-of-equilibrium thermodynamic features of these systems have been worked out numerically, and we have demonstrated how singularities of the entropy production correspond to qualitative changes in the spectrum of the Lyapunov exponents and the underlying bifurcations. Among the many future directions of this work, we believe it would be interesting to analyze feedback lattice models within a generalized out-of-equilibrium Landau functional formalism [48], extending it beyond Ising systems, as well as to apply our framework to data analysis of collective oscillations in natural systems [49], especially synchronization of neuronal systems [50].

Figure 1: (a) Mean-field phase diagram of the feedback Ising model in the plane $(m_0, \beta)$ for $J = 1$, $c = 0.1$; both the critical line and the dynamical crossover line are highlighted. (b,c,d,e) Magnetization time traces of the system ($N = 10^4$ spins) simulated on a fully connected geometry at four different points corresponding to different dynamical behaviors. (b) $m_0 = -0.3$, $\beta = 0.25$. Simulations are made via the Metropolis-Hastings method [34,35].
Figure 2: Simulations of the feedback Ising model on a 2D square lattice; system size $N = 10^4$ spins, feedback strength $c = 10^{-4}$. (A) Magnetization as a function of time, $m(t)$, for $\beta = 0.47, 0.45$. (B) Trajectories in the phase plane $(m, h)$ for $\beta = 0.47, 0.45$. (C) Histograms of the probability distribution of the limit-cycle amplitude at several values of $\beta$. (D) Most probable limit-cycle amplitude as a function of $\beta - \beta_c$, compared with the Onsager formula, with $\beta_c = \frac{1}{2}\log(1 + \sqrt{2})$.

Figure 3: Sketch of the behaviour of system (17) near the Bautin bifurcation point. A: bifurcation diagram for (17), where the black line is the line of the Andronov-Hopf bifurcation and the green one is the line of saddle-node bifurcations of limit cycles. Phase portraits of solutions of (17) near the bifurcation point (black lines) and results of full microscopic stochastic simulations of the fully connected model with $N = 10^4$ spins (green dots): B: confirmation of multistability; C: stable fixed point; D, E: different limit cycles. Simulations are made via the Metropolis-Hastings method [34,35].

Figure 4: The behaviour of system (28) in the vicinity of the bifurcation line $\beta = 3$. A: a sketch of a bifurcation diagram illustrating the transition from a subcritical to a supercritical bifurcation; B-E: phase portraits of some attractors in the vicinity of $\beta = 3$. Black lines represent numerical solutions of (28), while green dots are obtained from fully microscopic stochastic simulations of a system of size $N = 10^5$ spins via the Metropolis-Hastings method.

Figure 5: Plate A represents the two-dimensional chart of the Lyapunov exponents for (28); blue color corresponds to periodic dynamics, green to quasiperiodic and red to chaotic dynamics. Plates D, E, F correspond to phase portraits of periodic attractors at $\beta = 4$, $c = 0.4$; $\beta = 8.1$, $c = 0.55$; $\beta = 21$, $c = 0.6$, respectively. In plates B, C, G, H we present Poincaré sections of three chaotic and one quasiperiodic attractors at $\beta = 9.1$, $c = 0.9$; $\beta = 20$, $c = 0.4$; $\beta = 5.2$, $c = 0.65$; $\beta = 5.1$, $c = 0.75$. Black lines correspond to numerical solutions of system (28) and green dots are obtained from fully microscopic stochastic simulations of a system of size $N = 10^6$ spins via the Metropolis-Hastings method.

Figure 6: Lyapunov spectra and bifurcation trees for system (28) at $c = 0.75$ and $\beta \in [3.5, 6]$.

Figure 7: Poincaré sections of the attractors that appear along the line of the Afraimovich-Shilnikov bifurcation scenario at $c = 0.75$: A: a stable periodic orbit at $\beta = 3.6$; B: a stable periodic orbit after period doubling at $\beta = 4.5$; C: a stable quasiperiodic orbit at $\beta = 4.8$; D: a resonant periodic orbit at $\beta = 4.9$; E: a chaotic attractor at $\beta = 5.1$.
Black lines show numerical solutions of (28) and green dots are obtained from fully microscopic stochastic simulations of a system of size $N = 10^6$ spins via the Metropolis-Hastings method.

Figure 8: Entropy production for the Potts model and the Lyapunov spectrum for (28) at $c = 0.75$ and $\beta \in [2, 6]$.

Figure 9: Lyapunov spectra and bifurcation trees for the Feigenbaum cascade of period doubling in (28) at $c = 0.55$ and $\beta \in [3.1, 8.6]$.

Figure 10: Poincaré sections of the attractors that appear along the line of the Feigenbaum cascade of period doubling. In all plates $c = 0.55$ and $\beta$ is 3.5, 4.5, 7.7, 8.1 and 8.3, respectively. Numerical solutions of (28) are given in black, while green dots are obtained from fully microscopic stochastic simulations of a system of size $N = 10^6$ spins via the Metropolis-Hastings method.

¹ As usual, this shall not be confused with the inverse temperature.

Acknowledgements

DDM, AP and MFA thank Prof. Enzo Marinari for support and nice discussions. DS is grateful to Alexei Kazakov and Natalia Stankevich for useful discussions.

References

[1] A. Andronov, A. Vitt, S. Khaikin, Theory of Oscillators, Dover, 1966.
[2] S. H. Strogatz, Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering, CRC Press, 2018.
[3] J. Murray, Mathematical Biology: I. An Introduction, Springer, 2002.
[4] L. Landau, E. Lifshitz, Statistical Physics (Course of Theoretical Physics, Vol. 5), Pergamon Press, 1969.
[5] V. Pokrovskii, A. Patashinskii, Fluctuation Theory of Phase Transitions, 1979.
[6] J. Cardy, Scaling and Renormalization in Statistical Physics, Vol. 5, Cambridge University Press, 1996.
[7] H. Hinrichsen, Non-equilibrium critical phenomena and phase transitions into absorbing states, Advances in Physics 49 (7) (2000) 815-958.
[8] M. Fruchart, R. Hanai, P. B. Littlewood, V. Vitelli, Non-reciprocal phase transitions, Nature 592 (7854) (2021) 363-369.
[9] L. V. Gambuzza, F. Di Patti, L. Gallo, S. Lepri, M. Romance, R. Criado, M. Frasca, V. Latora, S. Boccaletti, Stability of synchronization in simplicial complexes, Nature Communications 12 (1) (2021) 1255.
[10] R. Spelat, N. Jihua, C. A. Sánchez Triviño, S. Pifferi, D. Pozzi, M. Manzati, S. Mortal, I. Schiavo, F. Spada, M. E. Zanchetta, et al., The dual action of glioma-derived exosomes on neuronal activity: Synchronization and disruption of synchrony, Cell Death & Disease 13 (8) (2022) 705.
[11] A. P. Millán, J. J. Torres, G. Bianconi, Synchronization in network geometries with finite spectral dimension, Physical Review E 99 (2) (2019) 022307.
[12] Y. Kuramoto, Self-entrainment of a population of coupled non-linear oscillators, in: International Symposium on Mathematical Problems in Theoretical Physics, January 23-29, 1975, Kyoto University, Kyoto, Japan, Springer, 1975, pp. 420-422.
[13] K. Sone, Y. Ashida, T. Sagawa, Topological synchronization of coupled nonlinear oscillators, Physical Review Research 4 (2) (2022) 023211.
[14] T. Carletti, L. Giambagli, G. Bianconi, Global topological synchronization on simplicial and cell complexes, Physical Review Letters 130 (18) (2023) 187401.
[15] A. P. Millán, J. J. Torres, G. Bianconi, Explosive higher-order Kuramoto dynamics on simplicial complexes, Physical Review Letters 124 (21) (2020) 218301.
[16] K. P. O'Keeffe, S. H. Strogatz, Dynamics of a population of oscillatory and excitable elements, Physical Review E 93 (6) (2016) 062203.
[17] Y. Zhang, A. C. Barato, Critical behavior of entropy production and learning rate: Ising model with an oscillating field, Journal of Statistical Mechanics: Theory and Experiment 2016 (11) (2016) 113207.
[18] G. M. Buendia, P. A. Rikvold, Dynamic phase transition in the two-dimensional kinetic Ising model in an oscillating field: Universality with respect to the stochastic dynamics, Physical Review E 78 (5) (2008) 051108.
[19] M. Keskin, O. Canko, Ü. Temizer, Dynamic phase transition in the kinetic spin-1 Blume-Capel model under a time-dependent oscillating external field, Physical Review E 72 (3) (2005) 036125.
[20] J. Mendes, E. Lage, Dynamics of the infinite-ranged Potts model, Journal of Statistical Physics 64 (1991) 653-672.
[21] D. T. Robb, A. Ostrander, Extended order parameter and conjugate field for the dynamic phase transition in a Ginzburg-Landau mean-field model in an oscillating field, Physical Review E 89 (2) (2014) 022114.
[22] A. Jenkins, Self-oscillation, Physics Reports 525 (2) (2013) 167-222.
[23] J. Guckenheimer, P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Springer Science & Business Media, 2013.
[24] A. Pikovsky, M. Rosenblum, J. Kurths, Synchronization: A Universal Concept in Nonlinear Sciences, Cambridge University Press, 2001.
[25] O. Rössler, C. Letellier, Hyperchaos, in: Chaos: The World of Nonperiodic Oscillations, 2020, pp. 55-62.
[26] R. M. Borisyuk, A. B. Kirillov, Bifurcation analysis of a neural network model, Biological Cybernetics 66 (4) (1992) 319-325.
[27] Y. Kuznetsov, Elements of Applied Bifurcation Theory, Vol. 42, Springer, New York, NY, 1998.
[28] I. Garashchuk, D. Sinelshchikov, A. Kazakov, N. Kudryashov, Hyperchaos and multistability in the model of two interacting microbubble contrast agents, Chaos: An Interdisciplinary Journal of Nonlinear Science 29 (6) (2019) 063131.
[29] I. Garashchuk, A. Kazakov, D. Sinelshchikov, Synchronous oscillations and symmetry breaking in a model of two interacting ultrasound contrast agents, Nonlinear Dynamics 101 (2) (2020) 1199-1213.
[30] I. Garashchuk, D. Sinelshchikov, Bubbling transition as a mechanism of destruction of synchronous oscillations of identical microbubble contrast agents, Chaos: An Interdisciplinary Journal of Nonlinear Science 31 (2) (2021) 023130.
[31] A. Shykhmamedov, E. Karatetskaia, A. Kazakov, N. Stankevich, Scenarios for the creation of hyperchaotic attractors in 3D maps, Nonlinearity 36 (7) (2023) 3501.
[32] R. J. Baxter, Exactly Solved Models in Statistical Mechanics, Elsevier, 2016.
[33] D. De Martino, Feedback-induced self-oscillations in large interacting systems subjected to phase transitions, Journal of Physics A: Mathematical and Theoretical 52 (4) (2019) 045002.
[34] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, E. Teller, Equation of state calculations by fast computing machines, The Journal of Chemical Physics 21 (6) (1953) 1087-1092.
[35] W. Hastings, Monte Carlo sampling methods using Markov chains and their applications, Biometrika 57 (1) (1970) 97.
[36] R. Zwanzig, Nonequilibrium Statistical Mechanics, Oxford University Press, 2001.
[37] L. Onsager, Crystal statistics. I. A two-dimensional model with an order-disorder transition, Physical Review 65 (3-4) (1944) 117.
[38] M. Blume, Theory of the first-order magnetic phase change in UO₂, Physical Review 141 (2) (1966) 517.
[39] A. Dhooge, W. Govaerts, Y. Kuznetsov, H. Meijer, B. Sautois, New features of the software MatCont for bifurcation analysis of dynamical systems, Mathematical and Computer Modelling of Dynamical Systems 14 (2) (2008) 147-175.
[40] F. Collet, M. Formentin, Effects of local fields in a dissipative Curie-Weiss model: Bautin bifurcation and large self-sustained oscillations, Journal of Statistical Physics 176 (2) (2019) 478-491.
[41] F.-Y. Wu, The Potts model, Reviews of Modern Physics 54 (1) (1982) 235.
[42] H. Broer, H. Hanßmann, F. Wagener, Normal resonances in a double Hopf bifurcation, Indagationes Mathematicae 32 (1) (2021) 33-54.
[43] G. Benettin, L. Galgani, A. Giorgilli, J.-M. Strelcyn, Lyapunov characteristic exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Part 1: Theory, Meccanica 15 (1) (1980) 9-20. doi:10.1007/BF02128236.
[44] V. Afraimovich, L. Shilnikov, Invariant two-dimensional tori, their breakdown and stochasticity, American Mathematical Society Translations 149 (2) (1991) 201-212.
[45] I. Sataev, N. Stankevich, Cascade of torus birth bifurcations and inverse cascade of Shilnikov attractors merging at the threshold of hyperchaos, Chaos: An Interdisciplinary Journal of Nonlinear Science 31 (2) (2021) 023140.
[46] T. Sagawa, M. Ueda, Nonequilibrium thermodynamics of feedback control, Physical Review E 85 (2) (2012) 021104.
[47] D. De Martino, A. C. Barato, Oscillations in feedback-driven systems: Thermodynamics and noise, Physical Review E 100 (6) (2019) 062123.
[48] L. Guislain, E. Bertin, Nonequilibrium phase transition to temporal oscillations in mean-field spin models, arXiv preprint arXiv:2211.08009 (2022).
[49] N. Tort-Colet, C. Capone, M. V. Sanchez-Vives, M. Mattia, Attractor competition enriches cortical dynamics during awakening from anesthesia, Cell Reports 35 (12) (2021) 109270.
[50] F. Lombardi, S. Pepić, O. Shriki, G. Tkačik, D. De Martino, Statistical modeling of adaptive neural networks explains co-existence of avalanches and oscillations in resting human brain, Nature Computational Science (2023) 1-10.
[]
[ "SELFEVOLVE: A Code Evolution Framework via Large Language Models", "SELFEVOLVE: A Code Evolution Framework via Large Language Models" ]
[ "Shuyang Jiang [email protected] \nShanghai AI Laboratory\n\n", "Yuhao Wang \nShanghai AI Laboratory\n\n", "Yu Wang \nShanghai AI Laboratory\n\n", "Shanghai Jiao \nShanghai AI Laboratory\n\n", "Tong University \nShanghai AI Laboratory\n\n" ]
[ "Shanghai AI Laboratory\n", "Shanghai AI Laboratory\n", "Shanghai AI Laboratory\n", "Shanghai AI Laboratory\n", "Shanghai AI Laboratory\n" ]
[]
Large language models (LLMs) have already revolutionized code generation, after being pretrained on publicly available code data. However, while various methods have been proposed to augment LLMs with retrieved knowledge and enhance the quality of code generation, the performance of these retrieval-based methods is limited by the strength of the retrievers used. In addition, while LLMs show great emergent ability, they still struggle to produce the correct code in one turn. To address these challenges, we propose a novel two-step pipeline, called SELFEVOLVE, that leverages LLMs as both knowledge providers and self-reflective programmers. Unlike retrieval-based methods, SELFEVOLVE obtains the knowledge from input prompts and generates intermediate code based on the generated knowledge. After that, SELFEVOLVE asks LLM to act as an expert programmer to perform debugging for the generated code. This is achieved by receiving the error message from the interpreter, without requiring special test cases for correctness verification. We evaluate SELFEVOLVE on three code generation datasets, including DS-1000 for data science code, HumanEval for software engineering code, and TransCoder for C++-to-Python translation. Our empirical experiments show that SELFEVOLVE outperforms strong baselines by a significant margin on all datasets. We also conduct exhaustive analytical experiments to validate the effectiveness of the two stages of SELFEVOLVE, and find that both are superior to other prompting-based methods. Further scalability analysis demonstrates that SELFEVOLVE can be adapted to other more advanced models, such as GPT-4, and bring consistent efficacy improvement.
null
[ "https://export.arxiv.org/pdf/2306.02907v1.pdf" ]
259,076,266
2306.02907
eb36681fc4c5dfce4f3e05540fc92b007de278ca
SELFEVOLVE: A Code Evolution Framework via Large Language Models

Shuyang Jiang, Yuhao Wang, Yu Wang
Shanghai AI Laboratory; Shanghai Jiao Tong University

Large language models (LLMs) have already revolutionized code generation, after being pretrained on publicly available code data. However, while various methods have been proposed to augment LLMs with retrieved knowledge and enhance the quality of code generation, the performance of these retrieval-based methods is limited by the strength of the retrievers used. In addition, while LLMs show great emergent ability, they still struggle to produce the correct code in one turn. To address these challenges, we propose a novel two-step pipeline, called SELFEVOLVE, that leverages LLMs as both knowledge providers and self-reflective programmers. Unlike retrieval-based methods, SELFEVOLVE obtains the knowledge from input prompts and generates intermediate code based on the generated knowledge. After that, SELFEVOLVE asks LLM to act as an expert programmer to perform debugging for the generated code. This is achieved by receiving the error message from the interpreter, without requiring special test cases for correctness verification. We evaluate SELFEVOLVE on three code generation datasets, including DS-1000 for data science code, HumanEval for software engineering code, and TransCoder for C++-to-Python translation. Our empirical experiments show that SELFEVOLVE outperforms strong baselines by a significant margin on all datasets. We also conduct exhaustive analytical experiments to validate the effectiveness of the two stages of SELFEVOLVE, and find that both are superior to other prompting-based methods. Further scalability analysis demonstrates that SELFEVOLVE can be adapted to other more advanced models, such as GPT-4, and bring consistent efficacy improvement.

Introduction

Code generation is a crucial and challenging component of various applications [2,19,20,26]. Meanwhile, the performance of large language models (LLMs) on diverse tasks and domains has substantially improved as pretraining corpora have expanded, and as a result LLMs have become the preferred models for code generation [9,28]; in fact, they perform much better than previous deep neural models dedicated to generating code [12,26,32]. LLMs' ability to digest various prompt contents and perform text generation has also augmented previous methods [3,22,31], and various auxiliary augmentation signals have been added to the prompt to obtain more accurate code [8,18,44,56]. However, most prior work obtains such signals via an external retriever and a large knowledge base, leveraging the problem description or natural-language intents to retrieve relevant knowledge, including similar code snippets [35,36], API documentation [49,56], or focal methods [25]. Despite their success, retriever models can suffer from domain mismatch when adapting to different tasks, requiring finetuning or even training from scratch on the target domain, which limits their generality. Moreover, current retrievers are not well suited to semi-structured knowledge items such as library documentation, which can result in poor retrieval results. To avoid domain mismatch and inaccurate retrieval results, we propose a two-stage paradigm, SELFEVOLVE, which treats the LLM itself as a knowledge source.
Previous work has demonstrated that LLMs encode diverse domain knowledge [14] and can be treated as a large knowledge base [1,39]. Therefore, SELFEVOLVE prompts the LLM to generate the necessary knowledge, in whatever form is needed, by itself. In particular, we prompt the language model to extract the necessary knowledge from trial solutions or from the problem intents (§3.2), depending on whether the problem intents contain explicit demands. This process removes the retriever from the loop, since generating concrete knowledge from what is roughly encoded in the LLM's parameters is easier than searching for it in a large database using vague natural-language statements. To the best of our knowledge, SELFEVOLVE is the first LLM-driven self-augmented code generation framework.

Furthermore, inspired by the fact that human programmers rely on both related knowledge and a debugger to ensure implementation correctness, we inject an automatic refinement mechanism. This mechanism teaches language models to rely on an executor, such as a Python interpreter, to correct the preliminary code. We construct a runnable program from the generated code and the test cases extracted from the problem description (§3.2) and execute it to obtain either a pass or an error message, which serves as correction feedback. Compared to prompting an LLM to generate test cases, as CodeT [8] does, which may produce incorrect samples, SELFEVOLVE maintains the correctness of its test cases. Additionally, we do not take evaluation samples from the test set, as the recently proposed Self-Debugging [10] does, which hardly generalizes to daily coding scenarios. Instead, the example cases in the problem description appear in most coding tasks and describe, with little ambiguity, the behavior the programmer needs to implement. Leveraging these authentic and common test cases makes SELFEVOLVE a reliable and general method for self-augmented code generation.

We primarily build SELFEVOLVE on gpt-3.5-turbo (ChatGPT) and evaluate its performance on various tasks, including the data science code generation task DS-1000 [26], the general code generation task HumanEval [9], and the C++-to-Python translation task TransCoder [43]. Extensive experiments show that SELFEVOLVE achieves a significant improvement in execution-based measurements over the strong DocPrompting [56] and Self-Debugging [10] baselines on data science code generation (§4.2). On HumanEval, SELFEVOLVE still outperforms each strong baseline, and the self-refinement module brings noticeable performance improvements to the base LLM on top of self-generated knowledge (§4.2). Even on code translation, which is a much simpler task for ChatGPT, SELFEVOLVE still brings considerable improvement (§4.2). Furthermore, our analysis studies indicate that SELFEVOLVE provides more accurate knowledge than retrieval-based methods (§4.3), generalizes to various datasets with only a few debugging turns (§4.3), and scales easily to more powerful models such as GPT-4 (§4.3). Finally, we use two intuitive cases to demonstrate how the two stages of SELFEVOLVE improve the generated code. These cases illustrate the effectiveness of the method in generating high-quality code and highlight its potential for a range of applications.

Related Work

Augmented code generation

In addition to problem descriptions and code snippets, many works provide auxiliary information to generate code.
Before the era of LLMs, researchers trained encoder-decoder models that generated code based on a programmatic environment and function documentation [21]. JuPyT5 [7] conditions on Jupyter notebooks' context cells to generate data science code. Recently, large language models, pretrained on a variety of corpora, have enabled in-context learning pipelines for zero-shot or few-shot generation. Haluptzok et al. [17] introduced a method to solve programming puzzles with synthetic puzzles and solutions generated by an LLM. Parvez et al. [35] augmented code generation models with retrieved similar code snippets. In contrast, API-specific documents retrieved via a CodeT5 [47] retriever serve as additional information in the prompt in [56]. However, these methods retrieve semi-structured knowledge items, and their performance is bottlenecked by current retriever models. Moreover, it is hard for retrievers to adapt to the target domain when the corpus for finetuning the retriever is inaccessible. Compared to a small retriever, an LLM is better suited to bridging the gap between domains and thus provides more accurate knowledge. Moreover, our method does not require domain-specific finetuning, offering higher accessibility and generality.

Figure 1: The SELFEVOLVE pipeline. LLMs first generate the knowledge relevant to the given problem, then generate a trial answer conditioned on that knowledge. The iterative refinement step uses test cases and the generated code snippets to form executable programs, and then prompts the LLM to refine the answer code based on the feedback thrown by the interpreter.

Automatic code refinement

Language models often output unreliable information with high confidence [24]. This unreliability manifests as buggy code snippets during code generation. Some previous works trained dedicated models for repairing incorrect code, which only accept the buggy code as input [16,51]. Other works fused in auxiliary information obtained from execution, such as the stack trace [15,45] and error information raised by a compiler [50]. In recent work, LLMs have been leveraged to act as "teachers" that fix bugs hidden in the code. CodeT [8] uses synthetic test cases obtained through Codex [9] to select correct programs. However, CodeT is applied in an unnatural scenario where all coding problems are formatted as writing a function with certain inputs. Madaan et al. [30] iteratively refine the output with model-generated feedback on multiple tasks, but not on code generation. Chen et al. [10] use similar methods for code generation, but their work peeks at one test case from the ground truth, which is impossible when solving real problems. Without exposing test cases, our method is much more flexible and closer to a real coding scenario.

SELFEVOLVE: Code Evolution via Large Language Models

This section first briefly introduces the code generation paradigm inspired by natural programming practice. We then present the concept of SELFEVOLVE, a two-step method that utilizes language models as a self-bootstrapping knowledge enhancer and an expert programmer with self-reflection, without external models or knowledge bases. After presenting the concept, we delve into the two primary components of SELFEVOLVE, which together form a fully LLM-driven generation pipeline that needs neither fine-grained prompt designs nor further finetuning steps. The overall pipeline of our method is presented in Figure 1.
Background

Code generation formulation. Given a problem description written in natural language, $d$, and code context $c$, the autoregressive language model $p_\theta$ predicts the solution as

$$P(Y) = \prod_{i=1}^{n} p_\theta(Y_i \mid Y_{<i}, X), \qquad Y_{<1} = \varnothing, \qquad (1)$$

where $n$ is the prediction length and $X = [d; c]$ is the concatenation of $d$ and $c$.

Two-step code generation pipeline. Conditioning solely on the problem description for generation is still hard for LLMs. Inspired by the way most programmers refer to knowledge documentation [42] and struggle to debug with current tools [34], we divide prompt-based code generation into two steps. The first step prompts language models to comprehend extra knowledge and task-specific instructions [4,23,31,37], while the second teaches models to revise the generated code solution through feedback from humans or an oracle instructor. In this two-step pipeline, the second generation step never deteriorates the intermediate output of the first step. Therefore, the two steps follow a topological order in terms of optimization: they can be optimized in order and fused together.

SELFEVOLVE: the Proposed Two-step Pipeline

Based on the above analysis, we propose SELFEVOLVE, which improves both steps by enabling generated code to evolve progressively using only a large language model, without requiring any learning. SELFEVOLVE generates code by conditioning on knowledge in the prompt, as previous work has done; however, the knowledge is generated by the LLM instead of being retrieved from external knowledge bases. After obtaining the output of the first step, SELFEVOLVE uses the LLM to iteratively revise the generated code. This process follows Chen et al. [10] in correcting code errors by leveraging feedback from a code executor, but does not necessitate the use of specific test cases.

Generating knowledge with language models. Conditioning a language model on knowledge in the prompt is crucial, yet challenging. Given $m$ knowledge items $K[1..m]$, the language model predicts the next token to generate the final code solution:

$$P(Y \mid K) = \prod_{i=1}^{n} p_\theta(Y_i \mid Y_{<i}, X, K), \qquad Y_{<1} = \varnothing. \qquad (2)$$

Knowledge can be retrieved via a sparse retriever [41] or a dense retriever [13,40]:

$$K := \arg\max_{K \subset B} P(K \mid X, B), \qquad (3)$$

where $B$ is the whole database. However, the performance of current retriever models may be limited, in which case $K$ contains irrelevant knowledge items that add noise to the LLM and harm the generation results. A widely used approach to mitigate this problem is to retrieve as much knowledge as possible [3] so as to cover the necessary items. However, this places demands on the LLM's ability to process long texts, which is still a work in progress [27,38].

To obtain the necessary knowledge more accurately and conveniently, we utilize the language model itself as the knowledge source, prompting it to generate the information. Large language models encode knowledge from a variety of databases in their parameters after being pretrained on various corpora [14]. Additionally, models that undergo reinforcement learning from human feedback (RLHF) [33] can follow human instructions, serving as a natural knowledge source that provides miscellaneous knowledge given appropriate input instructions. Based on this, we propose to use self-generated knowledge, fetched by prompting the LLM:

$$p(K) = \prod_{i=1}^{k} p_\theta(K_i \mid X, K_{<i}), \qquad K_{<1} = \varnothing, \qquad (4)$$

where $k$ is the length of the generated knowledge tokens.
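A minimal sketch of the knowledge-conditioned generation of Eqs. (2) and (4) is shown below, assuming the legacy (pre-1.0) openai Python client; the prompt wording and helper names are our own illustrations, not the prompts actually used in the paper (those are given in Appendices B and C).

```python
import openai  # legacy (<1.0) ChatCompletion interface; API key set elsewhere

def chat(prompt, model="gpt-3.5-turbo"):
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0, top_p=0.95, max_tokens=1024)
    return resp["choices"][0]["message"]["content"]

def generate_with_knowledge(problem):
    # Eq. (4): sample knowledge K conditioned on the problem description X ...
    knowledge = chat("List the API documentation (signature plus a one-line "
                     f"description) needed to solve:\n{problem}")
    # ... then Eq. (2): generate the solution conditioned on X and K.
    return chat(f"Documentation:\n{knowledge}\n\nProblem:\n{problem}\n\n"
                "Write the Python solution.")
```

For problems with implicit intents, the decomposition of Eq. (5) below simply replaces the input of the first call with a trial solution drafted by the model itself.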
When problem descriptions contain implicit intents, as in StackOverflow [26], there is often a gap between the detailed knowledge required and the words used to describe the problem, because deriving the required knowledge involves reasoning. To narrow this reasoning gap and obtain more precise knowledge, we decompose the extraction process when intents are given implicitly:

$$p(K) = \prod_{i=1}^{k} p_\theta(K_i \mid c, K_{<i}) \cdot p_\theta(c \mid X), \qquad K_{<1} = \varnothing, \qquad (5)$$

where $c$ is a trial code solution produced by the LLM from the problem context alone. It contains necessary, though potentially misused, knowledge that benefits the extraction of $K$. $K$ can be formatted as any problem-specific structure to fit the problem instruction $X$, making it suitable for various tasks. When problem descriptions $X$ contain explicit intents, SELFEVOLVE uses Eq. (4) instead, as the LLM can easily extract knowledge with high accuracy in this case.

Revision of the generated solution. Previous studies have shown that intermediate results generated by LLMs may contain mistakes [29,46,48,54]. Such errors introduce noise into the prompt context, reducing the accuracy of the final output. To reduce code errors, we mimic the debugging process of programmers and introduce an iterative self-refinement mechanism that rectifies buggy code. This mechanism leverages an external Python interpreter to revise erroneous programs. Our approach incorporates the code context and sample test cases into the input prompt, along with the generated code solution, to form an executable program. We then execute this program in a sandbox environment to obtain error information as well as the standard output. Once error information is obtained, we prompt the language model to revise the buggy program, conditioned on both the program requirements and the error information:

$$P(Y' \mid X, Y, K, e) = p_\theta(Y' \mid X, Y, e) \cdot p_\theta(Y \mid X, K). \qquad (6)$$

The revised output $Y'$ may still contain bugs. Therefore, the above process is repeated until the code can be interpreted without exceptions, or until the number of iterations reaches a fixed threshold. In practice, the modeling of $p_\theta(Y' \mid X, Y, e)$ varies depending on the type of error. For simplicity, SELFEVOLVE only corrects API errors and incorrect assertion statements; we find that correcting these two types of errors contributes significantly to the performance improvement in our empirical experiments.

In conclusion, we combine two LLM-driven methods, generation based on self-generated knowledge and refinement via error messages, to create a more effective method: SELFEVOLVE. The two components reinforce each other almost orthogonally. On one hand, self-generated knowledge boosts the self-refinement steps: by conditioning on knowledge from the model's parameters, the intermediate output applies that knowledge explicitly, and the more accurate the output, the fewer iterations the self-refinement steps need to repair the code. On the other hand, the self-refinement steps improve the application of the generated knowledge: the input knowledge may contain irrelevant information or noise from the generation process, and the self-refinement steps eliminate this noise by introducing an external interpreter, improving the overall quality of the generated code. Later empirical experiments demonstrate how these two modules reinforce each other.
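The sketch below illustrates the refinement loop described above, reusing the chat() helper from the previous sketch. A plain subprocess stands in for the sandbox, and the cap on debugging turns is our assumption; the loop stops as soon as the program built from the candidate code and the example test cases runs without raising an exception.

```python
# Iterative self-refinement (Eq. (6)): execute code + example tests, feed the
# interpreter's stderr back to the model, and ask for a corrected version.
import subprocess, sys, tempfile

def run_program(code, timeout=10):
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
    proc = subprocess.run([sys.executable, f.name],
                          capture_output=True, text=True, timeout=timeout)
    return proc.returncode == 0, proc.stderr

def self_refine(problem, code, example_tests, max_turns=3):
    for _ in range(max_turns):
        ok, err = run_program(code + "\n" + example_tests)
        if ok:                        # no exception raised: stop refining
            return code
        code = chat(f"Problem:\n{problem}\n\nCode:\n{code}\n\n"
                    f"Running it raises:\n{err}\nReturn the corrected code only.")
    return code
```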
Experiments

In this work, we present a novel pipeline that supports natural and reliable code generation for a variety of programming and data science problems. To evaluate its effectiveness, we conducted experiments on three different code generation tasks: data science code generation, simple algorithmic coding, and C++-to-Python code translation. These tasks were assessed on the DS-1000 [26], HumanEval [9], and TransCoder [43] benchmarks, respectively. In all experiments, we set the top-p cutoff to 0.95 and the maximum generation length to 1024. For the specific prompts used for each task, please refer to Appendix B and C.

Baselines

1. DocPrompting [56]: DocPrompting improves the LLM by retrieving problem-relevant documentation via a finetuned retriever, and then conditions on those documents and the problem description to generate code. We use the same documentation pool for DocPrompting on DS-1000, as the problem source of DS-1000 is the same as that of CoNaLa [52]. We also use the same retrieval weights released by the authors, as DS-1000 is likewise built to test Python programming.

2. Self-Debugging [10]: Self-Debugging relies on a SQL application or a Python interpreter to teach language models to revise buggy SQL commands or Python code. It proposes three debugging modes: "simple", "unit test" and "explanation". Since DS-1000 and HumanEval have no training sets, we implement it in a zero-shot way and use its "simple" variant for a fair comparison.

3. SELFEVOLVE: SELFEVOLVE is the code generation pipeline proposed in this work. In the main experiments, we use ChatGPT as the knowledge generator and the code refiner.

Main Results of SELFEVOLVE

Data science code generation. For data science code generation we selected the DS-1000 [26] benchmark, which contains 1000 problems covering seven common data science libraries. DS-1000 introduces a novel "perturbation" concept, with the categories Origin, Surface, Semantic, and Diff-Rewrite representing problem difficulty in ascending order, which makes it a challenging benchmark. In this study, we prompted language models to generate problem-relevant API documentation as the domain-required knowledge. For the self-refinement module, we checked the executable programs and prompted language models to fix syntax errors only. We used greedy decoding and report the pass@1 [9] score for each method. Results are presented in Table 1. Even without further refinement steps, SELFEVOLVE already exceeds the strong ChatGPT baseline on the Surface and Diff-Rewrite perturbation types, by margins of 6.58 and 2.47, respectively. Moreover, with the additional self-debug module, SELFEVOLVE substantially improves over the other baselines, with a 7.8 (relative 15.8%) pass@1 gain over ChatGPT on average. SELFEVOLVE also surpasses the prompt-based method Self-Debugging by a convincing margin of 4.1. We also notice that integrating self-generated knowledge with the self-refinement module yields a much larger improvement. Specifically, SELFEVOLVE improves the baseline on all perturbation types, demonstrating that our method can impressively enhance the robustness of large language models.

General code generation. For general code generation we evaluated SELFEVOLVE on HumanEval [9]. This dataset contains 164 hand-written Python programming problems with an average of 7.7 test cases each. We implemented the Self-Debugging method on ChatGPT and report its score. We did not implement DocPrompting, since no library documentation is required for HumanEval. We also include the GPT-4 results from [5] for comparison. We report the pass@1 score for greedy decoding and pass@10 for 10-sample decoding.
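For reference, the pass@k numbers follow the unbiased estimator of Chen et al. [9], computed per problem from $n$ samples of which $c$ pass all tests; a direct transcription:

```python
import numpy as np

def pass_at_k(n, c, k):
    """Unbiased pass@k of Chen et al. [9]: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=10, c=3, k=1))   # 0.3
print(pass_at_k(n=10, c=3, k=10))  # 1.0: some drawn sample must be correct
```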
For the 10-sample generation, we conducted a grid search and set the temperature to $t = 1$. Unlike for DS-1000, we induced the LLM to explicitly output problem-related algorithms as external knowledge and taught the LLM to fix assertion errors and syntax errors. The results in Table 2 demonstrate that the strong ChatGPT baseline benefits significantly from our SELFEVOLVE method, with an 11.59 pass@1 gain and a 6.71 pass@10 gain. This leaves a small 3.95 pass@1 gap to GPT-4. Notably, with self-generated knowledge, the self-refinement module again yields a larger improvement (+3.66 pass@10) than applying a refinement module alone, as Self-Debugging does (+1.22 pass@10). This empirically verifies that self-generated knowledge helps to reduce most errors and produce more precise results.

Python code translation. Following Roziere et al. [43], we experimented with our method on the TransCoder [43] dataset. We used its test set, which requires translating C++ code to Python, and filtered out problems without testing scripts, resulting in 410 valid problems. In addition to pass@1, we follow Roziere et al. [43] in using another evaluation metric, computational accuracy, to test each model. Computational accuracy scores each sample as the fraction of its test cases that pass, while the pass@1 metric counts a sample only if it passes all test cases. We prompted the LLM to generate the algorithmic details of the C++ code, which serve as the context for Python code generation. The results in Table 3 indicate that our proposed method, SELFEVOLVE, achieves the best performance among the prompt-based methods, even outperforming Self-Debugging, which peeks at one ground-truth test case. Built upon a strong ChatGPT baseline, SELFEVOLVE further improves performance.

Discussion

In this section, we conduct various analysis experiments to validate the efficacy of our proposed SELFEVOLVE. We first present the impact of the number of iteration steps on the final performance of SELFEVOLVE. After that, we demonstrate how our generated knowledge is superior to retrieved knowledge, through a human evaluation experiment. Finally, we extend our method to an even more intelligent language model (GPT-4) to empirically show the scalability of SELFEVOLVE.

How do iteration steps affect performance? In §3.2 we stated that the refinement module is run iteratively to fix bugs. In this experiment, we determine under what conditions the refinement module should be stopped. We ran the self-refinement module for different numbers of iterations on the three datasets using greedy decoding and tested the pass@1 score of ChatGPT with the same prompt at each iteration stage. Figure 2a presents the detailed results. We observe that for the HumanEval and TransCoder datasets the major improvement comes from the first refinement step. On DS-1000, however, the performance improves almost uniformly as the number of refinement steps increases, up to the third iteration. This discrepancy across datasets results from the much more difficult nature of the problems in DS-1000 compared to the other two datasets. Therefore, the self-refinement module benefits from additional iterations on harder datasets.
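As a side note on the evaluation protocol, the snippet below contrasts the two TransCoder metrics used in §4.2: pass@1 credits a translation only if it passes every test case, while computational accuracy credits the fraction of test cases passed. The results list is a hypothetical example.

```python
# results[p] is one list of booleans per problem p, one entry per test case.
def score(results):
    pass_at_1 = sum(all(r) for r in results) / len(results)
    comp_acc = sum(sum(r) / len(r) for r in results) / len(results)
    return pass_at_1, comp_acc

demo = [[True, True, True], [True, False, True], [False, False, False]]
print(score(demo))   # (0.333..., 0.555...): comp_acc rewards partial credit
```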
Discussion

In this section we conduct several analyses to validate the efficacy of SELFEVOLVE. We first examine how the number of refinement iterations affects final performance; we then show, through a human evaluation, how generated knowledge is superior to retrieved knowledge; finally, we extend our method to an even more capable language model (GPT-4) to demonstrate its scalability.

How do iteration steps affect performance? In §3.2 we stated that the refinement module is run iteratively to fix bugs. In this experiment, we determine when the refinement should stop. We ran the self-refinement module for different numbers of iterations on the three datasets using greedy decoding, and measured ChatGPT's pass@1 with the same prompt at each stage. Figure 2a presents the detailed results. On HumanEval and TransCoder the major improvement came from the first refinement step; on DS-1000, performance improved almost uniformly with each refinement step up to the third iteration. This discrepancy stems from the much harder problems in DS-1000, on which the self-refinement module brought consistent improvement across refinement stages. This finding suggests that SELFEVOLVE needs more refinement steps when processing difficult problems, whereas a single debugging turn suffices to bring the major improvement on simpler ones.

Human evaluation of self-generated knowledge. To better understand the superiority of generated knowledge in realistic scenarios, we conducted a human evaluation study showing that generated knowledge is more relevant to the problem topic than retrieved knowledge. We randomly selected 200 problems from the seven libraries of DS-1000 and asked two data science experts to count the correctly provided API documents, i.e. whether the API knowledge matched the solution. We then used two standard metrics, precision and recall [6], to assess the accuracy of the knowledge against the oracle answer: precision is the percentage of correctly provided documents among all provided documents, while recall is the percentage of oracle documents that were provided. Further details appear in Appendix A. The comparison in Figure 2b shows that in every library the generated knowledge was considerably more accurate than the retrieved knowledge on both metrics. Notably, retrieved knowledge matched the oracle solutions poorly in most libraries, because the DS-1000 queries are complicated and state their API demands only implicitly; in Matplotlib, where queries are simple and demands explicit, retrieved knowledge matched somewhat better but still lagged far behind generated knowledge. A key reason for this superiority is that LLMs bridge the reasoning gap between problem descriptions and knowledge terminology better than a retriever model does; this also motivates Eq. 5. In other words, generated knowledge provides a more comprehensive and accurate understanding of the problem topic, which is crucial in realistic scenarios.

Scaling to more powerful models. To evaluate the scalability of SELFEVOLVE on more advanced language models, we integrated it with GPT-4 without extensive prompt engineering. GPT-4 has demonstrated significantly greater intelligence and reasoning ability than ChatGPT [5]. Owing to limited GPT-4 API access, we experimented only on the Scipy, Pytorch, Sklearn, and Matplotlib libraries of DS-1000 (444 problems in total) and on HumanEval, using the same prompts as for ChatGPT and reporting pass@1 under greedy decoding. The results for both datasets, shown in Table 4, demonstrate that our approach benefits from a more advanced backbone rather than degrading it. Relative to the GPT-4 baseline, SELFEVOLVE with self-generated knowledge alone achieves higher pass@1 on Scipy (+4.72), Pytorch (+19.12), and Sklearn (+6.09); on HumanEval it improves the already high pass@1 by 2.76. After adding the lightweight self-refinement plugin, GPT-4 improves further on all datasets. This highlights the value of leveraging advanced backbone models and the potential of our approach to produce superior results.
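The precision and recall used in the human evaluation above are plain set statistics over API documents; a minimal sketch (the example API names are illustrative):

```python
def precision_recall(provided: set, oracle: set) -> tuple:
    """Precision: share of provided documents that are correct.
    Recall: share of oracle documents that were provided."""
    hits = len(provided & oracle)
    precision = hits / len(provided) if provided else 0.0
    recall = hits / len(oracle) if oracle else 0.0
    return precision, recall

# Example: two provided documents, one of which matches the oracle set.
# precision_recall({"tf.one_hot", "tf.cast"}, {"tf.one_hot"}) -> (0.5, 1.0)
```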
Case Study

This section demonstrates the effectiveness of SELFEVOLVE on the two representative examples shown in Figure 3. In the first example, the LLM generates the specific documentation for tf.one_hot. Without this documentation, the model outputs extra code that cannot generalize to other test cases; conditioned on the concrete API documentation, it outputs more deterministic code that does not transform the original labels to a tensor type. The provided documentation sharpens the output distribution and contributes to a more accurate and general answer. In the second example, the language model reads the np.asarray documentation but forgets to compute the AVG value. By leveraging the traceback information, without access to the specific test cases, it can revise its code and adopt a more general method to solve the problem. Both examples illustrate how the two components of SELFEVOLVE reinforce each other: together they make the model's output more general and more accurate.

Limitation & Future Work

Although SELFEVOLVE has shown promising results in generating knowledge and improving the performance of large language models, some limitations remain. One main challenge is that SELFEVOLVE is not always fully automatic across tasks, because of its hand-written prompts, which may limit its effectiveness in new use cases. Another limitation is that the generated knowledge is not always suitable for every task and may require fine-grained selection to be effective. Both issues can be mitigated by developing better prompting: a more comprehensive set of prompts would make SELFEVOLVE easier to adapt to new tasks, and a more sophisticated algorithm could automatically select the appropriate knowledge for a given task. We believe addressing these issues will make SELFEVOLVE a more versatile and useful framework in different contexts.

Conclusion

We propose SELFEVOLVE, a simple yet effective, fully LLM-driven framework for solving code generation problems. A single LLM acts as both a knowledge provider and a self-reflective programmer, generating high-quality code in two steps. This makes the method flexible and extendable to other datasets, such as Spider [53] or APPS [19], without requiring an extra retriever or a previously set-up database. Extensive experiments on diverse code generation tasks verify that SELFEVOLVE brings large performance gains across tasks and datasets and outperforms two strong prompting-based methods by a clear margin. Further analyses indicate that SELFEVOLVE provides problem-related knowledge better than traditional dense retrievers and scales readily to more capable language models, yielding further improvements.

A Comparison between Generated Knowledge and Retrieved Knowledge

Here we present the details of the human evaluation experiment. We randomly selected one-fifth of the problems in each library to form the 200-problem set, whose composition is shown in Table 5. An API is defined here as a function call, a method call, or an attribute getter or setter. For retrieved knowledge, we follow DocPrompting [56]: problem descriptions serve as queries and the provided document pool as the target set. We use their pretrained CodeT5 retriever and retrieve k = 5 knowledge items per problem. Irrelevant documents are filtered out of both the retrieved and the generated sets, so the final number of documents per problem may be smaller than k. After retrieval, we follow Zhao et al. [55] and place higher-scoring items nearer the generation position to ensure the best generation result.
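A sketch of this retrieval baseline, assuming a generic bi-encoder `encode(text) -> unit-norm vector` in place of the pretrained CodeT5 retriever (whose exact API we do not reproduce); the final ordering follows Zhao et al. [55] as described, placing higher-scoring items nearer the generation position, i.e. last in the prompt.

```python
import numpy as np

def retrieve_and_order(query: str, pool: list, encode, k: int = 5) -> list:
    """Score every pool document against the query by dot product, keep the
    top k, and return them in ascending score order so the best-scoring
    document sits closest to the generation position (end of the prompt)."""
    q = encode(query)
    scores = np.array([float(np.dot(q, encode(doc))) for doc in pool])
    top = scores.argsort()[-k:]  # indices of the k highest scores, ascending
    return [pool[i] for i in top]
```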
B Prompt for the First Stage of SELFEVOLVE in Each Task

We show the detailed prompt for the first step of SELFEVOLVE in each task: DS-1000 in Figure 4, HumanEval in Figure 5, and TransCoder in Figure 6.

C Prompt for Self-Refinement in Each Task

We show the detailed prompt for the self-refinement module in each task: DS-1000 in Figure 7, HumanEval in Figure 8, and TransCoder in Figure 9.

[Figure 2: (a) Performance-iteration curves of SELFEVOLVE on the DS-1000, HumanEval, and TransCoder datasets. (b) Precision and recall comparison between generated knowledge and retrieved knowledge.]
[Figure 3: Two examples showing the efficacy of the proposed SELFEVOLVE methods, where red code marks wrong code. (a) Comparison with and without generated documentation. (b) Comparison with and without the self-refinement module.]
[Figure 4: Prompt for the first step of SELFEVOLVE in the DS-1000 dataset.]
[Figure 5: Prompt for the first step of SELFEVOLVE in the HumanEval dataset.]
[Figure 6: Prompt for the first step of SELFEVOLVE in the TransCoder dataset.]
[Figure 7: Prompt for self-refinement in the DS-1000 dataset.]
[Figure 8: Prompt for self-refinement in the HumanEval dataset.]
[Figure 9: Prompt for self-refinement in the TransCoder dataset.]
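Since the figures reduce to string templates, a prompt can be assembled by simple substitution; the sketch below reconstructs the DS-1000 self-refinement prompt from the wording recoverable in Figure 7 (the function itself is illustrative).

```python
def ds1000_refine_prompt(error_code: str, test_case: str, syntax_error: str) -> str:
    # Wording recovered from Figure 7 (DS-1000 self-refinement prompt).
    return (
        f"```\n{error_code}\n```\n"
        f"When I run {test_case}, I meet {syntax_error}. Help me refine the code. "
        "You should only output the codes without any explanation and natural "
        'language. Wrap your code with "```".'
    )
```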
Table 1: Pass@1 results on the DS-1000 dataset. † marks results taken from [26]; the other baselines are implemented with the same prompt and hyperparameter setting.

Method                    Origin   Surface   Semantic   Diff-Rewrite   Overall
Prior work
  Codex (Completion)†     44.93    37.94     34.35      16.94          39.20
  Codex (Insertion)†      47.76    50.18     38.39      21.05          43.30
  DocPrompting            53.95    50.00     39.57      25.93          45.50
  Self-Debugging          63.38    59.21     45.65      28.40          53.00
This work
  ChatGPT                 60.31    52.63     41.30      26.54          49.30
  SELFEVOLVE              66.23    67.11     48.70      33.95          57.10
    w/o self-refinement   60.09    59.21     41.30      29.01          50.60

Table 2: Pass@1 and pass@10 comparisons of different methods on HumanEval, all implemented with the same prompt. † marks scores cited from [5].

Model                    Pass@1   Pass@10
Prior work
  GPT-4†                 82.00    -
  text-davinci-003†      65.00    -
  ChatGPT                66.46    86.58
  CodeT [8]              65.20    86.80
  Self-Debugging         73.78    87.80
Ours
  SELFEVOLVE             78.05    93.29
    w/o self-refinement  70.73    89.63

Table 3: Performance comparison on the TransCoder dataset, where we follow [10, 43] in translating C++ code to Python. All methods in this work use greedy decoding. "Acc." denotes computational accuracy.

Method                   Acc.    Pass@1
Prior work
  PaLM [11]              51.8    -
  PaLM-Coder [11]        55.1    -
  Codex [9]              80.4    -
  Self-Debugging [10]    89.3    -
Ours
  ChatGPT                92.7    90.0
  SELFEVOLVE             94.8    92.4
    w/o self-refinement  93.4    90.5

Table 4: Comparison between SELFEVOLVE using the ChatGPT and GPT-4 backbones (pass@1 on four DS-1000 libraries and on HumanEval), testing its generalization.

Method                   Scipy   Pytorch   Sklearn   Matplotlib   HumanEval
SELFEVOLVE (ChatGPT)     52.83   64.71     73.04     78.06        78.05
GPT-4                    52.83   44.12     60.00     69.03        82.00
SELFEVOLVE (GPT-4)       58.49   70.59     70.43     84.52        89.02
  w/o self-refinement    57.55   63.24     66.09     69.03        84.76

Table 5: Problem counts for each library in the selected set.

Library     Tensorflow   Pytorch   Numpy   Matplotlib   Pandas   Sklearn   Scipy
#Problems   10           15        40      37           44       28        26

ChatGPT API: https://platform.openai.com/docs/api-reference/chat

References

[1] Badr AlKhamissi, Millicent Li, Asli Celikyilmaz, Mona Diab, and Marjan Ghazvininejad. A review on language models as knowledge bases. arXiv preprint arXiv:2204.06031, 2022.
[2] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
[3] Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, pages 2206-2240. PMLR, 2022.
[4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901, 2020.
[5] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments with GPT-4. 2023.
[6] Michael Buckland and Fredric Gey. The relationship between recall and precision. Journal of the American Society for Information Science, 45(1):12-19, 1994.
[7] Shubham Chandel, Colin B Clement, Guillermo Serrato, and Neel Sundaresan. Training and evaluating a Jupyter notebook data science assistant. arXiv preprint arXiv:2201.12901, 2022.
[8] Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. CodeT: Code generation with generated tests. CoRR, abs/2207.10397, 2022.
[9] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[10] Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128, 2023.
[11] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
[12] Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, and Mike Lewis. InCoder: A generative model for code infilling and synthesis. In The Eleventh International Conference on Learning Representations, 2023.
[13] Tianyu Gao, Xingcheng Yao, and Danqi Chen. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894-6910, 2021.
[14] Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are key-value memories. arXiv preprint arXiv:2012.14913, 2020.
[15] Kavi Gupta, Peter Ebert Christensen, Xinyun Chen, and Dawn Song. Synthesize, execute and debug: Learning to repair for neural program synthesis. Advances in Neural Information Processing Systems, 33:17685-17695, 2020.
[16] Rahul Gupta, Soham Pal, Aditya Kanade, and Shirish Shevade. DeepFix: Fixing common C language errors by deep learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, 2017.
[17] Patrick Haluptzok, Matthew Bowers, and Adam Tauman Kalai. Language models can teach themselves to program better. CoRR, abs/2207.14502, 2022.
[18] Patrick Haluptzok, Matthew Bowers, and Adam Tauman Kalai. Language models can teach themselves to program better. 2023.
[19] Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge competence with APPS. arXiv preprint arXiv:2105.09938, 2021.
[20] Junjie Huang, Chenglong Wang, Jipeng Zhang, Cong Yan, Haotian Cui, Jeevana Priya Inala, Colin Clement, Nan Duan, and Jianfeng Gao. Execution-based evaluation for data science code generation models. arXiv preprint arXiv:2211.09374, 2022.
[21] Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. Mapping language to code in programmatic context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1643-1652, 2018.
[22] Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. Atlas: Few-shot learning with retrieval augmented language models. arXiv preprint, 2022.
[23] Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299, 2022.
[24] Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1-38, 2023.
[25] Matthew Jin, Syed Shahriar, Michele Tufano, Xin Shi, Shuai Lu, Neel Sundaresan, and Alexey Svyatkovskiy. InferFix: End-to-end program repair with LLMs. arXiv preprint arXiv:2303.07263, 2023.
[26] Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Scott Wen-tau Yih, Daniel Fried, Sida Wang, and Tao Yu. DS-1000: A natural and reliable benchmark for data science code generation. arXiv preprint arXiv:2211.11501, 2022.
[27] Mukai Li, Shansan Gong, Jiangtao Feng, Yiheng Xu, Jun Zhang, Zhiyong Wu, and Lingpeng Kong. In-context learning with many demonstration examples. arXiv preprint arXiv:2302.04931, 2023.
[28] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with AlphaCode. Science, 378(6624):1092-1097, 2022.
[29] Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, and Chris Callison-Burch. Faithful chain-of-thought reasoning. arXiv preprint arXiv:2301.13379, 2023.
[30] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-Refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023.
[31] Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, et al. Augmented language models: a survey. arXiv preprint arXiv:2302.07842, 2023.
[32] Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. CodeGen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations, 2023.
[33] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744, 2022.
[34] Chris Parnin and Alessandro Orso. Are automated debugging techniques actually helping programmers? In Proceedings of the 2011 International Symposium on Software Testing and Analysis, pages 199-209, 2011.
[35] Md Rizwan Parvez, Wasi Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. Retrieval augmented code generation and summarization. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2719-2734, 2021.
[36] Panupong Pasupat, Yuan Zhang, and Kelvin Guu. Controllable semantic parsing via retrieval augmentation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7683-7698, 2021.
[37] Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023.
[38] Bo Peng. RWKV-LM. https://github.com/BlinkDL/RWKV-LM, 2021.
[39] Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, 2019.
[40] Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 2019.
[41] Stephen E Robertson and Steve Walker. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In SIGIR'94, pages 232-241. Springer, 1994.
[42] Tobias Roehm, Rebecca Tiarks, Rainer Koschke, and Walid Maalej. How do professional developers comprehend software? In 2012 34th International Conference on Software Engineering (ICSE), pages 255-265. IEEE, 2012.
[43] Baptiste Roziere, Marie-Anne Lachaux, Lowik Chanussot, and Guillaume Lample. Unsupervised translation of programming languages. Advances in Neural Information Processing Systems, 33, 2020.
[44] Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, and Sida I. Wang. Natural language to code translation with execution. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3533-3546, 2022.
[45] Ke Wang, Rishabh Singh, and Zhendong Su. Dynamic neural program embeddings for program repair. In International Conference on Learning Representations, 2018.
[46] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
[47] Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8696-8708, 2021.
[48] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
[49] Frank F. Xu, Zhengbao Jiang, Pengcheng Yin, Bogdan Vasilescu, and Graham Neubig. Incorporating external knowledge through pre-training for natural language to code generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6045-6052, 2020.
[50] Michihiro Yasunaga and Percy Liang. Graph-based, self-supervised program repair from diagnostic feedback. In International Conference on Machine Learning, pages 10799-10808. PMLR, 2020.
[51] Michihiro Yasunaga and Percy Liang. Break-it-fix-it: Unsupervised learning for program repair. In International Conference on Machine Learning, pages 11941-11952. PMLR, 2021.
[52] Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. Learning to mine aligned code and natural language pairs from Stack Overflow. In International Conference on Mining Software Repositories (MSR), pages 476-486. ACM, 2018.
[53] Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921, 2018.
[54] Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493, 2022.
[55] Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pages 12697-12706. PMLR, 2021.
[56] Shuyan Zhou, Uri Alon, Frank F. Xu, Zhengbao Jiang, and Graham Neubig. DocPrompting: Generating code by retrieving the docs. In The Eleventh International Conference on Learning Representations, 2023.
[ "https://github.com/BlinkDL/RWKV-LM," ]
[ "SYNAPTIC MOTOR ADAPTATION: A THREE-FACTOR LEARNING RULE FOR ADAPTIVE ROBOTIC CONTROL IN SPIKING NEURAL NETWORKS", "SYNAPTIC MOTOR ADAPTATION: A THREE-FACTOR LEARNING RULE FOR ADAPTIVE ROBOTIC CONTROL IN SPIKING NEURAL NETWORKS" ]
[ "Samuel Schmidgall samuel.schmidgall@email \nU.S. Naval Research Laboratory\n\n\nJohns Hopkins University\n\n", "Joe Hays \nU.S. Naval Research Laboratory\n\n" ]
[ "U.S. Naval Research Laboratory\n", "Johns Hopkins University\n", "U.S. Naval Research Laboratory\n" ]
[]
Legged robots operating in real-world environments must possess the ability to rapidly adapt to unexpected conditions, such as changing terrains and varying payloads. This paper introduces the Synaptic Motor Adaptation (SMA) algorithm, a novel approach to achieving real-time online adaptation in quadruped robots through the utilization of neuroscience-derived rules of synaptic plasticity with three-factor learning. To facilitate rapid adaptation, we meta-optimize a three-factor learning rule via gradient descent to adapt to uncertainty by approximating an embedding produced by privileged information using only locally accessible onboard sensing data. Our algorithm performs similarly to state-of-the-art motor adaptation algorithms and presents a clear path toward achieving adaptive robotics with neuromorphic hardware.
null
[ "https://export.arxiv.org/pdf/2306.01906v1.pdf" ]
259,076,351
2306.01906
e1ec853d49c36417b2fa0a1c9e07057fd66a0b8c
SYNAPTIC MOTOR ADAPTATION: A THREE-FACTOR LEARNING RULE FOR ADAPTIVE ROBOTIC CONTROL IN SPIKING NEURAL NETWORKS

Samuel Schmidgall (U.S. Naval Research Laboratory and Johns Hopkins University; samuel.schmidgall@email)
Joe Hays (U.S. Naval Research Laboratory)

Keywords: robot learning, spiking neural networks, synaptic plasticity, neuromodulation, online learning

Abstract. Legged robots operating in real-world environments must possess the ability to rapidly adapt to unexpected conditions, such as changing terrains and varying payloads. This paper introduces the Synaptic Motor Adaptation (SMA) algorithm, a novel approach to achieving real-time online adaptation in quadruped robots through the utilization of neuroscience-derived rules of synaptic plasticity with three-factor learning. To facilitate rapid adaptation, we meta-optimize a three-factor learning rule via gradient descent to adapt to uncertainty by approximating an embedding produced by privileged information using only locally accessible onboard sensing data. Our algorithm performs similarly to state-of-the-art motor adaptation algorithms and presents a clear path toward achieving adaptive robotics with neuromorphic hardware.

Introduction

Legged robots have made significant progress in the last four decades using physical dynamics modeling and control theory, requiring considerable expertise from the designer [1,2,3,4]. In recent years, researchers have shown interest in using reinforcement and imitation learning techniques to reduce the designer's burden and enhance performance [5,6,7]. However, adaptation to new domains has remained a challenging problem due to various factors, such as the differences in data distribution between the source and target domains, as well as the inherent complexity of the underlying relationships between the input and output variables (i.e., dynamic system uncertainties), which often necessitate significant modifications to the learning algorithms and architectures in order to achieve satisfactory results in the target domain [8].

Neuromorphic computing offers a promising approach to addressing the challenges of adaptation in legged robotics by enabling the development of more efficient and adaptive algorithms that better emulate the neural structures and functions of biological systems. In addition, these systems are extremely energy efficient [9,10,11], enabling robotic learning algorithms to operate across long timescales without recharging. Many neuromorphic chips are betting on local learning rules, such as Hebbian and spike-timing dependent plasticity rules, to provide on-chip learning for efficient edge-computing applications [11,12,9,13]. Local learning rules offer several advantages beyond their biological inspiration, including computational efficiency, scalability, and the ability to adapt to dynamic environments. Unlike traditional machine learning algorithms that require large amounts of training data and significant computational resources, local learning rules can learn from small amounts of data and adapt in real time, making them particularly useful for edge computing [14,15,16]. Furthermore, since learning is distributed across the network of neurons, local learning rules are highly parallelizable, allowing efficient processing of large amounts of data.
Recently, there has been notable progress in developing algorithms that employ local learning rules, driven by advances in the theory of three-factor learning in neuroscience [17,18]. This theory offers a way to assign credit to synapses over time without relying on the backpropagation of errors typically used for credit assignment in machine learning. The currently most effective local learning rules for neuromorphic devices are based on this theory and show promising potential for enabling on-chip learning in a range of real-world applications [19,20,16].

Independently and in parallel, significant strides have been made toward adaptive controllers for legged robots. These methods, termed motor adaptation (MA) algorithms, learn to estimate their current environmental factors (e.g., friction coefficients, terrain) from locally accessible data, which is provided as state input into the network [21,22,23,24,25]. In this work, we introduce a motor adaptation algorithm that uses neuroscience-derived rules of plasticity together with a third-factor signal to dynamically update the synaptic weights of the network. This method, called Synaptic Motor Adaptation (SMA), provides a novel approach to motor adaptation: it enables the policy to learn from new experiences in real time rather than merely updating its state input. SMA is particularly well-suited to legged robots, since it allows the network to update its connections with respect to the current environmental conditions, such as uneven terrain, while maintaining stable control of the robot. The three-factor learning rule used in SMA builds on the work on differentiable plasticity [20,26,27], which makes it amenable to optimization by gradient descent. This approach has the potential to significantly improve the performance and adaptability of legged robots, with wide-ranging applications in robotics and particularly for the deployment of neuromorphic devices.

Background & Related work

Motor Adaptation Algorithms

Robotic learning has remained a major challenge in AI, since successful deployment requires the algorithm to adapt in real time to unseen situations such as dynamic payloads, novel terrain dynamics, and hardware degradation over time. This hurdle persists because most deep learning pipelines train a network offline in simulation and then fix the weights for online deployment. Significant advances have recently been realized with the introduction of motor adaptation algorithms [21,22,23,24,25], which act much like a system identification estimator, with the differences that (1) the estimate is a learned embedding containing only the information most vital for adaptation, rather than the entirety of the system dynamics, and (2) the estimate is produced very rapidly from a temporal history of sensory information.

Motor adaptation algorithms typically consist of two components: a base policy $\pi$ and an environment factor encoder $\mu$. During the first phase of simulated training, the factor encoder $\mu$ takes as input privileged information from the environment, $e(t)$, that would not be accessible to a deployed system (e.g., friction, motor strength, robot center of mass) and produces a low-dimensional embedding $z(t)$, referred to as a latent extrinsics vector. The latent vector $z(t)$ is provided as input to the base policy $\pi$ and optimized through the base policy loss, so that $z(t)$ becomes a latent representation that helps $\pi$ solve its objective. This process is described by the following equations:

$$z(t) = \mu(e(t)), \tag{1}$$
$$a(t) = \pi(x(t), a(t-1), z(t)). \tag{2}$$
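A minimal sketch of this first training phase (Eqs. 1-2) in PyTorch; the layer sizes and architectures here are placeholders, not the networks used in the papers cited above.

```python
import torch
import torch.nn as nn

class FactorEncoder(nn.Module):
    """mu: privileged information e(t) -> latent extrinsics z(t) (Eq. 1)."""
    def __init__(self, e_dim: int, z_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(e_dim, 64), nn.Tanh(),
                                 nn.Linear(64, z_dim))

    def forward(self, e):
        return self.net(e)

class BasePolicy(nn.Module):
    """pi: (x(t), a(t-1), z(t)) -> a(t) (Eq. 2)."""
    def __init__(self, x_dim: int, a_dim: int, z_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + a_dim + z_dim, 128),
                                 nn.Tanh(), nn.Linear(128, a_dim))

    def forward(self, x, a_prev, z):
        return self.net(torch.cat([x, a_prev, z], dim=-1))
```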
In the second phase of training, an environment factor estimator $\phi$ is trained via regression to match the output of the environment factor encoder $\mu$ from a time history of state-action pairs. In essence, an online approximation $\hat{z}(t)$ of the extrinsics embedding $z(t)$ is generated using only information accessible to the robot:

$$\hat{z}(t) = \phi(a(t-N-1), s(t-N), \ldots, a(t-1), s(t)), \tag{3}$$
$$a(t) = \pi(x(t), a(t-1), \hat{z}(t)). \tag{4}$$

Motor adaptation algorithms have been demonstrated to significantly improve the adaptation abilities of robot learning in simulation, and have also enabled deploying networks trained entirely in simulation on robotic hardware (sim2real) [21,22,23,24,25].
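The second phase (Eqs. 3-4) is a supervised regression; a sketch in the same style, with the history window flattened into a single input (shapes and optimizer settings are placeholders):

```python
import torch
import torch.nn as nn

class FactorEstimator(nn.Module):
    """phi: a window of N past state-action pairs -> estimate of z(t) (Eq. 3)."""
    def __init__(self, sa_dim: int, N: int, z_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(sa_dim * N, 128),
                                 nn.Tanh(), nn.Linear(128, z_dim))

    def forward(self, history):  # history: (batch, N, sa_dim)
        return self.net(history)

def regression_step(phi, mu, history, e, opt):
    """One step of fitting phi to reproduce mu's privileged embedding."""
    with torch.no_grad():
        z_target = mu(e)  # z(t) computed from privileged information
    loss = nn.functional.mse_loss(phi(history), z_target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```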
Synaptic plasticity and three-factor learning

Plasticity in the brain refers to the capacity of experience to modify the function of neural circuits. Synaptic plasticity, the modification of the strength of synaptic transmission based on local activity, is currently the most widely investigated mechanism by which the brain adapts to new information [28,29]. Deep learning methods likewise change weights from experience, typically with the backpropagation algorithm, which makes predictions based on input and uses the chain rule to propagate errors backward through the network [30]. While there are parallels between backpropagation and synaptic plasticity, the two differ from the brain's operation in many significant ways [31]. Three-factor learning rules have been proposed as a much more plausible theoretical framework for understanding how meaningful changes are made in the brain [17,18]. Below, we introduce a pair-based model of plasticity and the theory of three-factor learning.

Pair-based spike-timing dependent plasticity

The pair-based spike-timing dependent plasticity (STDP) model governs changes in synapses based on the timing relationship between pairs of pre- and post-synaptic spikes [32]. This model was derived from experiments observing that the precise timing of spikes can account for synaptic long-term potentiation (LTP, an increase in weight) and long-term depression (LTD, a decrease in weight). We begin by describing the timing dynamics of pre- and post-synaptic spikes through an iterative update rule, referred to as a synaptic trace (see also Figure 1):

$$x_i^{(l)}(t+\Delta\tau) = \alpha_x\, x_i^{(l)}(t) + f\big(x_i^{(l)}(t)\big)\, s_i^{(l)}(t). \tag{5}$$

The precise physiological interpretation of the activity trace $x_i^{(l)}(t) \in \mathbb{R}_{>0}$ is not well defined, as several representations are possible: for pre-synaptic events it could correspond to the quantity of bound glutamate or the number of activated NMDA receptors, while for post-synaptic events it could reflect the synaptic voltage generated by a backpropagating action potential or the associated calcium influx [33]. The trace is decayed toward zero by the factor $\alpha_x \in (0,1)$, commonly written as $(1 - 1/\tau)$ with time constant $\tau \in \mathbb{R}_{>1}$. The update of the synaptic trace is determined by a function $f : \mathbb{R} \to \mathbb{R}$ applied in proportion to the presence of a spike $s_i^{(l)}(t)$. This all-to-all synaptic trace scheme pairs each pre-synaptic spike with every post-synaptic spike indirectly via the decaying trace. In the linear update rule used in this work, the trace is incremented by a constant factor $\beta$ whenever a spike $s_i^{(l)}(t)$ occurs:

$$x_i^{(l)}(t+\Delta\tau) = \alpha_x\, x_i^{(l)}(t) + \beta\, s_i^{(l)}(t). \tag{6}$$

Next, we describe the pair-based STDP rule, in which LTP (left term of Eq. 8) and LTD (right term of Eq. 8) arise from pairs of spikes and synaptic traces:

$$W_{i,j}^{(l)}(t+\Delta\tau) = W_{i,j}^{(l)}(t) + \Delta_W(t), \tag{7}$$
$$\Delta_W(t) = A_{+,i,j}\, x_i^{(l-1)}(t)\, s_j^{(l)}(t) - A_{-,i,j}\, x_j^{(l)}(t)\, s_i^{(l-1)}(t). \tag{8}$$

When a post-synaptic firing occurs ($s_j^{(l)}(t) = 1$), the weight is potentiated by a quantity proportional to the pre-synaptic trace $x_i^{(l-1)}(t)$. Similarly, when a pre-synaptic firing occurs ($s_i^{(l-1)}(t) = 1$), the weight is depressed by a quantity proportional to the post-synaptic trace $x_j^{(l)}(t)$. Potentiation and depression are scaled by the constants $A_{+,i,j} \in \mathbb{R}$ and $A_{-,i,j} \in \mathbb{R}$, respectively, which set the rates of LTP and LTD. Hebbian pair-based STDP models typically take $A_{+,i,j} > 0$ and $A_{-,i,j} > 0$, while anti-Hebbian models take $A_{+,i,j} < 0$ and $A_{-,i,j} < 0$. We initialize our learning rule to be Hebbian but do not constrain the optimization, allowing the initially Hebbian rule to become anti-Hebbian or any other variation of the pair-based STDP rule.

Eligibility traces and three-factor plasticity

Rather than directly modifying the synaptic weight, local synaptic activity leaves an activity flag, or eligibility trace, at the synapse [18]. The eligibility trace does not immediately produce a change; weight change is realized only in the presence of an additional signal, discussed below. For a Hebbian learning rule, the eligibility trace can be written as

$$E_{i,j}^{(l)}(t+\Delta\tau) = \gamma\, E_{i,j}^{(l)}(t) + \alpha_{i,j}\, f_i\big(x_i^{(l-1)}\big)\, g_j\big(x_j^{(l)}\big). \tag{9}$$

The decay rate of the trace is set by the constant $\gamma \in [0,1]$, where a smaller value of $\gamma$ results in a faster decay, and the constant $\alpha_{i,j} \in \mathbb{R}$ determines the rate at which activity-trace information is incorporated into the eligibility trace. The functions $f_i$ and $g_j$ depend on the pre- and post-synaptic activity traces $x_i^{(l-1)}$ and $x_j^{(l)}$, respectively, and are indexed by the corresponding neurons because the eligibility dynamics may depend on neuron type or network region.

The theoretical neuroscience literature suggests that eligibility traces alone cannot bring about a change in synaptic efficacy [18,17]; weight changes require the presence of a third signal:

$$W_{i,j}^{(l)}(t+\Delta\tau) = W_{i,j}^{(l)}(t) + M_j(t)\, E_{i,j}^{(l)}(t). \tag{10}$$

Here, $M_j(t) \in \mathbb{R}$ is a regional third factor known as a neuromodulator, acting as an abstract representation of a biological process. Without the neuromodulatory signal ($M_j(t) = 0$), no weight change occurs; in the presence of certain stimuli, the magnitude and sign of $M_j(t)$ scale and reverse both LTP and LTD. Three-factor learning rules are powerful in their descriptive capabilities, and have been used to describe approximations to Backpropagation Through Time (BPTT) [34,19] and Bayesian inference [35].
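Composed together, Eqs. 6, 9, and 10 amount to a few array updates per simulation step. The sketch below takes $f$ and $g$ in Eq. 9 as identities and treats all constants as illustrative values; M may be a scalar or a per-post-neuron vector.

```python
import numpy as np

def three_factor_step(W, x_pre, x_post, E, s_pre, s_post, M,
                      alpha_x=0.9, beta=1.0, gamma=0.9, alpha_e=0.1):
    """One step of the three-factor rule for a dense layer.
    W, E: (n_pre, n_post); x_pre, s_pre: (n_pre,); x_post, s_post: (n_post,);
    M: scalar or (n_post,) neuromodulator."""
    x_pre = alpha_x * x_pre + beta * s_pre                 # Eq. 6 (pre traces)
    x_post = alpha_x * x_post + beta * s_post              # Eq. 6 (post traces)
    E = gamma * E + alpha_e * np.outer(x_pre, x_post)      # Eq. 9 (eligibility)
    W = W + M * E                                          # Eq. 10 (modulated update)
    return W, x_pre, x_post, E
```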
Synaptic Motor Adaptation

Recent advances in machine learning and theoretical neuroscience have made it possible to optimize neuroscience-derived three-factor learning rules with backpropagation through time [20,26,27], bringing powerful gradient-descent methods to the optimization of local learning rules. Such rules can be meta-trained through a bi-level optimization: the underlying behavior of the network adapts toward an objective during deployment via the learning rule (inner loop), while the rule itself is optimized by gradient descent on an objective function evaluated after deployment (outer loop). We extend these ideas to a motor adaptation algorithm in which the synaptic weights of the network change according to a meta-optimized three-factor learning rule, adapting in real time to environmental conditions; we call this Synaptic Motor Adaptation (SMA).

A three-factor synaptic motor adaptation rule

In MA algorithms, the role of the factor encoding module $\mu$ (Eq. 1) is to provide a context signal so that the robot can adapt its behavior to a constantly changing environment, such as walking on uneven surfaces, coping with limb damage, or handling slippery ground. This context signal changes the robot's behavior by feeding a learned embedding from the factor encoder into the policy network. While this elegantly allows the robot to adapt to new environmental challenges, the fundamental behavior of the policy (i.e., its synaptic weights) cannot change; only the robot's information about the environment is constantly re-estimated (its state input). The policy therefore cannot actually learn from new experience; it can only update its state input based on the time history of events.

Like other motor adaptation algorithms, SMA consists of a base policy $\pi$, which takes in robot sensory information $x(t)$, and an environment factor encoder $\mu$, which takes in privileged information $e(t)$. SMA differs in that $\mu$ produces a neuromodulatory learning signal (in our model, $m_+(t)$ and $m_-(t)$) that dictates the degree to which connections are updated:

$$m_+(t),\; m_-(t) = \mu(e(t)), \tag{11}$$
$$a(t) = \pi\big(x(t),\, a(t-1),\, \mathbf{W}(t),\, m_+(t),\, m_-(t)\big). \tag{12}$$

That is, instead of a time-varying adaptive signal $z(t)$, $\mu$ produces the two modulatory signals $m_+(t)$ and $m_-(t)$, and instead of $z(t)$ entering $\pi$ as input there is a time-dependent weight parameter $\mathbf{W}(t)$, updated by

$$W_{i,j}^{(l)}(t+\Delta\tau) = W_{i,j}^{(l)}(t) + \eta(t)\, \Delta_W(t), \tag{13}$$
$$\Delta_W(t) = m_{+,i}(t)\, E_{+,i,j}^{(l)}(t) + m_{-,j}(t)\, E_{-,i,j}^{(l)}(t). \tag{14}$$

We note that, unlike in Equation 8, there are two eligibility traces, one for the LTP dynamics ($E_{+,i,j}^{(l)}$) and one for the LTD dynamics ($E_{-,i,j}^{(l)}$), and two modulatory signals ($m_+(t)$ and $m_-(t)$), one for each eligibility trace. Equations 13 and 14 update the weights of $\pi$ using the modulatory dynamics produced by the environment factor encoder: instead of determining how privileged information can best inform the network at the sensory level, as traditional MA algorithms do, SMA determines how privileged information can best update the synaptic weights of the base policy $\pi$. This is possible because recent work enabled the dynamics of the three-factor learning rule in spiking neural networks to be differentiated through [20,26]; the policy-gradient loss is thus backpropagated through the plasticity dynamics to optimize the modulatory signals $m_+(t), m_-(t)$ given privileged information $e(t)$. The modulatory dynamics are in turn approximated by an environment factor estimator $\phi$, so that the learned adaptive dynamics can be used without privileged information.

Once the weight delta $\Delta_W(t)$ has been computed from the eligibility and modulatory trace dynamics, it is multiplied by the time-varying term $\eta(t) = \exp(1/t) - 1$ before being incorporated into the synaptic weights (Eq. 13). We refer to $\eta(t)$ as the stabilization variable; it decays exponentially to zero as $t \to \infty$, stabilizing the weight dynamics as the quadruped adapts to its environmental conditions. We found that without this term the weight dynamics are unstable over time horizons longer than those used in training, whereas with it, sustained control of the quadruped is maintained over long horizons.

[Table 1: Simulation testing results. Each entry is defined as $\sum_i R_i \cdot P_i$, where $R_i$ is the total reward of a rollout and $P_i$ is the probability of the domain-randomization sample; the $P_i$ are sampled at discrete intervals along the listed noise ranges. The SMA and RMA experts are the SMA and RMA models provided with privileged information $e(t)$ together with their extrinsics encoding module $\mu(t)$; the non-experts use the approximations.]
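In the same style as the earlier three-factor sketch, the SMA update (Eqs. 13-14) with the stabilization variable reads as follows; the trace shapes and broadcasting are illustrative, and t is assumed to start at 1.

```python
import numpy as np

def sma_update(W, E_plus, E_minus, m_plus, m_minus, t):
    """SMA weight update (Eqs. 13-14). W, E_plus, E_minus: (n_pre, n_post);
    m_plus: (n_pre,) LTP modulator; m_minus: (n_post,) LTD modulator; t >= 1."""
    eta = np.exp(1.0 / t) - 1.0                    # stabilization term, -> 0 as t grows
    delta = m_plus[:, None] * E_plus + m_minus[None, :] * E_minus   # Eq. 14
    return W + eta * delta                         # Eq. 13
```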
We see that Equations 13 and 14 update the weights of π using the modulatory dynamics m_+(t) and m_-(t) produced by the environment factor encoder. That is to say, instead of determining how privileged information can best inform the network at the sensory level, as traditional MA algorithms do, SMA determines how to utilize privileged information to best update the synaptic weights of the base policy π. This is possible because recent work made the dynamics of the three-factor learning rule differentiable in spiking neural networks [20, 26]; thus, the policy-gradient loss is backpropagated through the plasticity dynamics to optimize the modulatory signals m_+(t) and m_-(t) given privileged information e(t). Furthermore, the modulatory signal dynamics m_+(t) and m_-(t) are approximated by an environment factor estimator ϕ, enabling the learned adaptive dynamics to be utilized without privileged information.

Once the weight delta ∆W(t) has been computed via the eligibility and modulatory trace dynamics, it is multiplied by a time-varying term α(t) = exp(1/t) - 1 before being incorporated into the synaptic weights (Equation 13). We refer to this term as the stabilization variable; it decays to zero as t → ∞ in order to stabilize the weight dynamics as the quadruped adapts to its environmental conditions. We found that, without this term, the weight dynamics are unstable across time horizons greater than those the network was trained for; with the addition of the stabilization term, sustained control of the quadruped can be maintained over long time horizons.

Experimental setup

Parallel reinforcement learning

We use a modified implementation of the Proximal Policy Optimization (PPO) algorithm [36], specifically designed for massively parallelized reinforcement learning on the GPU [7]. This algorithm allows learning from thousands of robots in parallel with minimal algorithmic adjustments. The batch size, B = n_steps · n_robots, is a critical hyperparameter for successful learning in on-policy algorithms such as PPO: if the batch size is too small, the algorithm will not learn effectively, while if it is too large, the samples become repetitive, leading to wasted simulation time and slower training. To optimize training times, a small n_steps must be chosen, where n_steps is the number of steps each robot takes per policy update and n_robots is the number of robots simulated in parallel. The algorithm requires trajectories with coherent temporal information to learn effectively, and Generalized Advantage Estimation (GAE) [37] requires rewards from multiple time steps to be effective. In previous work [7], a minimum of 25 consecutive steps, or 0.5 s of simulated time, was demonstrated to be sufficient for the algorithm to converge effectively, and it was shown that mini-batches of tens of thousands of samples can stabilize the learning process without increasing the total training time for massively parallel use cases.

During the training of the PPO algorithm, robots need to be reset periodically to encourage exploration of new trajectories and terrains. However, resets based on time-outs can lead to inferior critic performance if not handled carefully: these resets break the infinite-horizon assumption made by the critic, which predicts an infinite-horizon sum of future discounted rewards. To address this issue, as in [7], the environment interface is modified to detect time-outs and implement a bootstrapping solution that maintains the infinite-horizon assumption.
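Below is a compact sketch of the stabilized update in Equations 13 and 14, with the stabilization term as defined in the text; the per-axis indexing of m_+ (pre-synaptic) and m_- (post-synaptic) follows Equation 14, while the array shapes are assumptions:

```python
import numpy as np

def stabilization(t):
    # alpha(t) = exp(1/t) - 1 behaves like 1/t for large t: it starts
    # near e - 1 at t = 1 and falls toward zero, freezing the weights.
    return np.exp(1.0 / t) - 1.0

def sma_weight_update(W, E_plus, E_minus, m_plus, m_minus, t):
    """Eq. 14: separate LTP/LTD eligibility traces, each gated by its own
    modulator; Eq. 13: the delta is scaled by the stabilization term."""
    delta_W = m_plus[:, None] * E_plus + m_minus[None, :] * E_minus
    return W + stabilization(t) * delta_W
```

At t = 1 the factor is e - 1 ≈ 1.72, permitting rapid early adaptation; by t = 1000 it has decayed to roughly 0.001.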
This approach mitigates the negative impact of resets on critic performance and overall learning, as demonstrated through its effect on the total reward and critic loss. The handling of resets must also take into account the temporal dynamics of the synaptic state variables, e.g., the eligibility and synaptic traces. When working with PPO, which iteratively recalculates log-probabilities from old data, this is not trivial. The challenge lies in the PPO algorithm's non-temporal treatment of data: minibatches typically sample states at arbitrary points in time. This is a problem because, for a policy whose dynamics change across time (e.g., recurrent or plastic networks), unlike for non-temporal ANNs, the dynamic equations that led to an action a(t) at time t depend on all timesteps 0 ≤ τ < t. Thus, to recompute a(t), PPO must be modified to use rollout minibatches: instead of randomly sampling points in time for evaluation, entire robot trajectories are randomly sampled and the dynamic equations (e.g., the synaptic weights) are rolled out in time. In other words, since B = n_steps · n_robots, standard minibatches sample along n_steps; because of the temporal dynamics, the entire n_steps must be rolled out, so we instead sample along n_robots.

Observations, actions, and noise

The policy observes base linear and angular velocities, a measurement of the gravity vector, velocity commands, joint positions and velocities, and the previous actions taken by the policy. Each of these values is scaled by a constant factor (see Appendix). Additionally, random noise sampled from uniform distributions is added to the sensor readings (e.g., base angular velocities: ±0.2 rad/s). Observation noise is added to account for the inherent variability of the environment, such as sensor noise and measurement errors. Introducing noise to the observations helps the policy learn to be robust to variations, improves its ability to generalize to new situations, and otherwise better benchmarks the adaptivity of the controller.

The action a(t) taken by the policy π is a desired joint position, which is sent to a PD controller that computes torques for the robot's joints via the following equation:

τ(t) = K_p (c_a a(t) + q_0 - q(t)) - K_d q̇(t),   (15)

where τ(t) is the torque output at time t; q(t) and q̇(t) are the current joint position and velocity, respectively; q_0 is the default joint position; c_a a(t) is the scaled action at time t, with c_a being the action scale factor; and K_p and K_d are the PD gains, which are tuned by the experimenter as hyperparameters. For our experiments we chose K_p = 20 and K_d = 0.5. Actions are further scaled by a constant 0.25 to account for a physics-simulator decimation of four (simulation updates per policy update). Directly outputting torques is also an option, but we found that outputting a target position for a PD controller provides quicker learning and smoother gaits.
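The PD conversion of Equation 15 is straightforward; the gains and action scale below follow the values stated in the text, while the array shapes are assumptions:

```python
import numpy as np

KP, KD = 20.0, 0.5      # PD gains chosen in our experiments
ACTION_SCALE = 0.25     # accounts for a simulator decimation of four

def pd_torques(action, q, q_dot, q_default):
    """Eq. 15: convert a desired-joint-position action into joint torques."""
    q_target = ACTION_SCALE * action + q_default   # c_a * a(t) + q_0
    return KP * (q_target - q) - KD * q_dot
```

In simulation this runs once per physics step, i.e., four times per policy action given the decimation of four.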
Reward Terms

The reward function reinforces the robot to follow a velocity command along the x, y, and angular (ω) axes and penalizes inefficient and unnatural motions. The total reward is a weighted sum of the terms detailed below. To create smoother motions, we penalize joint torques, joint accelerations, joint target changes, and collisions. Additionally, there is a reward term to encourage taking longer steps, which produces a more visually appealing set of behaviors.

1. Tracking forward velocity: φ(v*_{b,xyz} - v_{b,xyz})
2. Tracking angular velocity: φ(ω*_{b,z} - ω_{b,z})
3. Angular velocity penalty: -‖ω*_{b,xy}‖²
4. Torque penalty: -‖τ_j‖²
5. DOF acceleration penalty: -‖q̈_j‖²
6. Action rate penalty: -‖q̇*_j‖²
7. Collision penalty: -n_collisions
8. Feet air time: Σ_f (t_{air,f} - 0.5), summed over the four feet

In these equations we define φ(x) = exp(-‖x‖² / 0.25). The values v*_{b,xyz}, ω*_{b,z}, and ω*_{b,xy} carry a superscript * to indicate that they are target commands; the corresponding non-superscripted values are the measured values. Finally, the total sum of reward terms, r(t) = Σ_i r_i(t), at each timestep t is clipped to be positive. This requires more careful reward tuning at initialization but prevents the robot from finding self-terminating solutions.

Pre-training an SNN

Instead of training the SMA network entirely from scratch, which takes significant compute resources, we initially train a non-plastic SNN, without any noise in the simulation, to act as the foundation. Once this network is fully trained, plasticity is added to the third layer of the policy network and noise is added to the simulator, after which the network is optimized as outlined in the section Synaptic Motor Adaptation.

Results and analysis

We report performance measurements for several models: a non-plastic SNN (fixed weights), a plastic SNN without SMA, RMA without adaptation, RMA with adaptation, and SMA. Additionally, the performance of the RMA and SMA experts is recorded, an expert being defined as the motor adaptation algorithm provided with exact extrinsics information as defined in [21] rather than its embedding approximation (see Equation 3). The performance measurements in Table 1 are defined as Σ_i R_i · P_i, where R_i is the total sum of rewards for a rollout and P_i is the probability of the domain randomization sample; the P_i are sampled at discrete intervals along the noise ranges listed at the bottom of Table 1.

Adaptation to noise

The discrepancies between the physics simulator and real hardware are what make it difficult to translate models trained in simulation to real robots. Recent ideas in robotic learning have led to the belief that adding significant "domain noise", i.e., noise added to the environment during training to change the physical dynamics (e.g., contact dynamics and friction), could prevent simulation overfitting and lead to a policy that can provide control in a variety of physical conditions. However, these methods tend to produce robust policies with unsophisticated and jerky movements. Thus, recent efforts have gone toward developing policies that adapt to domain noise, fine-tuning their control with respect to the noise instead of simply becoming robust to all forms of it.

Both the motor gain and the P-gain were areas in which the SMA policy demonstrated improved adaptation compared with the RMA policy; for the D-gain and friction noise, SMA did not outperform RMA but was close in performance. Compared with the three non-MA algorithms, the MA algorithms show clear performance benefits. However, between RMA and SMA the performance difference is relatively small, even on the tasks where SMA obtains higher performance. This could suggest one of two things: (1) RMA and SMA are approaching an upper bound on adaptivity given the defined degree of noise, or (2) the two algorithms happen to reach similar performance and there is more progress to be made. More experimentation is needed to determine this.
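The noise-weighted score used in Table 1 can be computed as in the following sketch; the rollout callable and the discretized noise grid are placeholders for the actual evaluation loop:

```python
import numpy as np

def noise_weighted_score(rollout_reward, noise_levels, noise_probs):
    """Table 1 metric: sum_i R_i * P_i, where R_i is the total rollout
    reward at the i-th domain-randomization sample and P_i is its
    probability mass (the P_i should sum to 1)."""
    rewards = np.array([rollout_reward(z) for z in noise_levels])
    return float(np.dot(rewards, noise_probs))

# Example with a uniform grid over a hypothetical motor-gain noise range:
levels = np.linspace(0.7, 1.3, 7)
probs = np.full(len(levels), 1.0 / len(levels))
# score = noise_weighted_score(run_episode, levels, probs)
```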
Adaptation to terrain

The ability to adapt to novel forms of terrain outside the scope of training is a crucial capability for legged robots. This is because the complexity of the real world cannot adequately be captured by simulated environments; thus, in addition to model-derived forms of noise, adaptation to terrain must be demonstrated for a motor adaptation algorithm to be useful in the real world. To introduce rough terrain in our work, we used Perlin fractal noise. Perlin noise generates natural-looking noise by defining the slope of the noise function at regular intervals, creating peaks and valleys at irregular intervals, instead of defining the value of the noise function at regular intervals. Perlin fractal noise is a type of Perlin noise that uses multiple octaves (layers) of noise to create a more complex and varied pattern. Each octave is a version of the Perlin noise function with a different frequency and amplitude, and the outputs of the octaves are combined to create the final noise pattern. By adjusting the frequency, amplitude, and number of octaves, the resulting noise can range from smooth and gentle to rough and jagged, making it useful for generating natural-looking textures and terrain. The parameters for the fractal noise are as follows: number of octaves = 2, fractal lacunarity = 2.5, fractal gain = 1.5, fractal amplitude = 1, vertical scale = 0.35, and horizontal scale = 0.08.

As demonstrated in Figure 2, robots are trained to produce locomotion entirely on flat terrain. Unlike the analysis of adaptation to noise (e.g., motor strength noise), terrain noise is not explicitly encountered during training. Adaptation to rough terrain was among the least transferable skills for the algorithms without motor adaptation, with the non-adaptive SNN, the plastic SNN, and RMA without adaptation failing to demonstrate clear generalization to the rough-terrain domain. However, both the RMA- and SMA-trained robots were able to walk across rough terrain (without falling) despite being trained entirely on flat terrain. Interestingly, this is in spite of terrain and foothold data not being provided to the MA algorithms as privileged information.
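A terrain heightfield of the kind described above can be generated roughly as follows; this sketch assumes the open-source `noise` package for the base Perlin function and an arbitrary grid resolution, while the fractal parameters mirror those listed in the text:

```python
import numpy as np
import noise  # pip install noise

def fractal_heightfield(rows, cols,
                        horizontal_scale=0.08, vertical_scale=0.35,
                        octaves=2, lacunarity=2.5, gain=1.5, amplitude=1.0):
    """Perlin fractal noise terrain: octaves of Perlin noise are summed,
    with per-octave frequency scaled by lacunarity and amplitude by gain."""
    heights = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            x, y = i * horizontal_scale, j * horizontal_scale
            h, amp, freq = 0.0, amplitude, 1.0
            for _ in range(octaves):
                h += amp * noise.pnoise2(x * freq, y * freq)
                freq *= lacunarity
                amp *= gain
            heights[i, j] = h * vertical_scale
    return heights
```

With lacunarity 2.5 and gain 1.5, the second octave is both higher-frequency and higher-amplitude than the first, which produces jagged detail on top of the broad undulations.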
Discussion

We presented the SMA algorithm for real-time adaptation of a quadrupedal robot to changes in motor strength, P and D gains, friction coefficients, and rough terrain using three-factor learning. This algorithm was compared to the state-of-the-art motor adaptation algorithm RMA [21] and was demonstrated to perform similarly or better on motor control problems that require real-time adaptation. While the adaptation improvements are relatively modest compared to the RMA algorithm, we expect further improvements from dynamically richer plasticity rules (e.g., triplet, voltage-based), neuron models (e.g., adaptive, resonate-and-fire), propagation delays, surrogate gradient techniques, and modulatory dynamics. Another potential direction is the development of methods for synaptogenesis, such that the network connectivity mapping is learned along with the values of the weights. Previous methods have incorporated synaptogenesis through genetic algorithms [38], neural cellular automata [39, 40, 41], and online random mutations [42]; a solution utilizing backpropagation has yet to be developed. Further work aims to enable learning completely novel behaviors (e.g., vaulting) purely through meta-optimized three-factor learning.

There are two clear directions that this work intends to build toward: (1) using three-factor learning to transfer from simulation to real hardware, and (2) demonstrating this algorithm on neuromorphic hardware. While the path toward transferring from simulation to hardware is clear, further advances in the optimization of plasticity rules are required before current neuromorphic systems can be utilized. This is because many current neuromorphic systems (1) have propagation delays, which are not incorporated into the plasticity dynamics of this work, and (2) are heavily numerically quantized, whereas this work was built on floating-point math. While much of the work toward differentiating through these dynamics has already been solved [20], meta-optimizing three-factor learning rules through these dynamics is a less explored direction (see [27]).

A primary limitation of our approach lies in the addition of the stabilization term α(t) in Equation 13. This stabilization term allows for rapid weight modifications at the beginning of the episode, as the quadruped learns from interacting with the environment, and then decays its effect over time to consolidate the weights. This decay is important for long-term adaptation, particularly for an additive pair-based STDP rule, which is not temporally stable [43]. While the neuromodulatory dynamics are capable of modulating these changes, we found that without the stabilization term the weights still tend to diverge into bimodal distributions. This is potentially an effect of truncating the gradient in time, which does not allow proper credit assignment for temporally distant modifications. Future work will aim to determine, within the meta-optimization dynamics, when weight modifications should occur and which synapses should be modified. Overall, this work introduces an exciting path toward rapid adaptation on robotic systems using neuroscience-derived models of three-factor learning, and we hope it inspires further applications of three-factor learning on robotic systems.

Figure 1: Graphical description of three-factor learning. (Left) Long-term potentiation and depression based on pre- and post-synaptic spike timings. (Middle) Membrane potential dynamics of the post-synaptic neuron. (Right) Neuromodulator quantity affects the growth and consolidation of the synaptic weights.

Figure 2: Overview of the synaptic motor adaptation algorithm. (Left) Privileged and sensory information vectors. (Middle) Training phase 1 consists of reinforcement learning with privileged sensing using the neuromodulatory extrinsics embedding µ(t) and the actor π(t); phase 2 consists of approximating the dynamics of the neuromodulatory extrinsics embedding. (Right) Description of the network structures of each model used. (Right bottom) Image of the robot on rough terrain, which is not encountered during the training period.

Appendix

The motor adaptation algorithm presented in the section Motor Adaptation Algorithms was Rapid Motor Adaptation (RMA) [21]. Improvements to this algorithm were realized in follow-on work [25] through the insight that there is an information gap between the full state available to the environment factor encoder µ and the environment factor estimator ϕ.
Due to this, the factor encoder may generate an embedding that is impossible for the estimator to predict from its available information, and hence there is a regression gap. To overcome this, a penalty can be added to the RMA training objective; the resulting algorithm is known as Regularized Online Adaptation (ROA). Here, the encoder and decoder are trained jointly: the adaptation module is trained by imitating z_µ online, while z_µ is regularized to avoid large deviations from the embedding estimate z_ϕ. Future advancements in SMA should include this regularization to learn a signal that can be better represented by local information.

Hardware Resources

References

[1] Raibert, M. H. Legged Robots That Balance (MIT Press, 1986).
[2] Raibert, M., Blankespoor, K., Nelson, G. & Playter, R. BigDog, the rough-terrain quadruped robot. IFAC Proceedings Volumes 41, 10822-10825 (2008).
[3] Feng, S., Whitman, E., Xinjilefu, X. & Atkeson, C. G. Optimization based full body control for the Atlas robot. In 2014 IEEE-RAS International Conference on Humanoid Robots, 120-127 (IEEE, 2014).
[4] Kuindersma, S. et al. Optimization-based locomotion planning, estimation, and control design for the Atlas humanoid robot. Autonomous Robots 40, 429-455 (2016).
[5] Yang, Y. et al. Data efficient reinforcement learning for legged robots. In Conference on Robot Learning, 1-10 (PMLR, 2020).
[6] Lee, J., Hwangbo, J., Wellhausen, L., Koltun, V. & Hutter, M. Learning quadrupedal locomotion over challenging terrain. Science Robotics 5, eabc5986 (2020).
[7] Rudin, N., Hoeller, D., Reist, P. & Hutter, M. Learning to walk in minutes using massively parallel deep reinforcement learning. In Conference on Robot Learning, 91-100 (PMLR, 2022).
[8] Höfer, S. et al. Perspectives on sim2real transfer for robotics: A summary of the R:SS 2020 workshop. arXiv preprint arXiv:2012.03806 (2020).
[9] Painkras, E. et al. SpiNNaker: A 1-W 18-core system-on-chip for massively-parallel neural network simulation. IEEE Journal of Solid-State Circuits 48, 1943-1953 (2013).
[10] Esser, S. K. et al. Convolutional networks for fast, energy-efficient neuromorphic computing. CoRR abs/1603.08270 (2016). URL http://arxiv.org/abs/1603.08270.
[11] Davies, M. et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro 38, 82-99 (2018).
[12] Pehle, C. et al. The BrainScaleS-2 accelerated neuromorphic system with hybrid plasticity. Frontiers in Neuroscience 16 (2022).
[13] Jin, X., Rast, A., Galluppi, F., Davies, S. & Furber, S. Implementing spike-timing-dependent plasticity on SpiNNaker neuromorphic hardware. In The 2010 International Joint Conference on Neural Networks (IJCNN), 1-8 (IEEE, 2010).
[14] Vertechi, P., Brendel, W. & Machens, C. K. Unsupervised learning of an efficient short-term memory network. Advances in Neural Information Processing Systems 27 (2014).
[15] Kaiser, J., Mostafa, H. & Neftci, E. Synaptic plasticity dynamics for deep continuous local learning (DECOLLE). Frontiers in Neuroscience 14, 424 (2020).
[16] Wu, Y. et al. Brain-inspired global-local learning incorporated with neuromorphic computing. Nature Communications 13, 65 (2022).
[17] Frémaux, N. & Gerstner, W. Neuromodulated spike-timing-dependent plasticity, and theory of three-factor learning rules. Frontiers in Neural Circuits 9, 85 (2016).
[18] Gerstner, W., Lehmann, M., Liakoni, V., Corneil, D. & Brea, J. Eligibility traces and plasticity on behavioral time scales: experimental support of neoHebbian three-factor learning rules. Frontiers in Neural Circuits 12, 53 (2018).
[19] Bellec, G. et al. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications 11, 3625 (2020).
[20] Schmidgall, S., Ashkanazy, J., Lawson, W. & Hays, J. SpikePropamine: Differentiable plasticity in spiking neural networks. Frontiers in Neurorobotics, 120 (2021).
[21] Kumar, A., Fu, Z., Pathak, D. & Malik, J. RMA: Rapid motor adaptation for legged robots. arXiv preprint arXiv:2107.04034 (2021).
[22] Kumar, A. et al. Adapting rapid motor adaptation for bipedal robots. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1161-1168 (IEEE, 2022).
[23] Agarwal, A., Kumar, A., Malik, J. & Pathak, D. Legged locomotion in challenging terrains using egocentric vision. In Conference on Robot Learning, 403-415 (PMLR, 2023).
[24] Qi, H., Kumar, A., Calandra, R., Ma, Y. & Malik, J. In-hand object rotation via rapid motor adaptation. In Conference on Robot Learning, 1722-1732 (PMLR, 2023).
[25] Fu, Z., Cheng, X. & Pathak, D. Deep whole-body control: learning a unified policy for manipulation and locomotion. In Conference on Robot Learning, 138-149 (PMLR, 2023).
[26] Schmidgall, S. & Hays, J. Learning to learn online with neuromodulated synaptic plasticity in spiking neural networks. bioRxiv 2022-06 (2022).
[27] Schmidgall, S. & Hays, J. Meta-SpikePropamine: Learning to learn with synaptic plasticity in spiking neural networks. Frontiers in Neuroscience (2023).
[28] Citri, A. & Malenka, R. C. Synaptic plasticity: multiple forms, functions, and mechanisms. Neuropsychopharmacology 33, 18-41 (2008).
[29] Abraham, W. C., Jones, O. D. & Glanzman, D. L. Is plasticity of synapses the mechanism of long-term memory storage? npj Science of Learning 4, 9 (2019).
[30] Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by back-propagating errors. Nature 323, 533-536 (1986).
[31] Lillicrap, T. P., Santoro, A., Marris, L., Akerman, C. J. & Hinton, G. Backpropagation and the brain. Nature Reviews Neuroscience 21, 335-346 (2020).
[32] Caporale, N. & Dan, Y. Spike timing-dependent plasticity: a Hebbian learning rule. Annual Review of Neuroscience 31, 25-46 (2008).
[33] Gerstner, W., Kistler, W. M., Naud, R. & Paninski, L. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition (Cambridge University Press, 2014).
[34] Bellec, G. et al. Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets. arXiv preprint arXiv:1901.09049 (2019).
[35] Aitchison, L. et al. Synaptic plasticity as Bayesian inference. Nature Neuroscience 24, 565-571 (2021).
[36] Schulman, J., Wolski, F., Dhariwal, P., Radford, A. & Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017).
[37] Schulman, J., Moritz, P., Levine, S., Jordan, M. & Abbeel, P. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438 (2015).
[38] Manngård, M., Kronqvist, J. & Böling, J. M. Structural learning in artificial neural networks using sparse optimization. Neurocomputing 272, 660-667 (2018).
[39] Mordvintsev, A., Randazzo, E., Niklasson, E. & Levin, M. Growing neural cellular automata. Distill 5, e23 (2020).
[40] Najarro, E., Sudhakaran, S., Glanois, C. & Risi, S. HyperNCA: Growing developmental networks with neural cellular automata. arXiv preprint arXiv:2204.11674 (2022).
[41] Gilpin, W. Cellular automata as convolutional neural networks. Physical Review E 100, 032402 (2019).
[42] Schmidgall, S. Self-constructing neural networks through random mutation. arXiv preprint arXiv:2103.15692 (2021).
[43] Kepecs, A., van Rossum, M. C., Song, S. & Tegner, J. Spike-timing-dependent plasticity: common themes and divergent vistas. Biological Cybernetics 87, 446-458 (2002).

Training Hyperparameters

Base policy hyperparameters (PPO)
GAE discount factor: 0.95
PPO epochs: 5
PPO clip: 0.2
Minibatches: 4
Initial learning rate: 1e-3
Learning rate decay: 0.999
Steps: 2000

Neuron, network, and plasticity hyperparameters
LIF time constant: exp(-1/1)
Initial STDP trace constant: exp(-1/10)
Plastic weights update scale: 1e-3
Network hidden dimensions: 512, 128, 64
LIF firing threshold: 1.0
Weight initialization ranges:
Initial trace learning rate: lr_{i,j} ~ U(0, 1) (lr scaled by 1e-3)

SMA policy hyperparameters
Batch size: 61440 (30 × 2048)
Entropy coefficient: 0.005
GAE discount factor: 0.95
Initial learning rate: 3e-4
Gradient steps: 5000
BPTT truncation window: 30
Synaptic trace penalty: 1e-2

ROA policy hyperparameters (A2C)
Batch size: 61440 (30 × 2048)
Entropy coefficient: 0.005
GAE discount factor: 0.95
Initial learning rate: 3e-4
Learning rate decay: 0.999
Gradient steps: 5000
ROA state history length: 10

Action and observation scaling
Joint positions: 1.0 rad
Joint velocities: 0.05 rad/s
Base linear velocities: 2.0 m/s
Base angular velocities: 0.25 rad/s
Observations are clipped to [-100, 100]. Similarly, action torques are clipped to the torque limits defined by the robot manufacturer.

Reward scaling
Tracking forward velocity: 1.0
Tracking angular velocity: 0.5
Angular velocity penalty: -0.05
Torque penalty: -0.0002
DOF acceleration: -2.5e-7
Action rate penalty: -0.01
Collision: -1.0
Feet air time: 1.0
DEK-Forecaster: A Novel Deep Learning Model Integrated with EMD-KNN for Traffic Prediction

Sajal Saha, Sudipto Baral, and Anwar Haque
Department of Computer Science, University of Western Ontario, London, ON, Canada
arXiv:2306.03412 (https://export.arxiv.org/pdf/2306.03412v1.pdf)

Abstract: Internet traffic volume estimation has a significant impact on the business policies of the ISP (Internet Service Provider) industry and on business successions. Forecasting internet traffic demand helps to shed light on future traffic trends, which is often helpful for ISPs' decision-making in network planning activities and investments. Besides, the capability to understand future trends contributes to managing regular and long-term operations. This study aims to predict network traffic volume demand using deep sequence methods that incorporate Empirical Mode Decomposition (EMD) based noise reduction, empirical-rule-based outlier detection, and K-Nearest Neighbour (KNN) based outlier mitigation. In contrast to former studies, the proposed model does not rely on a particular EMD-decomposed component, called an Intrinsic Mode Function (IMF), for signal denoising. In our proposed traffic prediction model, we use the average of all IMF components for signal denoising. Moreover, abnormal data points are replaced by the average of the K nearest data points, and the value of K is optimized based on the KNN regressor's prediction error measured in Root Mean Squared Error (RMSE). Finally, we select the best time-lagged feature subset for our prediction model based on the AutoRegressive Integrated Moving Average (ARIMA) model and the Akaike Information Criterion (AIC) value. Our experiments are conducted on real-world internet traffic datasets from industry, and the proposed method is compared with various traditional deep sequence baseline models. Our results show that the proposed EMD-KNN integrated prediction models outperform comparative models.
Index Terms: deep learning, internet traffic, noise reduction, outlier detection, traffic forecast

I. INTRODUCTION

With the growth and widespread use of the Internet and associated digital services and applications, there are an increasing number of Internet business models, growing network traffic volumes, and new network management challenges. These developments present new difficulties in guaranteeing network service quality, enhancing network load capacity, fully utilizing network resources, and enhancing user experience [1]. Network traffic is perhaps the most basic statistic used to estimate the operational health of each access point and serves as a fundamental performance indicator for the system. Numerous new difficulties, including poor network throughput and challenging network monitoring, have been brought on by the network's complexity [2]. An essential aspect of network management is the amount of network activity that network equipment experiences over time. If the traffic pattern of network components can be forecasted, the network management strategy can be proactively altered to maximize network resources [3]. Moreover, traffic forecasting can significantly improve the overall customer experience and networking quality of service (QoS).
The increasing reliance on digital communication and online services has led to massive growth in Internet traffic, which has created significant challenges for Internet Service Providers (ISPs) in managing their networks. According to projections in the study by Cisco [4], the total number of internet users worldwide is expected to increase from 3.9 billion in 2018 to 5.3 billion by 2023, which corresponds to a compound annual growth rate (CAGR) of 6%. This means that while in 2018 about 51% of the world's population was using the internet, by 2023 an estimated 66% of the global population will have access to it. Although the growth of internet users is a worldwide phenomenon, there are regional disparities, as indicated in Fig. 1. While North America (and subsequently Western Europe) is projected to have the highest adoption rate throughout the forecast period, the Middle East and Africa are expected to experience the fastest growth, with a projected CAGR of 10% from 2018 to 2023. Therefore, accurate forecasting of internet traffic volume is essential for ISPs to make informed decisions in network planning activities and investments. However, existing forecasting models often suffer from limitations, such as insufficient accuracy or the inability to handle the complexity and variability of internet traffic patterns. Previous research has focused on various techniques for predicting internet traffic volumes, such as statistical models [5], machine learning algorithms [6], and deep learning models [7]. Despite these efforts, there is still a need for more accurate and reliable forecasting methods that can adapt to the dynamic nature of internet traffic. This study aims to address this gap by developing a novel forecasting model that combines Empirical Mode Decomposition (EMD) and K-Nearest Neighbors (KNN) methods with deep learning techniques to achieve higher accuracy and robustness in predicting internet traffic volume.

This study is significant because it addresses a critical problem faced by ISPs, who need to forecast internet traffic accurately. By improving the accuracy of internet traffic forecasting, ISPs can optimize their network performance, reduce operational costs, and better meet customer demands. Additionally, this study contributes to the fields of deep learning and signal processing by proposing a novel approach that integrates EMD-KNN methods with deep sequence models. Our results show that the proposed model outperforms traditional statistical and deep sequence baseline models. The potential implications of this research are far-reaching: it can inform the development of more accurate and reliable internet traffic forecasting models for ISPs, and the proposed approach can be applied to other related fields, such as predicting energy demand, traffic flow, and financial market trends. Overall, this study advances our understanding of the research problem and provides a valuable contribution to the fields of deep learning and signal processing.

As a unique time series, network traffic reflects the interaction and influence between network services through complex features like nonlinearity, fractality, bursts, disorder, and heterogeneity [8]. In this work, we propose a traffic prediction methodology integrating empirical mode decomposition (EMD) based denoising and empirical-rule-based outlier detection. There are several works in which EMD-based hybrid models have been proposed for traffic prediction.
Most of them used EMD to decompose a signal into several components, where each component was modeled separately using either a linear or a non-linear model. This multiple-model prediction strategy is time-consuming, and selecting a suitable model for a particular component is non-trivial. In this study, we aim to predict network traffic volume demand using a single deep sequence model that incorporates Empirical Mode Decomposition (EMD) based noise reduction, empirical-rule-based outlier detection, and K-Nearest Neighbour (KNN) based outlier mitigation. We used real-world internet traffic datasets from industry for our experiments and compared the proposed EMD-KNN integrated prediction models with various statistical and traditional deep sequence baseline models.

Our methodology involved several steps. First, we performed EMD-based noise reduction on the dataset to remove any high-frequency noise that might affect the accuracy of our predictions. We then applied empirical-rule-based outlier detection to identify abnormal data points in the dataset. These data points were then replaced by the average of the K nearest data points using KNN-based outlier mitigation. To identify the best value of K, we performed a grid search based on the KNN regressor. We also used AutoRegressive Integrated Moving Average (ARIMA) models and Akaike Information Criterion (AIC) values to select the best time-lagged feature subset for our prediction model. Finally, we used deep learning techniques to develop our EMD-KNN Traffic Forecaster model, which combines EMD-KNN methods with a deep neural network to predict network traffic volume demand accurately. The main contributions of this work are as follows:

1) We introduce a novel method that combines Empirical Mode Decomposition (EMD) and K-Nearest Neighbors (KNN) with deep learning techniques to create a more accurate and robust internet traffic volume forecasting model. This model significantly improves upon the predictive performance of traditional deep sequence models.
2) We enhance the forecasting model with a unique EMD-based denoising mechanism and an empirical-rule-based outlier detection system. This introduces a more efficient way to handle the complexity and variability of internet traffic patterns.
3) We incorporate a KNN-based outlier mitigation strategy, which uses the average of the nearest K data points to replace identified outliers. This novel approach provides a more effective way to manage and predict network traffic volume demand.
4) We conduct comprehensive testing and benchmarking of the proposed model using real-world traffic data collected from an ISP provider. The comparative analysis demonstrates the superior performance of our model against other existing state-of-the-art methods.
5) We make a significant contribution to the field of deep learning and signal processing by proposing a unique approach to internet traffic volume prediction. The proposed model can be adapted for predicting a wide range of time-series trends in fields such as energy, traffic flow, and finance.

This paper is organized as follows. Section II reviews the literature on current traffic prediction using machine learning models. Section III presents our proposed methodology. Sections IV and V summarize the experimental configuration and discuss the results of the comparative analysis, respectively. Finally, Section VI concludes our paper and sheds light on future research directions.
II. LITERATURE REVIEW

Researchers have recently investigated and proposed various methods for predicting internet traffic. Generally, network flow forecasts fall into three types: linear models, nonlinear models, and hybrid models. The foundation of a linear traffic forecasting model is fitting a polynomial that captures the trends in historical network traffic data, which is then used to forecast future values. To make the polynomials more closely match the actual internet traffic volume, these linear models must impose various parameter constraints. Linear models are extensively used for short-term forecasting and are comparatively fast. However, as real-world internet traffic is nonlinear, periodic, and random in character, linear models are inappropriate for medium- or long-term forecasting [9]. For example, the self-similar characteristics and long-range dependence (LRD) of real internet traffic pose challenges to accurately modeling it using AutoRegressive Moving Average (ARMA) methods [10]. Therefore, various nonlinear prediction models have been proposed to handle complex internet traffic prediction tasks. For example, the Fractional Autoregressive Integrated Moving Average (FARIMA) and Seasonal Autoregressive Integrated Moving Average (SARIMA) models possess the capability of capturing both short-range dependent (SRD) and LRD characteristics [11], [12].

All of the aforementioned linear and nonlinear models are individual prediction models, and every prediction model has flaws and issues of its own. As a result, numerous academics have suggested hybrid prediction models [13] in which two or more prediction methods are combined. These include linear models combined with nonlinear models, weighted mixture models, and decomposition-based combined models. For example, a new hybrid model that combines two different types of models, such as FARIMA or FARIMA/GARCH (Generalized Autoregressive Conditional Heteroskedasticity) and a neural network, has been proposed in [14]. It can handle both long-term and short-term correlations in network traffic, but the method is computationally inefficient. By incorporating FARIMA with an alpha-stable distribution, researchers have been able to predict WiFi traffic with high accuracy and provide Quality of Service (QoS) [15]. However, this approach is complex and falls short because it fails to reconcile conflicting model properties, and it does not perform well in the non-stationary cases that are prevalent in real-world network traffic [16].

Compared to the performance of statistical models on non-linear traffic data, neural-network-based nonlinear models are very popular for handling complex traffic data. For example, Echo State Networks (ESN) [17], Fuzzy Neural Networks [18], and Radial Basis Function Neural Networks [19] have been used for traffic prediction. The self-organization and self-learning capabilities of neural networks are strong, and they can effectively capture the nonlinear properties of data. The Echo State Network is a desirable option for network traffic prediction due to its robust nonlinear capabilities and efficient short-term memory capacity. Nonetheless, the generation of the ESN's reservoir is either random or prescribed, and once created, it remains fixed and cannot be adjusted [20]. As a result, the ESN with a fixed reservoir often exhibits suboptimal performance when dealing with diverse traffic data. To mitigate this limitation, researchers have proposed an ESN with an adaptive reservoir (ESN-AR) [17].
However, the use of a Generative Adversarial Network (GAN) to adjust the reservoir makes the overall process computationally complex. Although utilizing a loop reservoir in the ESN architecture reduces computational complexity [21], the accuracy of the ESN is contingent upon the size of the reservoir, as larger reservoirs may lead to overfitting, thereby significantly limiting the neural network's capacity. Researchers have proposed Fuzzy Neural Network (FNN) models with backpropagation (BP) to address dynamic mapping problems in internet traffic forecasting [22]. However, BP has certain limitations, such as slow learning speed and a tendency to become trapped in local minima [23]. Although FNN-based networks achieve greater accuracy, their high computing demand limits their scalability and simplicity, restricting their utility in production environments. Conventional Radial Basis Function (RBF) networks exhibit limitations such as slow convergence and the possibility of getting stuck in local optima due to their use of gradient descent for parameter optimization. Furthermore, this approach leads to slower search speeds and extensive computational requirements, necessitating substantial memory usage. Consequently, the obtained parameters may not be optimal, which has limited the application of traditional RBF networks to network traffic prediction. Because a neural network is a "black box", its basic principle lacks a rigorous mathematical justification and statistical coherence. Hyperparameters such as the number of input nodes, output nodes, network layers, and nodes per layer are set based on experience during training and lack a clear theoretical foundation.

Many internet traffic prediction methods based on deep learning have been developed recently due to advances in deep learning theory. Long short-term memory (LSTM) networks [29], deep belief networks (DBN), convolutional neural networks (CNN) [27], stacked autoencoders (SAE) [30], and recurrent neural networks (RNN) [31] are examples of these models. LSTM networks can retain information in their long-term memory, enabling them to identify patterns of indeterminate length. Furthermore, these networks circumvent the vanishing gradient problem that arises in RNNs. However, the stacked LSTM networks proposed for anomaly detection in time series [32] can be computationally intensive. Other studies have combined the firefly algorithm with LSTM, introducing the IFA-LSTM method to enhance accuracy [29]. Nonetheless, LSTM's learning speed is relatively sluggish when faced with a significant volume of network traffic data. Researchers have also examined the performance of the DBN across various artificial neural network architectures. While the DBN exhibits impressive results, the investigation reveals that its effectiveness is heavily reliant on selecting an optimal number of hidden neurons within each layer [28]. Studies observed that beyond a certain point there is no noticeable correlation between performance and the number of hidden layers, posing a challenge in identifying the ideal number of neurons. Coupled with the inherent complexity of deep neural networks, this makes the approach computationally intensive in real-world applications.
On the other hand, the growth of smart mobile devices and edge computing devices in the Internet of Things (IoT) has resulted in an ultra-dense network environment with dynamic network topology [33]. Consequently, classifying network flow data with high volume, velocity, variety, and veracity has become a challenge. To tackle this problem, researchers suggested a new approach that stacks multiple Bayesian auto-encoders to learn the complex relationships among multi-source network flows [34]. However, the asymmetry in network traffic can result in suboptimal classification accuracy when using this kind of deep-learning model.

Table 1: Comparative summary of existing traffic prediction works.
[22]: Noise reduction: Glowworm Swarm Algorithm; Decomposition: auto-correlation method with average displacement; Outlier handling: N/A; Prediction model: BP neural network; Findings: the Glowworm Swarm Algorithm resulted in higher accuracy than a standard backpropagation neural network (BPNN); Limitations: incorporating the autocorrelation technique along with the calculation of average displacement increases computational complexity.
[24]: Noise reduction: N/A; Decomposition: N/A; Outlier handling: N/A; Prediction model: threshold-based FARIMA; Findings: the Mean Squared Error (MSE) showed a 72% improvement compared to the conventional FARIMA method; Limitations: the effectiveness of filtering depends on the filter order, difference coefficient, and quantity of test data.
[25]: Noise reduction: Modified Ensemble Empirical Mode Decomposition (MEEMD); Decomposition: MEEMD; Outlier handling: N/A; Prediction model: Quantum Neural Network (QNN); Findings: better identification of long-range dependence (LRD) and short-range dependence (SRD); Limitations: the method's computing-resource requirements pose challenges for low-resource devices.
[26]: Noise reduction: N/A; Decomposition: autoencoders; Outlier handling: N/A; Prediction model: SARIMA and a hybrid of LSTM and autoencoder; Findings: achieved a 50% reduction in false positive rate; Limitations: testing was performed exclusively on a dataset following a particular distribution.
[27]: Noise reduction: Ensemble Empirical Mode Decomposition (EEMD); Decomposition: EEMD; Outlier handling: N/A; Prediction model: three-dimensional convolutional neural network (3D CNN); Findings: 3D CNN outperforms 2D CNN in extracting spatial-temporal features; Limitations: does not take medium-term or long-term traffic prediction into account.
[28]: Noise reduction: N/A; Decomposition: N/A; Outlier handling: N/A; Prediction model: Deep Belief Network (DBN); Findings: pretraining a DBN with unsupervised learning enhanced its performance; Limitations: although accurate hyperparameter selection is crucial for the desired outcome, no hyperparameter tuning method is suggested.

In recent years, deep learning models have emerged as a powerful approach for time-series prediction due to their ability to automatically learn complex features from the input data. However, deep learning models face challenges in handling the high dimensionality and non-stationarity of time-series data. To address these challenges, researchers have integrated signal decomposition techniques and optimization algorithms with deep learning models to improve their accuracy and efficiency. Signal decomposition techniques such as Fourier analysis, wavelet transforms, and singular spectrum analysis have been used to break the time-series signal down into its constituent parts, such as trend, seasonality, and noise. The features extracted by signal decomposition have then been used as input to deep learning models, leading to improved prediction accuracy.
For example, a hybrid model combining Modified Ensemble Empirical Mode Decomposition (MEEMD) and a Quantum Neural Network (QNN) has been proposed in [25], and it showed superior outcomes in capturing LRD and SRD characteristics while concurrently reducing computational complexity. Similarly, optimization algorithms such as stochastic gradient descent, Adam, and RMSprop have been used to optimize the parameters of deep learning models, leading to more accurate predictions. For example, researchers suggested various enhancements to the conventional RBF neural network through advanced algorithms such as the improved gravitation search algorithm [35] and the particle swarm optimization algorithm [36]. The Adaptive Quantum Particle Swarm Optimization Algorithm (AQPSO) has demonstrated improved results in [37]. Nevertheless, due to the excessive number of parameters involved in RBF optimization, the computation scale and training time increase considerably, impeding the convergence rate. As a result, the applicability of RBF networks to small networking devices with limited computational capacity is constrained.

According to the comparative summary of existing works in Table 1, we observed different types of prediction models. There remain open issues in choosing the prediction model and its parameters, reasonably fusing the prediction results, and selecting the right decomposition or optimization algorithm, despite the fact that many research results have shown that combined prediction models achieve better performance than single prediction models. If these issues cannot be adequately resolved, a hybrid prediction model may fail to outperform a single prediction model on performance metrics despite its increased computational complexity. Moreover, we noticed a lack of investigation of noise reduction and outlier mitigation in traffic prediction. Both noise reduction and the treatment of outlier data points are crucial pre-processing methods for time-series prediction, but they target different data imperfections: noise reduction techniques strive to eliminate random fluctuations in the time-series data, while outlier handling strategies try to find and fix data points that dramatically depart from the expected pattern. Which techniques should be used is determined by the specific attributes of the time-series data and the objectives of the prediction task. Therefore, in this work, we propose a novel traffic prediction framework called DEK-Forecaster, integrating a deep (D) sequence model with Empirical Mode Decomposition (E) based traffic denoising and K-Nearest Neighbour (K) based outlier mitigation, and perform a comprehensive experiment on a real-world traffic dataset.

III. OUR PROPOSED FORECASTING METHODOLOGY

This section briefly explains our proposed traffic prediction methodology, depicted in Fig. 2.

A. Empirical Mode Decomposition Based Noise Reduction

Empirical Mode Decomposition (EMD) is a technique to extract several components from a signal, under the assumption that every signal comprises sub-components. This approach is also known as the Hilbert-Huang transform (HHT) and is extensively used for time-frequency analysis of non-stationary and nonlinear time series data.
EMD decomposes an original signal into several zero-mean, quasi-periodic components called Intrinsic Mode Functions (IMFs), alongside a residual element representing the trend, as shown in Eq. (1), where each $h_i(t)$ stands for the $i$th IMF, $r(t)$ is the residual component, and $y(t)$ is the original value of the data:

$$y(t) = \sum_{i=1}^{n} h_i(t) + r(t) \qquad (1)$$

Real-world internet traffic has random, non-stationary characteristics influenced by various external and internal factors related to ISP companies. These factors can be categorized as geographic factors, economic factors, ISP new-service and service-decommission factors, weather, time, day, season, special events, etc. Due to these factors, ISP traffic is composed of many individual components, and EMD can be helpful for better analysis and forecasting of internet traffic.

Noise filtering, or signal denoising, is the process of removing noise from time-series data. Any time series may consist of three systematic elements, level, trend, and seasonality, and one non-systematic element, noise. A noise reduction approach should minimize the noise element in the time series for better learning and forecasting by the machine learning model. Among the different noise filtering approaches, EMD-based denoising has been applied extensively in different areas. Classic EMD-based denoising techniques choose a particular IMF component to eliminate noise elements from the signal, but there is no formal logic for selecting an IMF from the decomposed components. Moreover, choosing one IMF for denoising is difficult, as the number of IMFs depends on the original signal. Therefore, a new EMD-based noise filtering method was proposed in [38]. The authors showed that the average of all IMF components (avgIMF) is normally distributed; the avgIMF corresponds to the maximum signal noise level and has the most white-Gaussian-noise features of any IMF element. The relevant steps of Algorithm 1 are:

3. Compute avgIMF(t), the average of all IMF components
4. $y(t) = y(t) - \mathrm{avgIMF}(t)$
5. Calculate average traffic, $AVG_{y(t)} = \frac{1}{N}\sum_{i=0}^{N} y(i)$
6. Calculate standard deviation, $SD_{y(t)} = \sqrt{\frac{1}{N}\sum_{i=0}^{N} \left(y(i) - AVG_{y(t)}\right)^2}$
7. Traffic upper limit, $u_{lim} = AVG_{y(t)} + 3 \cdot SD_{y(t)}$

The EMD technique is used in this study to remove noise from the internet traffic time-series data, which is treated as one-dimensional data. The abrupt changes in our traffic forecasting data have been smoothed by removing noise based on steps 3 and 4 of Algorithm 1: we extracted the average of all IMF elements, avgIMF, from our original signal $y(t)$ to obtain noise-free traffic data $y_n(t)$.

B. Empirical Rule and KNN Based Outlier Management

Outlier detection is a necessary preprocessing step for real-world internet traffic analysis. Data points significantly different from most of the values are considered outliers. Outliers are characteristically different from noise in the time series: noise is a random error in the data and needs to be removed entirely from the original signal for a better prediction model, whereas outliers are part of the time series and impact statistical parameters of the original signal such as the mean, standard deviation, and correlation. Outliers can lead to incorrect future predictions of internet traffic. A statistical principle known as the empirical rule, the three-sigma rule, or the 68-95-99.7 rule holds that, for a normal distribution, almost all observed data will lie within three standard deviations of the mean.
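To make steps 3-7 of Algorithm 1 (plus the symmetric lower limit that appears later as step 8) concrete, here is a minimal sketch of the denoising and limit computation. It assumes the third-party PyEMD package (distributed on PyPI as EMD-signal) and its EMD class; the helper name and the synthetic input are ours, not the paper's.

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

def denoise_and_limits(y):
    """Algorithm 1, steps 3-8: subtract avgIMF, then compute 3-sigma limits."""
    emd = EMD()
    emd.emd(y)                                   # decompose the 1-D signal
    imfs, _residue = emd.get_imfs_and_residue()  # IMFs only; residue kept aside
    avg_imf = imfs.mean(axis=0)                  # avgIMF: element-wise mean of IMFs
    y_n = y - avg_imf                            # step 4: noise-reduced traffic
    avg, sd = y_n.mean(), y_n.std()              # steps 5-6
    return y_n, avg + 3 * sd, avg - 3 * sd       # steps 7-8: upper/lower limits

# Toy usage with a synthetic series standing in for traffic volume.
t = np.linspace(0, 8 * np.pi, 1000)
y = np.sin(t) + 0.3 * np.random.randn(t.size)
y_n, u_lim, l_lim = denoise_and_limits(y)
```

Whether the residue should enter the avgIMF average is not spelled out in the source; the sketch averages the IMFs only.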
However, this rule is also applicable to non-normally distributed data, where 88.8% of the data fall within the three-sigma interval, as opposed to 99.7% for a normal distribution. According to Chebyshev's inequality, at least 75% of the data lie within two standard deviations for a wide range of probability distributions, while the empirical rule claims 95% of the data points lie within two standard deviations for a normal distribution [39]. In this work, we set an upper and a lower limit containing most of the data instances in our original signal; individual data points lying outside three standard deviations were considered point outliers. These point outliers in our dataset were mitigated using the K nearest data points, based on the standard KNN-imputation algorithm: each outlier point is imputed using the mean value of its K nearest neighbors in the training set. The distances between members of the training dataset are calculated using a NaN-aware Euclidean distance measure, which excludes NaN values from the calculation. To find the optimum K value, we apply a KNN regressor to our traffic dataset, where past observations are used to predict the following data points; this experiment was conducted for K values ranging from 2 to 24. The minimum prediction error, measured as RMSE (Root Mean Squared Error), is the criterion for choosing the best K for imputing outlier data points.

C. Time-lagged Feature Extraction

Generally, a time-series prediction task uses previous data samples to predict the following values. In this work, we extract time-lagged features from our original dataset for training and testing our prediction model. Based on the ACF analysis in subsection IV-B2, we concluded that our traffic data is non-random; hence, we use the features of previous timestamps in our deep learning models to predict the following values. We performed a grid search based on the Akaike Information Criterion (AIC) and the statistical prediction model ARIMA to determine the optimum number of lagged features. The AIC measures the relative quality of statistical models for a given set of data and estimates prediction error; given a set of models for the data, AIC rates each model's efficiency relative to the others and thus offers a method for ranking models. Accordingly, we compared the ARIMA model's prediction performance under various hyper-parameter settings and ranked the settings by their AIC values. The ARIMA model predicts a future value based on past values of the time series, that is, its own lagged values. The model requires three parameters, AR(p), MA(q), and I(d), representing the autoregressive order, the moving-average order, and the differencing order, respectively. Among these parameters, the AR term defines the number of lagged features used to forecast the next value. We therefore performed a grid search over different combinations of p, q, and d, performing single-step prediction with the ARIMA model and selecting the best model by comparing prediction performance according to our selection criterion. We take the AR term p from the best-performing model (minimum AIC); this p indicates the number of time-lagged features that gives better prediction. The prediction task in this study is thus performed as in Eq. (2), where y(t) is the traffic volume at the current time step, y(t-1) to y(t-p) represent the previous p data points, and h is the prediction function:

$$y(t+1) = h\big(y(t-1),\, y(t-2),\, \ldots,\, y(t-p)\big) \qquad (2)$$
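The grid search just described can be reproduced along the following lines. This is a minimal sketch assuming the statsmodels ARIMA implementation; the reduced (p, q) ranges keep the toy run cheap, whereas the paper scans 2 to 24.

```python
import warnings
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def arima_grid_search(y, p_range, q_range, d=1):
    """Rank ARIMA(p, d, q) fits by AIC, ascending (lower AIC is better)."""
    results = []
    for p in p_range:
        for q in q_range:
            try:
                with warnings.catch_warnings():
                    warnings.simplefilter("ignore")
                    fit = ARIMA(y, order=(p, d, q)).fit()
                results.append(((p, d, q), fit.aic))
            except Exception:
                continue                    # skip non-converging combinations
    return sorted(results, key=lambda r: r[1])

# Usage on a toy series; the AR order p of the winner sets the lag-feature count.
y = np.cumsum(np.random.randn(300))
ranking = arima_grid_search(y, p_range=range(2, 6), q_range=range(2, 6))
best_order, best_aic = ranking[0]
print(best_order, best_aic)
```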
D. Time and Space Complexity Analysis of the Proposed Method

Our proposed model consists of several individual modules. Among them, the time complexity of EMD-based noise reduction is O(n^2), where n is the length of the input data. The time complexity of calculating the average IMF signal is O(i*n), where i is the number of IMFs and n is the length of the input data. Finally, the noise removal step subtracts the average IMF from the original traffic data, which has a time complexity of O(n). The time complexity of the outlier management module depends on outlier detection and outlier mitigation: the detection step calculates the average and standard deviation of the traffic, each with time complexity O(n), while the mitigation step executes the KNN algorithm, whose time complexity is O(n*log(n)*k). Therefore, the overall time complexity of our proposed model is dominated by the EMD-based noise reduction step and can be approximated as O(n^2).

The space complexity of the EMD algorithm is O(n^2), as it stores the decomposed IMFs. Calculating the average IMF signal requires storing the average IMF, giving O(n) space. The noise removal step modifies the original traffic data in place, so it requires no additional space. In outlier detection, calculating the average traffic and the standard deviation each requires storing a single value, so their space complexity is O(1); finding the traffic upper and lower limits stores two values, again O(1). The outlier mitigation step stores a list of K values, so its space complexity is O(k), and the KNN regressor used to find the best K has a space complexity of O(n*k). Finally, outlier handling modifies the original traffic data in place and requires no additional space. Therefore, the overall space complexity is also dominated by the EMD step and can be approximated as O(n^2).

IV. EXPERIMENT DETAILS

A. Dataset Description

In this study, we utilized a dataset of real-world traffic volumes for our experiments. We gathered telemetry information by sampling the value of the 'ifOutOctets' counter on a core-facing interface of a provider edge router via its SNMP (Simple Network Management Protocol) interface. To calculate the bps (bits per second) value for each interval, we multiplied the difference between the observations at both ends of the interval by eight. Sampling was performed at five-minute intervals. Given that we were working with a 40 Gb/s connection, no drops occurred during the sampling period (the 'ifOutDiscards' counter remained unchanged). Our dataset comprises 29 days' worth of traffic volume data, for a total of 8,352 data samples. We allocated 70% of this data, approximately 21 days of traffic, to training our prediction model. The remaining data, covering the subsequent seven days, was used to test our proposed forecasting model.

B. Data Preprocessing

In this section, we describe the data preprocessing performed before training our prediction model.

1) Managing Missing Values and Data Normalization: For our traffic data, we used the most recent valid data entry to replace any missing values.
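The counter-to-bps conversion and the forward-filling strategy described above amount to only a few lines. The sketch below uses pandas with hypothetical counter samples, and assumes the counters do not wrap within an interval (true for 64-bit SNMP counters on a link of this size); the helper name is ours.

```python
import numpy as np
import pandas as pd

INTERVAL_S = 5 * 60   # five-minute SNMP polling interval

def octets_to_bps(if_out_octets):
    """ifOutOctets samples (bytes) -> average bps per interval: delta * 8 / 300."""
    octets = pd.Series(if_out_octets, dtype="float64")
    return octets.diff().dropna() * 8 / INTERVAL_S

# Hypothetical counter samples at successive five-minute polls.
samples = [10_000_000_000, 10_090_000_000, 10_150_000_000, 10_240_000_000]
traffic = octets_to_bps(samples)    # [2.4e6, 1.6e6, 2.4e6] bps
traffic.iloc[1] = np.nan            # simulate a missing measurement
traffic = traffic.ffill()           # most recent valid entry carried forward
print(traffic.tolist())             # [2400000.0, 2400000.0, 2400000.0]
```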
While there are multiple approaches to dealing with missing data in time-series analysis, including mean-value replacement, linear interpolation, and quadratic interpolation, we found these methods unsuitable for our dataset. Linear interpolation assumes a linear relationship between data points and replaces null values by drawing a straight line between them; given the non-linear nature of our dataset, this approach proved ineffective. Polynomial interpolation requires predefining the order used to replace missing values, which could lead to inaccuracies as it fits the smallest possible degree polynomial through the available data points. Mean-value replacement was also unsuitable, as real-world traffic measurements can contain outliers and unexpected data points that skew the mean. Ultimately, we adopted the forward-filling strategy typically used for time-series data, where the previous value is used to fill in the subsequent missing value. This approach proved beneficial for our experiment.

2) Traffic AutoCorrelation Function (ACF) Analysis: An AutoCorrelation Function (ACF) plot serves as a useful tool to evaluate the stationarity and predictability of time-series data. Additionally, an ACF plot can reveal seasonality and trends within the time-series data. Each bar in an ACF plot represents the strength and direction of the correlation between data points at a given lag. For random data (white noise), the bar values for all lags should be close to zero. In contrast, non-random time-series data will exhibit at least one lag with strong correlation, which allows us to construct a predictive model utilizing time-lagged features. As depicted in Fig. 3, our traffic data exhibits considerable lags with high correlations; for instance, the correlation value is near 1 at lags such as 1, 24, etc., indicating that our traffic data is not random. This observation supports the decision to incorporate time-lagged features into our regression model, enhancing its accuracy.

C. Evaluation Metrics

We used Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE) to estimate the performance of our traffic forecasting models. RMSE is widely used in time-series prediction because it takes the magnitude of the errors into account: larger errors are penalized more heavily than smaller ones, which is useful when the size of the errors matters. MAE is another well-known metric that evaluates the average magnitude of the errors; unlike RMSE, it treats all errors equally, which is useful for evaluating the model's performance in terms of the absolute deviation of the predictions from the true values. If the scale of the target variable is large, MAPE can be used to evaluate the accuracy of the model's predictions in percentage terms. We define our performance metrics as follows, where $y_i$ and $\hat{y}_i$ are the original and predicted values, respectively, and $n$ is the total number of test instances:

$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2} \qquad (3)$$

$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right| \qquad (4)$$

$$MAPE = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right| \qquad (5)$$

1) Software and Hardware Preliminaries: We used Python and the machine learning library TensorFlow-Keras [40] to conduct the experiments. Our computer is configured with an Intel(R) Core i3-8130U CPU, 8 GB of memory, and a 64-bit Windows operating system.
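For reference, Eqs. (3)-(5) translate directly into a few lines of NumPy; this small sketch (with our own helper names) is the form typically used to score the models in Section V.

```python
import numpy as np

def rmse(y, y_hat):
    return np.sqrt(np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2))   # Eq. (3)

def mae(y, y_hat):
    return np.mean(np.abs(np.asarray(y) - np.asarray(y_hat)))           # Eq. (4)

def mape(y, y_hat):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return 100.0 * np.mean(np.abs((y - y_hat) / y))                     # Eq. (5)

y_true, y_pred = [1.0, 2.0, 4.0], [1.1, 1.8, 4.4]
print(rmse(y_true, y_pred), mae(y_true, y_pred), mape(y_true, y_pred))
```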
V. RESULT ANALYSIS AND DISCUSSION

This section provides an overview of the performance of our proposed traffic model. We first delve into the results of the optimal feature selection using ARIMA in subsection V-A. We then turn our attention to the performance of our model integrating outlier detection and mitigation in subsection V-B. Lastly, we examine the efficacy of the traffic denoising component of our proposed traffic model in subsection V-C.

A. Optimum Feature Selection for the Prediction Model

We identify the optimal time-lagged feature set for our deep learning prediction model based on ARIMA performance in single-step prediction. The AR term in ARIMA refers to the number of lagged values of the dependent variable (i.e., the time-series data) used to predict future values; it is denoted by p and indicates the order of autoregression, as in ARIMA(p, d, q). Table II summarizes the best-performing ARIMA hyper-parameter configurations and their corresponding AIC values. We considered AR and MA terms ranging from 2 to 24 to find the best combination of model parameters, so that we could select the optimum AR term representing the most relevant time lag for capturing the temporal dependencies in the traffic data. This time-lagged feature count was then incorporated into our deep learning prediction model to enhance its performance. According to our experimental results, the model with the lowest AIC, ARIMA(13, 1, 16) with an AIC of 3782.588307, is the best-performing model for our analysis. Therefore, we selected 13 time-lagged features to train our deep sequence model for traffic prediction.

B. Performance Analysis of the Prediction Model with Outlier Management

In this section, we discuss the characteristics of the outliers identified by our outlier detection module and then analyze the impact of outlier management on the traffic prediction task compared with several baseline models. Based on our experimental results, there are 43 outlier data samples in our traffic dataset that lie outside three standard deviations from the mean value, as depicted in Fig. 4. We analyze some statistical properties of the outlier samples based on their distribution in Fig. 5. It is a right-skewed histogram, where the data are clustered towards the left side and extend further to the right; this type of distribution is also called a positive skew. In this diagram, we identified the outlier points on the right side of the right-skewed histogram and marked that portion with a red rectangle. A right-skewed histogram with outliers on the right side indicates that at least one data point has an extremely high value relative to the rest of the data, and Fig. 4 indeed shows several data points that are relatively very high. Such outliers can have a significant impact on the overall distribution of the data, as they affect the mean and median of the dataset. In general, it is important to identify and handle outliers in a dataset, especially if they affect the distribution of the data. Therefore, we treated these outlier points before using the data to train our prediction model. We applied KNN imputation to handle the outlier data points; the basic idea behind KNN imputation is to replace the outlier value with the average of the K nearest neighbors. One of the core challenges of this approach is finding the best value of K.
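Once K is fixed, the imputation step itself is short. The sketch below is our own minimal rendering: it flags points outside the three-sigma limits and, following the choice reported later in this subsection, replaces each with the average of the K preceding non-outlier values; how K is chosen is the subject of the next paragraphs.

```python
import numpy as np

def impute_outliers(y, k=11):
    """Replace empirical-rule outliers with the mean of the k previous clean points."""
    y = np.asarray(y, dtype=float).copy()
    avg, sd = y.mean(), y.std()
    outliers = (y > avg + 3 * sd) | (y < avg - 3 * sd)   # three-sigma limits
    clean_idx = np.flatnonzero(~outliers)
    for i in np.flatnonzero(outliers):
        prev = clean_idx[clean_idx < i][-k:]             # k nearest earlier neighbours
        if prev.size:                                    # leading outliers left as-is
            y[i] = y[prev].mean()
    return y, outliers

# Usage: y_clean, mask = impute_outliers(traffic_series, k=11)
```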
A smaller value of K results in a stricter imputation estimate, focusing on a local perspective of the domain, whereas a larger value of K results in a more general estimate, focusing on a global perspective. A small K value may lead to over-fitting, where the imputed values are too similar to the original data and may not be accurate; a large K value may lead to under-fitting, where the imputed values are too different from the original data and may not be representative of the underlying distribution. Therefore, we used the KNN regressor to determine the optimal value of K. A range of K values was considered in our experiment, and their corresponding RMSE values for predicting the next data point are reported in Table III. A lower RMSE indicates a strong relationship between the K previous values and the next value in the series. On this basis, we replaced the outlier data samples with the average of the K nearest-neighbor traffic volume values. According to our experimental results in Table III, K = 11 gave the lowest RMSE of 1.78 Gbps, so we used the average of the 11 previous data points to replace each corresponding outlier.

After treating the outlier data samples in our traffic dataset, we compared the performance of our KNN-integrated traffic prediction models with several baseline deep learning models: RNN, LSTM, LSTM Seq2Seq, LSTM Seq2Seq ATN, and GRU. These models were evaluated with the performance metrics RMSE, MAE, and MAPE, and all results are summarized in Table V. Our proposed deep learning models integrated with KNN-based outlier mitigation outperform the conventional models. As depicted in Fig. 9, the prediction error dropped significantly for the KNN-integrated models compared to the standard deep learning models. For example, in the case of RNN, the prediction error was reduced from 7.51% to 4.27% when outliers were handled, approximately 43% less error than traditional RNN. Similarly, LSTM KNN, LSTM Seq2Seq KNN, LSTM Seq2Seq ATN KNN, and GRU KNN gave better prediction accuracy than their corresponding traditional models: from Fig. 9, we achieved approximately 24%, 8%, 9%, and 40% less error, respectively, by managing outliers before training. Consequently, we observe a significant accuracy improvement for the KNN-integrated models over the conventional models in Fig. 7.

Overall, incorporating KNN-based outlier mitigation into conventional deep learning models helped improve prediction accuracy by reducing the influence of outliers, or extreme values, in the input data. Outliers can be problematic in time-series prediction models, as they introduce noise and bias into the model and cause inaccurate predictions. By removing or reducing the influence of outliers, the model learns more effectively from the remaining, less noisy data; our experimental results likewise indicate the adverse effect of outlier data points on model performance.

In our study, we evaluated the performance of the various models for internet traffic forecasting using three error metrics: RMSE, MAE, and MAPE. As shown in Table V, the LSTM Seq2Seq ATN KNN model achieves the lowest RMSE (0.59), MAE (0.31), and MAPE (3.60) values, indicating the best overall performance among the evaluated models. The consistent pattern of low error values across all three metrics suggests that this model provides accurate predictions in both relative and absolute terms. The other models, such as LSTM Seq2Seq KNN and GRU KNN, exhibit a similar trend, with lower MAPE values corresponding to lower RMSE and MAE values. These results demonstrate the effectiveness of our chosen models in predicting internet traffic, with the LSTM Seq2Seq ATN KNN model outperforming the others. The consistency of the error metrics suggests that the model performs well and accurately captures the underlying patterns in the data: a low MAPE means that the model's predictions are close to the actual values in relative terms, while low RMSE and MAE values imply that the predictions are also close to the actual values in absolute terms.
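The K-selection experiment described at the beginning of this subsection reads, in outline, like the following sketch. It assumes scikit-learn's KNeighborsRegressor, a simple lag-window feature matrix, and a chronological train/test split; details such as the window width are our own illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def lag_matrix(y, window):
    """Rows of `window` consecutive values as features; the next value as target."""
    X = np.array([y[i:i + window] for i in range(len(y) - window)])
    t = y[window:]
    return X, t

def best_k(y, k_values=range(2, 25), window=13):
    X, t = lag_matrix(np.asarray(y, float), window)
    split = int(0.7 * len(X))               # chronological 70/30 split
    scores = {}
    for k in k_values:
        model = KNeighborsRegressor(n_neighbors=k).fit(X[:split], t[:split])
        pred = model.predict(X[split:])
        scores[k] = np.sqrt(np.mean((t[split:] - pred) ** 2))   # RMSE per K
    return min(scores, key=scores.get), scores

# Usage: k, rmse_by_k = best_k(traffic_series)   # the paper reports k = 11
```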
C. Performance Analysis of the Prediction Model with Noise Reduction and Outlier Mitigation

To further improve model performance, we analyze the noise in our traffic data. Noise refers to random variations or errors in the data that can affect the accuracy of a model's predictions; it can arise from various sources, such as measurement errors, data recording issues, or other random fluctuations. By applying noise reduction techniques, such as smoothing, filtering, or denoising algorithms, the noisy data can be cleaned up, resulting in a more accurate representation of the underlying pattern in the data and, in turn, improved model performance. After removing outliers from a dataset using techniques such as KNN imputation, the remaining data often still contain some level of noise or unwanted variability. This noise can obscure the underlying patterns and relationships in the data, making it harder to analyze and interpret, so it can be useful to apply additional noise reduction techniques to further improve the quality of the data. EMD-based noise reduction is the technique we used to remove unwanted noise and variability from the data, resulting in a smoother and more interpretable signal.

We summarize the Signal-to-Noise Ratio (SNR) in Table IV for analyzing signal quality after denoising. The SNR values we obtained indicate that the EMD-based noise reduction method significantly improved the quality of the signal. The negative SNR value of -7.05 dB for the noisy signal indicates that the noise in the signal is actually stronger than the signal itself, which can make it difficult to accurately analyze and interpret the data. After applying EMD-based noise reduction, however, the denoised signal has a much higher SNR value (21.47 dB), indicating that the quality of the signal has improved significantly relative to the noise. The denoised signal is thus much easier to analyze and can provide more accurate and reliable insights into the underlying patterns and relationships in the data. In summary, the significant improvement in SNR for the denoised signal compared to the noisy signal suggests that the EMD-based noise reduction method was effective in reducing unwanted noise and variability in the data, resulting in a higher-quality, more interpretable signal. In Fig. 6, we depict the actual traffic, the denoised traffic, and the noise in the dataset.

Our proposed RNN KNN EMD model's prediction error is lower by more than 3% compared to RNN. This demonstrates the effectiveness of our proposed traffic prediction method in handling real-world internet traffic, which may contain noise and outliers due to various external and internal factors. The experiment was expanded by including two extensions of RNN, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), because these models have a greater capacity for retaining knowledge from longer sequences than the RNN. Traditional LSTM and GRU outperformed RNN with 2.48% and 1.1% higher prediction accuracy, respectively. Since RNN has an inherent vanishing-gradient problem in handling longer sequence data, its prediction accuracy was lower than that of LSTM and GRU, which were specifically designed to address RNN's limitations. Our experimental results indicate that LSTM and GRU retain information from sequential data better than RNN.
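The SNR figures in Table IV follow the standard decibel formula $SNR = 10\log_{10}(P_{signal}/P_{noise})$. The short sketch below is our own illustration of one plausible way to obtain such values, with the noise taken as the component removed by denoising; a moving average stands in for the EMD denoiser purely to keep the example self-contained.

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from mean signal power and mean noise power."""
    p_signal = np.mean(np.square(signal))
    p_noise = np.mean(np.square(noise))
    return 10.0 * np.log10(p_signal / p_noise)

# In the paper's pipeline the noise estimate would be y - y_denoised, where
# y_denoised comes from the avgIMF subtraction; here a moving average stands in.
y = np.sin(np.linspace(0, 20, 500)) + 0.8 * np.random.randn(500)
y_denoised = np.convolve(y, np.ones(9) / 9, mode="same")
noise_est = y - y_denoised
print(snr_db(y_denoised, noise_est))
```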
We further improved LSTM and GRU performance by integrating our proposed denoising and outlier detection modules. Our proposed LSTM KNN EMD and GRU KNN EMD perform better than conventional LSTM and GRU: the LSTM KNN EMD prediction accuracy increased by 1.73% compared to LSTM, while for GRU KNN EMD the improvement was 2.89% over GRU. Since deep sequence models consider only temporal information for learning, noise and outliers in the training data ultimately affect their generalization capability, resulting in lower accuracy. Our EMD-based noise reduction and empirical-rule-based outlier mitigation provide traffic data free of random abrupt changes for model training, which eventually increases the prediction accuracy and decreases the average prediction error between actual and predicted traffic.

Finally, we obtained our best prediction accuracy from our proposed LSTM Seq2Seq ATN KNN EMD model, where we integrated an attention layer. The extra layer helps our Seq2Seq architecture extract strong contextual information from the traffic data. The conventional LSTM Seq2Seq ATN model without the proposed modules provided the lowest prediction error, 3.95%, compared to the other deep learning models. Moreover, our proposed LSTM Seq2Seq ATN KNN EMD achieves the highest prediction accuracy among all prediction models, with a 96.48% accurate forecast, performing better than the traditional Seq2Seq model by nearly 1% more prediction accuracy. Among all deep sequence architectures, the Seq2Seq model with an extra attention layer performs best by extracting strong contextual information from longer sequence data. A comparison between the actual traffic and the traffic predicted by our best-performing model is depicted in Fig. 8.

In Fig. 9, we depict the error reduction achieved by our proposed models compared to the conventional and KNN-only integrated models. We achieved better prediction accuracy for all models by combining outlier and noise handling in the traffic data. For example, LSTM Seq2Seq KNN EMD gave approximately 34% less error than LSTM Seq2Seq, while LSTM Seq2Seq KNN provided approximately 24% less prediction error. It is thus evident from Fig. 9 that combining outlier management and noise reduction is the better approach for dealing with real-world internet traffic. In Fig. 9, the models integrating both outlier and noise handling performed better than the models with only the outlier management module; for example, our best-performing LSTM Seq2Seq KNN EMD model provided approximately 11% less error than LSTM Seq2Seq KNN. Therefore, based on the performance of all models, we conclude that traffic analysis is crucial before using traffic data to develop a model: outliers and noise in the traffic might affect the model's prediction accuracy, and it is essential to deal with them before using traffic data to train and evaluate the model.

VI. CONCLUSION

Traffic volume forecasting is an essential tool for the ISP industry, assisting network capacity planning activities and network investment decisions. Accurately assessing the network traffic trend helps ISPs define, develop, and adjust their current and new infrastructure and services; it is therefore worthwhile to improve the accuracy of internet traffic volume predictions. This study proposes a deep learning methodology that integrates an EMD-based noise reduction module and an empirical-rule-based outlier detection module. Most previous hybrid models use EMD to obtain ensemble prediction models.
Unlike the earlier studies, the proposed algorithm is not an ensemble model and does not depend on a specific Intrinsic Mode Function (IMF) for model learning. The proposed algorithm applies the EMD method to denoise the original signal in order to grasp the general tendency of the data; after EMD denoising, the deep learning model is trained on the noise-free dataset. However, the EMD process requires the selection of a stopping criterion to determine the number of IMFs to be extracted, and the choice of this criterion can significantly affect the quality of the decomposition and the effectiveness of the noise reduction. In the future, we plan to explore other noise reduction methods such as Singular Value Decomposition (SVD)-based methods, Non-Local Means (NLM)-based methods, and deep learning-based methods. We identified the point outliers based on the empirical rule, and these points were mitigated with the K nearest values, with K optimized using a KNN regressor. There are a few limitations to using KNN for parameter optimization. For example, the KNN algorithm can be computationally expensive when the dataset has a large number of features or dimensions: as the number of features increases, the distances to the nearest neighbors become more similar, which can make it difficult to identify the K nearest neighbors. Also, the choice of distance metric can have a significant impact on the quality of the imputed values, and different distance metrics may be more appropriate for different types of data. Results are evaluated with the widely used MAPE and mean accuracy measures to enable a fair comparison. The proposed method is also compared with traditional statistical and deep sequence models trained on the original signal. According to the results, the proposed method outperforms all baseline prediction models. The performance of our proposed algorithm clearly shows its potential for accurately forecasting internet traffic demand compared to the other approaches.

Fig. 1. Regional increase in internet traffic users.
Fig. 2. High-level framework of our proposed methodology.

Algorithm 1 (remaining steps):
8. Traffic lower limit, $l_{lim} = AVG_{y(t)} - 3 \cdot SD_{y(t)}$
9. k_list = [2 .. 24]
10. MIN_RMSE = INF
11. BEST_K = 0
12. foreach K in k_list do
13. apply KNN regressor on y(t) for next-step prediction
14. if prediction RMSE < MIN_RMSE [update MIN_RMSE and BEST_K]
Output: noise-free and outlier-adjusted traffic, y(t)

Fig. 3. ACF plot of our traffic data.
Fig. 4. Outlier points identified using the empirical rule.
Fig. 5. Distribution of data with outlier points highlighted.
Fig. 6. Denoised traffic data.
Fig. 7. Prediction accuracy comparison between the conventional models and the proposed KNN-EMD integrated models.
Fig. 9. A comparison of prediction error reduction (%) by the best-performing techniques with the standard deep learning models and the other approach.

Data underlying Fig. 1 (percentage of regional population using the internet):

| Region | 2018 | 2023 |
|---|---|---|
| Global | 51% | 66% |
| Asia Pacific | 52% | 72% |
| Central and Eastern Europe | 65% | 78% |
| Latin America | 60% | 70% |
| Middle East and Africa | 24% | 35% |
| North America | 90% | 92% |
| Western Europe | 82% | 87% |

TABLE II: Top ten best-performing model parameter configurations.

| No. | (p, d, q) | AIC |
|---|---|---|
| 1 | (13, 1, 16) | 3782.588307 |
| 2 | (21, 1, 2) | 3782.751381 |
| 3 | (14, 1, 17) | 3783.706900 |
| 4 | (13, 1, 18) | 3785.061129 |
| 5 | (21, 1, 3) | 3785.332434 |
| 6 | (22, 1, 3) | 3786.325772 |
| 7 | (16, 1, 17) | 3786.547150 |
| 8 | (17, 1, 17) | 3786.761965 |
| 9 | (22, 1, 4) | 3786.964070 |
| 10 | (20, 1, 2) | 3787.326614 |

TABLE III: KNN-regressor prediction error (RMSE, bps) for different K values.

| K | RMSE | K | RMSE |
|---|---|---|---|
| 2 | 1856644563 | 14 | 1814868475 |
| 3 | 1828291277 | 15 | 1818749549 |
| 4 | 1816389491 | 16 | 1831109596 |
| 5 | 1826662669 | 17 | 1841230237 |
| 6 | 1811743008 | 18 | 1856738081 |
| 7 | 1815585493 | 19 | 1866925893 |
| 8 | 1817635025 | 20 | 1881460309 |
| 9 | 1810554111 | 21 | 1892611532 |
| 10 | 1797064368 | 22 | 1902113203 |
| 11 | 1796783992 | 23 | 1914844430 |
| 12 | 1804298868 | 24 | 1922448158 |
| 13 | 1809809913 | | |

TABLE IV: Signal-to-noise ratio (SNR) comparison.

| Signal | SNR |
|---|---|
| Noisy signal | -7.05 dB |
| Denoised signal | 21.47 dB |

TABLE V: Performance summary of the prediction models.

| Baseline model | RMSE | MAE | MAPE |
|---|---|---|---|
| RNN | 1.49 | 0.84 | 7.51 |
| LSTM | 1.22 | 0.62 | 5.03 |
| LSTM Seq2Seq | 0.58 | 0.34 | 3.94 |
| LSTM Seq2Seq ATN | 0.58 | 0.34 | 3.95 |
| GRU | 1.47 | 0.75 | 6.41 |

| Proposed model | RMSE | MAE | MAPE |
|---|---|---|---|
| RNN KNN | 0.76 | 0.49 | 4.27 |
| LSTM KNN | 0.64 | 0.33 | 3.77 |
| LSTM Seq2Seq KNN | 0.60 | 0.32 | 3.63 |
| LSTM Seq2Seq ATN KNN | 0.59 | 0.31 | 3.60 |
| GRU KNN | 0.63 | 0.33 | 3.88 |

| Proposed model | RMSE | MAE | MAPE |
|---|---|---|---|
| RNN KNN EMD | 0.60 | 0.345 | 4.02 |
| LSTM KNN EMD | 0.57 | 0.30 | 3.30 |
| LSTM Seq2Seq KNN EMD | 0.55 | 0.29 | 3.24 |
| LSTM Seq2Seq ATN KNN EMD | 0.54 | 0.28 | 3.22 |
| GRU KNN EMD | 0.57 | 0.31 | 3.52 |

Fig. 8. Actual vs. predicted traffic by LSTM Seq2Seq ATN EMD KNN.

Data underlying Fig. 9 (prediction error reduction, %):

| Comparison | RNN | LSTM | LSTM_Seq2Seq | LSTM_Seq2Seq_Atn | GRU |
|---|---|---|---|---|---|
| Standard model vs. Model_KNN | 43.12 | 24.99 | 7.97 | 8.91 | 39.46 |
| Standard model vs. Model_KNN_EMD | 46.45 | 34.34 | 17.86 | 18.52 | 45.07 |
| Model_KNN vs. Model_KNN_EMD | 5.85 | 12.47 | 10.74 | 10.56 | 9.28 |

REFERENCES
[1] D. Jiang, F. Wang, Z. Lv, S. Mumtaz, S. Al-Rubaye, A. Tsourdos, and O. Dobre, "Qoe-aware efficient content distribution scheme for satellite-terrestrial networks," IEEE Transactions on Mobile Computing, 2021.
[2] D. Jiang, Z. Wang, L. Huo, and S. Xie, "A performance measurement and analysis method for software-defined networking of iov," IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 6, pp. 3707-3719, 2020.
[3] D. Jiang, W. Wang, L. Shi, and H. Song, "A compressive sensing-based approach to end-to-end network traffic reconstruction," IEEE Transactions on Network Science and Engineering, vol. 7, no. 1, pp. 507-519, 2018.
[4] "Cisco Annual Internet Report (2018-2023) White Paper," https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html, accessed 15-Mar-2023.
[5] T. Alghamdi, K. Elgazzar, M. Bayoumi, T. Sharaf, and S. Shah, "Forecasting traffic congestion using arima modeling," in 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC). IEEE, 2019, pp. 1227-1232.
[6] J. Tang, X. Chen, Z. Hu, F. Zong, C. Han, and L. Li, "Traffic flow prediction based on combination of support vector machine and data denoising schemes," Physica A: Statistical Mechanics and its Applications, vol. 534, p. 120642, 2019.
[7] S. Jaffry and S. F. Hasan, "Cellular traffic prediction using recurrent neural networks," in 2020 IEEE 5th International Symposium on Telecommunication Technologies (ISTT). IEEE, 2020, pp. 94-98.
[8] D. Jiang, Y. Wang, Z. Lv, S. Qi, and S. Singh, "Big data analysis based network behavior insight of cellular networks for industry 4.0 applications," IEEE Transactions on Industrial Informatics, vol. 16, no. 2, pp. 1310-1320, 2019.
[9] X. Fan, Y. Wang, and M. Zhang, "Network traffic forecasting model based on long-term intuitionistic fuzzy time series," Information Sciences, vol. 506, pp. 131-147, 2020.
[10] H. Yang, X. Li, W. Qiang, Y. Zhao, W. Zhang, and C. Tang, "A network traffic forecasting method based on sa optimized arima-bp neural network," Computer Networks, vol. 193, p. 108102, 2021.
[11] J. Yang, H. Sheng, H. Wan, and F. Yu, "Farima model based on particle swarm-genetic hybrid algorithm optimization and application," in 2021 3rd International Academic Exchange Conference on Science and Technology Innovation (IAECST). IEEE, 2021, pp. 188-192.
[12] P. Kromkowski, S. Li, W. Zhao, B. Abraham, A. Osborne, and D. E. Brown, "Evaluating statistical models for network traffic anomaly detection," in 2019 Systems and Information Engineering Design Symposium (SIEDS), 2019, pp. 1-6.
[13] Y. Wu, Y. Cui, W. Yu, C. Lu, and W. Zhao, "Modeling and forecasting of timescale network traffic dynamics in m2m communications," in 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2019, pp. 711-721.
[14] C. Katris and S. Daskalaki, "Dynamic bandwidth allocation for video traffic using farima-based forecasting models," Journal of Network and Systems Management, vol. 27, no. 1, pp. 39-65, 2019.
[15] H. Sheng, Q. Yan, and K. Li, "Alpha stable distribution based farima modeling and forecasting for network traffic data," in Journal of Physics: Conference Series, vol. 1574, no. 1. IOP Publishing, 2020, p. 012135.
[16] G. A. Christian, I. P. Wijaya, and R. F. Sari, "Network traffic prediction of mobile backhaul capacity using time series forecasting," in 2021 International Seminar on Intelligent Technology and Its Applications (ISITIA). IEEE, 2021, pp. 58-62.
[17] J. Zhou, X. Yang, L. Sun, C. Han, and F. Xiao, "Network traffic prediction method based on improved echo state network," IEEE Access, vol. 6, pp. 70625-70632, 2018.
[18] Y. Hou, L. Zhao, and H. Lu, "Fuzzy neural network optimization and network traffic forecasting based on improved differential evolution," Future Generation Computer Systems, vol. 81, pp. 425-432, 2018.
[19] W. Zhang and D. Wei, "Prediction for network traffic of radial basis function neural network model based on improved particle swarm optimization algorithm," Neural Computing and Applications, vol. 29, no. 4, pp. 1143-1152, 2018.
[20] O. A. Adeleke, "Echo-state networks for network traffic prediction," in 2019 IEEE 10th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), 2019, pp. 0202-0206.
[21] Y. Zhou, M. Zhang, and K.-P. Lin, "Time series forecasting by the novel gaussian process wavelet self-join adjacent-feedback loop reservoir model," Expert Systems with Applications, vol. 198, p. 116772, 2022.
[22] H. Li, "Network traffic prediction of the optimized bp neural network based on glowworm swarm algorithm," Systems Science & Control Engineering, vol. 7, no. 2, pp. 64-70, 2019.
[23] W. Li and Y. Hori, "An algorithm for extracting fuzzy rules based on rbf neural network," IEEE Transactions on Industrial Electronics, vol. 53, no. 4, pp. 1269-1276, 2006.
[24] Q. Yang, W. Hao, L. Ge, W. Ruan, and F. Chi, "Farima model-based communication traffic anomaly detection in intelligent electric power substations," IET Cyber-Physical Systems: Theory & Applications, vol. 4, no. 1, pp. 22-29, 2019.
[25] W. Huang, J. Zhang, S. Liang, and H. Sun, "Backbone network traffic prediction based on modified eemd and quantum neural network," Wireless Personal Communications, vol. 99, pp. 1569-1588, 2018.
[26] P. Kromkowski, S. Li, W. Zhao, B. Abraham, A. Osborne, and D. E. Brown, "Evaluating statistical models for network traffic anomaly detection," in 2019 Systems and Information Engineering Design Symposium (SIEDS), 2019, pp. 1-6.
[27] S. Zhang, L. Zhou, X. Chen, L. Zhang, L. Li, and M. Li, "Network-wide traffic speed forecasting: 3d convolutional neural network with ensemble empirical mode decomposition," Computer-Aided Civil and Infrastructure Engineering, vol. 35, no. 10, pp. 1132-1147, 2020.
[28] S. Narejo and E. Pasero, "An application of internet traffic prediction with deep neural network," Multidisciplinary Approaches to Neural Computing, pp. 139-149, 2018.
[29] X. Han and F. Qi, "Network traffic forecasting using ifa-lstm," in International Conference on Computer Engineering and Networks. Springer, 2018, pp. 681-692.
[30] W. Wang, Y. Bai, C. Yu, Y. Gu, P. Feng, X. Wang, and R. Wang, "A network traffic flow prediction with deep learning approach for large-scale metropolitan area network," in NOMS 2018-2018 IEEE/IFIP Network Operations and Management Symposium. IEEE, 2018, pp. 1-9.
[31] B. A. Pratomo, P. Burnap, and G. Theodorakopoulos, "Blatta: Early exploit detection on network traffic with recurrent neural networks," Security and Communication Networks, vol. 2020, 2020.
[32] Y. Liu, S. Garg, J. Nie, Y. Zhang, Z. Xiong, J. Kang, and M. S. Hossain, "Deep anomaly detection for time-series data in industrial iot: A communication-efficient on-device federated learning approach," IEEE Internet of Things Journal, vol. 8, no. 8, pp. 6348-6358, 2021.
[33] Y. Li and W. Tu, "Traffic modelling for iot networks: A survey," in Proceedings of the 10th International Conference on Information Communication and Management, 2020, pp. 4-9.
[34] P. Li, Z. Chen, L. T. Yang, J. Gao, Q. Zhang, and M. J. Deen, "An improved stacked auto-encoder for network traffic flow classification," IEEE Network, vol. 32, no. 6, pp. 22-27, 2018.
[35] D. Wei, "Network traffic prediction based on rbf neural network optimized by improved gravitation search algorithm," Neural Computing and Applications, vol. 28, pp. 2303-2312, 2017.
[36] K. Zhang, Z. Hu, X.-T. Gan, and J.-B. Fang, "A network traffic prediction model based on quantum-behaved particle swarm optimization algorithm and fuzzy wavelet neural network," Discrete Dynamics in Nature and Society, vol. 2016, 2016.
[37] H. Zhou, Y. Li, H. Xu, Y. Su, and L. Chen, "A self-organizing fuzzy neural network modeling approach using an adaptive quantum particle swarm optimization," Applied Intelligence, pp. 1-24, 2022.
[38] B. Premanode, J. Vongprasert, and C. Toumazou, "Prediction of exchange rates using averaging intrinsic mode function and multiclass support vector regression," Artif. Intell. Res., vol. 2, no. 2, pp. 47-61, 2013.
[39] M. R. Chernick, The Essentials of Biostatistics for Physicians, Nurses, and Clinicians. John Wiley & Sons, 2011.
[40] F. Chollet et al., "Keras," https://keras.io, 2015.
[]
[ "NEW CLASS OF GIBBS MEASURES FOR TWO STATE HARD-CORE MODEL ON A CAYLEY TREE", "NEW CLASS OF GIBBS MEASURES FOR TWO STATE HARD-CORE MODEL ON A CAYLEY TREE" ]
[ "R M Khakimov ", "M T Makhammadaliev ", "F H Haydarov " ]
[]
[]
In this paper, we consider a Hard-Core (HC) model with two spin values on Cayley trees. The notion of an alternative Gibbs measure is introduced, and translation-invariance conditions for alternative Gibbs measures are found. Also, we show the existence of alternative Gibbs measures which are not translation-invariant. In addition, we study the free energy of the model.
null
[ "https://export.arxiv.org/pdf/2306.03429v1.pdf" ]
259,088,608
2306.03429
02f98dc7db2ded143fa10470d07c5df897aaabf7
NEW CLASS OF GIBBS MEASURES FOR TWO STATE HARD-CORE MODEL ON A CAYLEY TREE

R. M. Khakimov, M. T. Makhammadaliev, F. H. Haydarov

Key words: Cayley tree, configuration, hard-core model, Gibbs measure, translation-invariant measure, alternative Gibbs measure, free energy.

AMS Subject Classification: 20B07, 20E06.

In this paper, we consider a Hard-Core (HC) model with two spin values on Cayley trees. The notion of an alternative Gibbs measure is introduced, and translation-invariance conditions for alternative Gibbs measures are found. Also, we show the existence of alternative Gibbs measures which are not translation-invariant. In addition, we study the free energy of the model.

Introduction

The problems arising in the study of the thermodynamic properties of physical and biological systems are typically solved within the framework of the theory of Gibbs measures. The Gibbs measure is a fundamental concept that determines the probability of a microscopic state of a given physical system (defined by a specific Hamiltonian). It is known that each Gibbs measure is associated with one phase of a physical system, and if the Gibbs measure is not unique, then there exists a phase transition. For a wide class of Hamiltonians, it is known that the set of all Gibbs measures (corresponding to a given Hamiltonian) is a nonempty, convex, compact subset of the set of all probability measures (see, e.g., [1], [3]), and each point of this convex set can be uniquely decomposed in terms of its extreme points. In this regard, it is of particular interest to describe all the extreme points of this convex set, i.e., the extreme Gibbs measures.

For convenience, we first describe the basic concepts used in this paper and then give the statement of the problem and the history of its study.

The Cayley tree. Let $\Gamma^k = (V, L, i)$, $k \geq 1$, be the Cayley tree of order $k$, i.e., an infinite tree with exactly $k+1$ edges coming out of each vertex, where $V$ is the set of vertices, $L$ is the set of edges of $\Gamma^k$, and $i$ is the incidence function setting each edge $l \in L$ into correspondence with its endpoints $x, y \in V$. If $i(l) = \{x, y\}$, then the vertices $x$ and $y$ are called nearest neighbors, denoted by $l = \langle x, y \rangle$. For an arbitrary point $x^0 \in V$ we set
$$W_n = \{x \in V \mid d(x, x^0) = n\}, \qquad V_n = \bigcup_{m=0}^{n} W_m, \qquad L_n = \{l = \langle x, y \rangle \in L \mid x, y \in V_n\},$$
where $d(x, y)$, $x, y \in V$, is the distance between $x$ and $y$ on the Cayley tree, i.e., the number of edges of the path connecting $x$ and $y$. The set of direct successors of $x$ is denoted by $S(x)$, i.e., if $x \in W_n$, then
$$S(x) = \{y_i \in W_{n+1} \mid d(x, y_i) = 1,\ i = 1, 2, \ldots, k\}.$$

The HC-model. We assume that $\Phi = \{0, 1\}$ and that $\sigma \in \Omega = \Phi^V$ is a configuration, i.e., $\sigma = \{\sigma(x) \in \Phi : x \in V\}$, where $\sigma(x) = 1$ means that the vertex $x$ of the Cayley tree is occupied and $\sigma(x) = 0$ means that it is vacant. A configuration $\sigma$ is said to be admissible if $\sigma(x)\sigma(y) = 0$ for any neighboring $x, y$ from $V$ ($V_n$ or $W_n$, respectively), and we denote the set of such configurations by $\Omega$ ($\Omega_{V_n}$ and $\Omega_{W_n}$). Obviously, $\Omega \subset \Phi^V$. The concatenation of configurations $\sigma_{n-1} \in \Phi^{V_{n-1}}$ and $\omega_n \in \Phi^{W_n}$ is defined by the following formula (see [28]):
$$\sigma_{n-1} \vee \omega_n = \{\{\sigma_{n-1}(x),\ x \in V_{n-1}\},\ \{\omega_n(y),\ y \in W_n\}\}.$$
The Hamiltonian of the HC-model is defined by the formula
$$H(\sigma) = \begin{cases} J \sum_{x \in V} \sigma(x), & \text{if } \sigma \in \Omega, \\ +\infty, & \text{if } \sigma \notin \Omega, \end{cases}$$
where $J \in \mathbb{R}$.

Finite-dimensional distributions. Let $\mathcal{B}$ be the $\sigma$-algebra generated by the cylindrical sets with finite base of $\Omega$.
For each $n$, let $\mathcal{B}_{V_n} = \{\sigma \in \Omega : \sigma|_{V_n} = \sigma_n\}$ denote the subalgebra of $\mathcal{B}$, where $\sigma|_{V_n}$ is the restriction of $\sigma$ to $V_n$ and $\sigma_n : x \in V_n \to \sigma_n(x)$ is an admissible configuration in $V_n$.

Definition 1. For $\lambda > 0$, the HC-model Gibbs measure is a probability measure $\mu$ on $(\Omega, \mathcal{B})$ such that for any $n$ and $\sigma_n \in \Omega_{V_n}$
$$\mu\{\sigma \in \Omega : \sigma|_{V_n} = \sigma_n\} = \int_{\Omega} \mu(d\omega)\, P_n\big(\sigma_n \mid \omega|_{W_{n+1}}\big),$$
where
$$P_n\big(\sigma_n \mid \omega|_{W_{n+1}}\big) = \frac{e^{-H(\sigma_n)}}{Z_n(\lambda;\, \omega|_{W_{n+1}})}\, \mathbf{1}\big(\sigma_n \vee \omega|_{W_{n+1}} \in \Omega_{V_{n+1}}\big).$$
Here $Z_n(\lambda;\, \omega|_{W_{n+1}})$ is the normalization factor with boundary condition $\omega|_{W_{n+1}}$:
$$Z_n(\lambda;\, \omega|_{W_{n+1}}) = \sum_{\widetilde{\sigma}_n \in \Omega_{V_n}} e^{-H(\widetilde{\sigma}_n)}\, \mathbf{1}\big(\widetilde{\sigma}_n \vee \omega|_{W_{n+1}} \in \Omega_{V_{n+1}}\big).$$

For $\sigma_n \in \Omega_{V_n}$ we denote by $\#\sigma_n = \sum_{x \in V_n} \mathbf{1}(\sigma_n(x) \geq 1)$ the number of occupied vertices in $\sigma_n$. Let $z : x \to z_x = (z_{0,x}, z_{1,x}) \in \mathbb{R}^2_{+}$ be a vector-valued function on $V$. For $n = 1, 2, \ldots$ and $\lambda > 0$, consider the probability measure $\mu^{(n)}$ on $\Omega_{V_n}$ defined as
$$\mu^{(n)}(\sigma_n) = \frac{1}{Z_n}\, \lambda^{\#\sigma_n} \prod_{x \in W_n} z_{\sigma(x),x}. \qquad (1)$$
Here $Z_n$ is the normalizing divisor:
$$Z_n = \sum_{\widetilde{\sigma}_n \in \Omega_{V_n}} \lambda^{\#\widetilde{\sigma}_n} \prod_{x \in W_n} z_{\widetilde{\sigma}(x),x}.$$
The sequence of probability measures $\mu^{(n)}$ is said to be consistent if for any $n \geq 1$ and $\sigma_{n-1} \in \Omega_{V_{n-1}}$:
$$\sum_{\omega_n \in \Omega_{W_n}} \mu^{(n)}(\sigma_{n-1} \vee \omega_n)\, \mathbf{1}(\sigma_{n-1} \vee \omega_n \in \Omega_{V_n}) = \mu^{(n-1)}(\sigma_{n-1}). \qquad (2)$$
In this case, there is a unique measure $\mu$ on $(\Omega, \mathcal{B})$ such that for all $n$ and $\sigma_n \in \Omega_{V_n}$
$$\mu(\{\sigma|_{V_n} = \sigma_n\}) = \mu^{(n)}(\sigma_n).$$

Definition 2. The measure $\mu$ that is the limit of a sequence $\mu^{(n)}$ defined by (1) with consistency condition (2) is called the splitting HC-Gibbs measure (SGM) with $\lambda > 0$ corresponding to the function $z : x \in V \setminus \{x^0\} \to z_x$. Moreover, an HC-Gibbs measure corresponding to a constant function $z_x \equiv z$ is said to be translation-invariant (TI).

Problem statement. The main task is to study the structure of the set $G(H)$ of all Gibbs measures corresponding to a given Hamiltonian $H$. A measure $\mu \in G(H)$ is called extreme if it cannot be expressed as $\mu = \lambda \mu_1 + (1 - \lambda)\mu_2$ for some $\mu_1, \mu_2 \in G(H)$ with $\mu_1 \neq \mu_2$. As noted above, the set $G(H)$ of all Gibbs measures (for a given Hamiltonian $H$) is a nonempty convex compact set in the space of all probability measures on $\Omega$. Using Theorem 12.6 in [1] and Section 1.2.4 in [8], we can note the following.

• Any extreme Gibbs measure $\mu \in G(H)$ is an SGM; therefore, the problem of describing Gibbs measures reduces to describing the set of SGMs. For each fixed temperature, the description of the set $G(H)$ is equivalent to a complete description of the set of all extreme SGMs, and hence we are only interested in SGMs on the Cayley tree.

• Any SGM corresponds to a solution of Eq. (3) (see below). Thus, our main task reduces to solving functional equation (3).

It is known [13] that each Gibbs measure for the HC-model on the Cayley tree can be associated with a collection of values $z = \{z_x,\ x \in V\}$ satisfying
$$z_x = \prod_{y \in S(x)} (1 + \lambda z_y)^{-1}, \qquad (3)$$
where $\lambda = e^{-J\beta} > 0$ is a parameter, $\beta = \frac{1}{T}$, and $T > 0$ is the temperature.

Let $G_k$ be the free product of $k+1$ cyclic groups $\{e, a_i\}$ of order two with the respective generators $a_1, a_2, \ldots, a_{k+1}$, $a_i^2 = e$. There is a one-to-one correspondence between the set of vertices $V$ of the Cayley tree of order $k$ and the group $G_k$ (see [5, 6, 29]). Let $\widehat{G}_k$ be a normal divisor of finite index $r \geq 1$ and let $G_k / \widehat{G}_k = \{H_1, \ldots, H_r\}$ be the quotient group.

Definition 3. A collection of quantities $z = \{z_x,\ x \in G_k\}$ is said to be $\widehat{G}_k$-periodic if $z_{yx} = z_x$ for all $x \in G_k$ and $y \in \widehat{G}_k$. The $G_k$-periodic collections are called translation-invariant.

For any $x \in G_k$, the set $\{y \in G_k : \langle x, y \rangle\} \setminus S(x)$ contains a unique element denoted by $x_{\downarrow}$ (see [9, 10]).
A collection of quantities z = {z x , x ∈ G k } is called G k -weakly periodic if z x = z ij for any x ∈ H i , x ↓ ∈ H j for any x ∈ G k . Definition 5. A measure µ is called G k -(weakly) periodic if it corresponds to a G k -(weakly) periodic collection of quantities z. History of the study of SGMs for the HC -model. We present a brief overview of the work related to the Potts model on the Cayley tree. In [12] A. Mazel and Yu. Suhov introduced and studied the HC -model on the ddimensional lattice Z d . Studying Gibbs measures for the two state HC -model on the Cayley tree was the topic in [13]- [23]. In [13], the uniqueness of the translation-invariant Gibbs measure and the nonuniqueness of periodic Gibbs measures for the HC -model were proved. For the parameters of the HC -model, a sufficient condition was also found in [13] under which the translation-invariant Gibbs measure is nonextreme. In the case where the translation-invariant Gibbs measure is extreme, a sufficient condition was found in [14]. The range of the extremes of this measure was extended in [15]. Weakly periodic Gibbs measures for the HC -model in the case of a normal divisor of index 2 were studied in [16] and a complete description of the weakly periodic Gibbs measures was given. Weakly periodic Gibbs measures for the HC -model in the case of a normal divisor of index 4 were studied in [17]- [22]. In this case conditions for the existence of weakly periodic (nonperiodic) Gibbs measures are found. We also found conditions for the translationinvariance of the weakly periodic Gibbs measures (see Chap. 7 in [4] for other HC model properties and their generalizations on a Cayley tree). In this paper, we study a two-state HC -model on a Cayley tree. The concept of an alternative Gibbs measure is introduced. Translational invariance conditions for alternative Gibbs measures are found. In addition, the existence of alternative Gibbs measures that are not translation invariant is proved. A new class of Gibbs measures We consider the half-tree. Namely the root x 0 has k nearest neighbors. We construct below new solutions of the functional equation (3). Consider the following matrix where 0 ≤ m ≤ k and 0 ≤ r ≤ k are non-negative integers. This matrix defines the number of times the values h and l occur in the set S(x) for each z x ∈ {h, l}. More precisely, the boundary condition z = {z x , x ∈ G k } with fields taking values h, l defined by the following steps: • if at vertex x we have z x = h, then the function z y , which gives real values to each vertex y ∈ S(x) by the following rule h = 1 (1+λh) m · 1 (1+λl) k−m , l = 1 (1+λl) r · 1 (1+λh) k−r ,(4) where l > 0, h > 0, λ > 0. As was mentioned above, for any boundary condition satisfying the functional equation (3) there exists a unique Gibbs measure. A measure constructed in this way and which is not translation-invariant is called alternative Gibbs measure (AGM) and denoted as µ m,r . Remark 1. Note that the solution l = h in (4) corresponds to the only TIGM for the HC -model (see [13]). Therefore, we are interested in solutions of the form l = h. Remark 2. From (4) for m = r = 0 we obtain a system of equations whose solutions correspond to the G (2) k -periodic Gibbs measures for the HC -model. The following theorem holds. Theorem 1. Let k ≥ 2. If m + r ≥ k − 1 then for the HC -model there is a unique AGM, which coincides with the unique TIGM. Proof. For convenience, we denote h = x and l = y. 
Then (4) can be rewritten as follows: x = 1 (1+λx) m · 1 (1+λy) k−m , y = 1 (1+λy) r · 1 (1+λx) k−r .(5) If the first equation (5) is divided by the second, then x y = 1 + λx 1 + λy k−m−r(6) We denote m + r − k = t, t ≥ −1. Then by (6) we have x (1 + λx) t = y (1 + λy) t It is easy to check that the function f (x) = x (1 + λx) t is increasing for t ≥ −1. Therefore, if m + r ≥ k − 1, then the system of equations (5) has only a solution of the form x = y, and this solution corresponds to the TIGM and is known to be unique. The theorem is proved. By theorem 1 follows the next Consequence. Let k ≥ 2. If there are AGMs (non TI) for the HC -model, then m + r ≤ k − 2. Let k − m − r = n (n ∈ N, n ≥ 2). Then from the (6) we get x (1 + λy) n = y (1 + λx) n . From this equation after simple algebra, we obtain the equation (y−x) −1+C 2 n λ 2 xy+C 3 n λ 3 xy(x+y)+. . .+C n n λ n xy(x n−2 +x n−3 y+x n−4 y 2 +. . .+y n−2 ) = 0. Hence x = y or g(x, y) = 0, where g(x, y) = C 2 n λ 2 xy + C 3 n λ 3 xy(x + y) + . . . + C n n λ n xy(x n−2 + x n−3 y + x n−4 y 2 + . . . + y n−2 ) − 1. In the case x = y the corresponding measure is TIGM. The case x = y. We consider the equation g(x, y) = 0 with respect to the variable x (or y). Then it's clear that g(0, y) = −1 and g(x, y) → +∞ for x → +∞. Then the equation g(x, y) = 0 for variable x has at least one positive root. On the other hand, by Descartes' theorem, the equation g(x, y) = 0 for variable x has at most one positive root. Hence the equation g(x, y) = 0 for variable x has exactly one positive root, i.e., there exists a solution (x, y) of the system of equations (5), different from (x, x). Thus, the following statement is true. Statement 1. Let k ≥ 2. If m + r ≤ k − 2 then for the HC -model there exists AGM (not TI). In particular, if λ 2 xy = 1 for m + r = k − 2 (n = 2) or if λ 2 xy(3 + λ(x + y)) = 1 for m + r = k − 3 (n = 3), then in both cases there exists AGM (not TI). The case x = y. We check the multiplicity of the root x = y. In this case from g(x, x) = 0, we have C 2 n λ 2 x 2 + 2C 3 n λ 3 x 3 + . . . + (n − 1)C n n λ n x n − 1 = 0 (7) and the equation (7) also has exactly one positive root, i.e., (x, x) is a multiple root for the system of equations (5). This means that, AGM coincide with TIGM. In particular, if λx = 1 for m+r = k−2 (n = 2) or if 3λ 2 x 2 +2λ 3 x 3 = 1 for m+r = k−3 (n = 3), then in both cases there is no AGM (not TI). Let x = f (y), y = f (x),(8) where f (x) = 1 (1+λx) k . The next lemma is obvious. Lemma 1. If (x 0 , y 0 ) is a solution to the system of equations (8), then (y 0 , x 0 ) is also a solution to the system of equations (8). Remark 3. If the solution (x, y) of the system of equations (8) corresponds to alternative Gibbs measure denoted by µ, then the solution (y, x) corresponds to alternative Gibbs measure denoted by µ . Alternative Gibbs measures in the case m + r ≤ k − 2. In this section, we consider the cases k = 2, k = 3 and k = 4. In the case k = 2 we have only the case m = 0 and r = 0. In the case k = 3 we have m = 0 and r = 0; m = 0 and r = 1; m = 1 and r = 0. In the case k = 4 we have m = 0 and r = 0; m = 0 and r = 1; m = 1 and r = 0; m = 1 and r = 1; m = 0 and r = 2; m = 2 and r = 0. In all cases, by Remark 2, we will not consider the case m = 0 and r = 0. corresponds to the translation-invariant Gibbs measure and solutions (x 1 , y 1 ), (x 2 , y 2 ) in Statement 1 ((x * , y * ), (y * , x * ) in Statement 2) correspond to two-periodic Gibbs measures (see [23]). The case k = 3, m = 1 and r = 0. 
For m = 1 and r = 0 (resp. m = 0 and r = 1) the system of equations (5) can be rewritten x = 1 1+λx · 1 (1+λy) 2 , y = 1 (1+λx) 3 .(9) From the system of equations (9) due to (6) we obtain (x − y) λ 2 xy − 1 = 0. Hence, x = y or λ 2 xy = 1. The case x = y has already been considered. Let λ 2 xy = 1. Then λx = 1 λy for x = y. From here and from (9) after some algebras we can get: (1 + λx) 3 − λ 2 x = 0, (1 + λy) 3 − λ 3 y 2 = 0.(10) From λ 2 xy = 1 we find y and substitute into the second equation of the system (10). Then (1 + λx) 3 − λ 2 x = 0, (1+λx) 3 −λ 2 x λ 3 x 3 = 0. We introduce the notation f (x) = (1+λx) 3 −λ 2 x. Then the roots of the equation f (x) = 0 are also roots of the system (9). Using the Cardano formulas, we find the positive solution of the last equation λ 3 x 3 + 3λ 2 x 2 + 3 − λ 2 x + 1 = 0. Let x = q − 1 λ , then f (q) = λ 3 q 3 − λ 2 q + λ, D = 1 λ 4 1 4 − 1 27λ . If D > 0, i.e., λ < 27 4 then by Cardano's formula the equation f (q) = 0 has one negative root. If D = 0, i.e., λ = 27 4 then the equation f (q) = 0 has one multiple positive root of the form q = 2 9 , i.e. x = 2 27 , y = 8 27 . By Cardano's formula, the equation f (q) = 0 has three real roots if D < 0. Hence, f (x) = 0 has three real roots if λ > 27 4 . Let these solutions be x 1 , x 2 , x 3 . By the Vieta's formulas we have x 1 + x 2 + x 3 = − 3 λ , x 1 x 2 + x 1 x 3 + x 2 x 3 = 3 − λ 2 λ 3 < 0, x 1 x 2 x 3 = − 1 λ 3 . From equality x 1 x 2 + x 1 x 3 + x 2 x 3 = 3−λ 2 λ 3 we obtain that at least one and at most two roots of the equation are positive. From the equality x 1 x 2 x 3 = −1 it follows that exactly two roots are positive. Hence, f (x) = 0 has two positive roots if λ > 27 4 . These roots have the following form x 1 = 3 √ t 2 − 6 3 √ t + 12λ 6λ 3 √ t , x 2 = 6 3 √ p λ( 3 p 2 + (2λ − 6) 3 √ p + 4λ 2 − 24λ) . Here t = −108λ + 12λ √ −12λ + 81, p = 108λ + 8λ 3 − 72λ 2 + 12λ √ −12λ + 81. From the equality λ 2 xy = 1 we find y 1 and y 2 corresponding to x 1 and x 2 : y 1 = 6 3 √ t λ( 3 √ t 2 − 6 3 √ t + 12λ) , y 2 = p 2 + (2λ − 6) 3 √ p + 4λ 2 − 24λ 6 3 √ pλ . Thus, the following statement is true. Statement 2. Let k = 3 and λ cr = 27 4 . Then the system of equations (9): 1. for 0 < λ < λ cr has a unique solution (x, x); 2. for λ = λ cr has two solutions (x, x), ( 2 27 , 8 27 ); 3. for λ > λ cr has three solutions (x, x), (x 1 , y 1 ), (x 2 , y 2 ). Theorem 3. Let k = 3 and r + m ≤ 1, i.e., m = 1 and r = 0 or m = 0 and r = 1. Then for the HC-model there exists λ cr = 27 4 such that for 0 < λ < λ cr there is a unique AGM which coincides with the only TIGM µ 0 , for λ = λ cr there are exactly two AGMs µ 0 and µ , where µ is AGM (not TI) and for λ > λ cr there are exactly three AGMs µ 0 , µ 1 and µ 2 , where µ 1 and µ 2 are AGMs (not TI). The case k = 4, m = 1 and r = 0 (m = 0 and r = 1). In this case from (5) we get x = 1 1+λx · 1 (1+λy) 3 , y = 1 (1+λx) 4 .(11) From the system of equations (11) due to (6) we can get (y − x) λ 2 xy(3 + λ(x + y)) − 1 = 0. Hence x = y or λ 2 xy(3 + λ(x + y)) = 1. It is clear that in the case x = y we obtain a solution corresponding to the TIGM. Suppose x = y and λ 2 xy(3 + λ(x + y)) = 1. Then, substituting the expression for y from the second equation of the system (11) into the last equality, we obtain the equation f (x, λ) = λ 8 x 8 + 8λ 7 x 7 − λ 7 x 6 + 28λ 6 x 6 − 7λ 6 x 5 + 56λ 5 x 5 − 18λ 5 x 4 + 70λ 4 x 4 − 22λ 4 x 3 + +56λ 3 x 3 − 13λ 3 x 2 − λ 3 x + 28λ 2 x 2 − 3λ 2 x + 8λx + 1 = 0. 
Denoting λx = u, u > 0 we then have the equation f (u) = u 8 +8u 7 +(28−λ)u 6 +(56−7λ)u 5 +(70−18λ)u 4 +(56−22λ)u 3 +(28−13λ)u 2 −(λ 2 +3λ−8)u+1 = 0, which has a solution u = u(λ). But we regard this as an equation for λ and obtain solutions λ = λ(u): λ 1 (u) = (u + 1) 4 2u · √ u 4 + 6u 3 + 9u 2 + 4u − u 2 − 3u , λ 2 (u) = − (u + 1) 4 2u · √ u 4 + 6u 3 + 9u 2 + 4u + u 2 + 3u . Therefore, because λ 2 < 0 for u > 0, we have λ − λ 1 = 0 ⇒ λ = (u + 1) 4 2u · √ u 4 + 6u 3 + 9u 2 + 4u − u 2 − 3u = ψ(u). Analysis of the function ψ(u) shows that ψ(u) > 0. In addition, ψ(u) → +∞ as u → 0 and as u → +∞, and each value of λ therefore corresponds to at least two values of u for λ > ψ(u * ) but to one value at λ = ψ(u * ), and the equation λ = ψ(u) has no solutions for λ < ψ(u * ), where u * a solution of the equation ψ (u) = 0 (see Fig.2). We calculate the derivative ψ (u) = (u + 1) 3 − (5u 2 + 13u) √ u 2 + 4u + 5u 3 + 23u 2 + 16u − 2 2 √ u 2 + 4u . It is clear that if 5u 3 + 23u 2 + 16u − 2 < 0 then ψ (u) < 0 and the equation ψ (u) = 0 has no solutions. So it must be 5u 3 + 23u 2 + 16u − 2 > 0. 5u 3 + 23u 2 + 16u − 2 = 5(u + 1) u + 9 − √ 91 5 u + 9 + √ 91 5 ⇒ u > √ 91 − 9 5 . We solve the equation ψ (u) = 0 for u > 0: −(5u 2 + 13u) √ u 2 + 4u + 5u 3 + 23u 2 + 16u − 2 = 0 ⇒ 10u 3 + 41u 2 − 16u + 1 = 0. We solve the last equation by the Cardano method: we get the solution u * = u 1 . We set u 1 = √2161λ cr = ψ(u * ) ≈ 2.31. We note that if ψ (u) > 0, then each value of λ corresponds to only two values of u for λ > λ cr . We therefore prove that ψ (u) > 0. Indeed, Here h(u) = 5u 8 +56u 7 +234u 6 +463u 5 +460u 4 +210u 3 +26u 2 −u+3−(5u 2 +11u) (u 4 + 6u 3 + 9u 2 + 4u) 3 . ψ (u) = 2h(u) u(u + 1) (u 2 + 4u) 3 . From the inequality h(u) > 0 for u > 0 we obtain 5u 8 +56u 7 +234u 6 +463u 5 +460u 4 +210u 3 +26u 2 −u+3 2 − 5u 2 +11u 2 u 4 +6u 3 +9u 2 +4u 3 = = 10u 13 + 182u 12 + 1372u 11 + 5505u 10 + 12786u 9 + 17913u 8 + 15564u 7 + +9186u 6 + 5034u 5 + 3016u 4 + 1208u 3 + 156u 2 + (u − 3) 2 > 0. Thus, each value of λ corresponds to only two values of u for λ > λ cr . This can also be seen by computer analysis, i.e., computer analysis shows that the equation f (x, λ) = 0 for λ < λ cr has no positive solution, at λ = λ cr has one positive solution and for λ > λ cr there are exactly two positive solutions (see Fig. 3). Thus, the following statement is true. Statement 3. Let k = 4 and λ cr ≈ 2.31. Then the system of equations (11): 1. for 0 < λ < λ cr has a unique solution (x, x); 2. for λ = λ cr has two solutions (x, x), (x , y ); 3. for λ > λ cr has three solutions (x, x), (x 1 , y 1 ), (x 2 , y 2 ). Remark 5. The measures corresponding to the solution in the Statement 3 for x = y are AGMs (not periodic) and they different from previous AGMs. The case k = 4, m = 1 and r = 1. In this case from the system of equations (5) we obtain x = 1 1+λx · 1 (1+λy) 3 , y = 1 1+λy · 1 (1+λx) 3 .(12) From (12) due to (6) we can get (x − y) λ 2 xy − 1 = 0. Hence x = y or λ 2 xy = 1. The case x = y corresponds to the only TIGM. Let x = y and λ 2 xy = 1, i.e., λx = 1 λy . After some algebras the system of equations (12) has the form (1 + λx) 4 − λ 3 x 2 = 0, (1 + λy) 4 − λ 3 y 2 = 0. (13) Obviously, that the roots of the equation f (x) = (1 + λx) 4 − λ 3 x 2 = 0 are also roots of (12). The solutions of the equations f (x) = 0 and f (y) = 0 have the form x 1,2 = √ λ − 2 ± λ − 4 √ λ 2λ , y 1,2 = √ λ − 2 ± λ − 4 √ λ 2λ . It is easy to see that x 1,2 > 0 (y 1,2 > 0) for λ ≥ 16, and they take complex values for λ < 16. 
Moreover, x 1 = x 2 (y 1 = y 2 ) for λ = 16 and it coincides with the only translation-invariant solution of (12). By virtue of the equation λ 2 xy = 1 and Lemma 1, we obtain that in the case x = y the system of equations (12) has solutions of the form (x, y) and (y, x) for λ > λ cr = 16, where x = x 1 = √ λ − 2 + λ − 4 √ λ 2λ , y = y 2 = √ λ − 2 − λ − 4 √ λ 2λ , y = x 2 = √ λ − 2 − λ − 4 √ λ 2λ , x = y 1 = √ λ − 2 + λ − 4 √ λ 2λ . Thus, the following statement holds Statement 4. Let k = 4 and λ cr = 16. Then the system of equations (12): 1. for 0 < λ ≤ λ cr has a unique solution (x, x); 2. for λ > λ cr has three solutions (x, x), (x, y), (y, x). Remark 6. The measure corresponding to the solution (x, y), (y, x) in the Statement 4 are AGMs (not periodic) and they different from previous AGMs. The case k = 4, m = 2 and r = 0 (m = 0 and r = 2). In this case from (5) we obtain x = 1 (1+λx) 2 · 1 (1+λy) 2 , y = 1 (1+λx) 4 .(14) Using (6) from (14) we can get (x − y) λ 2 xy − 1 = 0. Hence, x = y or λ 2 xy = 1. The case x = y corresponds to the only TIGM. We consider the case x = y and λ 2 xy = 1 λx = 1 λy . After some algebras (14) has the form (1 + λx) 4 − λ 2 x = 0, (1 + λy) 4 − λ 4 y 3 = 0. From the equation λ 2 xy = 1 we find y and substitute it for the second equation (15). Then (1 + λx) 4 − λ 2 x = 0, (1+λx) 4 −λ 2 x λ 4 x 4 = 0. Let's rewrite the equation f (x) = (1 + λx) 4 − λ 2 x = 0 as λ 4 x 4 + 4λ 3 x 3 + 6λ 2 x 2 + λ(4 − λ)x + 1 = 0. We solve the last equation by the Ferrari method from linear algebra. We introduce the notation x = t − 1 λ . Then f t − 1 λ = λ 4 t 4 − λ 2 t + λ = (λ 2 t 2 + p) 2 − 2λ 2 p t + 1 4p 2 = = λ 2 t 2 + p − λ 2p t + 1 4p λ 2 t 2 + p + λ 2p t + 1 4p = 0, where p = 3 108λ 2 + 12 √ 81λ 4 − 768λ 3 12 + 4λ 3 108λ 2 + 12 √ 81λ 4 − 768λ 3 . Solutions have the following form t 1,2 = 2p 3 ± 2p 3 λ − 2p 3 2λp , t 3,4 = − 2p 3 ± − 2p 3 λ − 2p 3 2λp . By virtue x = t − 1 λ , for solutions we obtain x 1,2 = 2p 3 ± 2p 4 λ − 2p 3 − 2p 2λp , x 3,4 = − 2p 3 ± − 2p 3 λ − 2p 3 − 2p 2λp . Computer analysis shows that x 1,2 > 0 for λ > λ cr ≈ 9.48, and values x 3,4 are negative or take on complex values for λ > 0 (see Fig. 4). Values y 1 and y 2 corresponding to values x 1 and x 2 have the form: y 1,2 = 2p λ 2p 3 ± 2p 3 λ − 2p 3 − 2p . For λ = λ cr ≈ 9.4815 the system of equations (15) has solutions of the form x = 2p 3 − 2p 2λp , y = 2p λ 2p 3 − 2p . Thus, the following statement is true. Statement 5. Let k = 4 and λ cr ≈ 9.48. Then the system of equations (15): 1. for 0 < λ < λ cr has a unique solution (x, x); 2. for λ = λ cr has two solutions (x, x), (x , y ); 3. for λ > λ cr has three solutions (x, x), (x 1 , y 1 ), (x 2 , y 2 ). By using all propositions, we get the following theorem. Theorem 4. Let k = 4 and r + m ≤ 2. For the HC-model the following statements are true: 1. If m = 1 and r = 0 or m = 0 and r = 1, then there exists λ cr ≈ 2.31 such that for 0 < λ < λ cr there is a unique AGM which coincides with the only TIGM µ 0 , for λ = λ cr there are exactly two AGMs µ 0 and µ , where µ is AGM (not TI) and for λ > λ cr there are exactly three AGMs µ 0 , µ 1 and µ 2 , where µ 1 and µ 2 are AGMs (not TI). 2. If m = 1 and r = 1 then there exists λ cr = 16 such that for 0 < λ ≤ λ cr there is a unique AGM which coincides with the only TIGM µ 0 , for λ > λ cr there are exactly three AGMs µ 0 , µ 1 and µ 2 , where µ 1 and µ 2 are AGMs (not TI). 3. 
If m = 2 and r = 0 or m = 0 and r = 2, then there exists λ cr ≈ 9.48 such that for 0 < λ < λ cr there is a unique AGM which coincides with the only TIGM µ 0 , for λ = λ cr there are exactly two AGMs µ 0 and µ , where µ is AGM (not TI) and for λ > λ cr there are exactly three AGMs µ 0 , µ 1 and µ 2 , where µ 1 and µ 2 are AGMs (not TI). The case m + r ≤ k − 2 (m = r) The following lemma is known. Lemma 2. [20] Let f : [0, 1] → [0, 1] be a continuous function with a fixed point ξ ∈ (0, 1). We assume that f is differentiable at ξ and f (ξ) < −1. Then there exist points x 0 and x 1 , 0 ≤ x 0 < ξ < x 1 ≤ 1, such that f (x 0 ) = x 1 and f (x 1 ) = x 0 . For m = r by (5) we obtain    x = 1 (1+λx) m · 1 (1+λy) k−m ; y = 1 (1+λy) m · 1 (1+λx) k−m .(16) Here x, y ∈ (0; 1). After some transformations from (16) we obtain the following system of equations:    y = f (x); x = f (y),(17) where f (x) = 1 λ · 1 x(1 + λx) m 1 k−m − 1 λ . From (17) we get the equation f (f (x)) = x. First, we consider the equation f (x) = x. The function f (x) is differentiable and decreasing for 0 < x < 1: f (x) = − 1 + λ(m + 1)x λ(k − m)x k−m+1 k−m (1 + λx) k k−m < 0. We rewrite the equation f (x) = x: x = 1 λ · 1 x(1 + λx) m 1 k−m − 1 λ ⇒ (1 + λx) k = 1 x . It is known from [13] that the last equation has a unique solutionx, i.e., the equation f (x) = x has a unique solutionx. We solve the inequality f (x) < −1: − 1 + λ(m + 1)x λ(k − m)x k−m+1 k−m (1 + λx) k k−m < −1 ⇒ 1 + (m + 1)λx λ(k − m)x > 1 ⇒x < 1 λ(k − 2m − 1) . Then from f (x) =x we get 1 + 1 k − 2m − 1 k < λ(k − 2m − 1) ⇒ λ > λ cr = k − 2m k − 2m − 1 k · 1 k − 2m − 1 . Hence, by Lemma 1 and Lemma 2 the system of equations (16) for λ > λ cr has at least three positive solutions (x, y), (x,x), (y, x), where x = y. Thus, the following theorem is true. Theorem 5. Let k ≥ 2, m + r ≤ k − 2 (m = r) and λ cr = k−2m k−2m−1 k · 1 k−2m−1 . Then for the HC-model for λ > λ cr there are at least three Gibbs measures one of which is TI and the other are AGMs (not TI). 3.1. The case m + r = k − 2, k ≥ 2. In the case m + r = k − 2 (n = 2), the system of equations (5) has the form: x = 1 (1+λx) m · 1 (1+λy) k−m ; y = 1 (1+λy) k−m−2 · 1 (1+λx) m+2 .(18) From the system of equations (18) due to (6) we can get (x − y) λ 2 xy − 1 = 0. Hence, x = y or λ 2 xy = 1. The case x = y has already been considered. Let λ 2 xy = 1. Then λx = 1 λy for x = y. By virtue (18), after some algebras, we can obtain the system of equations    x = (λx) k−m (1+λx) k , y = (λy) m+2 (1+λy) k , which is equivalent to the system of equations: (1 + λx) k − λ k−m x k−m−1 = 0, (1 + λy) k − λ m+2 y m+1 = 0.(19) From the equation λ 2 xy = 1 we find y and substitute it for the second equation of the system equations (19). Then    (1 + λx) k − λ k−m x k−m−1 = 0, (1+λx) k −λ k−m x k−m−1 λ k x k = 0. We consider the function f (x) = (1 + λx) k − λ k−m x k−m−1 . Obviously, the roots of the equation f (x) = 0 are also roots of (19). Let's rewrite f (x) as a polynomial: f (x) = λ k x k + C 1 k λ k−1 x k−1 + · · · + C m+1 k λ k−m−1 x k−m−1 + · · · + C k−1 k λx + 1 − λ k−m x k−m−1 or f (x) = λ k x k + C 1 k λ k−1 x k−1 + · · · + (C m+1 k − λ)λ k−m−1 x k−m−1 + · · · + C k−1 k λx + 1. If λ < C m+1 On the other hand, it is easy to see that 0 < x < 1, f (0) = 1 and f (1) = (1 + λ) k − λ k−m > 0. Moreover, f 1 λ = 2 k − λ < 0, if λ > 2 k . 
It follows from the above that there exists λ cr : C m+1 k < λ cr ≤ 2 k such that for λ > λ cr the equation f (x) = 0 has two positive solutions, for λ = λ cr has twice multiplicity positive solution and λ < λ cr has no positive solution. When m = r, the system of equations (18) can be written as x = 1 (1+λx) m · 1 (1+λy) k−m ; y = 1 (1+λy) m · 1 (1+λx) k−m .(20) It follows from the Lemma 1 that if the number of solutions of the equation x = f (x) is odd or even, then the number of solutions of x = f (f (x)) is also respectively odd or even. As a result, when m = r and λ = λ cr , the number of solutions of the system of equations (20) cannot be even. Because the TI solution was unique. It follows that the solution of the system of equations (20) corresponding to λ = λ cr coincides with the translationinvariant solution. Thus, we have proved the following theorem. Theorem 6. Let k ≥ 2 and r + m = k − 2. Then for the HC-model there exists λ cr such that next statements are true: 1. for 0 < λ < λ cr there is a unique AGM and it coincides with the only TIGM µ 0 ; 2. if m = r and λ = λ cr then there is a unique AGM and it coincides with the only TIGM µ 0 ; 3. if m = r and λ = λ cr there is at least one AGM; 4. for λ > λ cr there are exactly three Gibbs measures µ 0 , µ 1 and µ 2 , where µ 1 and µ 2 are AGMs (not TI). Relation of the Alternative Gibbs measures to known ones Translation invariant measures. (see [13]) Such measures correspond to z x ≡ z, i.e. constant functions. These measures are particular cases of our measures mentioned which can be obtained for m = k, i.e. k − m = 0. In this case the condition (3) reads z = 1 (1 + λz) k .(21) The equation (21) has a unique solution for all λ > 0. Bleher-Ganikhodjaev construction. Consider an infinite path π = {x 0 = x 0 < x 1 < ...} on the half Cayley tree (the notation x < y meaning that paths from the root to y go through x). Associate to this path a collection z π of numbers given by the condition z π x =        l if x ≺ x n , x ∈ W n , h, if x n ≺ x, x ∈ W n , h, if x = x n . n = 1, 2, ... where x ≺ x n (resp. x n ≺ x) means that x is on the left (resp. right) from the path π and z xn ∈ {h, l} are arbitrary numbers. For any infinite path π, the collection of numbers z π satisfying relations (3) exists and is unique (see Fig. 5). Periodic Gibbs measures. (see [13]) Let G k be a free product of k + 1 cyclic groups of the second order with generators a 1 , a 2 , ..., a k+1 , respectively. It is known that there exists an one-to-one correspondence between the set of vertices V of the Cayley tree k and the group G k . Definition 6. Let G be a normal subgroup of the group G k . The set Let G z = {z x , x ∈ G k } is said to be G -periodic if z yx = z x for ∀x ∈ G k , y ∈ G k . (2) k = {x ∈ G k : the length of word x is even}. Note that G (2) k is the set of even vertices (i.e. with even distance to the root). Consider the boundary condition h and l: z x =    h if x ∈ G (2) k , l if x ∈ G k \ G (2) k . and denote by µ 1 , µ 2 the corresponding Gibbs measures. The G-periodic solutions of equation (3) are either translation-invariant (G k -periodic) or G (2) k -periodic, they are solutions to    h = 1 (1+λl) k , l = 1 (1+λh) k . We note that these measures are particular cases of measures of µ h,l which can be obtained for m = r = 0 (See figure 6, for k = 4). Weakly periodic Gibbs measures. Following [17], [18], [22] recall the notion of weakly periodic Gibbs measures. 
Let G k / G k = {H 1 , ..., H r } be a factor group, where G k is a normal subgroup of index r > 1. Definition 7. A set z = {z x , x ∈ G k } is called G k -weakly periodic, if z x = z ij , for any x ∈ H i , x ↓ ∈ H j , where x ↓ denotes the ancestor of x. We recall results known for the cases of index two. Note that any such subgroup has the form H A = x ∈ G k : i∈A w x (a i ) is even where ∅ = A ⊆ N k = {1, 2, . .., k + 1}, and w x (a i ) is the number of a i in a word x ∈ G k . We consider A = N k : when A = N k weak periodicity coincides with standard periodicity. Let G k /H A = {H 0 , H 1 } be the factor group, where H 0 = H A , H 1 = G k \ H A . Then, in view of (3), the H A -weakly periodic b.c. has the form z x =              z 1 , x ∈ H A , x ↓ ∈ H A , z 2 , x ∈ H A , x ↓ ∈ G k \ H A , z 3 , x ∈ G k \ H A , x ↓ ∈ H A , z 4 , x ∈ G k \ H A , x ↓ ∈ G k \ H A . where the h i satisfy the following equations: z 1 = 1 1 + λz 3 i 1 1 + λz 1 k−i , z 2 = 1 1 + λz 3 i−1 1 1 + λz 1 k−i+1 , z 3 = 1 1 + λz 2 i−1 1 1 + λz 4 k−i+1 , z 4 = 1 1 + λz 2 i 1 1 + λz 4 k−i .(22) It is obvious that the following sets are invariant with respect to the operator W : R 4 → R 4 defined by RHS of (22): I 1 = z ∈ R 4 : z 1 = z 2 = z 3 = z 4 , I 2 = z ∈ R 4 : z 1 = z 4 ; z 2 = z 3 It is obvious to see that • measures corresponding to solutions on I 1 are translation invariant • measures corresponding to solutions on I 2 are weakly periodic, which coincide with the measures given for m = k − i, k − m = i, r = i − 1, k − r = k − i + 1. Free energy In this section, we consider free energy of HC -model Gibbs measure. In fact, Gibbs measures give the probability of the system X being in state x ∈ X (equivalently, of the random variable X having value x) as µ(X = x) = 1 Z(β) exp(−βH(x)), where H(x) is a function from the space of states to the real numbers. The parameter β is (a free parameter) the inverse temperature. The normalizing constant Z(β) is the partition function. Consider an infinite graph G, and let Λ ⊂ G be finite subset. It is convinient to work with reduced free energy f = −βF , which per unit volume is f (β, Λ) = 1 |Λ| ln Z(β, Λ), where Z(β, Λ) is the restiriction of the partition function Z(β) on the set Λ, by fixing the state of the system outside of Λ. Note that by Theorem 6, we can construct Aternating Gibbs measures and by using these measures we compute the free energy for such measures. From [30,31] it's known that the free energy of a compatible boundary condition (b.c.) is defined as the limit: F (h) = − lim n→∞ 1 β|V n | ln Z n(23) if it exists. Here | · | denotes the cardinality of a set and Z n is a partition function. We recall that in our case: Z n = σn∈Ω Vn λ # σn x∈Wn z σ(x),x .(24) We consider ALT Gibbs measures on the half tree and from above, the family of probability measures ce compatible iff z = {z x , x ∈ G k } satisfies the equality (3). Also, we considered a special class of z = {z x , x ∈ G k } such that: • if at vertex x we have z x = h, then the function z y , which gives real values to each vertex y ∈ S(x) by the following rule Denote α n = |{x ∈ W n : z x = h}|; β n = |{x ∈ W n : z x = l}|. Recall that W n is the sphere with the center x 0 and radius n on the half tree. Consequently, the following recurrence system holds    α n+1 = mα n + (k − r)β n β n+1 = (k − m)α n + rβ n . Denoting ϕ n = α n + β n , from (26) one gets ϕ n+1 = kϕ n ⇒ ϕ n = k n , n ∈ N. Since α n = k n − β n , we get k n+1 − mk n = (k − m − r)β n + β n+1 . 
Put β n = (k − m)k n 2k − m − r + k n φ n . Then the last equation can be written as (m + r − k)φ n = kφ n+1 . After short calculations, we obtain φ n = β 1 (2k − m − r) − k(k − m) k m + r − k k n−1 . Hence, β n = (k − m)k n 2k − m − r + (β 1 (2k − m − r) − k(k − m)) (m + r − k) n−1 . Thus β 1 = (k − m)k 2k − m − r + (β 1 (2k − m − r) − k(k − m)) ⇒ β 1 = (k − m)k 2k − m − r . Then β n = (k − m)k n 2k − m − r . Note that α n + β n = k n , then α n = k n (k − r) 2k − r − m + (k(k − m) − β 1 (2k − m − r)) (m + r − k) n−1 . Since β 1 = (k − m)k 2k − m − r , one gets α n = k n (k − r) 2k − r − m . Consequently, it is easy to check that lim n→∞ (k − 1)α n k n+1 − 1 = (k − 1)(k − r) k(2k − m − r) and lim n→∞ (k − 1)β n k n+1 − 1 = (k − 1)(k − m) k(2k − m − r) .(29) Then F ALT (h) = − 1 β ·   (k − 1)(k − r) ln h − (k 2 − (m + 1)k + m) ln l k(2k − m − r) + lim n→∞ (k − 1) ln |Vn| i=0 λ C i |Vn| k n+1 − 1   .(30) By AM-GM inequality |Vn| i=0 λ C i |Vn| ≥ |V n | · |Vn| λ 2 |Vn| = |V n | · λ 2 |Vn| ·|Vn| −1 . Since ln x is an increasing function ln   |Vn| i=0 λ C i |Vn|   ≥ ln |V n | + 2 |Vn| · |V n | −1 ln λ. Then lim n→∞ (k − 1) ln |Vn| i=0 λ C i |Vn| k n+1 − 1 ≥ lim n→∞ ln |V n | |V n | + lim n→∞ 2 |Vn| · |V n | −2 ln λ. If λ > 1 then lim n→∞ (k − 1) ln |Vn| i=0 λ C i |Vn| k n+1 − 1 = ∞.(31) By (29), (30) and (31) Hence from above results and by Theorem 5 and Theorem 6 we can conclude the following theorem. Theorem 7. a) Let k ≥ 2, m + r ≤ k − 2 (m = r) and λ • if λ (1) cr ∈ (1, ∞) then free energies F ALT equals −∞. b) Let k ≥ 2, r + m = k − 2 and C m+1 k λ cr ≤ 2 k . Then the following statements hold: • if m = r and λ = λ cr then free energies F ALT equals −∞. Also, if λ > λ cr then free energies F ALT equals −∞. Figure 1 . 1In this figure the values of function z x on the vertices of the Cayley tree of order 5 are shown. This is the case when m = 3 and r = 2. m vertices of S(x), l on k − m remaining vertices, • if at vertex x we have z x = l, then the function z y , which gives real values to each vertex y ∈ S(x) by the following rule    l on r vertices of S(x), h on k − r remaining vertices.For an example of such a function seeFig.1.Then the system (3) has the form 1 , y 1 ), (x 2 , y 2 ) are AGMs (not TI). Figure 2 . 2Graph of the function λ 1 (u) Figure 3 . 3Graph of the function f (x, 2) (dotted line), f (x, 2.3143) (continuous line) and f (x, 2.5) (dashed line). Figure 4 . 4a) Graph of the function x 1 (λ) at λ ∈ [9.4; 12], b) Graph of the function x 2 (λ) at λ ∈ [9.4; 20]. k , then f (x) = 0 has no positive solutions, if λ > C m+1 k , then number of sign changes of the first equation of the last equality is two. Due to the Descartes' theorem, the equation f (x) = 0 has at most two positive solutions. Figure 5 .Figure 6 . 56In this figure the values of function z x on the vertices of the Cayley tree of order 5 are shown. This is the case when m = 4 and r In this figure the values of function z x on the vertices of the Cayley tree of order 4 are shown. m vertices of S(x), l on k − m remaining vertices, • if at vertex x we have z x = l, then the function z y , which gives real values to each vertex y ∈ S(x) by the following rule    l on r vertices of S(x), h on k − r remaining vertices. F ALT (h) = − 1 β · (k − 1)(k − r) ln h − (k 2 − (m + 1)k + m) ln l k(2k − m − r). , 1] (resp. λ ∈ (1, +∞)) then free energiesF ALT of b.c (3) is equal to − 1 β · (k − 1)(k − r) ln h − (k 2 − (m + 1)k + m) ln l k(2k − m − r) (resp. − ∞). Gibbs Measures and Phase Transitions. 
H.-O Georgii, De Gruyter Stud. Math. 9Walter de GruyterH.-O.Georgii, Gibbs Measures and Phase Transitions, De Gruyter Stud. Math., Vol. 9, Walter de Gruyter, Berlin, 1988. Gibbs States on Countable Sets. C J Preston, Cambridge Tracts Math. 68Cambridge Univ. PressC. J. Preston, Gibbs States on Countable Sets, Cambridge Tracts Math. 68, Cambridge Univ. Press, Cambridge, 1974. . G Ya, Sinai, Theory of Phase Transitions: Rigorous Results. PergamonYa.G. Sinai, Theory of Phase Transitions: Rigorous Results (Pergamon, 1982). Gibbs measures on Cayley trees. U A Rozikov, World Sci. U.A. Rozikov, Gibbs measures on Cayley trees, World Sci., Singapore, 2013. Group representation and automorphisms of the Cayley tree. N N Ganikhodzhaev, Dokl. Akad. Nauk Resp. Uzbekistan. 4N.N. Ganikhodzhaev, Group representation and automorphisms of the Cayley tree, Dokl. Akad. Nauk Resp. Uzbekistan 4, pp. 3-5 (1994). Group representation of the Cayley forest and some of its applications. N N Ganikhodzhaev, U A Rozikov, Izv. Math. 67N.N. Ganikhodzhaev and U.A. Rozikov. Group representation of the Cayley forest and some of its applications, Izv. Math. 67, 17-27 (2003). On pure phases of the Ising model on the Bethe lattice. P M Bleher, N N Ganikhodjaev, Theor. Probab. Appl. 352P.M. Bleher and N.N. Ganikhodjaev, On pure phases of the Ising model on the Bethe lattice, Theor. Probab. Appl. 35, No. 2, 216-227 (1990). On the uniqueness of Gibbs measure in the Potts model on a Cayley tree with external field. L V Bogachev, U A Rozikov, J. Stat. Mech. Theory Exp. 767ppL.V. Bogachev and U.A.Rozikov. On the uniqueness of Gibbs measure in the Potts model on a Cayley tree with external field. J. Stat. Mech. Theory Exp. 2019, no. 7, 76 pp. Description of weakly periodic Gibbs measures for the Ising model on a Cayley tree. U A Rozikov, M M Rakhmatullaev, Theor. Math. Phys. 1562U.A. Rozikov and M.M. Rakhmatullaev , Description of weakly periodic Gibbs measures for the Ising model on a Cayley tree, Theor. Math. Phys. 156: 2 (2008), 1218-1227. Countable state space Markov random fields and Markov chains on trees. S Zachary, Ann. Probab. 11S. Zachary, Countable state space Markov random fields and Markov chains on trees, Ann. Probab. 11 (1983), 894-903. On the purity of the limiting Gibbs states for the Ising model on the Bethe lattice. P M Bleher, J Ruiz, V A Zagrebnov, J. Stat. Phys. 79P.M. Bleher, J. Ruiz and V.A. Zagrebnov, On the purity of the limiting Gibbs states for the Ising model on the Bethe lattice, J. Stat. Phys. 79: 1-2 (1995), 473-482. Random surfaces with two-sided constraints: an application of the theory of dominant ground states. A E Mazel, Yu M Suhov, J. Statist. Phys. 64A. E. Mazel, Yu. M. Suhov, Random surfaces with two-sided constraints: an application of the theory of dominant ground states, J. Statist. Phys. 64 (1991), 111-134. A hard-core model on a Cayley tree: an example of a loss network. Yu M Suhov, U A Rozikov, Queueing Systems. 46Yu.M. Suhov and U.A. Rozikov, A hard-core model on a Cayley tree: an example of a loss network, Queueing Systems 46 (2004), 197-212. J B Martin, Reconstruction thresholds on regular trees, Discrete random walks. Paris; NancyDiscrete Math. Theor. Comput. Sci. Proc.J.B. Martin, Reconstruction thresholds on regular trees, Discrete random walks (Paris, 2003), 191- 204 (electronic), Discrete Math. Theor. Comput. Sci. Proc., AC, Assoc. Discrete Math. Theor. Comput. Sci., Nancy, 2003. An extremality of the translation-invariant Gibbs measure for the HC-model on a Cayley tree. 
U A Rozikov, R M Khakimov, Bulletin of the Institute of Math. 2in RussianU.A. Rozikov and R.M. Khakimov, An extremality of the translation-invariant Gibbs measure for the HC-model on a Cayley tree [in Russian], Bulletin of the Institute of Math No 2, 2019, pp. 17-22. Uniqueness of Weakly Periodic Gibbs Measure for HC-Models. R M Khakimov, Math. Notes. 945R.M. Khakimov, Uniqueness of Weakly Periodic Gibbs Measure for HC-Models, Math. Notes 2013, Vol. 94, No. 5, pp. 834-838. Weakly periodic Gibbs measures in the HC-model for a normal divisor of index four. R M Khakimov, Ukrainian Math. J. 67R.M. Khakimov, Weakly periodic Gibbs measures in the HC-model for a normal divisor of index four, Ukrainian Math. J. 67, 1584-1598 (2016). Weakly periodic Gibbs measures for HC-models on Cayley trees. R M Khakimov, Siberian Math. J. 59R.M. Khakimov, Weakly periodic Gibbs measures for HC-models on Cayley trees, Siberian Math. J. 59, 147-156 (2018). Weakly periodic Gibbs measures for two and three state HC models on a Cayley tree. R M Khakimov, G T Madgoziyev, Uzb. Math. Jour. 3R.M. Khakimov and G.T. Madgoziyev, Weakly periodic Gibbs measures for two and three state HC models on a Cayley tree, Uzb. Math. Jour., 2018, No 3, p. 116-131. Quadratic transformations: a model for population growth. I. H Kesten, Adv. Appl. Probab. 2H. Kesten, Quadratic transformations: a model for population growth. I, Adv. Appl. Probab. 2 (1970), 1-82. Stochastic models of computer communication systems. With discussion. F P Kelly, J. Roy. Stat. Soc. Ser. B. 47F.P. Kelly, Stochastic models of computer communication systems. With discussion, J. Roy. Stat. Soc. Ser. B 47 (1985), 379-395; . Mr-0844469, MR-0844469. Uniquess and nonuniquess conditions for wealy periodic Gibbs measures for the Hard-Core model. R M Khakimov, M T Makhammadaliev, Theor. Math. Phys. 2042R.M. Khakimov and M.T. Makhammadaliev, Uniquess and nonuniquess conditions for wealy periodic Gibbs measures for the Hard-Core model, Theor. Math. Phys. 204:(2): 1059-1078 (2020). Gibbs Periodic Measures for a Two-State HC-Model on a Cayley Tree. U A Rozikov, R M Khakimov, M T Makhammadaliev, Contemporary Mathematics. Fundamental Directions 2022. 68in RussianU.A. Rozikov, R.M. Khakimov and M.T. Makhammadaliev, Gibbs Periodic Measures for a Two- State HC-Model on a Cayley Tree [in Russian], Contemporary Mathematics. Fundamental Directions 2022, Vol. 68, No. 1, 95-109 A three state hard-core model on a Cayley tree. J B Martin, U A Rozikov, Yu M Suhov, J. Nonlin. Math. Phys. 12J.B. Martin, U.A. Rozikov and Yu.M. Suhov, A three state hard-core model on a Cayley tree, J. Nonlin. Math. Phys. 12: 3 (2005), 432-448. Fertile HC models with three states on a Cayley tree. U A Rozikov, A Sh, Shoyusupov, Theor. Math. Phys. 156U.A. Rozikov and Sh.A. Shoyusupov, Fertile HC models with three states on a Cayley tree, Theor. Math. Phys. 156, 1319-1330 (2008). Translation invariant Gibbs measures for fertile three-state "hard core" models on a Cayley tree. R M Khakimov, Theor. Math. Phys. 183R.M. Khakimov, Translation invariant Gibbs measures for fertile three-state "hard core" models on a Cayley tree, Theor. Math. Phys. 183, 829-835 (2015). Gibbs measures for the fertile three-state hard core models on a Cayley tree. U A Rozikov, R M Khakimov, Queueing Systems V. 811U.A. Rozikov and R.M. Khakimov, Gibbs measures for the fertile three-state hard core models on a Cayley tree, Queueing Systems V.81, No.1, (2015), 49-69. 
On the three state Potts model with competing interactions on the Bethe lattice. N N Ganikhodjaev, F M Mukhamedov, J F Mendes, Jour. Stat. Mech. 29N.N. Ganikhodjaev, F.M. Mukhamedov and J.F. Mendes, On the three state Potts model with competing interactions on the Bethe lattice, Jour. Stat. Mech. (2006), 29p. Description of periodic extreme Gibbs measures of some lattice models on the Cayley tree. N N Ganikhodzhaev, U A Rozikov, Theor. Math. Phys. 111N.N. Ganikhodzhaev and U.A. Rozikov, Description of periodic extreme Gibbs measures of some lattice models on the Cayley tree, Theor. Math. Phys. 111, 480-486 (1997). New Phase Transitions of the Ising Model on Cayley Trees. D Gandolfo, F H Haydarov, U A Rozikov, J Ruiz, J. Stat. Phys. 153D. Gandolfo, F.H. Haydarov, U.A. Rozikov, and J. Ruiz, New Phase Transitions of the Ising Model on Cayley Trees. J. Stat. Phys. 153: 400-411 (2013). On free energies of the Ising model on the Cayley tree. D Gandolfo, M M Rakhmatullaev, U A Rozikov, J Ruiz, J. Stat. Phys. 1506D. Gandolfo, M.M. Rakhmatullaev, U.A. Rozikov, and J. Ruiz, On free energies of the Ising model on the Cayley tree. J. Stat. Phys. 150(6): 1201-1217 (2013). R M Khakimov, M T Makhammadaliev, V I , Romanovskiy Institute of Mathematics of the Academy of Sciences of. Uzbekistan, Tashkent, Uzbekistan; Namangan, UzbekistanNamangan State UniversityEmail address: [email protected], [email protected]. M. Khakimov, M. T. Makhammadaliev, V. I. Romanovskiy Institute of Mathemat- ics of the Academy of Sciences of Uzbekistan, Tashkent, Uzbekistan., Namangan State University, Namangan, Uzbekistan. Email address: [email protected], [email protected]
[]
[ "GSHOT: Few-shot Generative Modeling of Labeled Graphs", "GSHOT: Few-shot Generative Modeling of Labeled Graphs" ]
[ "Sahil Manchanda [email protected] \nDepartment of Computer Science and Engineering\nIndian Institute of Technology Delhi\n\n", "Shubham Gupta [email protected] \nDepartment of Computer Science and Engineering\nIndian Institute of Technology Delhi\n\n", "Sayan Ranu [email protected] \nDepartment of Computer Science and Engineering\nIndian Institute of Technology Delhi\n\n", "Srikanta Bedathur [email protected] \nDepartment of Computer Science and Engineering\nIndian Institute of Technology Delhi\n\n" ]
[ "Department of Computer Science and Engineering\nIndian Institute of Technology Delhi\n", "Department of Computer Science and Engineering\nIndian Institute of Technology Delhi\n", "Department of Computer Science and Engineering\nIndian Institute of Technology Delhi\n", "Department of Computer Science and Engineering\nIndian Institute of Technology Delhi\n" ]
[]
Deep graph generative modeling has gained enormous attraction in recent years due to its impressive ability to directly learn the underlying hidden graph distribution. Despite their initial success, these techniques, like much of the existing deep generative methods, require a large number of training samples to learn a good model. Unfortunately, large number of training samples may not always be available in scenarios such as drug discovery for rare diseases. At the same time, recent advances in few-shot learning have opened door to applications where available training data is limited. In this work, we introduce the hitherto unexplored paradigm of few-shot graph generative modeling. Towards this, we develop GSHOT, a meta-learning based framework for few-shot labeled graph generative modeling. GSHOT learns to transfer meta-knowledge from similar auxiliary graph datasets. Utilizing these prior experiences, GSHOT quickly adapts to an unseen graph dataset through self-paced fine-tuning. Through extensive experiments on datasets from diverse domains having limited training samples, we establish that GSHOT generates graphs of superior fidelity compared to existing baselines. * denotes equal contribution Preprint.
null
[ "https://export.arxiv.org/pdf/2306.03480v1.pdf" ]
259,089,065
2306.03480
c4741d961cde0d67d8028a2bf6b7a625880eda91
GSHOT: Few-shot Generative Modeling of Labeled Graphs Sahil Manchanda [email protected] Department of Computer Science and Engineering Indian Institute of Technology Delhi Shubham Gupta [email protected] Department of Computer Science and Engineering Indian Institute of Technology Delhi Sayan Ranu [email protected] Department of Computer Science and Engineering Indian Institute of Technology Delhi Srikanta Bedathur [email protected] Department of Computer Science and Engineering Indian Institute of Technology Delhi GSHOT: Few-shot Generative Modeling of Labeled Graphs Deep graph generative modeling has gained enormous attraction in recent years due to its impressive ability to directly learn the underlying hidden graph distribution. Despite their initial success, these techniques, like much of the existing deep generative methods, require a large number of training samples to learn a good model. Unfortunately, large number of training samples may not always be available in scenarios such as drug discovery for rare diseases. At the same time, recent advances in few-shot learning have opened door to applications where available training data is limited. In this work, we introduce the hitherto unexplored paradigm of few-shot graph generative modeling. Towards this, we develop GSHOT, a meta-learning based framework for few-shot labeled graph generative modeling. GSHOT learns to transfer meta-knowledge from similar auxiliary graph datasets. Utilizing these prior experiences, GSHOT quickly adapts to an unseen graph dataset through self-paced fine-tuning. Through extensive experiments on datasets from diverse domains having limited training samples, we establish that GSHOT generates graphs of superior fidelity compared to existing baselines. * denotes equal contribution Preprint. Introduction and Related Work Modeling and generating graphs have found applications in various domains such as drug design [32], molecular property discovery [21,8], model architectural search [30], data augmentation [36] and privacy-preserving applications [5]. Owing to its wide applications, the development of graph generative modeling has a rich history. Initial works on graph generative modeling relied on prior structural assumptions about graphs in order to model graphs from a pre-determined family such as those obeying small-world [9], Erdős-Rényi [26] and scale-free [1] properties. However, these approaches capture a limited structural properties of graphs making them impractical in many real-world settings. With recent advances in deep learning, there has been a surge in developing deep graph generative methods that directly learn the underlying hidden distribution of graphs from the data itself [32,11,8,3,1,22,28]. These techniques have shown significant improvement over the traditional methods for the graph generation task. Since many real-world graphs such as protein interaction networks [4] and drug molecules [23] are labeled and originate from diverse domains, our focus is on learning domain-agnostic, labeled graph generative [11,33] model which jointly models the relationships between a graph structure and its node/edge labels. A well-known fact about deep generative models is that they are not well suited for applications where training data is scarce [2]. In our study, we observe similar trends for graph deep generative modeling. In Fig. 
1a we study the impact of limiting the number of training samples available to GRAPHGEN [11], which is the state-of-the-art method for domain agnostic labeled graph generation. Table 1) for GRAPHGEN [11]. A higher MMD corresponds to poor fidelity. (b) Our proposed architecture. We observe that GRAPHGEN's performance deteriorates significantly 2 when the size of the training dataset is reduced. The lack of training graphs is often severe in many important settings such as effective drug discovery for rare diseases [29] or speedy drug discovery during pandemics such as COVID-19 [7]. Similar issue appears in physics while developing generative models for computationally expensive N-body simulations [19,25,34]. In this context, we observe that although the availability of graphs exhibiting a specific desired property may be limited, it may be possible to identify graph repositories exhibiting similar properties. To elaborate, we may not have access to a large set of molecules exhibiting activity against COVID-19. However, million-scale repositories of chemical compounds are widely available [17], from which the broad characteristics of chemical compounds such as valency rules, correlated functional groups, etc. may be learned. Hence, potentially, the learning task from the smaller COVID-19 repository could be focused only on features that are unique to this set. We exploit this intuition and make the following contributions: • Problem Formulation: We formulate the problem of few-shot, domain-agnostic, labeled graph generative modeling. To the best of our knowledge, we are the first to investigate this problem. • Algorithm: We propose GSHOT, a novel meta-learning framework for few-shot labeled graph generative modeling, which learns inductive biases on auxiliary graph datasets. Subsequently, using a self-paced fine-tuning approach, GSHOT adapts to unseen target graph dataset using a small number of training samples. • Empirical Evaluation: We perform extensive experiments across multiple real labeled graph datasets spanning a variety of domains such as chemical compounds, proteins, and physical interaction systems. We establish that GSHOT is effective in learning graph distributions with high fidelity even on datasets containing as few as 50 training samples, and significantly improves over baselines that learn from scratch. Problem Formulation 3 Definition 1 (Graph). A graph is represented as G = (V, E), where V = {v 1 , · · · , v n } is a set of n nodes and E = {(v i , v j ) | v i , v j ∈ V } is a set of edges. Let L node : V → V and L edge : E → E be the node and edge label mappings respectively where V and E are the set of all node and edge labels respectively. We assume that the graph is connected and there are no self-loops. A graph dataset D={G 1 , · · · , G N } is a collection of N graphs. Graph dataset D 1 is considered to be an auxiliary dataset of graph dataset D 2 if D 1 is similar to D 2 . As discussed in Sec. 1, a generic set of chemical compounds may be considered as an auxiliary dataset to a specific subgroup of compounds that display a desired activity against a virus. Although our modeling is domain-agnostic, Figure 2: Architecture of GSHOT we hasten to add that the selection of a suitable auxiliary dataset is expected to be domain specific, and thus domain experts will be the best judge of what may be considered as auxiliary. Problem 1 (Graph Generative modelling). 
The goal of labeled graph 4 generative modeling of a dataset D of graphs is to learn a model, p θ (D), parameterized by θ, that approximates the true latent distribution p(D) of graphs in D. The learned generative model is effective if it is capable of generating graphs similar to those in D. In few-shot modeling, the goal is to learn a generative model over a target graph dataset D T , where |D T | is small (and hence, few). Since |D T | is small, accurate modeling is hard (Recall Fig. 1a). However, if D T is accompanied with a collection of auxiliary datasets, the generative model should be able to use this knowledge and augment its learning. Formally, it is defined as follows. Problem 2 (Few-shot Labeled Graph Generative Modelling). Input: A collection of auxiliary graph datasets D={D 1 , · · · , D B } and a target dataset D T . Goal: To learn a graph generative model p θ (D) that is capable of leveraging the knowledge from D and effectively adapt to the unseen target dataset D T . GSHOT: Our Proposed Methodology Given a set of auxiliary datasets D 1 , · · · , D B , first, GSHOT learns initial model parameters θ. θ is learned in a strategic manner such that, at inference time, when an unseen target dataset D T containing a small number of graphs is provided as input, we can fine-tune θ to new θ T where p θ T (D T ) best approximates the true distribution of D T . Finally, to generate graphs, we sample from p θ T (D T ). Fig. 1b provides a visual summary of this approach. The proposed approach draws inspiration from meta-learning [10]. The main objective of metalearning is to learn initial model parameters for a set of tasks in such a way that they can be adapted to various unseen target tasks having limited training data. In the context of our problem, each task T i refers to the graph generative modeling task for dataset D i in the auxiliary dataset D. Each task T i is associated with a loss function L i . During meta-training, the optimal initial parameter θ is learned using D. Then, given an unseen target task T T corresponding to unseen graph dataset D T with an associated loss L T , θ is fine-tuned for L T using few data samples of T T . When mapped to our problem of graph generative modeling, the loss function measures how well p θ T (D T ) mimics the true distribution of D T . Fig. 2 presents the architecture of GSHOT. In order to learn a generative model over labeled graphs on a dataset D, we first convert graphs to sequences. This conversion allows us to leverage the rich literature on auto-regressive generative models. Auto-regressive methods [11,33] have obtained superior fidelity and high scalability on domain-agnostic graph generative modeling task. Two popular encoding schemes for encoding graphs into sequence are BFS encoding [33] and DFS encoding [11]. In our work, we choose DFS encoding. This choice is motivated by the observation that minimum DFS codes, which is an instance of DFS encoding, provides one-to-one mapping from graphs to sequences. In contrast, in BFS encoding, the same graph may have multiple sequence representations, and may be exponential in the worst case with respect to the graph size. Consequently, one-to-one mapping is an attractive feature that our model can exploit, and as others have shown, it also improves the scalability and fidelity of graph generative modeling [11]. DFS code 2 is smaller than DFS code 1 since ⟨0, 1, X, a, Y ⟩ is less than ⟨0, 1, Y, a, X⟩ Once graphs are converted into sequences via minimum DFS codes, as shown in Fig. 
2, meta-learning is conducted on the sequence representations to learn parameter set θ. To model sequences, we use LSTM as shown in Fig. 2. Finally, during target-adaptation phase, the target graph database D T is converted to the equivalent sequence representation S T , followed by fine-tuning to learn θ T . Architecture Overview To generate graphs, we sample sequences from p θ T (S T ), which are then converted to graphs. The conversion back from a sequence to its graph representation is trivial since our DFS-encoding enables one-to-one mapping. Hence, this conversion can be performed in O(|E|) time, where E is the set of edges. We next deep-dive into each of these individual steps. DFS Codes: Graph to Sequence encoding We first formalize the concept Graph Canonization. Definition 2 (Graph Isomorphism). Two graphs G i = (V i , E i ) and G j = (V j , E j ) are said to be isomorphic if there exists a bijection ϕ such that for every vertex v ∈ V i , ϕ(v) ∈ V j and for every edge e = (u, v) ∈ E i , ϕ(e) = (ϕ(u), ϕ(v)) ∈ E j . Furthermore, for labeled graphs to be isomorphic, in addition to above conditions, the labels of mapped nodes and edges should be same, i.e., L node (v) = L node (ϕ(v)) and L edge (e) = L edge (ϕ(e)). Definition 3 (Graph Canonization). Graph canonization refers to the process of converting a graph into a label such that graphs have the same label if and only if they are isomorphic to each other. A label that satisfies this criteria is called a canonical label. Now, we introduce minimum DFS codes and how it corresponds to canonical labels of graphs. DFS code [31] is a mapping function defined over a graph G, which encodes G into a sequence of edge tuples. To construct a DFS-code from G, first, a depth-first search (DFS) traversal is started from an arbitrary node. During this traversal, a timestamp is assigned to each node based upon when it is discovered. The first discovered node is assigned timestamp 0, the second discovered node is assigned 1, and so on. Following these timestamps, each edge (u, v) is assigned a tuple of five items ⟨t u , t v , L u = L node (u), L uv = L edge (uv), L v = L node (v)⟩. t u , t v are the discovery times of node u and v respectively. L u , L v and L uv are labels of node u, node v and edge (u, v) respectively. A partition of edges is created based upon the DFS traversal. The first partition consists of forward edges that are traversed by the DFS traversal. The second partition contains backward edges, that are not traversed during the DFS traversal. For example, in Fig. 3b, 3c, the edges depicted by solid lines depict the forward edges, and the one's which are dashed depict backward edges. A total ordering is imposed on these edges following the rules described in GSPAN [31] to obtain the DFS code of a graph. Specifically, for ordering forward edges, the process is straight forward. Forward edges are ordered based upon their discovery time in the DFS traversal. For backward edges, the ordering is derived based upon the following rules: • Backward edge (u, s) must appear before all forward edges of the form (u, t). • Backward edge (u, s) must appear after the forward edges of the form (t, u), i.e the first forward edge which points to u. • For backward edges of the form (u, s) and (u, s ′ ) originating from the same source u, Fig. 3 shows examples of two DFS codes of a graph based on two DFS traversals. For more details on DFS code, we refer to GSPAN [31]. Fig.3, a graph can have multiple DFS codes. 
We choose the lexicographically smallest DFS code among all DFS codes as the minimum DFS code. It has been shown that there exists a bijection between a graph and its minimum DFS code [31]. Hence, minimum DFS codes are canonical labels. Using minimum DFS codes, we encode each graph G = (V, E) in dataset D as a sequence of m edge tuples S = (s 1 , . . . , s m ) where m = |E| and each s i is an edge tuple of the form ⟨t u , t v , L u , L uv , L v ⟩. We use the notation F(G) = S to denote the minimum DFS code S of graph G. Applying F on all graphs of dataset D, we obtain a collection of edge tuple sequences S = {F(G) | ∀G ∈ D} for all graphs in dataset D. (u, s) is ordered before (u, s ′ ) if t s < t ′ s . Minimum DFS codes: As shown in Computation Complexity: We note that computing the minimum DFS code of a graph is equivalent to performing graph isomorphism tests. In the literature, no polynomial time algorithm exists for detecting graph isomorphism. Fortunately, for labeled graphs, it has been shown that minimum DFS codes can be computed very efficiently [31,11]. Modeling Graph Sequences Minimum DFS codes are of sequential nature. We model each sequence S=(s 1 , · · · , s m ) using an auto-regressive model [11] as follows: p(S) = p(s 0 ) m+1 i=1 p(s i |s 0 , · · · , s i−1 )(1) where m=|E| is the number of edges, s 0 is a start-of-sequence SOS token and s m+1 is end-ofsequence EOS token to allow variable length sequences. To learn the parameters for these sequential conditional distributions, we use Recurrent Neural Networks. Specifically, we use LSTM [15], which efficiently models long-range dependencies. Formally, h i = LST M hidden θ h 0 , f emb θ (s 0 ) . . . f emb θ (s i−1 ) = LST M hidden θ h i−1 , f emb θ (s i−1 ) (2) where LST M θ is a function representing an LSTM cell. f emb θ is an embedding function that takes one-hot encoding of s i−1 as input and produces a d-dimensional compressed vector. h 0 is initialized to 0. Finally, assuming that s i .t u , s i .t v , s i .L u , s i .L uv , s i .L v are independent given h i , we predict s i = ⟨t u , t v , L u , L uv , L v ⟩ as follows. s i = f tu θ (h i ), f tv θ (h i ), f Lu θ (h i ), f Luv θ (h i ), f Lv θ (h i )(3) where each f θ is a function representing a fully connected Multi-layered Perceptron (MLP). Note that every function in this discussion is parameterized by θ (indicated by the subscript). Finally, we define the loss L D specific to sequence (graph) generation task T D on dataset D as follows: L S = − m+1 i=1 c (s i [c] log s i [c] + (1 − s i [c]) log (1 − s i [c])) , L D = S∈S L S(4) where c is the component index of one-hot vector s i and predicted vector s i . S is the collection of graph sequences S derived by encoding every graph G∈D using minimum DFS coding function F(G). Meta-Learning for Few-shot Graph Generative Modeling Up until now, we have defined parameters θ of graph generative model p θ (D). As motivated, we want to find an initialization of θ such that it can quickly learn to generate graphs from unseen dataset D T having few training graph samples. Specifically, we train θ on graphs from auxiliary datasets to learn initial parameters. To do this, we build upon the REPTILE framework [24]. REPTILE is a first-order meta-learning algorithm, wherein it uses first-order gradients to learn θ, and is therefore computationally and memory efficient. GSHOT, using REPTILE, extracts the meta-knowledge to obtain an effective initialization and an ability to adapt to the target dataset using limited fine-tuning samples. 
Meta-Learning for Few-shot Graph Generative Modeling
Up until now, we have defined the parameters θ of the graph generative model p_θ(D). As motivated, we want to find an initialization of θ such that it can quickly learn to generate graphs from an unseen dataset D_T having few training graph samples. Specifically, we train θ on graphs from auxiliary datasets to learn initial parameters. To do this, we build upon the REPTILE framework [24]. REPTILE is a first-order meta-learning algorithm: it uses only first-order gradients to learn θ, and is therefore computationally and memory efficient. GSHOT, using REPTILE, extracts meta-knowledge to obtain an effective initialization and an ability to adapt to the target dataset using limited fine-tuning samples.

More concretely, GSHOT optimizes the following objective function in order to learn a good initialization of θ:

min_θ E_{D∼𝒟}[ L_D(θ^K_D) ]    (5)

where θ^K_D are the updated parameters after K gradient updates of θ on dataset D, as follows:

θ^0_D = θ  and  θ^i_D = θ^{i−1}_D − α ∇_{θ^{i−1}_D} L_D,  ∀i ∈ [1 … K]    (6)

Here the hyper-parameter α controls the meta-learning rate. Finally, using the K-step updated parameters θ^K_D, we optimize Eq. 5 as follows:

θ = θ + ϵ( θ^K_D − θ )    (7)

where ϵ and K are hyper-parameters of GSHOT. Eq. 7 updates the value of the meta-parameters θ using a weighted combination of θ and the K-step fine-tuned parameters θ^K_D for dataset D. The parameter ϵ can be considered a step size in the direction of the pseudo-gradient θ^K_D − θ. We iterate over D ∼ 𝒟, computing Eq. 6 for different tasks and then using it to optimize Eq. 5. Algorithm 1 in App. B describes the pseudocode of the meta-training procedure of GSHOT. A short sketch of the meta-update follows this section.

Fine-tuning for Target Adaptation
Once GSHOT is meta-trained on diverse graph datasets, our next goal is to adapt the learned model parameters to the target dataset D_T. First, we initialize the target model parameters to the value of the meta-trained model:

θ_T = θ    (Initialization)

Towards our goal of optimizing parameters on the target dataset, a simple approach is to update the parameters of the model by applying multiple gradient updates using samples from the target dataset D_T with its associated loss:

θ_T = θ_T − α ∇_{θ_T} L_T    (gradient updates)

The above equation assumes that, for every gradient update, the training data is sampled at random from the target dataset. However, recent studies have found that gradually increasing the complexity of training instances results in better learning and faster convergence [35]. Motivated by this result, we adopt self-paced learning [20] in the fine-tuning phase of GSHOT. Towards this end, we modify the loss L_T associated with the target dataset so that the model is presented with training samples of gradually increasing difficulty. Moreover, the training curriculum is dynamically determined by the model itself, based upon its own perception of the difficulty of a sample. Specifically, recall from Eq. 4 that L_D = ∑_{S∈𝒮} L_S, where 𝒮 is the collection of graph sequences of G ∈ D. For self-paced learning, we modify L_T as follows:

L_T = ∑_{i=1}^{|𝒮_T|} β_i L_{S_i} − λ ∑_{i=1}^{|𝒮_T|} β_i,    β_i ∈ {0, 1}  ∀i ∈ [1 … |𝒮_T|]    (8)

where 𝒮_T = {F(G) | ∀G ∈ D_T}, S_i ∈ 𝒮_T, and |𝒮_T| is the number of graphs in D_T. λ is an evolving parameter that essentially controls the pace of learning. In our graph generative modeling setting, we solve this via an iterative approach [20]. Before every gradient update as described earlier, we first calculate the values of the β_i as follows:

β_i = 1 if L_{S_i} < λ, and β_i = 0 otherwise    (9)

The value of β_i indicates whether the i-th training sample will be used in the loss computation of Eq. 8. We substitute these values into Eq. 8 and update the parameters θ_T. This process repeats until convergence. The value of λ is increased periodically by a growth factor γ to gradually allow hard samples to become part of the loss computation during the course of training. Algorithm 2 in App. B describes the pseudocode of the fine-tuning procedure of GSHOT.
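The meta-update of Eqs. (5)-(7) is compact in code. Below is a hedged PyTorch sketch of one meta-iteration, assuming loss_fn(model, D) returns the loss L_D on a batch from dataset D; the default K, α, and ϵ mirror the values reported in App. D.1, and all names are illustrative rather than taken from the authors' implementation.

import copy
import random
import torch

def reptile_meta_step(model, datasets, loss_fn, K=15, alpha=0.003, eps=0.8):
    # One meta-iteration of Eqs. (5)-(7).
    D = random.choice(datasets)                  # sample D from the collection
    inner = copy.deepcopy(model)                 # theta^0_D = theta
    opt = torch.optim.SGD(inner.parameters(), lr=alpha)
    for _ in range(K):                           # Eq. (6): K inner gradient updates
        opt.zero_grad()
        loss_fn(inner, D).backward()
        opt.step()
    with torch.no_grad():                        # Eq. (7): theta <- theta + eps*(theta^K_D - theta)
        for p, p_k in zip(model.parameters(), inner.parameters()):
            p.add_(eps * (p_k - p))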
Graph Generation
After fine-tuning p_θ(D) on target dataset D_T, we obtain p_{θ_T}(D_T). We sample graphs from this distribution as follows. First, we pass the initial hidden state h_0 = 0 to LSTM_{θ_T} along with the SOS symbol. At each step i, we sample s_i from the updated hidden state h_i as follows:

s_i.t_u ∼ Multinomial( f^{t_u}_{θ_T}(h_i) )
s_i.t_v ∼ Multinomial( f^{t_v}_{θ_T}(h_i) )
s_i.L_u ∼ Multinomial( f^{L_u}_{θ_T}(h_i) )
s_i.L_uv ∼ Multinomial( f^{L_uv}_{θ_T}(h_i) )    (10)
s_i.L_v ∼ Multinomial( f^{L_v}_{θ_T}(h_i) )    (11)

The process is repeated until the EOS symbol is sampled for any of the five components of the sampled tuple. Finally, the sampled sequence, representing a DFS code, is converted back to a graph. Algorithm 3 in App. B presents the pseudocode of the graph generation phase. A sketch of this sampling loop follows.
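A minimal sketch of the sampling loop, reusing the DFSCodeRNN sketch above; to_one_hot is a hypothetical helper that re-encodes the sampled tuple as the next input, and eos_ids lists the EOS index of each of the five component vocabularies.

import torch

def sample_dfs_code(model, sos, eos_ids, to_one_hot, max_len=500):
    # Sampling for Eqs. (10)-(11): draw each tuple component from a
    # multinomial over the corresponding head until any component emits EOS.
    seq, s_prev, state = [], sos, None
    for _ in range(max_len):
        preds, state = model(s_prev, state)
        comps = [torch.multinomial(p, num_samples=1).item() for p in preds]
        if any(c == eos for c, eos in zip(comps, eos_ids)):
            break                                # EOS in any of the five components
        seq.append(tuple(comps))
        s_prev = to_one_hot(comps)
    return seq                                   # DFS code; F^{-1}(seq) rebuilds the graph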
Experiments
We benchmark GSHOT against state-of-the-art algorithms for graph generation and establish that:
• Higher fidelity: GSHOT generates graphs of higher fidelity than the state-of-the-art methods.
• Sample-efficient: Attributed to its few-shot capability, GSHOT better preserves graph properties compared to existing methods even when GSHOT uses fewer fine-tuning samples than the other methods.

Baselines: We benchmark the performance of GSHOT against the state-of-the-art techniques for domain-agnostic, labeled graph generative modeling, namely GRAPHGEN [11] and GRAPHRNN [33]. We do not include GRAN [22] as a baseline since it cannot generate labeled graphs. For GRAPHGEN, we used the code shared by the authors. While, in theory, GRAPHRNN supports labeled graphs, the code shared by the authors does not; hence, we extend the authors' code as outlined in the supplementary section of GRAPHRNN [33]. Both GRAPHGEN and GRAPHRNN are trained only on the target dataset, and we compare the quality of their generated graphs with that of GSHOT. This comparison allows us to evaluate how efficient the knowledge transfer of GSHOT is, as opposed to relying only on the target dataset. In addition, we use a third, pre-training baseline introduced by us, which we refer to as PRETRAIN+FT. In this baseline, we first pre-train GRAPHGEN on the same auxiliary datasets used by GSHOT for meta-training and then fine-tune it on the target dataset. This baseline allows us to systematically isolate the impact of meta-learning from generic generative pre-training. We do not consider pre-training GRAPHRNN, since GRAPHGEN has been shown to be superior on the labeled graph generative modeling task [11], which is also reflected in the experiments that follow.

Evaluation setup: During meta-training of GSHOT, we use ≈50% of the data for training and the same for validation. During fine-tuning to a new graph dataset, unless specifically mentioned, we use a default split among training, validation, and test of ≈40%, ≈30%, and ≈30%, respectively. Unless specified otherwise, for training a model from scratch directly on the target dataset or fine-tuning a model on a target dataset, we use the same number of training samples of the target dataset. For each target dataset, this information is given in the #Target Training Samples column of Table 2. The system configuration and parameter details can be found in App. D.

Evaluation Metrics: The performance of a graph generative model is satisfactory (1) if it generates graphs with similar properties to the source graphs, (2) but without duplicating the source graphs themselves. To quantify these, we divide our metrics into two categories.
• Fidelity: To quantify the preservation of graph properties, we compare the distributions of graph statistics between the ground truth graphs and the generated graphs using the following metrics.
– Structural metrics: We use the structural metrics used by GRAPHRNN and GRAPHGEN: (1) node degree distribution (Degree), (2) clustering coefficient distribution of nodes (Clustering), and (3) orbit count distribution (Orbit) [14], which measures the number of orbits with 4 nodes and captures the higher-level motifs shared between generated and test graphs. We utilize Maximum Mean Discrepancy (MMD) [12] to compute the distance between two distributions. Further, to compare the sizes of the generated graphs against the ground truth, we measure (4) average node count and (5) average edge count.
– Labeled graph metrics: Our work is geared towards labeled graph generation; hence, it is important to assess whether a generative model captures the label distribution well. Towards that end, we compare the distributions of (1) Node Labels, (2) Edge Labels, and (3) the joint distribution of node labels and degree in the ground truth and generated graphs. We again use MMD to quantify the distance from the ground truth.
– Topological similarity: Finally, in order to capture the topological similarity of generated graphs with the ground truth graphs, we use the Neighbourhood Sub-graph Pairwise Distance Kernel (NSPDK) [6]. NSPDK has the benefit of incorporating both node and edge labels along with the structure of the graph. Specifically, NSPDK measures the distance between two graphs by matching pairs of subgraphs with different radii and distances. The lower the MMD score for NSPDK, the more aligned the two graph distributions.
• Duplication and Uniqueness: A model that generates graphs with high fidelity might not be useful in practice unless it is also capable of generating graphs that are not seen in the training data. To capture this requirement, we utilize the metrics introduced by GRAPHGEN [11]: (1) Novelty measures the percentage of generated graphs that are not subgraphs of the training graphs. Additionally, we compute (2) Uniqueness, which captures the diversity of the set of generated graphs. To quantify uniqueness, we remove the generated graphs that are subgraph-isomorphic to any of the other generated graphs; unlike novelty, this focuses only on the generated graphs. A model that generates 100 graphs, out of which 90 are subgraph-isomorphic to other generated graphs, has uniqueness = 10%.

To compute each metric, we generate multiple graphs for each target dataset and compare them against the available ground truth target graphs. Details of the number of graphs generated for each dataset are given in App. D.2.

Quality
Fidelity: Table 2 shows the performance of all the models across different datasets. We observe that, in most cases, GSHOT obtains lower MMD scores than the baselines. In terms of the global graph metric NSPDK, GSHOT achieves a significant improvement even against the best-performing baselines. For instance, on Leukemia-Active, GSHOT obtains an NSPDK MMD value of 0.032 against a significantly higher value of 0.116 obtained by PRETRAIN+FT.
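All of the distribution-level scores above (Degree, Clustering, Orbit, NSPDK, and the label distributions) are reported as MMD distances [12]. As a concrete illustration, here is a minimal sketch of an MMD² estimate with a Gaussian kernel; the kernel choice is illustrative and not necessarily the exact kernel used for every metric in the paper.

import numpy as np

def gaussian_mmd2(X, Y, sigma=1.0):
    # Biased (V-statistic) estimate of MMD^2 between two samples of graph
    # statistics (rows = graphs, columns = features), Gaussian kernel.
    X, Y = np.atleast_2d(X).astype(float), np.atleast_2d(Y).astype(float)
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()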
With respect to labeled graph metrics, we observe that GSHOT improves over state-of-the-art techniques, achieving more than 50% lower MMD values in multiple cases. Further, GSHOT also outperforms existing techniques on the Joint Node Label & Degree metric, signifying its ability to better jointly model graph structure and labels. The superior performance of GSHOT establishes the efficacy of the meta-training procedure in learning an effective set of initial model parameters that adapts well to low-data regimes.

Uniqueness and novelty: In addition to obtaining better fidelity in most cases, GSHOT also achieves a higher or similar score compared to the baselines on the novelty and uniqueness metrics. On the AIDS-CA dataset, we obtain an improvement of ≈1−2% in the uniqueness and novelty metrics over GRAPHGEN while also achieving better fidelity scores. Additionally, for AIDS-CA, GRAPHRNN's performance in terms of both fidelity and diversity is significantly inferior to the other methods. In the case of the 5-body Spring dataset, although GSHOT does not perform the best in terms of fidelity scores, its uniqueness and novelty scores are significantly higher than those of GRAPHGEN, which achieves a 33% novelty score and a 13% uniqueness score; this indicates that GRAPHGEN mostly generates duplicated graphs. Further, for 5-body Spring we observe that GRAPHRNN obtains the highest novelty and uniqueness scores, but its fidelity scores are significantly worse. Overall, we observe that the efficient parameter initialization obtained by GSHOT also helps in improving the diversity of generated graphs while generating graphs with high fidelity.

[Figure 4: (a) The variation in MMD scores on different metrics when the number of fine-tuning samples for GSHOT is reduced from 500 to 50 on the Leukemia-Active dataset. The suffix 500 after GRAPHGEN and PRETRAIN+FT indicates that these baselines used 500 training samples from the target dataset. Note that for PRETRAIN+FT 500, the value of the Node-Label metric (0.09) was omitted from the diagram to improve readability. (b) Ablation study showing the relative (%) improvement obtained by GSHOT when using self-paced fine-tuning compared to GSHOT with vanilla fine-tuning.]

Robustness to number of fine-tuning samples: We also evaluate GSHOT's robustness to different sizes of the same fine-tuning dataset. Towards this end, we choose Leukemia-Active as our target dataset, since its comparatively larger pool of fine-tuning data leaves reasonable scope to down-sample the fine-tuning data and study the impact on performance. We vary the number of fine-tuning samples available to GSHOT from 500 down to 50, while keeping the number of fine-tuning samples for the baselines at the maximum value, i.e., 500. In Fig. 4a, we observe that GSHOT, while using fewer samples from the target dataset, still obtains lower MMD scores on different metrics than GRAPHGEN and the PRETRAIN+FT model trained with 500 target samples. Further, the MMD scores for GSHOT increase only slightly when the number of fine-tuning samples is reduced from 500 to 50. This is a direct consequence of our model's ability to adapt with a small number of training samples. We also highlight that the novelty and uniqueness metrics did not show any observable change in this experiment.
Ablation study: We study the improvement obtained by using self-paced fine-tuning in GSHOT over vanilla fine-tuning on different metrics. For a metric P, we define the improvement as

( P_{GSHOT(vanilla)} − P_{GSHOT} ) / P_{GSHOT} × 100

where P_{GSHOT} is the value of metric P obtained by our default model (with self-paced fine-tuning), and P_{GSHOT(vanilla)} is the value obtained by GSHOT with vanilla fine-tuning. In Fig. 4b we observe that the self-paced fine-tuning strategy can improve the fidelity metrics significantly.

Performance against different auxiliary datasets: In App. E, we study the impact of the choice of auxiliary datasets on the performance of few-shot graph generative modeling.

Conclusion
Research on deep graph generative modeling has progressed significantly in several directions, such as scalability to large graphs, domain-agnostic modeling, and handling of node and edge labels. However, the problem of learning to generate graphs in low-data regimes has remained unexplored. In this work, we propose the paradigm of few-shot, domain-agnostic, labeled graph generative modeling. Our proposed architecture GSHOT learns to transfer meta-knowledge from auxiliary graph datasets to a target dataset. Utilizing these prior experiences, GSHOT quickly adapts to an unseen graph dataset through self-paced fine-tuning, and is effective in learning graph distributions on datasets with a small number of available training samples. Extensive evaluation on real graph datasets demonstrates that graphs generated by GSHOT preserve graph structural properties significantly better than the state-of-the-art approaches. Although our proposed method outperforms existing state-of-the-art methods, it does not take molecular/chemical properties into account while generating molecules; in future work, we would like to capture these aspects.

D Experimental Setup and Reproducibility
All experiments are performed on a machine with an Intel Xeon Gold 6284 processor with 96 physical cores, 1 NVIDIA A100 GPU card with 40 GB GPU memory, and 512 GB RAM, running the Ubuntu 20.04 operating system.

D.1 Parameter details
We set the hidden dimension of f^{L_u}_θ, f^{L_v}_θ, f^{L_uv}_θ, f^{t_u}_θ, and f^{t_v}_θ to 512. We use the Adam optimizer with a learning rate of 0.003. Further, to avoid over-fitting, we use dropout with value 0.2 and an L2 regularizer with weight 10^{−5}. We set the batch size to 32. For meta-training of GSHOT we used K = 15 and ϵ = 0.8. During fine-tuning, we used a growth factor γ = 1.001 for both Leukemia-Active and Enzyme, 1.006 for AIDS-CA, and 1.1 for 5-body Spring. For all methods, we stop training when the validation loss is minimized or there is less than a 0.05% change in validation loss over a number of extended epochs.
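As an illustration of the self-paced selection of Eqs. (8)-(9) that the growth factors γ above control, a minimal sketch follows. It assumes per_graph_losses is a list of per-sequence losses L_{S_i} as differentiable scalars; all names are illustrative.

def self_paced_loss(per_graph_losses, lam):
    # Gate of Eqs. (8)-(9): keep samples with loss below the pace parameter
    # lambda (beta_i = 1) and drop the rest (beta_i = 0). The -lambda*sum(beta)
    # term of Eq. (8) is constant in theta once the betas are fixed, so it is
    # omitted from the differentiable loss.
    kept = [l for l in per_graph_losses if float(l) < lam]
    return sum(kept) if kept else None

# Per batch (cf. Algorithm 2):
#   L_T = self_paced_loss(losses, lam)
#   if L_T is not None: L_T.backward(); optimizer.step()
#   lam *= gamma   # grow the pace, e.g. gamma = 1.001 (App. D.1)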
D.2 Number of graphs generated
Since our test datasets are of different sizes (AIDS-CA: 108, Leukemia-Active: 900, Enzyme-EC3: 20, 5-body Spring: 500), we generate a different number of graphs for each target dataset. Specifically, for Leukemia-Active we generate 2560 graphs, 1024 graphs each for AIDS-CA and 5-body Spring, and 512 graphs for Enzyme EC3.

E Impact of auxiliary datasets
In this section, we study the performance of our proposed architecture when different auxiliary datasets are selected during meta-training. Towards this, we choose the Enzyme dataset, since it consists of 5 auxiliary datasets and thus offers reasonable scope to sample multiple sets of auxiliary datasets. For this experiment, we sample (without repetition) five sets of 3 auxiliary datasets each (e.g., {EC1, EC4, EC5}, {EC2, EC5, EC6}). We train 5 GSHOT models with these 5 different sets of auxiliary datasets and then fine-tune the 5 trained models on the target dataset (EC3). We use the same sets of auxiliary datasets for training the PRETRAIN+FT baseline. In Table 4, we report the mean performance on each metric, along with the standard deviation, obtained using these 5 models. For results of training GRAPHGEN and GRAPHRNN from scratch directly on the target dataset EC3 (without auxiliary datasets), refer to Table 2 in the main paper.

Table 4: Performance on variation of auxiliary datasets. Performance comparison on the Enzyme EC3 dataset when different sets of auxiliary datasets are used for meta-training GSHOT and for training the PRETRAIN+FT baseline. For GSHOT and PRETRAIN+FT, we report the mean and standard deviation, since their performance is averaged across models using different sets of auxiliary datasets for (meta/pre-)training.

In Table 4, we observe that GSHOT obtains superior performance when trained using different sets of auxiliary datasets. For instance, on the Node Label metric, GSHOT outperforms its closest competitor PRETRAIN+FT by around 50%. Further, it outperforms its closest competitor by over 10% on the Orbit metric. Overall, we observe that GSHOT learns to better utilize the knowledge gained from a variety of auxiliary datasets.

[Figure 1: (a) Increase in Maximum Mean Discrepancy (MMD) scores for different graph metrics when the number of training samples (log scale) is decreased in a chemical compound dataset (Dataset #2 in Table 1); panel title: Performance vs. number of training samples. (b) The pipeline of steps in GSHOT.]

[Figure 3: A few possible DFS codes of graph G.]

G = (V, E): a graph with vertex set V and edge set E
n: number of nodes in G
m: number of edges in G
𝒱: label set of vertices in G
ℰ: label set of edges in G
D = {G_1, G_2, …, G_N}: dataset of N graphs
𝒟 = {D_1, …, D_B}: collection of B graph datasets
t_u: DFS discovery time of node u
t_v: DFS discovery time of node v
L_u: label of node u
L_uv: label of edge (u, v)
L_v: label of node v
F(G): function mapping a graph to its minimum DFS code S
S = (s_1, s_2, …, s_m): minimum DFS code of a graph
𝒮 = {S_1, S_2, …, S_N}: collection of minimum DFS codes of a dataset with N graphs
T: set of graph generative modeling tasks
T_i: graph generative modeling task for the i-th dataset
L_i: loss associated with dataset D_i
𝒮_T: collection of minimum DFS codes for target dataset D_T
θ_T: parameters fine-tuned to dataset D_T
β_i: binary loss coefficient for the i-th sample in Eq. 8
γ: growth parameter in self-paced fine-tuning

Table 3: Notations used in the paper.
B Pseudocodes

Algorithm 1: Pseudocode for the meta-training phase of GSHOT
Input: Collection of B graph datasets 𝒟 = {D_1, D_2, …, D_B}, K, ϵ
Output: Good initialization of parameters θ of generative model p_θ(D)
Initialise meta-parameters θ randomly
repeat
    Sample a dataset D ∈ 𝒟
    𝒮 = {F(G) | ∀G ∈ D}    // Get minimum DFS codes
    θ_D ← θ    // Dataset-D-specific parameters
    for K times do    // K inner gradient steps
        S = [s_1, s_2, …, s_m] ∼ 𝒮
        s_0 ← SOS; h_0 ← 0; L_D ← 0
        /* Compute loss L_D of sequence S = [s_0, s_1, …, s_{m+1}] */
        for i from 1 to m+1 do    // s_{m+1} is the EOS token
            h_i ← LSTM^{hidden}_θ( h_{i−1}, f^{emb}_θ(s_{i−1}) )
            s̃_i ← ( f^{t_u}_θ(h_i), f^{t_v}_θ(h_i), f^{L_u}_θ(h_i), f^{L_uv}_θ(h_i), f^{L_v}_θ(h_i) )
            L_D ← L_D − ∑_c ( s_i[c] log s̃_i[c] + (1 − s_i[c]) log(1 − s̃_i[c]) )
        θ_D ← θ_D − α ∇_{θ_D} L_D    // D-specific parameter update
    Update θ ← θ + ϵ(θ_D − θ)    // Meta update
until stopping criteria    // Typically when validation loss is minimized

Algorithm 2: Fine-tuning GSHOT on target dataset D_T
Input: Target dataset D_T, meta-trained parameters θ, batch size B, growth factor γ, λ
Output: Fine-tuned parameters θ_T for target dataset D_T
𝒮_T = {F(G) | ∀G ∈ D_T}    // Get minimum DFS codes
θ_T ← θ    // Initialize parameters specific to target dataset D_T
repeat
    L_T ← 0
    for B times do    // Sample B graphs for every batch
        S = [s_1, s_2, …, s_m] ∼ 𝒮_T
        s_0 ← SOS; h_0 ← 0; l ← 0    // Instance-specific loss
        /* Compute loss l of sequence S = [s_0, s_1, …, s_{m+1}] */
        for i from 1 to m+1 do    // s_{m+1} is the EOS token
            h_i ← LSTM^{hidden}_{θ_T}( h_{i−1}, f^{emb}_{θ_T}(s_{i−1}) )
            s̃_i ← ( f^{t_u}_{θ_T}(h_i), f^{t_v}_{θ_T}(h_i), f^{L_u}_{θ_T}(h_i), f^{L_uv}_{θ_T}(h_i), f^{L_v}_{θ_T}(h_i) )
            l ← l − ∑_c ( s_i[c] log s̃_i[c] + (1 − s_i[c]) log(1 − s̃_i[c]) )
        if l < λ then    // Self-paced gate, Eq. 9
            L_T ← L_T + l
    θ_T ← θ_T − α ∇_{θ_T} L_T
    λ ← λ · γ    // Increase difficulty periodically
until stopping criteria    // Typically when validation loss is minimized

Table 1: Summary of the datasets.
#  Name                  Domain      No. of graphs  |V|        |E|        |𝒱| (node labels)  |ℰ| (edge labels)
1  Enzymes [4]           Biological  600            [2, 125]   [2, 149]   3                  X
2  NCI-H23 (Lung) [23]   Chemical    24k            [6, 50]    [6, 57]    11                 3
3  Yeast [23]            Chemical    47k            [5, 50]    [5, 57]    11                 3
4  MCF-7 (Breast) [23]   Chemical    23k            [6, 111]   [6, 116]   11                 3
5  Leukemia-Active [23]  Chemical    1900           [12, 107]  [12, 111]  11                 3
6  AIDS-CA [23]          Chemical    328            [10, 189]  [10, 196]  11                 3
7  N-body Spring [34]    Physics     1500           N          [3, 13]    25                 X

4.1 Experimental setup
Datasets: Since our focus is on domain-agnostic labeled graph generative modeling, we show the effectiveness of our proposed approach using datasets from diverse domains. Moreover, in our experiments, we use target datasets having significantly lower volumes of available graphs in comparison to other works in the literature [11, 33, 18, 22]. Table 1 summarizes the different datasets. Further details on the semantics of the datasets are given in App. C.

Train-test splits: We next briefly describe the train-test split of our datasets.
• Biological Domain: Each enzyme in the Enzyme dataset [4] belongs to one of six classes, namely EC1, EC2, EC3, EC4, EC5, EC6. We treat enzymes in EC1, EC2, EC4, EC5, EC6 as auxiliary datasets and EC3 as our target dataset, which consists of 100 enzymes.
• Chemical Domain: We use the anti-cancer screen datasets Yeast, Breast, and Lung as auxiliary datasets for meta-training and use the two smallest chemical datasets, AIDS-CA and Leukemia-Active, as our target set.
• Physics Domain: We meta-train GSHOT on auxiliary datasets consisting of four- and six-particle spring systems and then fine-tune on graphs containing five particles.
Table 2: Summary of performance by GSHOT, GRAPHGEN, GRAPHRNN, and the PRETRAIN+FT baseline on different datasets on multiple metrics. Values less than 10^{−3} are approximated to 0. The best-performing model for each dataset is highlighted in bold.

Model | Deg. | Clus. | Orbit | NSPDK | Avg #Nodes (Gen/Gold) | Avg #Edges (Gen/Gold) | Node Label | Edge Label | Joint Node Label & Degree | Novelty | Uniqueness

Auxiliary: Enzyme EC1, EC2, EC4, EC5, EC6 | Target: Enzyme EC3 | #Target training samples: 50
GRAPHGEN | 0.90 | 0.58 | 0.127 | 0.266 | 17.51/26.90 | 22.22/52.85 | 0.015 | x | 0.714 | 100% | 100%
GRAPHRNN | 0.30 | 0.73 | 0.13 | 0.214 | 20.51/26.90 | 36.23/52.85 | 0.019 | x | 0.696 | 100% | 100%
PRETRAIN+FT | 0.72 | 0.63 | 0.053 | 0.18 | 23.73/26.90 | 33.2/52.85 | 0.0095 | x | 0.619 | 99% | 99%
GSHOT | 0.45 | 0.47 | 0.025 | 0.16 | 24.5/26.90 | 37.69/52.85 | 0.004 | x | 0.457 | 100% | 100%

Auxiliary: Yeast, Breast, Lung | Target: AIDS-CA | #Target training samples: 150
GRAPHGEN | 0.026 | 0.016 | 0.003 | 0.127 | 17.51/37.14 | 17.62/39.60 | 0.05 | 0.001 | 0.20 | 98% | 97%
GRAPHRNN | 0.15 | 0.47 | 0.045 | 0.14 | 30.5/37.14 | 40.19/39.60 | 0.193 | 0.005 | 0.836 | 86% | 45%
PRETRAIN+FT | 0.021 | 0.004 | ≈0 | 0.11 | 24.1/37.14 | 25.22/39.60 | 0.013 | ≈0 | 0.173 | 99% | 99%
GSHOT | 0.017 | 0.0015 | ≈0 | 0.08 | 26.5/37.14 | 27.1/39.60 | 0.011 | ≈0 | 0.14 | 99% | 99%

Auxiliary: Yeast, Breast, Lung | Target: Leukemia-Active | #Target training samples: 500
GRAPHGEN | 0.06 | 0.019 | ≈0 | 0.17 | 40.02/47.71 | 42.23/50.37 | 0.02 | ≈0 | 0.99 | 100% | 100%
GRAPHRNN | 0.06 | 0.554 | 0.032 | 0.34 | 7.17/47.71 | 7.51/50.37 | 0.39 | 0.017 | 0.83 | 100% | 100%
PRETRAIN+FT | 0.039 | 0.0064 | ≈0 | 0.116 | 43.08/47.71 | 45.5/50.37 | 0.09 | ≈0 | 0.79 | 98% | 98%
GSHOT | 0.0069 | ≈0 | ≈0 | 0.032 | 42.35/47.71 | 44.33/50.37 | 0.0011 | ≈0 | 0.24 | 100% | 100%

Auxiliary: {4, 6}-body Spring | Target: 5-body Spring | #Target training samples: 500
GRAPHGEN | 0.004 | 0.015 | ≈0 | 0.016 | 4.98/5 | 5.49/5.64 | 0.012 | x | 0.011 | 33% | 13%
GRAPHRNN | 0.018 | 0.012 | ≈0 | 0.029 | 4.71/5 | 5.03/5.64 | 0.044 | x | 0.017 | 87% | 55%
PRETRAIN+FT | 0.021 | 0.047 | 0.0025 | 0.017 | 4.98/5 | 5.19/5.64 | 0.011 | x | 0.012 | 70% | 49%
GSHOT | 0.008 | 0.035 | ≈0 | 0.016 | 4.98/5 | 5.38/5.64 | 0.016 | x | 0.012 | 64% | 41%

Footnotes: See Sec. 4.1 for a detailed description of the metrics. All notations used in our work are summarized in Table 3 in the appendix. In our paper we use the keywords graph and labeled graph interchangeably.

Algorithm 3: Graph generation phase of GSHOT
S ← [ ]; s_0 ← SOS; h_0 ← 0
repeat
    h_i ← LSTM^{hidden}_{θ_T}( h_{i−1}, f^{emb}_{θ_T}(s_{i−1}) )
    Sample s_i component-wise as per Eqs. 10-11
    S.append(s_i)
until EOS ∈ {s_i.t_u, s_i.t_v, s_i.L_u, s_i.L_uv, s_i.L_v}    // Check if any item of tuple s_i contains the EOS symbol
G ← F^{−1}(S)    // Convert DFS code back to graph
return G

C Dataset Semantics
Biological Domain: Proteins are biomolecules consisting of long chains of amino acids. They are highly essential to our lives and of significant interest in certain biomedical tasks, such as de novo protein design [16, 13]. Enzymes, a set of specialized proteins, are catalysts that can speed up metabolic activities. In our work, we utilize the Enzyme dataset from the BRENDA enzyme database [27], which consists of protein tertiary structures. We convert enzymes to graphs where nodes represent secondary structures labeled into one of three categories, namely helices, turns, or sheets. This dataset does not have edge labels. The dataset is divided into six classes, and each enzyme belongs to one of these classes, namely EC1, EC2, EC3, EC4, EC5, EC6. For our few-shot learning setup, we consider learning to generate graphs belonging to a certain enzyme class as a task. We treat the datasets EC1, EC2, EC4, EC5, EC6 as auxiliary and EC3 as our target dataset, which consists of 100 enzymes.

Chemical Domain: Chemical compounds are composed of two or more atoms connected by chemical bonds.
We utilize the following chemical compound datasets to train and evaluate GSHOT.
AIDS-CA [31]: This dataset comprises a set of molecules that displayed activity against HIV.
Breast, Lung, Yeast: Each of these three datasets contains molecules that were screened for activity against breast cancer, lung cancer, and cancer in yeast, respectively [23].
Leukemia-Active: This dataset consists of compounds that are active against leukemia [23].
In all chemical datasets, we convert compounds to labeled graphs where nodes represent atoms and their labels represent the atom type, i.e., elements of the chemical periodic table. Edges in the graphs represent bonds, and edge labels encode the bond type, i.e., single, double, or triple.
For the few-shot learning setup in the chemical domain, Yeast, Breast, and Lung are used as auxiliary datasets during meta-training. Further, we choose the AIDS-CA and Leukemia-Active datasets as our target datasets. The reasons for this choice are (1) their relatively low number of available graph samples and (2) that they consist of compounds that are active against certain diseases and therefore have more practical utility.

Physics Domain: Physics-based simulations are commonly used to understand interactions among different objects [34, 25, 19]. Dynamical systems such as N-body springs can be converted into graph structures where nodes represent particles and edges represent connections between particles. We utilize the dataset of the N-body spring simulations [34]. It consists of N particles in a two-dimensional space partitioned into a 5×5 grid. Two particles are connected to each other via a spring with a probability of 0.5. The label of a node is the partition it lies in. This system does not have edge labels [34]. For few-shot learning, we meta-train GSHOT on auxiliary datasets consisting of four- and six-particle systems and then fine-tune on graphs containing five particles.

References
[1] Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Reviews of Modern Physics, 74(1):47, 2002.
[2] Sergey Bartunov and Dmitry Vetrov. Few-shot generative modelling with generative matching networks. In International Conference on Artificial Intelligence and Statistics, pages 670-678. PMLR, 2018.
[3] Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, and Stephan Günnemann. NetGAN: Generating graphs via random walks. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), Stockholm, Sweden, pages 609-618, 2018.
[4] Karsten M. Borgwardt, Cheng Soon Ong, Stefan Schönauer, S. V. N. Vishwanathan, Alexander J. Smola, and Hans-Peter Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21(Suppl 1):i47-i56, 2005.
[5] Justin Brickell and Vitaly Shmatikov. Privacy-preserving graph algorithms in the semi-honest model. In Proceedings of the 11th International Conference on Theory and Application of Cryptology and Information Security (ASIACRYPT'05), pages 236-252. Springer-Verlag, 2005.
[6] Fabrizio Costa. Fast neighborhood subgraph pairwise distance kernel. In ICML, pages 255-262, 2010.
[7] Wen Cui, Kailin Yang, and Haitao Yang. Recent progress in the drug development targeting SARS-CoV-2 main protease as treatment for COVID-19. Frontiers in Molecular Biosciences, 7, 2020.
[8] Nicola De Cao and Thomas Kipf. MolGAN: An implicit generative model for small molecular graphs. In ICML 2018 Workshop on Theoretical Foundations and Applications of Deep Generative Models, 2018.
[9] D. J. Watts and S. H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 1998.
[10] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126-1135. PMLR, 2017.
[11] Nikhil Goyal, Harsh Vardhan Jain, and Sayan Ranu. GraphGen: A scalable approach to domain-agnostic labeled graph generation. In Proceedings of The Web Conference 2020, pages 1253-1263, 2020.
[12] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander J. Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723-773, 2012.
[13] Xiaojie Guo, Yuanqi Du, Sivani Tadepalli, Liang Zhao, and Amarda Shehu. Generating tertiary protein structures via an interpretative variational autoencoder. arXiv preprint arXiv:2004.07119, 2020.
[14] Tomaž Hočevar and Janez Demšar. A combinatorial approach to graphlet counting. Bioinformatics, 30(4):559-565, 2014.
[15] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[16] John Ingraham, Vikas Garg, Regina Barzilay, and Tommi Jaakkola. Generative models for graph-based protein design. Advances in Neural Information Processing Systems, 32, 2019.
[17] John J. Irwin and Brian K. Shoichet. ZINC - a free database of commercially available compounds for virtual screening. Journal of Chemical Information and Modeling, 45(1):177-182, 2005.
[18] Wataru Kawai, Yusuke Mukuta, and Tatsuya Harada. GRAM: Scalable generative models for graphs with graph attention mechanism. CoRR, abs/1906.01861, 2019.
[19] Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems. In International Conference on Machine Learning, pages 2688-2697, 2018.
[20] M. Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. In NeurIPS, volume 23. Curran Associates, Inc., 2010.
[21] Yibo Li, Liangren Zhang, and Zhenming Liu. Multi-objective de novo drug design with conditional graph generative model. Journal of Cheminformatics, 10(1):1-24, 2018.
[22] Renjie Liao, Yujia Li, Yang Song, Shenlong Wang, Charlie Nash, William L. Hamilton, David Duvenaud, Raquel Urtasun, and Richard Zemel. Efficient graph generation with graph recurrent attention networks. In NeurIPS, 2019.
[24] Alex Nichol and John Schulman. Reptile: A scalable meta-learning algorithm. arXiv preprint arXiv:1803.02999, 2018.
[25] Nathanaël Perraudin, Ankit Srivastava, Aurelien Lucchi, Tomasz Kacprzak, Thomas Hofmann, and Alexandre Réfrégier. Cosmological N-body simulations: a challenge for scalable generative models. Computational Astrophysics and Cosmology, 6(1):1-17, 2019.
[26] Alfréd Rényi. On random graphs. Publicationes Mathematicae Debrecen, 6:290-297, 1959.
[27] Ida Schomburg, Antje Chang, Christian Ebeling, Marion Gremse, Christian Heldt, Gregor Huhn, and Dietmar Schomburg. BRENDA, the enzyme database: updates and major new developments. Nucleic Acids Research, 32(Database issue):D431-D433, 2004.
[28] Martin Simonovsky and Nikos Komodakis. GraphVAE: Towards generation of small graphs using variational autoencoders. In Artificial Neural Networks and Machine Learning (ICANN 2018), volume 11139 of Lecture Notes in Computer Science, pages 412-422, 2018.
[29] David C. Swinney and Shuangluo Xia. The discovery of medicines for rare diseases. Future Medicinal Chemistry, 6(9):987-1002, 2014.
[30] Saining Xie, Alexander Kirillov, Ross Girshick, and Kaiming He. Exploring randomly wired neural networks for image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1284-1293, 2019.
[31] Xifeng Yan. gSpan: Graph-based substructure pattern mining. In Proceedings of the 2002 IEEE International Conference on Data Mining (ICDM 2002), Maebashi City, Japan, pages 721-724, 2002.
[32] Jiaxuan You, Bowen Liu, Rex Ying, Vijay Pande, and Jure Leskovec. Graph convolutional policy network for goal-directed molecular graph generation. In NeurIPS (NIPS'18), pages 6412-6422. Curran Associates Inc., 2018.
[33] Jiaxuan You, Rex Ying, Xiang Ren, William L. Hamilton, and Jure Leskovec. GraphRNN: Generating realistic graphs with deep auto-regressive models. In ICML 2018, volume 80 of Proceedings of Machine Learning Research, pages 5694-5703. PMLR, 2018.
[34] Yuanqi Du et al. GraphGT: Machine learning datasets for graph generation and transformation. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
[35] Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint, 2015.
[36] Zhao et al. Data augmentation for graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 11015-11023, 2021.
A Chondritic Solar Neighborhood
Isabella L. Trierweiler, Alexandra E. Doyle, and Edward D. Young
Department of Earth, Planetary, and Space Sciences, University of California, Los Angeles, CA 90095, USA
Draft version, June 7, 2023 (arXiv:2306.03743)
Keywords: White dwarf stars (1799)

ABSTRACT
A persistent question in exoplanet demographics is whether exoplanetary systems form from compositional building blocks similar to our own. Polluted white dwarf stars offer a unique way to address this question, as they provide measurements of the bulk compositions of exoplanetary material. We present a statistical analysis of the rocks polluting oxygen-bearing white dwarfs and compare their compositions to rocks in the Solar System. We find that the majority of the extrasolar rocks are consistent with the composition of typical chondrites. Measurement uncertainties prevent distinguishing between chondrites and bulk Earth, but do permit detecting the differences between chondritic compositions and basaltic or continental crust. We find no evidence of crust amongst the polluted white dwarfs. We show that the chondritic nature of extrasolar rocks is also supported by the compositions of local stars. While galactic chemical evolution results in variations in the relative abundances of rock-forming elements spatially and temporally on galaxy-wide scales, the current sample of polluted white dwarfs is sufficiently young and close to Earth that it is not affected by this process. We conclude that exotic compositions are not required to explain the majority of observed rock types around polluted white dwarfs, and that variations between exoplanetary compositions in the stellar neighborhood are generally not due to significant differences in the initial composition of protoplanetary disks. Nonetheless, there is evidence from stellar observations that planets formed in the first several billion years in the Galaxy have, on average, lower metal core fractions than Earth.

1. INTRODUCTION
The growing sample of exoplanets has inspired many studies detailing their compositions and interiors. Analyses of exoplanet compositions using mass-radius relationships or through extrapolating stellar abundances have led to a wide range of possible exoplanet compositions (Bond et al. 2010), including Earth-like compositions, but also carbon-rich planets (e.g. Dorn et al. 2019), coreless super-Earths (e.g. Madhusudhan et al. 2012), and mineralogies with no Earth-rock counterparts (Putirka & Xu 2021). This hypothesized diversity of exoplanet compositions motivates us to benchmark the variety of putative non-Earth-like planets against the compositions of exoplanetary rocks accreted by polluted white dwarfs (WDs). The metal pollution on WDs is caused by accretion of exoplanetary debris and provides direct measurements of the bulk compositions of extrasolar rocks that are not susceptible to the same degeneracies as the mass/radius approach (e.g. Dorn et al. 2015). The vast majority of WD pollutants are rocky, with some fragments identified as specifically core-like or crust-like (e.g. Doyle et al. 2019; Harrison et al. 2018; Hollands et al. 2018; Melis & Dufour 2017; Jura & Young 2014).
Some water-rich objects have also been identified, with possible parent bodies including Kuiper Belt analogs or exomoons (e.g. Doyle et al. 2021; Klein et al. 2021; Hoskin et al. 2020; Xu et al. 2017; Raddi et al. 2015). We analyze the abundances from 31 oxygen-bearing polluted WDs. The presence of O, along with other major rock-forming elements such as Si, Mg, and Fe, indicates that these WDs are accreting rocky material. We compare the abundances of the WD pollution to rocks throughout the Solar System, an approach motivated by previous WD studies (e.g. Doyle et al. 2023; Swan et al. 2019; Xu et al. 2013). We also carry out the same analysis for local stars, as a proxy for proto-stellar disk environments and as a broad representation of a system's rocky planet compositions (e.g. Schulze et al. 2021). For this purpose, we use the Hypatia catalog of stars, which includes elemental abundances for thousands of stars within ∼500 pc of the Sun (Hinkel et al. 2014). Throughout, we compare WD and stellar compositions to Solar System rocks using a reduced chi-squared goodness-of-fit test. While individual stars may show unusual amounts of particular elements, we find in this work that the majority of WD pollution is indistinguishable from chondrites in composition when accounting for uncertainties in the measured abundances. The whole-rock compositions of CI chondrites are considered a proxy for the relative abundances of rock-forming elements in the Solar System, as they are the best compositional match to the Sun (e.g. Lodders et al. 2009; Anders & Grevesse 1989), and we use them here as representative of chondrites in general.

This paper is organized as follows. In Section 2 we outline the χ² calculation used to test the goodness of fit of each set of abundances to CI chondrite. To demonstrate the method, we apply the χ² test to Solar System rocks in Section 3. We then carry out fits for the WD polluters in Section 4 and for the Hypatia catalog stars in Section 5. We discuss the impact of galactic chemical evolution on polluted WD and Hypatia compositions in Section 6 and present our conclusions in Section 7.

2. METHODS
Throughout this work we compare observed abundances to the CI chondritic composition (Lodders 2019) by computing reduced χ² values (χ²_ν). Measurement uncertainties for the WDs are propagated using a Monte Carlo approach. Uncertainties for the Hypatia catalog stars are gathered from the catalog (Hinkel et al. 2014). For each star, we use the relative concentrations of Si, Fe, Al, Ca, Ni, and Cr, where available, all normalized to Mg. We do not include more volatile elements such as C, N, or O in the comparisons, as we are primarily concerned with rock compositions in this work. Because a very diverse range of physical processes can vary volatile abundances during planet formation (e.g. Bonsor et al. 2021), volatile abundances are not necessarily related to rock compositions; excluding these elements therefore allows for a more direct comparison of the underlying rock to Solar System samples. Additionally, while O is a major element in rocks, its abundance is correlated with the other included rock-forming elements through oxides, providing further motivation to exclude it from the χ²_ν calculations.

Starting with log abundances for each star and WD, we construct a random sample of abundances for each element, assuming a normal distribution based on the reported logarithmic abundance ratios and their uncertainties. We then transform the distribution of logarithmic relative abundances to a distribution of number ratios for each element relative to Mg. The reported symmetric errors in the logs lead to asymmetric distributions in number ratios, so we select our assumed abundance ratios and uncertainties as the median, 16.5, and 83.5 percentiles of the distributions.
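A minimal sketch of this Monte Carlo propagation, assuming abundances are reported as base-10 logarithmic ratios (dex) with symmetric uncertainties; the function name and example numbers are illustrative.

import numpy as np

def ratio_from_log(log_ratio, log_err, n=100_000, seed=0):
    # Sample the log abundance ratio from a normal distribution, convert to
    # number ratios, and take the median with 16.5/83.5 percentile bounds.
    rng = np.random.default_rng(seed)
    draws = 10.0 ** rng.normal(log_ratio, log_err, n)
    lo, med, hi = np.percentile(draws, [16.5, 50.0, 83.5])
    return med, med - lo, hi - med          # value, sigma_minus, sigma_plus

# e.g. a reported log(Si/Mg) = -0.10 +/- 0.20 dex (illustrative numbers):
# val, sig_minus, sig_plus = ratio_from_log(-0.10, 0.20)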
Errors in ratios of elements are obtained by propagation of uncertainties in the individual elements using Monte Carlo sampling. To address the asymmetric uncertainties in the ratios of elements arising from the reported symmetric errors in logs of the ratios, for both the WDs and the Hypatia catalog stars, we use the following equation to calculate the χ² goodness of fit for each element i relative to Mg:

χ²_i = (δ_i/σ_i)² [ 1 − 2A(δ_i/σ_i) + 5A²(δ_i/σ_i)² ]    (1)

where δ_i is the difference between the observed and expected element ratio, σ_i is the average of the upper and lower errors, and A describes the asymmetry in the errors as A = (σ₊ − σ₋)/(σ₊ + σ₋), where σ₊ and σ₋ are the asymmetrical measurement uncertainties for element i (Barlow 2003). To find the reduced χ², we sum over all elements and divide by the degrees of freedom, taken to be the number of elements (excluding Mg) measured for the given star.

We define the passing conditions (accepting the alternative hypothesis H_a that the rocks are chondritic) for the χ²_ν tests using the parameter α, the probability of randomly obtaining a χ²_ν value greater than the one calculated for the observed abundances (i.e., the probability of incorrectly rejecting the null hypothesis H_0 that the rocks are not chondritic). Following convention, we place the α limit at 0.05, so that any stars identified as having chondritic compositions must have a χ²_ν with α < 0.05, implying an H_a = 1 − α probability that the correspondence with chondrite is not due to random chance. Because our sample sizes are very small, we must account for errors in the χ²_ν values. The error in χ²_ν can be approximated as σ = √(2/n) (Andrae et al. 2010), where n is the number of data points for a given star's composition. We therefore define the critical reduced chi-square values as χ²_ν,crit = χ²_ν(α = 0.05) + 2√(2/n), allowing for a 2σ error in χ²_ν. These constraints give critical χ²_ν values of ∼3 to 4 for n from 3 to 6 (excluding Mg), varying inversely with the number of elements observed for each star. For a given star, if the elements available define χ²_ν ≲ 3 to 4, the data are taken as evidence for chondritic rocky parent bodies or planets.

In order to identify outliers in the elemental abundances for each WD and Hypatia star, we apply a Dixon's Q test (Dean & Dixon 1951) with a confidence level of 95% (p = 0.05). We choose this test as it is best suited for small sample sizes. For this test we convert abundances to (n_Z/n_Mg)/(n_Z/n_Mg)_CI, such that 1 represents a perfect fit to chondrite. Outlier elements are therefore the elements with the worst fits to chondrite (other than Mg), and we identify an outlier in six of the WDs. Stars which pass as chondritic when an outlier is ignored are considered "soft passes."
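For concreteness, the per-star statistic and the critical value can be computed as in the following sketch, a direct transcription of Eq. (1) and the criterion above; the function names are ours.

import numpy as np
from scipy.stats import chi2

def chi2_nu(delta, sig_plus, sig_minus):
    # Eq. (1) (Barlow 2003) summed over elements and reduced by n, the
    # number of element ratios (Mg excluded).
    delta, sig_plus, sig_minus = map(np.asarray, (delta, sig_plus, sig_minus))
    sig = 0.5 * (sig_plus + sig_minus)               # mean uncertainty sigma_i
    A = (sig_plus - sig_minus) / (sig_plus + sig_minus)
    r = delta / sig
    terms = r**2 * (1.0 - 2.0 * A * r + 5.0 * A**2 * r**2)
    n = terms.size
    return terms.sum() / n, n

def chi2_nu_crit(n):
    # Critical value with the 2-sigma allowance described above.
    return chi2.ppf(0.95, n) / n + 2.0 * np.sqrt(2.0 / n)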
3. SOLAR SYSTEM ROCKS
To test our ability to differentiate between different rock types using the methods of Section 2, we first apply our test for chondritic compositions to rocks in the Solar System, including bulk Earth (BE) and bulk silicate Earth (BSE; McDonough 2003), mid-ocean ridge basalt (MORB), continental crust (CC; Rudnick & Gao 2003), bulk silicate Mars (BSM; Taylor 2013), and E chondrites (EH; Wasson & Kallemeyn 1988). For each element, we apply the mean uncertainty calculated from our sample of WDs for that element, with the resulting uncertainties generally ranging from 0.15 to 0.30 dex. We find that BE, BSE, BSM, and the E chondrites are indistinguishable from CI chondrites, while MORB and CC are very clearly not good matches to CI chondrite in these tests (Figure 1). Bulk Earth being indistinguishable from chondritic is in contrast with the distinction typically drawn between the two rock types in previous studies (e.g., Putirka & Rarick 2019; Drake & Righter 2002), and is the result of propagating the large uncertainties associated with the element ratios for the WDs compared with the comparatively small differences among the rock compositions. Throughout this work, we report compositions as consistent or inconsistent with chondrites, while recognizing that with current measurement uncertainties, chondrites, Earth, and Mars are all indistinguishable. However, this test is able to definitively differentiate between chondrite-like compositions and crust, the latter representing products of igneous differentiation of chondrites.

[Figure 1 caption: Error bars for each element correspond to the mean uncertainty in the WD abundances for that element. The reduced χ² value for each correlation with CI chondrite is indicated in the plots at the upper left. Where χ²_ν ≲ 3 to 4, the data are taken as evidence for chondritic rocky parent bodies or planets.]

4. POLLUTED WHITE DWARFS
We gather stellar properties, elemental abundances, and uncertainties in abundances from the references listed in Table 1, supplemented by the Montreal White Dwarf Database (MWDD; Dufour et al. 2017). We use the elements Si, Fe, Al, Ca, Ni, and Cr where available. We ratio all abundances to Mg and propagate uncertainties using the Monte Carlo approach outlined in Section 2.

We analyze both the raw and steady-state adjusted abundances for the WD pollution. The steady-state adjustment accounts for the differential settling rates of different elements in the atmosphere of a WD. Settling rates also depend on the dominant element in the atmosphere of the WD, and range from days to millions of years (Koester 2009; Blouin et al. 2018). The steady-state settling factor we use is

(n_Z/n_Mg)_SS = (n_Z/n_Mg) × (τ_Mg/τ_Z)

where τ_Mg and τ_Z are the settling timescales for Mg and a given element Z, respectively. Settling timescales for the WDs in our sample are collected from the MWDD, using the WD parameters listed in Table 1. These adjustments are clearly necessary for the H-dominated WDs, where settling is generally much more rapid; the suitability of the adjustment to abundances in the He-dominated atmospheres is less clear. We note that the stated steady-state factor is a simplistic approach to accounting for settling, which does not capture potential effects such as mixing in the WD atmosphere (e.g. Bauer & Bildsten 2019; Cunningham et al. 2019).
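The steady-state adjustment is a one-line rescaling; a minimal sketch with illustrative (not measured) timescales follows.

def steady_state_ratio(n_z_over_mg, tau_mg, tau_z):
    # Steady-state adjustment above: (n_Z/n_Mg)_SS = (n_Z/n_Mg) * (tau_Mg/tau_Z).
    return n_z_over_mg * (tau_mg / tau_z)

# e.g., if element Z settles faster than Mg (tau_Z < tau_Mg), the inferred
# parent-body Z/Mg rises relative to the raw photospheric value:
# fe_mg_ss = steady_state_ratio(fe_mg_raw, tau_mg=1.0e6, tau_z=5.0e5)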
Figure 2 shows this comparison for all WDs in our sample and for both the raw (top) and steady-state adjusted values (bottom). Hydrogen-dominated WDs are marked with "H". From left to right on each plot, the element ratios are Cr/Mg, Ni/Mg, Ca/Mg, Al/Mg, Fe/Mg, and Si/Mg. The dark grey panels in Figure 2 indicate WDs which do not pass the χ²_ν test for chondritic composition. The lighter shaded panels show the "soft pass" WDs, where ignoring an identified outlier allows the WD to pass as chondritic (see Section 2). Solar System rocks are shown for comparison (see Section 3 for discussion of Solar System fits). Figure 3 shows the χ²_ν parameters for the WDs for the raw versus steady-state abundances, separated by the dominant element in the WD atmosphere. We also group the WDs by the number of observed elements considered in the statistical comparison (n), to illustrate the dependence of χ²_ν on n. Increasing n generally lowers both the calculated and critical χ²_ν values. The condition for passing as chondritic at n = 3 is χ²_ν ∼ 4.2 and at n = 6 is χ²_ν ∼ 3.3. We find that 15 of the 31 WDs pass the χ²_ν test as good matches to chondritic composition when using the raw abundances. One additional WD passes as chondritic when its outlier element is ignored. A larger fraction of the pollution passes as chondritic with the steady-state adjustment (21/31 pass). Because the steady-state adjustment does not improve the fits for every WD (Figure 3), some WDs that pass as chondritic using the raw data do not pass in the steady-state case. We note that a larger proportion of WDs passing as chondritic in the steady-state case does not a priori mean the WDs are most likely to be in the steady-state phase of accretion. In any case, over half of the WDs in the sample are consistent with chondritic compositions using either the raw or steady-state compositions. We find no compelling evidence for basaltic crust (MORB) or continental crust rocks among the polluted WDs. When carrying out the same χ²_ν calculation for each WD relative to the other Solar System rock types considered here (Section 3), no WDs are better fit by MORB or continental crust relative to CI chondrite, even those with χ²_ν values relative to chondrite of 100 and greater.

4.1. White Dwarf Mineralogy Classification
In addition to the χ²_ν test, we also follow the common practice of representing rock chemistries as "normative mineralogies", in which elemental concentrations are converted to volumetric fractions of fictive minerals (Cross et al. 1902; see also the Putirka & Xu 2021 Supplement for details). We recast the WD pollution by projecting the observed abundances onto a normative mineralogy composed of the relative abundances of Mg-endmember Olivine (OLV), Orthopyroxene (OPX), and Clinopyroxene (CPX). These minerals comprise a reasonable normative mineralogy used to classify ultramafic (e.g., peridotite) rocks, and chondrites are broadly similar to ultramafic rocks. The fractions of these minerals in terms of moles depend on the relative numbers of Mg, Si, and Ca atoms comprising the rocks. By inverting the mineral formulae for these reference minerals, where OLV = Mg₂SiO₄, OPX = Mg₂Si₂O₆, and CPX = CaMgSi₂O₆, one obtains the function that transforms relative atomic abundances of Mg, Si, and Ca to the relative molar abundances of the minerals, which in matrix form is

(n_OLV, n_OPX, n_CPX)ᵀ = [[1, −1, 1], [−1, 2, −3], [0, 0, 1]] · (n_Mg, n_Si, n_Ca)ᵀ.   (2)
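A sketch of this projection in Python, using the matrix of Equation (2) and, for the volume conversion described in the text below, the nominal molar volumes quoted there. The function name is ours, and negative fractions can occur because the transformation is a projection rather than a physical mode.

```python
import numpy as np

# Projection matrix of Eq. (2): (n_Mg, n_Si, n_Ca) -> (n_OLV, n_OPX, n_CPX)
M = np.array([[ 1.0, -1.0,  1.0],
              [-1.0,  2.0, -3.0],
              [ 0.0,  0.0,  1.0]])

# Nominal molar volumes for OLV, OPX, CPX in J/bar (1 J/bar = 10 cm^3/mol),
# as quoted in the text below.
V = np.array([4.37, 6.26, 6.60])

def normative_volume_fractions(n_mg, n_si, n_ca):
    """Project atomic abundances onto normative OLV/OPX/CPX and convert
    the molar abundances to approximate volume fractions. Values can be
    negative (outside the positive ternary) because this is a projection."""
    moles = M @ np.array([n_mg, n_si, n_ca])
    vol = moles * V
    return vol / vol.sum()

# e.g., a roughly chondritic mixture with Mg:Si:Ca ~ 1.05 : 1.0 : 0.06
print(normative_volume_fractions(1.05, 1.0, 0.06))
```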
The molar abundances of the normative minerals are converted to approximate volume fractions (as is common for reporting rock mineralogies) using nominal molar volumes for OLV, OPX, and CPX of 4.37, 6.26, and 6.60 J/bar, respectively (1 J/bar = 10 cm³/mol). Fe and other less abundant elements are not included in this projection. Including Fe in this projection shifts the positions of the data somewhat, but does not substantially change the results. Figure 4 shows the WD pollution represented as the relative volume fractions of OLV, OPX, and CPX implied by each composition. For each polluted WD, we take Monte Carlo draws of Mg, Si, and Ca using the reported values and corresponding uncertainties as the parent populations, and calculate the resulting normative mineral abundances. CPX is constrained only by the relative amount of Ca in the pollution, and exhibits comparatively little scatter, as Ca uncertainties are generally small. We note that the volumetric fractions resulting from this method are not necessarily physical. Because this is a projection, some of the WD abundances result in negative amounts of OLV, OPX, or CPX, leading to scatter beyond the bounds of the positive ternary coordinate system. Putirka & Xu (2021) previously used this method to report exotic mineralogies for WD pollution; however, we find that the uncertainties in Si and Mg are sufficiently large as to produce hopelessly large spreads in OLV and OPX abundances, so that it is impossible to constrain the mineralogy of the implied rocks (Figure 4). A similar spread in mineral abundances is derived from the steady-state data. We therefore conclude that categorizing rock pollution in WDs into rock types based on normative abundances of OLV, OPX, and CPX, or similar normative mineralogies, is not possible.

5. HYPATIA CATALOG STARS
Our Solar System exhibits a diversity of rock types originating from the same protoplanetary material, underscoring that samplings of rock can end up with very different compositions relative to the average starting material (e.g., crust vs. chondrites in Section 3). To benchmark the "final" exoplanetary rocks sampled by polluted WDs against protoplanetary material, we analyze the abundances of rock-forming elements in nearby stars by applying our compositional fitting method to stars in the Hypatia catalog (Hinkel et al. 2014). These stars should reflect protoplanetary material, to the extent that stellar abundances have been shown to broadly reflect the compositions of planets around their stars (e.g., Thiabaud et al. 2015; Bonsor et al. 2021; Schulze et al. 2021). The stellar sample therefore represents a potential average of planet-building materials, rather than the final rock compositions of individual rocky parent bodies sampled by the WDs. We select Hypatia catalog stars with Mg and at least two other elements among Si, Fe, Al, Ca, Ni, or Cr. All uncertainties are obtained directly from the catalog, where they are listed as either the uncertainty reported in the original study or, where stars are observed by multiple methods, the mean uncertainty of multiple studies.
Given the range of stars included in the Hypatia catalog, we explore how stellar type and distance may impact overall abundances. About 6500 stars in the catalog are classified as F, G, K, or M stars. In Figure 5, we show the range of distances from the Sun in each classification. M stars in the sample tend to be much closer to the Sun (≲ 50 pc) than the rest of the Hypatia stars. For the purposes of this work, we do not attempt to fully account for potential biases in the Hypatia catalog stars arising from the number of separate stellar surveys included in the catalog, but instead point out a few factors that are relevant to our compositional tests. First, in Figure 6 we plot the distributions of elemental abundances relative to solar abundances, colored by stellar type. In general we find the distributions are centered around solar abundances; however, we note a peak in Ca in M stars at lower abundances relative to other stellar types, as well as a larger fraction of F stars with low Al than other stellar types. Hinkel et al. (2014) point out potential biases for both of these elements, including a lack of Al abundance measurements at higher metallicities, which may be altering the distribution. Additionally, most of the low [Ca/H] stars were drawn from the same single survey, which may be inducing a spurious, non-physical bias in the [Ca/H] abundances. We also note that abundance uncertainties in the Hypatia catalog are strongly peaked at about 0.05 dex. Distance appears to have a strong influence on the uncertainties, with a larger range of uncertainties for stars closer to the Sun, though it is unclear if this is a physical effect or due to the stellar samples included. Stars within about 500 pc have a large range of uncertainties, up to 1.75 dex, while stars that are farther away have a nearly flat distribution of uncertainties at around 0.05 dex. From the Hypatia catalog we obtain abundances relative to solar abundances for each element, in the form [Z/H] = log₁₀(Z/H)∗ − log₁₀(Z/H)⊙. We convert these relative abundances to molar ratios using the following equation:

n_Z/n_Mg = 10^([Z/H] + A(Z)) / 10^([Mg/H] + A(Mg)),   (3)

where A(Z) = log₁₀(Z/H)⊙ + 12 is the solar abundance of the element Z, as defined in Lodders et al. (2009). Uncertainties in the stellar abundances are propagated through this conversion using a Monte Carlo approach. We calculate the χ²_ν goodness of fit parameter for Si, Fe, Al, Ca, Ni, and Cr, where available, in each of the Hypatia catalog stars. For the elements considered in this work, we find median uncertainties of ∼ 0.05 dex for the raw abundances relative to solar. To avoid invalid values of χ²_ν, we replace any uncertainties of 0 with the median uncertainty for the corresponding element. Figure 7 shows the abundances for 35 randomly selected Hypatia stars. As with the WDs, white panels indicate stars that pass as chondritic, light grey panels show stars that pass when an outlier is ignored, and dark grey panels do not pass as chondritic even if outliers are ignored. We find that outliers do not make a big difference, and that about 75% of stars pass as chondritic whether or not outliers are ignored.
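The conversion of Equation (3) and the Monte Carlo propagation described above can be sketched as follows; the function name and the use of numpy's default generator are our choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def number_ratio_samples(zh, zh_err, mgh, mgh_err, A_z, A_mg, n=10000):
    """Monte Carlo draws of n_Z/n_Mg from [Z/H] and [Mg/H] (dex) with
    symmetric log errors; A(Z) = log10(Z/H)_sun + 12 (Lodders et al. 2009).
    Returns the median and the lower/upper spreads from the 16.5 and 83.5
    percentiles, mirroring the asymmetric ratio distributions in the text."""
    zh_draws = rng.normal(zh, zh_err, n)
    mgh_draws = rng.normal(mgh, mgh_err, n)
    ratios = 10.0 ** ((zh_draws + A_z) - (mgh_draws + A_mg))
    lo, med, hi = np.percentile(ratios, [16.5, 50.0, 83.5])
    return med, med - lo, hi - med
```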
Similar to the WDs, we find that many of the stars that do not pass as chondritic are high in Mg, so that the abundance ratios fall systematically below chondritic values. Because the uncertainties of the abundances vary strongly with the distance of each Hypatia star, we also compute fractions of chondritic stars considering only stars within 150 pc. We find that the results for the truncated sample are very similar to those for the full sample, with about 74% of stars providing good matches to chondrites.

5.1. Hypatia Mineralogy Classification
Projecting the Hypatia catalog stellar abundances into normative mineralogy ternary space, we find, as with the WDs, that the uncertainties are too large to constrain the volumetric proportions of minerals in a meaningful way. To illustrate this, Figure 7 shows the abundances relative to chondrite for one of the Hypatia catalog samples, HIP 26834, which yields an excellent fit to a chondritic bulk composition. Figure 7 shows that the uncertainties in abundances are relatively low for this star, but they nonetheless create a very large spread in OLV and OPX fractions (Figure 8). Similar to the WDs, calculating the normative mineralogy for all of the Hypatia catalog stars results in a large spread in OLV and OPX values that reflects only the uncertainties. This is consistent with Hinkel & Unterborn (2018), who find that much smaller measurement uncertainties than those of current observations are required to differentiate between unique planetary structures using stellar data.

5.2. Abundance Ratio Trends in Hypatia Catalog Stars
The Hypatia catalog stars exhibit some systematic trends in element abundances due to galactic chemical evolution (GCE) (Hinkel et al. 2014). In particular, we note decreasing abundances of α elements relative to iron with increasing [Fe/H], where the latter is a nonlinear proxy for time. This trend is well studied in the Milky Way and other galaxies in the local universe, and is broadly due to increased injection of Fe into the interstellar medium (ISM) at later times due to the delayed effects of Type Ia supernovae. The late injection alters the α-element-to-Fe ratios established by the core-collapse supernovae that dominated the ISM at earlier times (e.g., Hayden et al. 2015; Kobayashi et al. 2020). Of the elements considered in this study, Fe, Cr, and Ni abundances accelerated with time in the Galaxy as a result of late-forming Type Ia supernovae accounting for about half of their overall production. The α elements Mg, Si, and Ca, on the other hand, are produced in Type II core-collapse supernovae, and increase more steadily with time in the Galaxy. Aluminum is somewhat separate from these two groups; it is also produced by Type II supernovae like the α elements, but its yield depends more strongly on the metallicity of the progenitor stars (Kobayashi & Nakasato 2011), and it therefore exhibits a relatively small acceleration in abundance with time. The α elements and Al are lithophile elements, while Fe, Cr, and Ni are siderophile.
We note that the Hypatia catalog contains a few thousand stars in relatively close proximity to the Earth, and that trends in stellar composition therefore do not include the wide ranges in ages or environmental effects that are observed in larger surveys (e.g., Horta et al. 2022). We find that the fits to chondrite are influenced by the evolving lithophile/siderophile ratios. In Figure 9, we show the fractional difference between the observed abundances and chondrite for the Hypatia stars that do not pass as chondritic. For the chondritic stars, all of these distributions are centered at zero. However, Figure 9 shows that Fe, Cr, and Ni abundances relative to chondrite are lower than those of the lithophile elements by about a factor of two. This suggests that Type Ia products are inflating the χ²_ν values of non-chondritic stars relative to the α elements. Quantitatively, we find that of the ∼ 2000 Hypatia catalog stars that do not pass as chondritic, 71% have a siderophile element as their worst fitting abundance ratio. Of this subset of stars, 71% pass as chondritic if Fe, Cr, and Ni abundances are ignored, meaning that when stars have anomalous siderophile abundances relative to chondrite, they typically fail as chondritic because of the siderophiles. Meanwhile, 4% of stars with lithophiles as the worst fitting element pass as chondritic when lithophiles are ignored. In other words, the majority of stars that fail with anomalous lithophile elements are not failing solely because of the lithophile elements. We do not see these same patterns in the WD data. Amongst the WDs, 7/15 of the failures in the raw data and 2/10 of the failures in the steady-state data are due to siderophiles. We do not find higher recovery rates amongst the siderophile failures when removing siderophile elements. In Figure 10 we show four plots of abundance ratios of the Hypatia stars, illustrating the effect of GCE on the goodness of fit to chondrite. The overall trends in relative abundances of lithophile and siderophile elements are plotted as [Mg/H] (lithophile, α nuclide) against [Fe/H] (siderophile, and a proxy for time) in panel A, and the corresponding [Mg/Fe] ratios against [Fe/H] in panel B. As a zero-order approximation of chemical evolution in the local neighborhood, we categorize the trends in the data into two stages of pre- and post-injection of Fe, Cr, and Ni by Type Ia supernovae. The break between trends is around [Fe/H] ∼ −0.5, corresponding to ∼ 8 billion years before present (Bellardini et al. 2022). The pre-Type Ia arrow in Figure 10 shows the general trends in α nuclides (lithophiles), represented here by Mg, relative to siderophile abundances at low metallicities, prior to the influence of Type Ia supernovae on the ISM. The post-Type Ia arrow shows the effect of Type Ia supernovae on higher metallicity stars, after the ISM had been modified by Type Ia supernovae. The line in panel B shows the induced correlation between [Mg/Fe] and [Fe/H] that would be expected if Mg abundances were completely independent of Fe. At lower metallicity, we find that Mg and Fe abundances increase at very nearly the same rate, resulting in nearly constant [Mg/Fe] with metallicity.
The increase in ISM Fe at later times flattens the growth of Mg vs. Fe, resulting in a negative slope in [Mg/Fe] with metallicity. In panel A, we fit the low and high metallicity ranges and find a slope of 0.98 for the low end and 0.88 for the high end, with uncertainties in the slopes of less than 0.005. The decrease in slope is a reflection of the influence of Type Ia supernovae at later times. For panel B, we find slopes of −0.03 and −0.19 for the low and high metallicity ranges, respectively. We again show the [Mg/Fe] ratio as a function of metallicity in panel C, with points colored by whether the star passes as chondritic in the χ²_ν tests, as well as the occurrence levels for chondritic and non-chondritic stars. The contours illustrate the somewhat different distributions of the chondritic and non-chondritic stars. We find that stars with very low metallicity, or Fe abundances, are those that are often classified as non-chondritic. Finally, in lithophile-lithophile space (panel D of Figure 10), we find that the Hypatia catalog stars have a range of ratios centered on the Sun (the white star in Figure 10D). Consistent with GCE models, no overarching trends in ratios are seen in this case, and we find very little separation between the ratios of the Hypatia catalog stars that are considered chondritic and those of non-chondritic stars. We conclude that older, lower metallicity stars are less likely to be consistent with a chondritic composition. In the χ²_ν tests, stars that are statistically distinct from chondritic more often have low Fe, Cr, and Ni compared with solar, indicating that deviations from chondritic compositions are in part attributable to the delayed effects of Type Ia supernovae. The Hypatia catalog stars are all in relatively close proximity to the Sun, so while we find that the rock-forming element ratios in most of the stars are consistent with chondrites, it is possible that this conclusion would not apply to older populations of stars or stars located outside of the local disk of the Milky Way, due to the effects of GCE on lithophile/siderophile ratios.

6. DISCUSSION
A summary of the fractions of bodies that are consistent with chondritic compositions is shown in Table 2. The leave-out-outliers ("LOO") column includes samples that pass as chondritic using the χ²_ν test when an element identified as an outlier is ignored (Section 2). We find that outliers do not significantly affect the fractions of stars that are consistent with chondritic composition. Ignoring outliers changes the classification from non-chondritic to chondritic for one WD, and shifts the fraction of Hypatia catalog stars consistent with chondritic composition by less than 1%. In Figure 11 we show the distribution of χ²_ν values calculated for the Hypatia catalog sample and for the raw and steady-state adjusted WD data. The χ²_ν for all of the populations is most strongly peaked at low values, consistent with chondritic compositions. This suggests that the majority of extrasolar rocks in the solar neighborhood are built from material similar in composition to that which formed the Solar System. The overwhelming fraction of Hypatia stars with chondritic rock-forming element ratios suggests that any deviations from chondrite-like compositions observed in exoplanets are more likely to be a result of specific processing during planet formation, rather than the result of large differences between the initial protoplanetary source material and chondritic compositions.
M stars in the Hypatia data set exhibit a tail to higher χ²_ν values, though the majority of M stars still pass as chondritic. The difference in the M dwarf distribution relative to the others is evidently a result of different treatments of errors at near and far distances (M dwarfs are nearer) and potential systematic offsets in Ca. Many of the WDs and Hypatia catalog stars that did not pass as chondritic have high relative Mg concentrations; their abundance ratios in Figures 2 and 7 (lower panel) all fall below the 1:1 line for chondritic composition due to an excess in the Mg concentration used as the denominator in all ratios. Because we ratio to Mg, high Mg can systematically draw chondritic abundance ratios away from chondrite, inflating the χ²_ν values. Examples amongst the WDs (raw data) include WD1415+234, SDSSJ2339-0424, SDSSJ1242+5226, WD1232+563, SDSSJ0738+1835, WD1350-162, and WD1929+012. If outliers are ignored, then G241-6 and SDSSJ1043+0855 also fall into this list. For the WDs, applying the steady-state adjustment brings some, though not all, of the elements from the apparently Mg-rich WDs back to or above chondritic abundances. We find, therefore, that for WDs with excess Mg, the deviations from chondritic are due in no small measure to the effects of settling. We now explore how this study of local stars and WDs fits into both the overarching metallicity gradients in the Galaxy and the current landscape of inferred exoplanet compositions.

6.1. Galactic Chemical Evolution
The Milky Way experiences spatial and temporal variations in stellar compositions, raising the question of how representative the pervasive chondritic compositions we see in the solar neighborhood are with respect to time and place in the Galaxy. In Section 5.2 we showed that galactic chemical evolution (GCE) has implications for the relative lithophile-to-siderophile ratios in stars over time, though the bulk of the Hypatia catalog stars are still consistent with chondrites. Here we evaluate the significance of chondritic rock-forming element ratios in the context of large-scale variability in the Galaxy, outside of the solar neighborhood. Spatial metallicity gradients in the Milky Way exist both radially and vertically (in Galactic latitude) as a result of GCE. The disk midplane tends to have more metal-rich stars than above or below the plane, and the disk itself exhibits a negative gradient, with generally higher metallicities towards the Galactic center (e.g., Hayden et al. 2014; Donor et al. 2020). Radial compositional changes may arise as annuli of the Milky Way are differentially enriched by supernovae and stellar feedback. For example, Bellardini et al. (2022) find from cosmological simulations that the older, inner disk receives more material from Type Ia supernovae, leading to lower [Mg/Fe] compared to the outer disk. They also find that some azimuthal scatter in abundances is to be expected, though the scatter is relatively low (≲ 0.05 dex). While variations in [Fe/H] of > 1 dex are found across the entire Galactic disk, much smaller variations in [Fe/H] of about ±0.2 dex are found for stars within 2-3 kpc of the Sun (Monson et al. 2017). Radial variations in metallicity may be further damped by radial migration and mixing of stars throughout the disk.
Overall metallicity is expected to rise with time in the Galaxy. For example, Timmes et al. (1995) showed that in the solar neighborhood, at galactocentric radii of about 8 kpc, changes in [Fe/H] of about 1.5 dex are to be expected over 14 Gyr. However, the majority of this increase in metallicity occurs within the first few Gyr of galactic evolution, with changes of less than 0.5 dex in [Fe/H] from about 2 Gyr onwards. Therefore, while significant compositional changes occurred very early in the Milky Way's evolution, or very close to the Galactic center, we do not expect to find demonstrable effects of GCE in rock-forming element ratios among stars in the stellar disk at galactocentric radii between ∼ 4 kpc and ∼ 10 kpc as seen today. Given these trends in GCE, we now assess the impact on polluted WDs by estimating their formation times. The polluted WDs in our sample have cooling ages of about 50−600 Myr and masses between ∼ 0.5−0.75 M⊙ (Table 1). These WD masses translate to initial stellar masses of ∼ 1−3 M⊙ (Cummings et al. 2018; El-Badry et al. 2018) and stellar lifetimes of ∼ 300 Myr−10 Gyr (Schaller et al. 1992). Meanwhile, radioactive dating has constrained the age of the Milky Way to ∼ 13.8 Gyr (e.g., Cayrel et al. 2001; Hill et al. 2002; Cowan et al. 2002). Given that overall metallicity changes are most significant in the first few Gyr of the Galaxy, this suggests that GCE trends could only manifest in the lowest-mass WD in our data set (GaiaJ0218+3625), with a progenitor mass approximately that of the Sun and a corresponding lifetime exceeding 10 Gyr. Within our sample, we do not find evidence that lower mass WDs are worse fits to chondrite; however, it is possible that such trends may be evident in future, larger samples of WDs. We conclude that GCE could significantly alter the relative abundances of rock-forming material available for planet formation at early times and outside of the disk of the Milky Way. We explore a possible implication of this in the next section. We find that the majority of the current population of polluted WDs derive from progenitors sufficiently young and sufficiently near the Sun that we do not expect GCE to have a strong effect on their compositions.

6.2. Iron core mass fractions
One of the possible consequences of GCE for exoplanets is variation in the resulting core mass fractions due to variations in lithophile/siderophile ratios. The mass fractions of the iron-rich cores of rocky planets have been shown to be related to the iron mass fractions deduced from their host stars (e.g., Adibekyan et al. 2021; Rogers & Owen 2021). Consistent with these studies, we calculate the iron mass fractions of planets that might have formed from the material polluting the WDs or around the Hypatia catalog stars as f_Fe = m_Fe / (m_Fe + m_Mg2SiO4 + m_MgSiO3 + m_SiO2), where m_i is the abundance of species i relative to H or He multiplied by the formula weight (Santos et al. 2015). The relative abundances by number of the silicate species are obtained from a linear transformation such that n_MgSiO3 = 2n_Si − n_Mg, n_Mg2SiO4 = n_Mg − n_Si, and SiO₂ takes any remaining Si. The mass of O in the rock is therefore derived from the Si and Mg abundances, which corrects for any O excesses due to water (for the WD sample) or O production in the Hypatia catalog stars. The mass fractions of Fe can be equated with the metal core fractions of planets given the expectation of small concentrations of Fe in the silicate (Doyle et al. 2019).
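A sketch of this calculation, assuming number abundances relative to H or He are in hand; the function name, the formula weights, and the clamping of negative speciation amounts (which the text does not spell out) are our assumptions.

```python
def iron_mass_fraction(n_fe, n_mg, n_si):
    """f_Fe = m_Fe / (m_Fe + m_Mg2SiO4 + m_MgSiO3 + m_SiO2), with the
    silicate speciation MgSiO3 = 2Si - Mg, Mg2SiO4 = Mg - Si, and SiO2
    taking any remaining Si (Santos et al. 2015). Amounts are clamped at
    zero outside the olivine-pyroxene regime (our simplification)."""
    mu = {"Fe": 55.85, "Mg2SiO4": 140.69, "MgSiO3": 100.39, "SiO2": 60.08}
    n_mg2sio4 = max(n_mg - n_si, 0.0)                  # olivine from Mg excess
    n_mgsio3 = min(max(2.0 * n_si - n_mg, 0.0), n_mg)  # pyroxene, Mg-limited
    n_sio2 = max(n_si - n_mg2sio4 - n_mgsio3, 0.0)     # leftover Si as silica
    m_rock = (n_fe * mu["Fe"] + n_mg2sio4 * mu["Mg2SiO4"]
              + n_mgsio3 * mu["MgSiO3"] + n_sio2 * mu["SiO2"])
    return n_fe * mu["Fe"] / m_rock

# e.g., an Earth-like mix by number, Fe:Mg:Si ~ 0.9 : 1.05 : 1.0
print(iron_mass_fraction(0.9, 1.05, 1.0))
```

Wrapping this function in random draws of Fe, Mg, and Si reproduces the Monte Carlo spread in f_Fe discussed below.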
The top panel of Figure 12 shows f_Fe calculated from the abundances of the raw and steady-state adjusted WD data, and from the Hypatia stellar abundances. The bottom panel illustrates the distribution of f_Fe resulting for four example WDs when uncertainties in observation are propagated through the transformation. The Hypatia catalog stars with higher metallicities define a slightly skewed distribution of iron mass fractions, with a well-defined mode of about 32%, indistinguishable from the core mass fraction of the Earth, a tail towards lower values, and an approximate 1σ spread of about ±5%. The lower metallicity Hypatia catalog stars define a peak in the distribution at f_Fe ∼ 20 ± 5% (Figure 12). A similar variation in iron mass fractions was calculated by Michel et al. (2020) for stars in the thin and thick disks and halo of the Galaxy. The difference in the most probable iron mass fractions obtained from the higher and lower metallicity Hypatia catalog stars suggests the possibility that planets formed in the first several billion years of the Milky Way may have tended to have smaller metal cores compared with Earth, while planets formed later are generally similar to Earth in their metal core fractions. This is broadly due to the increase in siderophile elements at later times relative to lithophiles. The WD sample size is much smaller, and plagued by larger uncertainties. The bottom panel of Figure 12 shows the spread in f_Fe obtained for each WD after taking 100 random draws of Si, Mg, and Fe abundances from a parent population defined by the WD medians and uncertainties in each element. The 1σ uncertainty in iron mass fraction ranges from about 4−15%, with a median of ∼ 7%. In any case, the raw data define iron mass fractions peaking at 20−30%, while the steady-state adjusted data yield a peak at 30−40%. While we do calculate low (f_Fe < 10%) iron mass fractions for some WDs, the lack of further information about the ages or initial metallicities of these stars prevents us from identifying whether the low iron is due to the systematic effects of GCE. As a whole, these data suggest that the majority of planets that might have formed from these polluting materials have metal core mass fractions that are not significantly different from an Earth-like planet. Following the discussion in Section 6.1, abundance measurements for older WDs (likely lower-mass WDs), or for WDs outside of the thin disk of the Galaxy, could help identify whether the lower core mass fractions are influenced by GCE.

7. CONCLUSIONS
In this work, we show that about half of polluted WDs, and well over half of the Hypatia catalog stars, have compositions that are consistent with chondrites. We use the χ²_ν goodness of fit to test the composition of each star, with a threshold of α = 0.05 to select matches to chondritic composition, and allow for a 2σ error in the χ²_ν to account for our small sample size of observed elements. We use Monte Carlo methods to propagate the uncertainties in the observed abundances of the WDs and Hypatia catalog stars. We find that many Solar System rocks, including bulk Earth, bulk silicate Earth, bulk silicate Mars, and E chondrites, are indistinguishable from CI chondrite given current uncertainties in WD pollution measurements. Additionally, we find that we are not able to characterize either Hypatia catalog stellar abundances or WD pollution by normative mineralogies, due to the impossibly large uncertainties obtained by propagating measurement uncertainties. The polluted WD data indicate that the bulk of exorocks are consistent with chondritic compositions.
This is supported by the compositions of rocks implied by the Hypatia catalog stars, which suggest most material in the solar neighborhood formed in protoplanetary disks with rock-forming element ratios similar to our Sun's. The Hypatia catalog stars do suggest, however, that galactic chemical evolution can lead to exoplanet compositions statistically different from the Solar System in the first few billion years of the Galaxy, or in galactic substructures with considerably different metallicities. One implication of this is that earlier in the evolution of the Milky Way, rocky planets may have formed with substantially less massive metal cores than Earth. Our methods do not suggest any of the WD polluters are composed of crust, either MORB or continental crust. No stars in our sample are better fits to MORB or continental crust than chondrite, even the WDs with the largest deviations from chondritic composition (χ²_ν ≫ 10). We conclude that the relative abundances of rock-forming elements in polluted WDs and local stars are relatively homogeneous, which suggests that the majority of extrasolar rocks in the solar neighborhood originate from chondrite-like compositions.

Figure 1. Compositions of Solar System rocks compared to CI chondrite, for Bulk Earth (BE), Bulk Silicate Earth (BSE), Mid-Ocean Ridge Basalt (MORB), Continental Crust (CC), Bulk Silicate Mars (BSM), and E chondrites (EH). Error bars for each element correspond to the mean uncertainty in the WD abundances for that element. The reduced χ² value for each correlation with CI chondrite is indicated in the plots at the upper left. Where χ²_ν ≲ 3 to 4, the data are taken as evidence for chondritic rocky parent bodies or planets.

Figure 2. Comparison of WD pollution compositions to the CI chondrite composition, for the raw data (top) and steady-state adjusted abundances (bottom). The order of elements in each plot, from left to right, is Cr, Ni, Ca, Al, Fe, Si, all ratioed to Mg. The χ²_ν parameter appears in the upper left of each panel. WDs with white backgrounds are consistent with having accreted a chondritic rock composition. WDs with dark grey backgrounds are not considered a good fit to chondrite, using an α parameter of 0.05 for goodness of fit. WDs with light grey backgrounds have an outlier element that allows the WD to pass as chondritic when the outlier is removed (outliers highlighted in orange).

Figure 3. χ²_ν for each WD, relative to CI chondrite, for the raw abundances (circles) and steady-state adjusted values (triangles). The points are colored and grouped by n, the number of elements used to calculate χ²_ν. The horizontal line shows a typical critical χ²_ν value (χ²_ν ∼ 3 to 4, based on the number of observed elements). In most cases, the steady-state values provide a better fit.

Figure 4. Ternary diagram for all of the WD samples, for raw abundances. The white points indicate the OLV, OPX, and CPX quantities derived from the median values of Mg, Ca, and Si for each WD. We also demonstrate the spread in OLV, OPX, and CPX that is due to the uncertainties in the WD abundances by showing the spread from 100 random draws of Mg, Si, and Ca for WD Ton345. The total spread in points is truncated for visibility.

Figure 5. Distribution of the distances of Hypatia catalog stars from the Sun (classified by stellar type). M dwarfs in the sample tend to be much closer to the Sun than other stars in the catalog. K stars also appear to have a bimodal distribution in distance.

Figure 6. Distribution of elemental abundances in Hypatia catalog stars relative to solar abundances, grouped by stellar type. All element distributions are well centered at solar abundances, with the exception of Ca in M stars and Al in F stars.

Figure 7. Comparison of Hypatia catalog metal abundances to chondritic composition, for 35 randomly chosen Hypatia catalog stars. The χ²_ν parameter is listed for each star. The order of elements in each plot, from left to right, is Cr, Ni, Ca, Al, Fe, Si, all ratioed to Mg. Stars with dark grey backgrounds are not considered a good fit to chondrite, using an α parameter of 0.05 for goodness of fit. Stars with light grey backgrounds are not a good fit to chondrite when all elements are used to calculate χ²_ν, but do pass as chondritic when an outlier element is ignored (outliers highlighted in orange).
Figure 8. Ternary diagram for 100 randomly selected Hypatia stars, and the spread in OLV, OPX, and CPX for one of the best Hypatia fits to chondrite, HIP 26834 (χ²_ν ∼ 0.02). Despite being statistically indistinguishable from chondritic composition, the uncertainties in the measured abundances lead to a huge spread in OLV and OPX quantities. Similar OLV, OPX, and CPX ranges resulting from propagation of uncertainties are found for all of the Hypatia stars.

Figure 9. The fractional difference between the measured abundances in the Hypatia catalog stars relative to chondrite, for the stars that do not pass as chondritic. For the non-chondritic stars, siderophiles (Fe, Cr, Ni) tend to be the worst fitting elements rather than lithophiles.

Figure 10. Elemental abundance ratios for the Hypatia catalog stars in dex. Throughout, the white star shows solar values. A) [Mg/H] vs. [Fe/H], showing growth of an α nuclide compared with Fe. [Fe/H] is broadly taken as an indicator of time. The influence of Type Ia supernovae on galactic chemical evolution is indicated by the two arrows. B) The same data as [Mg/Fe] vs. [Fe/H]. The solid negatively sloping line shows the effect of induced ratio correlation due to Fe appearing on both axes. C) Same as B), but with contours showing the 50, 80, and 95% levels for the chondritic (solid contours) and non-chondritic (dashed contours) populations of stars based on the χ²_ν tests. The larger points indicate stars that pass as chondritic and the smaller points those that do not. The non-chondritic population generally extends to lower metallicities than the chondritic stars. D) Ratios of lithophile elements only, showing no clear trend.

Figure 11. Distribution of χ²_ν for the WDs (raw and steady-state adjusted), compared to the Hypatia catalog stars. The vertical line shows the approximate critical χ²_ν value (χ²_ν ∼ 3 to 4, based on the number of observed elements), so that samples within the shaded regions are considered to be consistent with a chondritic composition.

Figure 12. Top: Distribution of iron mass fractions (f_Fe) for the WDs and Hypatia catalog stars. We show both raw and steady-state values for the WDs, and split the Hypatia catalog stars into low ([Fe/H] < −0.5) and high ([Fe/H] > −0.5) metallicity categories. Bottom: Median and 1σ spread in f_Fe for the sample of WDs, based on 100 random draws from the median and uncertainty in the raw Si, Mg, and Fe abundances of each WD. Most WD iron fractions have a range of at least 10% when accounting for uncertainty in their abundances.

Table 1. All WD parameters are collected from the references listed in the table. Any values not reported by the paper have been supplemented using the Montreal White Dwarf Database. Throughout this work we group WDs by the dominant element in their atmospheres (H- or He-dominated only).

White Dwarf* | Type | Teff (K) | log(g) | D (pc) | M† (M⊙) | Reference
G29-38 | H | 11820 | 8.15 | 14 | 0.696 | Xu et al. (2014)
SDSSJ1043+0855 (SDSS J104341.53+085558.2) | H | 18330 | 8.05 | 169 | 0.649 | Melis & Dufour (2017)
PG1015+161 | H | 19226 | 8.04 | 88 | 0.645 | Gänsicke et al. (2012)
WD1226+110 | H | 20900 | 8.15 | 129 | 0.714 | Gänsicke et al. (2012)
WD1929+012 (GALEX J193156.8+011745) | H | 21200 | 7.91 | 53 | 0.578 | Gänsicke et al. (2012)
PG0843+516 (WD 0843+516) | H | 23095 | 8.17 | 139 | 0.730 | Gänsicke et al. (2012)
WD0446-255 | He | 10120 | 8.00 | 91 | 0.581 | Swan et al. (2019)
WD1350-162 | He | 11640 | 8.02 | 108 | 0.596 | Swan et al. (2019)
WD1232+563 | He | 11787 | 8.30 | 172 | 0.773 | Xu et al. (2019)
SDSSJ1242+5226 (SBSS 1240+527) | He | 13000 | 8.00 | 161 | 0.587 | Raddi et al. (2015)
SDSSJ2339-0424 (GALEX J233917.0-042425) | He | 13735 | 7.93 | 89 | 0.548 | Klein et al. (2021)
SDSSJ0738+1835 (SDSS J073842.56+183509.6) | He | 13950 | 8.40 | 172 | 0.842 | Dufour et al. (2012)
HS2253+8023 | He | 14000 | 8.10 | 72 | 0.648 | Klein et al. (2011)
WD1425+540 | He | 14490 | 7.95 | 52 | 0.560 | Xu et al. (2017)
WD1145+017 | He | 14500 | 8.11 | 146 | 0.655 | Fortin-Archambault et al. (2020)
GaiaJ0218+3625 (GALEX J021816.6+362507) | He | 14691 | 7.86 | 116 | 0.512 | Doyle et al. (2023)
EC22211-2525 | He | 14743 | 7.90 | 109 | 0.534 | Doyle et al. (2023)
WD2207+121 | He | 14752 | 7.97 | 164 | 0.572 | Xu et al. (2019)
WD1551+175 | He | 14756 | 8.02 | 162 | 0.601 | Xu et al. (2019)
WD1244+498 | He | 15150 | 7.97 | 164 | 0.573 | Doyle et al. (2023)
WD1248+1004 (SDSS J124810.23+100541.1) | He | 15178 | 8.11 | 73 | 0.656 | Doyle et al. (2023)
GD40 | He | 15300 | 8.00 | 120 | 0.591 | Jura et al. (2012)
G241-6 | He | 15300 | 8.00 | 65 | 0.591 | Jura et al. (2012)
GaiaJ1922+4709 (Gaia DR2 2127665711125011456) | He | 15497 | 7.95 | 127 | 0.562 | Doyle et al. (2023)
GD378 | He | 15620 | 7.93 | 44 | 0.551 | Klein et al. (2021)
SDSSJ1734+6052 (GALEX J173435.7+605203) | He | 16340 | 8.04 | 150 | 0.616 | Doyle et al. (2023)
GD61 | He | 17280 | 8.20 | 54 | 0.715 | Farihi et al. (2013)
WD1415+234 | He | 17312 | 8.17 | 127 | 0.696 | Doyle et al. (2023)
SDSSJ2248+2632 (SDSS J224840.97+263251.7) | He | 17369 | 8.02 | 123 | 0.606 | Doyle et al. (2023)
Ton345 | He | 19780 | 8.18 | 106 | 0.706 | Wilson et al. (2015)
WD1536+520 | He | 20800 | 7.96 | 201 | 0.578 | Farihi et al. (2016)

*Where applicable, alternate WD identifiers are listed in parentheses.
†Masses are collected from the MWDD using the Teff and log(g) values in the table.

Table 2. Percentage of stars in the Hypatia and WD samples that pass the χ²_ν test and are considered good fits to chondritic composition, using all available elements among Si, Fe, Mg, Al, Ca, Ni, and Cr. Leave-out-outliers (LOO) percentages are calculated by considering a star to be a good fit to chondrite if removing the outlier element allows the star to pass.

Sample | % Chondritic | % Chondritic, LOO
Hypatia, All | 75.0 | 75.5
Hypatia, <150 pc | 74.7 | 75.0
Hypatia, F | 66.5 | 67.0
Hypatia, G | 75.8 | 75.9
Hypatia, K | 76.6 | 76.7
Hypatia, M | 56.1 | 56.8
WD, raw | 48.4 | 51.6
WD, SS | 67.7 | 67.7

ACKNOWLEDGMENTS
The authors thank Pratik Gandhi (University of California, Davis) for helpful discussions on galactic chemical evolution. We also thank the referees for their comments, which improved the manuscript. This work was supported by NASA 2XRP grant No. 80NSSC20K0270 to EDY. The research shown here acknowledges use of the Hypatia Catalog Database, an online compilation of stellar abundance data as described in Hinkel et al. (2014), which was supported by NASA's Nexus for Exoplanet System Science (NExSS) research coordination network and the Vanderbilt Initiative in Data-Intensive Astrophysics (VIDA).

REFERENCES
Adibekyan, V., Dorn, C., Sousa, S. G., et al. 2021, Science, 374, 330, doi: 10.1126/science.abg8794
Anders, E., & Grevesse, N. 1989, GeoCoA, 53, 197, doi: 10.1016/0016-7037(89)90286-X
Andrae, R., Schulze-Hartung, T., & Melchior, P. 2010, arXiv e-prints, arXiv:1012.3754
Barlow, R. 2003, arXiv e-prints, physics/0306138, doi: 10.48550/arXiv.physics/0306138
Bauer, E. B., & Bildsten, L. 2019, ApJ, 872, 96, doi: 10.3847/1538-4357/ab0028
Bellardini, M. A., Wetzel, A., Loebman, S. R., & Bailin, J. 2022, MNRAS, 514, 4270, doi: 10.1093/mnras/stac1637
Blouin, S., Dufour, P., & Allard, N. F. 2018, ApJ, 863, 184, doi: 10.3847/1538-4357/aad4a9
Bond, J. C., O'Brien, D. P., & Lauretta, D. S. 2010, ApJ, 715, 1050, doi: 10.1088/0004-637X/715/2/1050
Bonsor, A., Jofré, P., Shorttle, O., et al. 2021, MNRAS, 503, 1877, doi: 10.1093/mnras/stab370
Cayrel, R., Hill, V., Beers, T. C., et al. 2001, Nature, 409, 691, doi: 10.1038/35055507
Cowan, J. J., Sneden, C., Burles, S., et al. 2002, ApJ, 572, 861, doi: 10.1086/340347
Cross, W., Iddings, J. P., Pirsson, L. V., & Washington, H. S. 1902, The Journal of Geology, 10, 555, doi: 10.1086/621030
Cummings, J. D., Kalirai, J. S., Tremblay, P. E., Ramirez-Ruiz, E., & Choi, J. 2018, ApJ, 866, 21, doi: 10.3847/1538-4357/aadfd6
Cunningham, T., Tremblay, P.-E., Freytag, B., Ludwig, H.-G., & Koester, D. 2019, MNRAS, 488, 2503, doi: 10.1093/mnras/stz1759
Dean, R. B., & Dixon, W. J. 1951, Analytical Chemistry, 23, 636, doi: 10.1021/ac60052a025
Donor, J., Frinchaboy, P. M., Cunha, K., et al. 2020, AJ, 159, 199, doi: 10.3847/1538-3881/ab77bc
Dorn, C., Harrison, J. H. D., Bonsor, A., & Hands, T. O. 2019, MNRAS, 484, 712, doi: 10.1093/mnras/sty3435
Dorn, C., Khan, A., Heng, K., et al. 2015, A&A, 577, A83, doi: 10.1051/0004-6361/201424915
Doyle, A. E., Desch, S. J., & Young, E. D. 2021, ApJL, 907, L35, doi: 10.3847/2041-8213/abd9ba
Doyle, A. E., Young, E. D., Klein, B., Zuckerman, B., & Schlichting, H. E. 2019, Science, 366, 356, doi: 10.1126/science.aax3901
Doyle, A. E., Klein, B. L., Dufour, P., et al. 2023, arXiv e-prints, arXiv:2303.00063, doi: 10.48550/arXiv.2303.00063
Drake, M. J., & Righter, K. 2002, Nature, 416, 39, doi: 10.1038/416039a
Dufour, P., Blouin, S., Coutu, S., et al. 2017, in Astronomical Society of the Pacific Conference Series, Vol. 509, 20th European White Dwarf Workshop, ed. P. E. Tremblay, B. Gaensicke, & T. Marsh, 3, https://arxiv.org/abs/1610.00986
Dufour, P., Kilic, M., Fontaine, G., et al. 2012, ApJ, 749, 6, doi: 10.1088/0004-637X/749/1/6
El-Badry, K., Rix, H.-W., & Weisz, D. R. 2018, ApJL, 860, L17, doi: 10.3847/2041-8213/aaca9c
Farihi, J., Gänsicke, B. T., & Koester, D. 2013, Science, 342, 218, doi: 10.1126/science.1239447
Farihi, J., Koester, D., Zuckerman, B., et al. 2016, MNRAS, 463, 3186, doi: 10.1093/mnras/stw2182
Fortin-Archambault, M., Dufour, P., & Xu, S. 2020, ApJ, 888, 47, doi: 10.3847/1538-4357/ab585a
Gänsicke, B. T., Koester, D., Farihi, J., et al. 2012, MNRAS, 424, 333, doi: 10.1111/j.1365-2966.2012.21201.x
Harrison, J. H. D., Bonsor, A., & Madhusudhan, N. 2018, MNRAS, 479, 3814, doi: 10.1093/mnras/sty1700
Hayden, M. R., Holtzman, J. A., Bovy, J., et al. 2014, AJ, 147, 116, doi: 10.1088/0004-6256/147/5/116
Hayden, M. R., Bovy, J., Holtzman, J. A., et al. 2015, ApJ, 808, 132, doi: 10.1088/0004-637X/808/2/132
Hill, V., Plez, B., Cayrel, R., et al. 2002, A&A, 387, 560, doi: 10.1051/0004-6361:20020434
Hinkel, N. R., Timmes, F. X., Young, P. A., Pagano, M. D., & Turnbull, M. C. 2014, AJ, 148, 54, doi: 10.1088/0004-6256/148/3/54
Hinkel, N. R., & Unterborn, C. T. 2018, ApJ, 853, 83, doi: 10.3847/1538-4357/aaa5b4
Hollands, M. A., Gänsicke, B. T., & Koester, D. 2018, MNRAS, 477, 93, doi: 10.1093/mnras/sty592
Horta, D., Schiavon, R. P., Mackereth, J. T., et al. 2022, MNRAS, doi: 10.1093/mnras/stac3179
Hoskin, M. J., Toloza, O., Gänsicke, B. T., et al. 2020, MNRAS, 499, 171, doi: 10.1093/mnras/staa2717
Jura, M., Xu, S., Klein, B., Koester, D., & Zuckerman, B. 2012, ApJ, 750, 69, doi: 10.1088/0004-637X/750/1/69
Jura, M., & Young, E. D. 2014, Annual Review of Earth and Planetary Sciences, 42, 45, doi: 10.1146/annurev-earth-060313-054740
Klein, B., Jura, M., Koester, D., & Zuckerman, B. 2011, ApJ, 741, 64, doi: 10.1088/0004-637X/741/1/64
Klein, B. L., Doyle, A. E., Zuckerman, B., et al. 2021, ApJ, 914, 61, doi: 10.3847/1538-4357/abe40b
Kobayashi, C., Karakas, A. I., & Lugaro, M. 2020, ApJ, 900, 179, doi: 10.3847/1538-4357/abae65
Kobayashi, C., & Nakasato, N. 2011, ApJ, 729, 16, doi: 10.1088/0004-637X/729/1/16
Koester, D. 2009, A&A, 498, 517, doi: 10.1051/0004-6361/200811468
Lodders, K. 2019, arXiv e-prints, arXiv:1912.00844, doi: 10.48550/arXiv.1912.00844
Lodders, K., Palme, H., & Gail, H. P. 2009, Landolt Börnstein, 4B, 712, doi: 10.1007/978-3-540-88055-4_34
Madhusudhan, N., Lee, K. K. M., & Mousis, O. 2012, ApJL, 759, L40, doi: 10.1088/2041-8205/759/2/L40
McDonough, W. F. 2003, Treatise on Geochemistry, 2, 568, doi: 10.1016/B0-08-043751-6/02015-6
Melis, C., & Dufour, P. 2017, ApJ, 834, 1, doi: 10.3847/1538-4357/834/1/1
Michel, A., Haldemann, J., Mordasini, C., & Alibert, Y. 2020, A&A, 639, A66, doi: 10.1051/0004-6361/201936916
Monson, N. N., Morris, M. R., & Young, E. D. 2017, ApJ, 839, 123, doi: 10.3847/1538-4357/aa67e6
Putirka, K. D., & Rarick, J. C. 2019, American Mineralogist, 104, 817, doi: 10.2138/am-2019-6787
Putirka, K. D., & Xu, S. 2021, Nature Communications, 12, 6168, doi: 10.1038/s41467-021-26403-8
Raddi, R., Gänsicke, B. T., Koester, D., et al. 2015, MNRAS, 450, 2083, doi: 10.1093/mnras/stv701
Rogers, J. G., & Owen, J. E. 2021, MNRAS, 503, 1526, doi: 10.1093/mnras/stab529
Rudnick, R. L., & Gao, S. 2003, Treatise on Geochemistry, 3, 659, doi: 10.1016/B0-08-043751-6/03016-4
Santos, N. C., Adibekyan, V., Mordasini, C., et al. 2015, A&A, 580, L13, doi: 10.1051/0004-6361/201526850
Schaller, G., Schaerer, D., Meynet, G., & Maeder, A. 1992, A&AS, 96, 269
Schulze, J. G., Wang, J., Johnson, J. A., et al. 2021, PSJ, 2, 113, doi: 10.3847/PSJ/abcaa8
Swan, A., Farihi, J., Koester, D., et al. 2019, MNRAS, 490, 202, doi: 10.1093/mnras/stz2337
Taylor, G. J. 2013, Chemie der Erde / Geochemistry, 73, 401, doi: 10.1016/j.chemer.2013.09.006
Thiabaud, A., Marboeuf, U., Alibert, Y., Leya, I., & Mezger, K. 2015, A&A, 580, A30, doi: 10.1051/0004-6361/201525963
Timmes, F. X., Woosley, S. E., & Weaver, T. A. 1995, ApJS, 98, 617, doi: 10.1086/192172
Wasson, J. T., & Kallemeyn, G. W. 1988, Philosophical Transactions of the Royal Society of London Series A, 325, 535, doi: 10.1098/rsta.1988.0066
Wilson, D. J., Gänsicke, B. T., Koester, D., et al. 2015, MNRAS, 451, 3237, doi: 10.1093/mnras/stv1201
Xu, S., Dufour, P., Klein, B., et al. 2019, AJ, 158, 242, doi: 10.3847/1538-3881/ab4cee
Xu, S., Jura, M., Klein, B., Koester, D., & Zuckerman, B. 2013, ApJ, 766, 132, doi: 10.1088/0004-637X/766/2/132
Xu, S., Jura, M., Koester, D., Klein, B., & Zuckerman, B. 2014, ApJ, 783, 79, doi: 10.1088/0004-637X/783/2/79
Xu, S., Zuckerman, B., Dufour, P., et al. 2017, ApJL, 836, L7, doi: 10.3847/2041-8213/836/1/L7
[]
[ "Topological phase transitions in a honeycomb ferromagnet with unequal Dzyaloshinskii-Moriya interactions", "Topological phase transitions in a honeycomb ferromagnet with unequal Dzyaloshinskii-Moriya interactions" ]
[ "Heng Zhu \nDepartment of Physics\nJishou University\n416000JishouChina\n", "Hongchao Shi \nDepartment of Physics\nJishou University\n416000JishouChina\n", "Zhengguo Tang \nDepartment of Physics\nJishou University\n416000JishouChina\n", "Bing Tang \nDepartment of Physics\nJishou University\n416000JishouChina\n" ]
[ "Department of Physics\nJishou University\n416000JishouChina", "Department of Physics\nJishou University\n416000JishouChina", "Department of Physics\nJishou University\n416000JishouChina", "Department of Physics\nJishou University\n416000JishouChina" ]
[]
This theoretical research is devoted to studying topological phase transitions in a two-dimensional honeycomb ferromagnetic lattice with unequal Dzyaloshinskii-Moriya interactions on the two sublattices. With the help of a first-order Green's function formalism, we analyze the influence of the magnon-magnon interaction on the magnon band topology. It is found that the presence of the antichiral Dzyaloshinskii-Moriya interaction can lead to a tilting of the renormalized magnon bands near the Dirac momenta. The renormalized magnon band gaps at the two Dirac points then have different widths. By changing the temperature, we can observe the closing and reopening of the renormalized magnon band gap, which corresponds to a topological phase transition. Our results show that the critical temperature of the topological phase transition is related to the strength of the antichiral Dzyaloshinskii-Moriya interaction.
null
[ "https://export.arxiv.org/pdf/2306.02505v1.pdf" ]
259,076,299
2306.02505
7c54693090afe094ec80d88b130351c636424942
Topological phase transitions in a honeycomb ferromagnet with unequal Dzyaloshinskii-Moriya interactions

Heng Zhu, Hongchao Shi, Zhengguo Tang, and Bing Tang
Department of Physics, Jishou University, Jishou 416000, China

This theoretical research is devoted to studying topological phase transitions in a two-dimensional honeycomb ferromagnetic lattice with unequal Dzyaloshinskii-Moriya interactions on the two sublattices. With the help of a first-order Green's function formalism, we analyze the influence of the magnon-magnon interaction on the magnon band topology. It is found that the presence of the antichiral Dzyaloshinskii-Moriya interaction can lead to a tilting of the renormalized magnon bands near the Dirac momenta. The renormalized magnon band gaps at the two Dirac points then have different widths. By changing the temperature, we can observe the closing and reopening of the renormalized magnon band gap, which corresponds to a topological phase transition. Our results show that the critical temperature of the topological phase transition is related to the strength of the antichiral Dzyaloshinskii-Moriya interaction.

1. Introduction
During the past decades, quantum ferromagnets and antiferromagnets have emerged as a versatile stage for realizing the magnetic analogs of topological phases known from electronic systems [1-14]. Since the topological properties of two-dimensional (2D) electronic systems (i.e., graphene) have been extensively investigated, more and more attention has been paid to their bosonic analogues in various platforms, such as photonic [15-17], phononic [18,19], and magnonic systems [20-23]. Physically, magnons are the quantum counterparts of (linear) spin waves, which are the collective excitation modes of magnets [24,25]. Unlike electrons, magnons carry no electric charge, so a magnon current does not generate Joule heating, which has application potential for realizing low-dissipation devices [21,26-28]. Moreover, magnetic properties are easily controlled via external magnetic fields, which means that magnonic bands provide a unique platform for probing the rich and still developing principles of band theory. In ferromagnets or antiferromagnets, the presence of the Dzyaloshinskii-Moriya interaction (DMI) breaks the spatial inversion symmetry of the system, which can give rise to nontrivial topological magnon band structures and the corresponding nonzero Berry curvature [3-5,28-30]. Because (uncharged) magnons are not subject to the Lorentz force, the DMI plays the role of an effective magnetic field in momentum space by affecting the magnon motion in the magnetic system, which can cause a thermal magnon Hall effect [28,29]. The thermal magnon Hall effect caused by the DMI was first predicted theoretically in kagome and pyrochlore ferromagnets [3]. Afterwards, Onose et al. [4] experimentally observed the thermal magnon Hall effect in the ferromagnetic insulator Lu2V2O7 with a three-dimensional pyrochlore structure. Subsequently, the thermal magnon Hall effect was identified experimentally in the two-dimensional kagome magnet Cu(1-3,bdc) [31].
In addition, the magnon thermal Hall effect in honeycomb magnets has also been studied theoretically [29]. In a honeycomb ferromagnet, a band gap opens at the Dirac points because the inversion symmetry of the system is broken by the second-nearest-neighbor DMI, which is analogous to the effect of the spin-orbit interaction in the Kane-Mele model on a honeycomb lattice [2]. In contrast to electronic systems, the absence of magnon number conservation allows for number-nonconserving many-magnon interactions and spontaneous decays [32,33]. Unfortunately, the majority of theoretical works on magnonic band topology have been restricted to the linear spin-wave approximation, in which the magnon-magnon interaction is ignored [8,29,34]. When the temperature is very low, there is little debate that the magnon-magnon interaction effect is frozen out [35]. With increasing temperature, however, the magnon-magnon interaction has to be considered. What is worse, in order to evaluate a Hamiltonian including magnon-magnon interactions, systematic perturbative expansions must be employed. Recently, the significance of many-body effects has been recognized in magnonic Dirac systems [32,35-37]. Pershoguba et al. [32] first studied the effect of magnon-magnon interactions in two-dimensional honeycomb ferromagnets and showed that such interactions can give rise to a remarkable momentum-dependent renormalization of the band structure. Their theory successfully explained anomalous features in the neutron-scattering data for CrBr3 that had remained unexplained for nearly half a century. It should be mentioned that the DMI was ignored in their theory, although it has been experimentally identified in CrI3 and CrBr3 [38,39]. Mook et al. [35] showed that the orientation of the DMI affects the magnetic topological characteristics and the form of the magnon-magnon interactions in the honeycomb ferromagnet. Very recently, Lu et al. [36] incorporated a chiral DMI and the magnon-magnon interaction into the system Hamiltonian in order to study topological aspects of honeycomb ferromagnets. Their results indicate that magnon-magnon interactions can cause topological phase transitions in the honeycomb ferromagnet, which are signaled by a sign change of the magnon thermal Hall conductivity.

In this work, we study topological magnons in a Heisenberg honeycomb ferromagnet with unequal DMIs on the two sublattices. Following the approach of Ref. [36], the effect of the magnon-magnon interaction on the band topology is analyzed by means of the Green's function formalism up to first order in perturbation theory. The presence of the antichiral DMI destroys the chiral symmetry of the honeycomb ferromagnet, which causes a tilting of the renormalized magnon bands near the Dirac momenta. What is more, the renormalized magnon band gaps at the two inequivalent Dirac points are no longer equal as the temperature increases. By varying the temperature, we observe a band gap closing-reopening phenomenon, which is a signature of the topological phase transition. Our results show that the critical temperature of the topological phase transition depends on the strength of the antichiral DMI. More details are presented in the following sections.
Model

Let us consider a two-dimensional Heisenberg honeycomb ferromagnet with unequal next-nearest-neighbor (NNN) DMI constants D_A and D_B on the A and B sublattices, as shown in Fig. 1. It is convenient to define D = (D_A + D_B)/2 and D′ = (D_A − D_B)/2, which are called the chiral and antichiral DMI, respectively [40]. In the presence of an external magnetic field, the complete spin Hamiltonian of the system can be written in the form

H = −J Σ_{⟨i,j⟩} S_i · S_j + D Σ_{⟨⟨i,j⟩⟩} ν_{ij} ẑ · (S_i × S_j) + D′ Σ_{⟨⟨i,j⟩⟩} ν′_{ij} ẑ · (S_i × S_j) − B Σ_i S_i^z,   (1)

where the first term corresponds to the nearest-neighbor (NN) ferromagnetic exchange interaction with J > 0, and the second and third terms represent the chiral and antichiral DMI between next-nearest-neighbor spins, respectively. The DM vectors are constrained to the positive z direction, and the orientation factors ν_{ij} = −ν_{ji} = ±1, with the opposite sign convention on sublattice B for the antichiral term. The last term is the Zeeman coupling.

[Fig. 1. Schematic of the honeycomb lattice structure, which consists of two triangular sublattices; δ_n and ρ_n (n = 1, 2, 3) denote the three NN and NNN vectors, respectively.]

In order to bosonize the Heisenberg Hamiltonian, we adopt the Holstein-Primakoff (HP) transformation [41]. In the low-temperature limit, 2S ≫ ⟨a_i† a_i⟩, and the square roots in the HP transformation can be expanded in powers of 1/S. Truncated at order S^{1/2}, the transformation relates the spin operators to the magnon creation and annihilation operators as

S_i^+ ≈ √(2S) a_i,  S_i^− ≈ √(2S) a_i†,  S_i^z = S − a_i† a_i.

Thus one obtains the noninteracting bosonic Hamiltonian H_0 = Σ_k ψ_k† H_0(k) ψ_k, where ψ_k = (a_k, b_k)^⊤ is a spinor denoting the degrees of freedom of the two sublattices, and

H_0(k) = h_0(k) σ_0 + h_x(k) σ_x + h_y(k) σ_y + h_z(k) σ_z.   (2)

Here h_0(k) = 3JS + B + 2D′S φ(k), h_x(k) = −JS Re γ(k), h_y(k) = JS Im γ(k), and h_z(k) = 2DS φ(k), with the structure factors γ(k) = Σ_n e^{i k·δ_n} and φ(k) = Σ_n sin(k·ρ_n). Diagonalizing H_0(k) gives the magnon bands

ε_±(k) = h_0(k) ± |h(k)|,   (3)

where |h(k)| = √(h_x² + h_y² + h_z²) and +1 (−1) corresponds to the up (down) band, ε_u (ε_d).

In Fig. 2, we display the magnon dispersion relation of the present honeycomb ferromagnet. In the absence of DMI, the two bands meet at the two inequivalent Dirac points K = (4√3π/9a, 0) and K′ = (2√3π/9a, 2π/3a). A nonvanishing D generates an effective Haldane mass term that opens a nontrivial band gap Δ = 6√3 DS between the upper and lower branches at the Dirac points. In the presence of the antichiral DMI the gap opening is not symmetric and leads to a tilting of the magnon bands near the Dirac momenta; the antichiral DMI drives the tilting of the bands but has no effect on the magnitude of the band gap.

[Fig. 2. Magnon dispersion along the high-symmetry path for representative parameters D/J = 0.1, D′/J = 0.05, B/J = 0.1, compared with the case D′ = 0.]

Interacting topological Dirac magnons

Expanding the HP transformation to the next order in 1/S generates the two-magnon interaction H_int, whose momentum-space form contains quartic terms (a†a†aa, b†b†bb, a†b†ab, ...) with coefficients proportional to J/N, D/N and D′/N; its explicit expression parallels Refs. [32,36]. Here we make use of the retarded Green's function formalism developed by Zubarev [42] to investigate the renormalization of the magnon bands caused by magnon-magnon interactions. In the time domain, the retarded Green's function is defined via

G^R(r, t; r′, t′) = −i θ(t − t′) ⟨[Ψ(r, t), Ψ†(r′, t′)]⟩,   (4)

where ⟨⋯⟩ stands for the ensemble mean, θ(t − t′) is the step function, and the field operator can be expressed as Ψ(r, t) = Σ_i [φ_a(r − r_i) a_i(t) + φ_b(r − r_i) b_i(t)], with a_i(t) and b_i(t) the magnon operators of the A and B sublattices, respectively. In the Heisenberg picture, it is not hard to derive the equation of motion for the frequency-domain Green's function,

ω ⟨⟨ψ_k; ψ_k†⟩⟩_ω = ⟨[ψ_k, ψ_k†]⟩ + ⟨⟨[ψ_k, H]; ψ_k†⟩⟩_ω.   (5)

To evaluate the commutator with H_int, one adopts the mean-field decoupling together with the random phase approximation,

⟨a_{k2}† a_{k3}⟩ ≈ δ_{k2,k3} ⟨a_{k2}† a_{k2}⟩.   (6)

Here the δ-function corresponds to the random phase approximation, and a short explanation of its validity is appropriate: in general, the expectation value ⟨a_{k2}† a_{k3}⟩ possesses the predominant time dependence e^{i[ε^0(k2)−ε^0(k3)]t}; when the summations over k2 and k3 are carried out, only the dominant contribution from the k2 = k3 term is retained.
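Before turning to the interacting problem, it is useful to have the noninteracting bands of Eqs. (2)-(3) available numerically. The following is a minimal Python sketch; the lattice-vector conventions, the parameter values, and the precise signs of h_x, h_y are our own assumptions (conventions differ between references), so only convention-independent quantities such as the gap 6√3 DS at the Dirac points should be compared.

```python
import numpy as np

# Minimal sketch of the two magnon bands eps_pm(k) = h0(k) +/- |h(k)|, Eq. (2)-(3).
J, S, B, D, Dp, a = 1.0, 1.0, 0.1, 0.1, 0.05, 1.0

# Assumed NN vectors delta_n and NNN vectors rho_n of the honeycomb lattice.
delta = a * np.array([[np.sqrt(3)/2, 0.5], [-np.sqrt(3)/2, 0.5], [0.0, -1.0]])
rho = np.sqrt(3) * a * np.array([[1.0, 0.0], [-0.5, np.sqrt(3)/2], [-0.5, -np.sqrt(3)/2]])

def bands(k):
    gamma = np.exp(1j * delta @ k).sum()      # gamma(k) = sum_n exp(i k.delta_n)
    phi = np.sin(rho @ k).sum()               # phi(k)   = sum_n sin(k.rho_n)
    h0 = 3*J*S + B + 2*Dp*S*phi               # identity part (antichiral tilt)
    h = np.array([-J*S*gamma.real, J*S*gamma.imag, 2*D*S*phi])
    return h0 - np.linalg.norm(h), h0 + np.linalg.norm(h)

K = np.array([4*np.pi/(3*np.sqrt(3)*a), 0.0])  # Dirac point K
lo, hi = bands(K)
print(f"gap at K: {hi - lo:.4f}  (6*sqrt(3)*D*S = {6*np.sqrt(3)*D*S:.4f})")
```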
According to perturbation theory, one obtains a first-order Dyson equation for the interacting magnons, which has the form

G^R(k, ω) = G_0^R(k, ω) + G_0^R(k, ω) Σ^{(1)}(k) G^R(k, ω).   (7)

Here G^R(k, ω) is called the interacting Green's function, G_0^R(k, ω) = [(ω + i0⁺) σ_0 − H_0(k)]^{-1} is the free magnon Green's function, and Σ^{(1)}(k) stands for the first-order self-energy. By expanding ⟨⟨[ψ_k, H_int]; ψ_k†⟩⟩ in Eq. (5), one obtains the Hartree-type self-energy

Σ^{(1)}(k) = Σ_{k2} V(k, k2) ⟨ψ_{k2}† ψ_{k2}⟩_0,   (8)

where V(k, k2) is the coefficient matrix of the magnon-magnon interaction and ⟨ψ_{k2}† ψ_{k2}⟩_0 is the zeroth-order population matrix built from the averages ⟨a†a⟩_0, ⟨a†b⟩_0, ⟨b†a⟩_0 and ⟨b†b⟩_0.

First-order renormalization of the magnon bands

Next, let us turn our attention to the first-order renormalized Hamiltonian. After some straightforward algebra, one obtains

H_m(k) = H_0(k) + Σ^{(1)}(k),   (9)

whose diagonal and off-diagonal entries are the bare coefficients of Eq. (2) dressed by temperature-dependent factors M, p(k), Q_k and Σ_m(k); these are Brillouin-zone averages built from the occupation combinations of the bare bands,

Σ_±(q) = f(ε_d^0(q)) ± f(ε_u^0(q)),   (10)

where f(ε) = 1/(e^{βε} − 1) is the Bose-Einstein distribution function and β = 1/k_B T. By diagonalizing the renormalized Hamiltonian (9), one obtains the renormalized magnon dispersion relation

ε̃_±(k) = h̃_0(k) ± |h̃(k)|,   (11)

where h̃_0 and h̃ are the renormalized counterparts of the coefficients in Eq. (2). Physically, the topological nature of the Dirac magnons is associated with this renormalized band structure. When the temperature is very low, the interaction between magnons can be neglected and the self-energy effect disappears. As the temperature grows, however, the interaction between magnons cannot be ignored, and the self-energy has to be taken into account. In fact, the renormalized magnon bands can be detected experimentally via well-established techniques such as inelastic neutron scattering, inelastic X-ray scattering, and Brillouin light scattering [43-48].

In Fig. 3, we show the renormalized magnon band structures of the system in the absence of the antichiral DMI. It is clearly seen that, with increasing temperature, the renormalized magnon band gap at the Dirac points K and K′ first closes; if one further increases the temperature, the renormalized gap reopens at both K and K′ and its width increases with T. The widths of the band gaps at K and K′ are equal because of inversion symmetry; indeed, it is not difficult to show explicitly that both the unperturbed and the perturbed Hamiltonian possess inversion symmetry by checking the corresponding symmetry relation.

[Fig. 3. Renormalized magnon band structures in the absence of the antichiral DMI: (a) renormalized dispersion along the high-symmetry path; (b) the gap Δ at the Dirac points K and K′ as a function of temperature.]

In Fig. 4, we display the renormalized magnon band structures of the system in the presence of the antichiral DMI. After introducing the antichiral DMI, the most interesting effect is that the band gaps at the two inequivalent Dirac points K and K′ are no longer equal as the temperature increases. This is in sharp contrast to the above result for the honeycomb ferromagnet without antichiral DMI, where the band gaps at the two Dirac points are equal [29,36,40,49]. As the temperature grows, the band gaps at the Dirac points K and K′ close at T ≈ 0.69 J and T ≈ 0.82 J, respectively.

[Fig. 4. Renormalized magnon band structures in the presence of the antichiral DMI: (a) renormalized dispersion along the high-symmetry path; (b) the gap Δ at the Dirac points K and K′ as a function of temperature. Parameters: B/J = 0.1, D/J = 0.1, D′/J = 0.1.]

Topological properties of interacting Dirac magnons

In bosonic systems, the occurrence of nontrivial band topology is rooted in the energy band structure. This nontrivial band topology can be characterized via a nonzero Berry curvature, which produces a quantized integer, i.e., the Chern number.
Physically, a nontrivial band topology can arise only when the system exhibits a nontrivial gap between the two magnon bands and each band possesses a nonzero Chern number [29]. In 2D bosonic systems, the Berry curvature of band n can be expressed as

Ω_n(k) = i ⟨∇_k ψ_n(k)| × |∇_k ψ_n(k)⟩ · ẑ,   (12)

where ψ_n(k) and Ω_n(k), with n = u, d, represent the eigenvector and Berry curvature of the lower or the upper magnon band, respectively. By introducing a pseudospin representation, the renormalized Hamiltonian (9) can be rewritten as

H_m(k) = h̃_0(k) σ_0 + h̃(k) · σ,   (13)

with

h̃_0(k) = 3JS + B + 2D′S Q_k,   (14)
h̃_x(k) = −JS M Re p(k),  h̃_y(k) = JS M Im p(k),  h̃_z(k) = 2DS φ(k) − Σ_m(k),   (15)

where M, p(k), Q_k and Σ_m(k) are the thermal averages introduced below Eq. (9). In this representation one can map the magnon band physics onto a Bloch sphere, and all the geometric and topological properties are encoded in the total mass term h̃_z(k), which contains a scalar-potential-like contribution produced by the antichiral DMI. Physically, this scalar potential causes the Berry curvatures at the Dirac points K and K′ to be unequal. For the convenience of numerical calculation, one can adopt the Berry curvature in the pseudospin representation [50],

Ω_±(k) = ∓ (1/2h̃³) h̃ · (∂_{k_x} h̃ × ∂_{k_y} h̃),   (16)

where h̃ = (h̃_x, h̃_y, h̃_z) and h̃ = √(h̃_x² + h̃_y² + h̃_z²). The topological phase of the present system is characterized by the Chern numbers of the renormalized magnon bands, which are calculated by integrating the relevant Berry curvature over the whole first Brillouin zone,

C = (1/2π) ∫_BZ Ω(k) d²k.

Through adjusting the magnon population with temperature, one can realize a topological phase transition of the upper magnon band between C = −1 and C = +1.

[Fig. 5. The Chern number of the upper renormalized magnon band: (a) in the absence of the antichiral DMI (D′ = 0); (b) in the presence of the antichiral DMI.]
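Equation (16) lends itself to direct numerical evaluation. Below is a minimal Python sketch that obtains the Chern number of a gapped two-band model h(k)·σ by summing discrete solid angles of the pseudospin unit vector over the Brillouin zone; the structure factors, lattice conventions, and reciprocal vectors are our own assumptions (they are not fixed by the text), so the output should only be read as |C| = 1 in the gapped phase, with the overall sign depending on conventions.

```python
import numpy as np

# Minimal sketch: Chern number of the lower band of H(k) = h0 + h(k).sigma
# from the winding of h_hat(k) over one Brillouin-zone cell.
J, S, D, a = 1.0, 1.0, 0.1, 1.0
delta = a * np.array([[np.sqrt(3)/2, 0.5], [-np.sqrt(3)/2, 0.5], [0.0, -1.0]])
rho = np.sqrt(3) * a * np.array([[1.0, 0.0], [-0.5, np.sqrt(3)/2], [-0.5, -np.sqrt(3)/2]])
b1 = 2*np.pi/(3*a) * np.array([np.sqrt(3), 1.0])   # assumed reciprocal basis
b2 = 2*np.pi/(3*a) * np.array([-np.sqrt(3), 1.0])

def h_hat(k):
    g = np.exp(1j * delta @ k).sum()
    phi = np.sin(rho @ k).sum()
    h = np.array([-J*S*g.real, J*S*g.imag, 2*D*S*phi])
    return h / np.linalg.norm(h)

def solid_angle(n1, n2, n3):  # signed solid angle of a spherical triangle
    num = np.dot(n1, np.cross(n2, n3))
    den = 1 + n1 @ n2 + n2 @ n3 + n3 @ n1
    return 2 * np.arctan2(num, den)

N, C = 60, 0.0
for i in range(N):
    for j in range(N):
        ks = [(i/N)*b1 + (j/N)*b2, ((i+1)/N)*b1 + (j/N)*b2,
              ((i+1)/N)*b1 + ((j+1)/N)*b2, (i/N)*b1 + ((j+1)/N)*b2]
        n = [h_hat(k) for k in ks]
        C += solid_angle(n[0], n[1], n[2]) + solid_angle(n[0], n[2], n[3])
print(round(C / (4*np.pi)))   # +/-1 for the gapped Haldane-like magnon model
```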
Thermal Hall conductivity

For magnons in the two-dimensional ferromagnet, a longitudinal temperature gradient can give rise to a transverse heat current via the Berry curvature [5,28]. Physically, the formation of this transverse heat current is referred to as the thermal Hall effect. The thermal Hall conductivity can be expressed as [5,28,51,52]

κ_xy = −(k_B² T / ℏV) Σ_k Σ_{n=u,d} c_2(f(ε̃_n(k))) Ω_n(k),

where f(ε) = 1/(e^{βε} − 1) corresponds to the famous Bose-Einstein distribution function, c_2(x) = (1 + x)[ln((1 + x)/x)]² − (ln x)² − 2Li_2(−x), and Li_2(x) represents the dilogarithm function. In the linear spin-wave approximation, i.e., neglecting magnon-magnon interactions, κ_xy is always greater than zero, as displayed in Fig. 8(a). We note that the dependence of κ_xy on the temperature T is not significantly affected by varying the value of D′. In Fig. 8(b), with the magnon-magnon interaction included, one can clearly see that the behavior of κ_xy changes across the topological phase transition.

Conclusions

To conclude, we have presented a theoretical work on topological phase transitions in a two-dimensional honeycomb ferromagnet with unequal Dzyaloshinskii-Moriya interactions on the two sublattices.

References

[1] F. D. M. Haldane, Model for a quantum Hall effect without Landau levels: Condensed-matter realization of the "parity anomaly", Phys. Rev. Lett. 61, 2015 (1988).
[2] C. L. Kane and E. J. Mele, Quantum spin Hall effect in graphene, Phys. Rev. Lett. 95, 146802 (2005).
[3] H. Katsura, N. Nagaosa, and P. A. Lee, Theory of the thermal Hall effect in quantum magnets, Phys. Rev. Lett. 104, 066403 (2010).
[4] Y. Onose, T. Ideue, H. Katsura, Y. Shiomi, N. Nagaosa, and Y. Tokura, Observation of the magnon Hall effect, Science 329, 297 (2010).
[5] R. Matsumoto and S. Murakami, Theoretical prediction of a rotating magnon wave packet in ferromagnets, Phys. Rev. Lett. 106, 197202 (2011).
[6] L. Zhang, J. Ren, J.-S. Wang, and B. Li, Topological magnon insulator in insulating ferromagnet, Phys. Rev. B 87, 144101 (2013).
[7] R. Matsumoto, R. Shindou, and S. Murakami, Thermal Hall effect of magnons in magnets with dipolar interaction, Phys. Rev. B 89, 054420 (2014).
[8] S. A. Owerre, A first theoretical realization of honeycomb topological magnon insulator, J. Phys.: Condens. Matter 28, 386001 (2016).
[9] W. Feng, L. Wu, B. Tang, and K. Deng, Quantum breathers in a two-dimensional hexangular Heisenberg ferromagnet, Int. J. Theor. Phys. 60, 1438 (2021).
[10] H. Sun, B. Li, and J. Zhao, Half-metallicity in 2D organometallic honeycomb frameworks, J. Phys.: Condens. Matter 28, 425301 (2016).
[11] Z. Zhang, W. Feng, Y. Yao, and B. Tang, Photoinduced Floquet topological magnons in a ferromagnetic checkerboard lattice, Phys. Lett. A 414, 127630 (2021).
[12] Z.-X. Li, Y. Cao, and P. Yan, Topological insulators and semimetals in classical magnetic systems, Phys. Rep. 915, 1 (2021).
[13] S. Chakravarty, B. I. Halperin, and D. R. Nelson, Two-dimensional quantum Heisenberg antiferromagnet at low temperatures, Phys. Rev. B 39, 2344 (1989).
[14] R. S. K. Mong, A. M. Essin, and J. E. Moore, Antiferromagnetic topological insulators, Phys. Rev. B 81, 245209 (2010).
[15] L. Lu, J. D. Joannopoulos, and M. Soljačić, Topological photonics, Nat. Photon. 8, 821 (2014).
[16] F. Flamini, N. Spagnolo, and F. Sciarrino, Photonic quantum information processing: a review, Rep. Prog. Phys. 82, 016001 (2018).
[17] X.-D. Chen, W.-M. Deng, F.-L. Shi, F.-L. Zhao, M. Chen, and J.-W. Dong, Direct observation of corner states in second-order topological photonic crystal slabs, Phys. Rev. Lett. 122, 233902 (2019).
[18] Y. Pennec, J. O. Vasseur, B. Djafari-Rouhani, L. Dobrzyński, and P. A. Deymier, Two-dimensional phononic crystals: Examples and applications, Surf. Sci. Rep. 65, 229 (2010).
[19] M. Serra-Garcia, V. Peri, R. Süsstrunk, O. R. Bilal, T. Larsen, L. G. Villanueva, and S. D. Huber, Observation of a phononic quadrupole topological insulator, Nature 555, 342 (2018).
[20] V. V. Kruglyak and R. J. Hicken, Magnonics: Experiment to prove the concept, J. Magn. Magn. Mater. 306, 191 (2006).
[21] A. A. Serga, A. V. Chumak, and B. Hillebrands, YIG magnonics, J. Phys. D 43, 264002 (2010).
[22] K. Nakata, S. K. Kim, J. Klinovaja, and D. Loss, Magnonic topological insulators in antiferromagnets, Phys. Rev. B 96, 224414 (2017).
[23] M. Krawczyk and D. Grundler, Review and prospects of magnonic crystals and devices with reprogrammable band structure, J. Phys.: Condens. Matter 26, 123202 (2014).
[24] F. Bloch, Zur Theorie des Ferromagnetismus, Z. Phys. 61, 206 (1930).
[25] C. Kittel, On the theory of ferromagnetic resonance absorption, Phys. Rev. 73, 155 (1948).
[26] O. Büttner, M. Bauer, A. Rueff, S. O. Demokritov, B. Hillebrands, A. N. Slavin, M. P. Kostylev, and B. A. Kalinikos, Space- and time-resolved Brillouin light scattering from nonlinear spin-wave packets, Ultrasonics 38, 443 (2000).
[27] A. V. Inyushkin and A. N. Taldenkov, On the phonon Hall effect in a paramagnetic dielectric, JETP Lett. 86, 379 (2007).
[28] S. Murakami and A. Okamoto, Thermal Hall effect of magnons, J. Phys. Soc. Jpn. 86, 011010 (2017).
[29] S. A. Owerre, Topological honeycomb magnon Hall effect: A calculation of thermal Hall conductivity of magnetic spin excitations, J. Appl. Phys. 120, 043903 (2016).
[30] S. A. Owerre, Topological thermal Hall effect in frustrated kagome antiferromagnets, Phys. Rev. B 95, 014422 (2017).
[31] M. Hirschberger, R. Chisnell, Y. S. Lee, and N. P. Ong, Thermal Hall effect of spin excitations in a kagome magnet, Phys. Rev. Lett. 115, 106603 (2015).
[32] S. S. Pershoguba, S. Banerjee, J. C. Lashley, J. Park, H. Ågren, G. Aeppli, and A. V. Balatsky, Dirac magnons in honeycomb ferromagnets, Phys. Rev. X 8, 011010 (2018).
[33] M. E. Zhitomirsky and A. L. Chernyshev, Colloquium: Spontaneous magnon decays, Rev. Mod. Phys. 85, 219 (2013).
[34] S. K. Kim, H. Ochoa, R. Zarzuela, and Y. Tserkovnyak, Realization of the Haldane-Kane-Mele model in a system of localized spins, Phys. Rev. Lett. 117, 227201 (2016).
[35] A. Mook, K. Plekhanov, J. Klinovaja, and D. Loss, Interaction-stabilized topological magnon insulator in ferromagnets, Phys. Rev. X 11, 021061 (2021).
[36] Y.-S. Lu, J.-L. Li, and C.-T. Wu, Topological phase transitions of Dirac magnons in honeycomb ferromagnets, Phys. Rev. Lett. 127, 217202 (2021).
[37] H. Sun, D. Bhowmick, B. Yang, and P. Sengupta, Interacting topological Dirac magnons, Phys. Rev. B 107, 134426 (2023).
[38] L. Chen, J.-H. Chung, B. Gao, T. Chen, M. B. Stone, A. I. Kolesnikov, Q. Huang, and P. Dai, Topological spin excitations in honeycomb ferromagnet CrI3, Phys. Rev. X 8, 041028 (2018).
[39] Z. Cai, S. Bao, Z.-L. Gu, Y.-P. Gao, Z. Ma, Y. Shangguan, W. Si, Z.-Y. Dong, W. Wang, Y. Wu, D. Lin, J. Wang, K. Ran, S. Li, D. Adroja, X. Xi, S.-L. Yu, X. Wu, J.-X. Li, and J. Wen, Topological magnon insulator spin excitations in the two-dimensional ferromagnet CrBr3, Phys. Rev. B 104, L020402 (2021).
[40] D. Bhowmick and P. Sengupta, Antichiral edge states in Heisenberg ferromagnet on a honeycomb lattice, Phys. Rev. B 101, 195133 (2020).
[41] T. Holstein and H. Primakoff, Field dependence of the intrinsic domain magnetization of a ferromagnet, Phys. Rev. 58, 1098 (1940).
[42] D. N. Zubarev, Double-time Green functions in statistical physics, Sov. Phys. Usp. 3, 320 (1960).
[43] W. B. Yelon and R. Silberglitt, Renormalization of large-wave-vector magnons in ferromagnetic CrBr3 studied by inelastic neutron scattering: Spin-wave correlation effects, Phys. Rev. B 4, 2280 (1971).
[44] E. J. Samuelsen, R. Silberglitt, G. Shirane, and J. P. Remeika, Spin waves in ferromagnetic CrBr3 studied by inelastic neutron scattering, Phys. Rev. B 3, 157 (1971).
[45] L. J. P. Ament, M. van Veenendaal, T. P. Devereaux, J. P. Hill, and J. van den Brink, Resonant inelastic x-ray scattering studies of elementary excitations, Rev. Mod. Phys. 83, 705 (2011).
[46] P. A. Fleury, S. P. S. Porto, L. E. Cheesman, and H. J. Guggenheim, Light scattering by spin waves in FeF2, Phys. Rev. Lett. 17, 84 (1966).
[47] B. Hillebrands, P. Baumgart, and G. Güntherodt, Brillouin light scattering from spin waves in magnetic layers and multilayers, Appl. Phys. A 49, 589 (1989).
[48] S. O. Demokritov, B. Hillebrands, and A. N. Slavin, Brillouin light scattering studies of confined spin waves: linear and nonlinear confinement, Phys. Rep. 348, 441 (2001).
[49] W. Feng, B. Tang, L. Wu, P. Kong, C. Yang, L. Wang, and K. Deng, Nonlinear localized excitations in a topological ferromagnetic honeycomb lattice, J. Magn. Magn. Mater. 536, 168089 (2021).
[50] D. Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. Mod. Phys. 82, 1959 (2010).
[51] H. Sun, P. Sengupta, D. Nam, and B. Yang, Negative thermal Hall conductance in a two-dimer Shastry-Sutherland model with a π-flux Dirac triplon, Phys. Rev. B 103, L140404 (2021).
[52] H. Kim and S. K. Kim, Topological phase transition in magnon bands in a honeycomb ferromagnet driven by sublattice symmetry breaking, Phys. Rev. B 106, 104430 (2022).
A sketch-and-project method for solving the matrix equation AXB = C

Wendi Bao, Zhiwei Guo, Weiguo Li, Ying Lv, Jichao Wang
College of Science, China University of Petroleum, Qingdao 266580, P.R. China

Keywords: Matrix equation; Iterative method; Randomized Kaczmarz method; Randomized coordinate descent method; Gaussian sampling

Abstract. In this paper, based on an optimization problem, a sketch-and-project method for solving the linear matrix equation AXB = C is proposed. We provide a thorough convergence analysis for the new method and derive a lower bound on the convergence rate and some convergence conditions, including the case that the coefficient matrix is rank deficient. By varying three parameters in the new method and the convergence theorems, the new method recovers an array of well-known algorithms and their convergence results. Moreover, with the use of Gaussian sampling, we obtain the Gaussian global randomized Kaczmarz (GaussGRK) method, which shows some advantages in solving the matrix equation AXB = C. Finally, numerical experiments are given to illustrate the effectiveness of the recovered methods.

Introduction

In this paper, we consider the linear matrix equation

AXB = C,   (1.1)

where the coefficient matrices A ∈ R^{p×m} and B ∈ R^{n×q}, the right-hand side C ∈ R^{p×q}, and the unknown matrix X ∈ R^{m×n}. We shall assume throughout that the equation is consistent, i.e., there exists an X* satisfying AX*B = C. This assumption can be relaxed by choosing the least-norm solution when the system has multiple solutions. Large-scale linear matrix equations arise in computer science, engineering, mathematical computing, machine learning, and many other fields, such as surface fitting in computer-aided geometric design (CAGD) [1], signal and image processing [2], and photogrammetry.

Classical solvers for the matrix equation (1.1) generally fall into two categories: direct and iterative methods. Direct methods, such as the generalized singular value decomposition and QR-factorization-based algorithms [3,4], are attractive when A and B are small and dense, while iterative methods are usually more practical for large-scale systems of equations [5-7]. It is well known that the matrix equation (1.1) can be written in the equivalent matrix-vector form via the Kronecker product,

(B^⊤ ⊗ A) vec(X) = vec(C),   (1.2)

where the Kronecker product B^⊤ ⊗ A ∈ R^{pq×mn}, the right-hand side vector vec(C) ∈ R^{pq×1}, and the unknown vector vec(X) ∈ R^{mn×1}. Many iterative methods have been proposed [8,9] to solve the matrix equation (1.1) by applying the Kronecker product. When the dimensions of A and B are large, however, the dimension of the linear system (1.2) increases sharply, which increases the memory usage and computational cost of numerical algorithms. Moreover, many iterative methods make frequent use of matrix-matrix products, which consumes a great deal of computing time. Much recent research shows that Kaczmarz-type methods are suitable for large-scale problems, since each Kaczmarz iterate requires only one row of the coefficient matrix and no matrix-vector product. In [10], to solve large-scale consistent linear matrix equations (1.1), Niu and Zheng proposed the global randomized block Kaczmarz (GRBK) algorithm and the global randomized average block Kaczmarz (GRABK) algorithm. Based on greedy ideas, Wu et al. [11] introduced the relaxed greedy randomized Kaczmarz (ME-RGRK) method and the maximal weighted residual Kaczmarz (ME-MWRK) method for solving the consistent matrix equation AXB = C. In [12], Du et al. extended Kaczmarz methods to the randomized block coordinate descent (RBCD) method for solving the matrix least-squares problem min_{X∈R^{m×n}} ∥C − AXB∥_F. Meanwhile, by applying Kaczmarz iterations and a hierarchical approach, Shafiei and Hajarian obtained new iterative algorithms for solving the Sylvester matrix equation in [13]. For linear systems Ax = b, Robert M. Gower et al. [14] constructed a sketch-and-project method which unifies a variety of randomized iterative methods, including both randomized Kaczmarz and coordinate descent along with all of their block variants. The general sketch-and-project framework has not yet been analyzed for the matrix equation AXB = C.

Inspired by the ideas in [14] and [13], we propose a sketch-and-project method for solving the matrix equation (1.1). The convergence analysis of the proposed method is investigated, and existing complexity results for known variants can be obtained. A lower bound on the convergence rate is explored for the evolution of the expected iterates. Numerical experiments are given to verify the validity of the recovered methods.

The main contribution of our work is summarized as follows.

(1) New method. By introducing three different parameters, we derive a sketch-and-project method for the matrix equation (1.1). The iteration scheme is as follows:

X_{k+1} = X_k − G^{-1}A^⊤S (S^⊤AG^{-1}A^⊤S)^† S^⊤(AX_kB − C) P (P^⊤B^⊤BP)^† P^⊤B^⊤,

where G is a symmetric positive definite matrix and S, P are random sketching matrices.

(2) Convergence. We establish the following convergence results:
• evolution of the expected iterates, E[X_{k+1} − X*] = E[X_k − X*] − E[Z′_1 (X_k − X*) Z_2] (Theorem 4.1);
• E∥X_{k+1} − X*∥²_{F(G)} ≤ ρ E∥X_k − X*∥²_{F(G)} (Theorem 4.1);
• ∥E[X_{k+1} − X*]∥_{F(G)} ≤ ρ ∥E[X_k − X*]∥_{F(G)} (Theorem 4.2);
• E∥X_k − X*∥²_{F(G)} ≤ ρ_σ ∥X_{k−1} − X*∥²_{F(G)} (Theorem 4.5).
The convergence rates are ρ = 1 − λ_min(E[Z_2 ⊗ Z′_1]) and ρ_σ = 1 − σ²_min(E[Z_2 ⊗ Z′_1]) < 1.

(3) Complexity: special cases. As a generalized iterative method, when the parameter random matrices S, P and G are given specific values, some well-known methods are obtained. Two convergence theorems for the generalized method are explored. Besides these generic results, which hold without any major restriction on the sampling matrices S, P (in particular, they can be either discrete or continuous), we give a specialized result applicable to discrete sampling matrices S, P (see Theorem 4.7). Our analysis recovers the existing rates (see Table 1.2); for instance, the GaussRK-A method with Gaussian sampling has rate 1 − 2λ_min(Ω_1)/(π Tr(Ω_1)) (Theorem 5.1), where Ω_1 and Ω_2 are defined in the corresponding theorems.

(4) Application and extension. We apply our algorithms to real-world applications, such as real-world sparse data and CT data. The Gaussian global randomized Kaczmarz (GaussGRK) method shows some advantages in solving the matrix equation AXB = C. Meanwhile, many avenues for further development and research can be explored on the basis of our approach. For instance, it is possible to extend the results to the case where S and P are count-sketch transforms. One can also design randomized iterative algorithms for finding the generalized inverse of a very large matrix, or for computing solutions with special structures such as symmetric positive definite matrices.
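To make the size blow-up behind (1.2) concrete, the identity vec(AXB) = (B^⊤ ⊗ A) vec(X) can be checked numerically in a few lines. The following NumPy sketch uses illustrative dimensions of our own choosing; it also shows why one avoids explicitly forming the pq × mn Kronecker matrix in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, n, q = 6, 5, 4, 3
A = rng.standard_normal((p, m))
B = rng.standard_normal((n, q))
X = rng.standard_normal((m, n))

# vec() stacks columns; NumPy flattens row-major by default, so use order="F".
vec = lambda M: M.flatten(order="F")

lhs = vec(A @ X @ B)              # vec(AXB), never forms the big matrix
rhs = np.kron(B.T, A) @ vec(X)    # (B^T kron A) vec(X), pq x mn matrix
print(np.allclose(lhs, rhs))      # True
```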
The rest of this paper is organized as follows. In Section 2, some notation and preliminaries are introduced. In Section 3, we derive the generalized iterative method for solving the matrix equation (1.1). After that, the convergence analysis is explored: the convergence rate, convergence conditions, and a lower bound on the convergence rate are obtained in Section 4. We recover several existing methods by selecting appropriate parameters G, S and P, and all the associated complexity results are summarized in the final theorems, in Section 5. In Section 6, we describe variants of our method in the case when the parameters S and P are Gaussian vectors and establish the corresponding convergence theorem. In Section 7, some numerical examples are presented to verify the efficiency of the proposed method and to compare its convergence rate with other existing methods. Finally, some conclusions are given in Section 8.

Notation and preliminaries

For any matrix M ∈ R^{m×n}, we use M^⊤, Range(M), M_{ij}, σ_max(M) and σ_min(M) to denote its transpose, column space, (i,j)-th entry, and largest and smallest nonzero singular values, respectively. When the matrix M is square, Tr(M) denotes its trace. Define the Frobenius inner product ⟨A, B⟩_F := Tr(A^⊤B) = Tr(AB^⊤), where A, B ∈ R^{m×n}; in particular, the Frobenius norm satisfies ∥A∥²_F = ⟨A, A⟩_F. Let ∥X∥²_{F(G)} = Tr(X^⊤GX), where G is a symmetric positive definite parameter matrix. When G is the identity matrix, ∥A∥²_{F(G)} = ∥A∥²_F; we also write ∥A∥_G = max_{∥x∥_G = 1} ∥Ax∥_G.

Lemma 2.1 ([15]). For the Kronecker product, some well-known properties are summarized as follows:
• vec(ABC) = (C^⊤ ⊗ A) vec(B);
• (AC) ⊗ (BD) = (A ⊗ B)(C ⊗ D);
• ∥A ⊗ B∥_F = ∥A∥_F · ∥B∥_F;
• (A ⊗ B)^⊤ = A^⊤ ⊗ B^⊤;
• (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1};
• λ(A ⊗ B) = {λ_i μ_j : λ_i ∈ λ(A), μ_j ∈ μ(B), i = 1, 2, ..., n; j = 1, 2, ..., m},
where λ(A), μ(B) denote the spectra and the matrices A, B, C and D have compatible dimensions.

Lemma 2.2 ([13,16]). If A, B, C and X are four real matrices of compatible sizes, then
• ∂/∂X Tr(AXB) = A^⊤B^⊤;
• ∂/∂X Tr(AX^⊤B) = BA;
• ∂/∂X Tr(X^⊤X) = ∂/∂X Tr(XX^⊤) = 2X;
• ∂/∂X Tr(X^⊤AXB) = AXB + A^⊤XB^⊤;
• ∂/∂X Tr((AXB + C)(AXB + C)^⊤) = 2A^⊤(AXB + C)B^⊤.

A sketch-and-project Kaczmarz iterative method

To solve the problem (1.1), starting from X_k our method draws random matrices S and P and uses them to generate a new point X_{k+1}. This iteration can be formulated in two seemingly different but equivalent ways (see Fig. 3.1).

Two formulations

• Projection viewpoint: sketch-and-project. X_{k+1} is the nearest point to X_k which solves a sketched version of the original linear system:

X_{k+1} = arg min_{X∈R^{m×n}} ½ ∥X − X_k∥²_{F(G)}  subject to  S^⊤AXBP = S^⊤CP,   (3.1)

where S ∈ R^{p×τ_1} and P ∈ R^{q×τ_2} are two parameters, each drawn in an independent and identically distributed fashion at each iteration. We do not restrict the number of columns of S and P; hence τ_1 and τ_2 are two random variables.

• Optimization viewpoint: constrain-and-approximate. The solution set of the random sketched equation contains all solutions of the original system. However, there are many solutions, so we have to define a rule to select one of them. From the optimization viewpoint, X_{k+1} is the best approximation of X* in a random affine space passing through X_k. That is, we choose a random affine space containing X_k and constrain our method to choose the next iterate from this space; consider the problem

X_{k+1} = arg min_{X∈R^{m×n}} ½ ∥X − X*∥²_{F(G)}  subject to  X = X_k + G^{-1}A^⊤S Y P^⊤B^⊤, Y free.   (3.2)
Then we pick X_{k+1} as the point which best approximates X* in this space.

[Fig. 3.1. The geometry of our algorithm. The next iterate X_{k+1} is obtained by projecting X_k onto the affine space formed by intersecting {X | X = X* + Z with S^⊤AZBP = 0, Z ∈ R^{m×n}} (see (3.1)) and {X | X = X_k + G^{-1}A^⊤S Y P^⊤B^⊤, Y ∈ R^{τ_1×τ_2}} (see (3.2)).]

Stochastic iterative algorithm

Now we derive the iterative scheme for the problem (1.1). The Lagrangian function of the problem (3.1) is

L(X, Y) = ½ ∥X − X_k∥²_{F(G)} + ⟨Y, S^⊤AXBP − S^⊤CP⟩
        = ½ Tr((X − X_k)^⊤ G (X − X_k)) + Tr(Y^⊤(S^⊤AXBP − S^⊤CP)),   (3.3)

where Y is a Lagrangian multiplier. Using Lemma 2.2, we take the gradient of L(X, Y) and equate its components to zero to find the stationary matrix:

∇_X L(X, Y)|_{X_{k+1}} = ½ (G + G^⊤)(X_{k+1} − X_k) + A^⊤S Y P^⊤B^⊤ = 0,
∇_Y L(X, Y)|_{X_{k+1}} = S^⊤AX_{k+1}BP − S^⊤CP = 0.

Since G is symmetric positive definite, we have

S^⊤AX_kBP − S^⊤AG^{-1}A^⊤S Y P^⊤B^⊤BP = S^⊤CP,
Y = (S^⊤AG^{-1}A^⊤S)^† S^⊤(AX_kB − C) P (P^⊤B^⊤BP)^†,

and thus the iteration takes the form

X_{k+1} = X_k − G^{-1}A^⊤S (S^⊤AG^{-1}A^⊤S)^† S^⊤(AX_kB − C) P (P^⊤B^⊤BP)^† P^⊤B^⊤.   (3.4)

Let Z′_1 = G^{-1}A^⊤S(S^⊤AG^{-1}A^⊤S)^†S^⊤A and Z_2 = BP(P^⊤B^⊤BP)^†P^⊤B^⊤; the above scheme becomes

X_{k+1} = X_k − Z′_1 (X_k − X*) Z_2.   (3.5)

Therefore, the sketch-and-project method is obtained. In particular, when ∥X∥²_{F(G)} = ∥X∥²_F (i.e., G = I), the problem reads X_{k+1} = arg min_{X∈R^{m×n}} ½∥X − X_k∥²_F subject to the same sketched constraint, and the same process yields

X_{k+1} = X_k − A^⊤S(S^⊤AA^⊤S)^†S^⊤(AX_kB − C)P(P^⊤B^⊤BP)^†P^⊤B^⊤ = X_k − Z_1(X_k − X*)Z_2,   (3.6)

where Z_1 = A^⊤S(S^⊤AA^⊤S)^†S^⊤A and Z_2 = BP(P^⊤B^⊤BP)^†P^⊤B^⊤.

Algorithm 3.1 (Stochastic iterative method for the matrix equation AXB = C)
1: Input: A ∈ R^{p×m}, B ∈ R^{n×q}, C ∈ R^{p×q} and a symmetric positive definite matrix G ∈ R^{m×m}
2: Initialize: an arbitrary matrix X_0 ∈ R^{m×n}
3: for k = 0, 1, 2, ... do
4:   Sample parameters: draw S, P from the given distributions over random matrices
5:   Compute T_1 = G^{-1}A^⊤S(S^⊤AG^{-1}A^⊤S)^†S^⊤ and T_2 = P(P^⊤B^⊤BP)^†P^⊤B^⊤
6:   X_{k+1} = X_k + T_1(C − AX_kB)T_2
7: end for
8: Output: last iterate X_k

Recall that S ∈ R^{p×τ_1}, P ∈ R^{q×τ_2} (with τ_1, τ_2 possibly random) and A ∈ R^{p×m}, B ∈ R^{n×q}, G ∈ R^{m×m}. Let us define the random quantity d = rank((P^⊤B^⊤) ⊗ (S^⊤A)); notice that d ≤ min{τ_1τ_2, mn}, and we have

dim Range((BP) ⊗ (G^{-1}A^⊤S)) = d,  dim Null((P^⊤B^⊤) ⊗ (S^⊤A)) = mn − d.

Lemma 3.1. With respect to the geometry induced by the (I ⊗ G)-inner product:
1. Z_2 ⊗ Z′_1 projects orthogonally onto the d-dimensional subspace Range((BP) ⊗ (G^{-1}A^⊤S));
2. I − Z_2 ⊗ Z′_1 projects orthogonally onto the (mn − d)-dimensional subspace Null((P^⊤B^⊤) ⊗ (S^⊤A)).
Proof. See Appendix A for more details. □

Lemma 3.2. Let Z̄_1 = GZ′_1 and Ẑ_1 = G^{-1/2}Z̄_1G^{-1/2}. For Z_1, Z′_1, Z̄_1, Ẑ_1 and Z_2 the following relations hold:
1. (Z′_1)² = Z′_1, Z_1 = Z_1², Z_1^⊤ = Z_1, and Z_2 = Z_2², Z_2^⊤ = Z_2;
2. Z̄_1 = Z̄_1G^{-1}Z̄_1, Z̄_1^⊤ = Z̄_1, Ẑ_1 = Ẑ_1^⊤ and Ẑ_1² = Ẑ_1.
Proof. These can be verified directly. □
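As a concrete illustration, the following is a minimal Python sketch of Algorithm 3.1. The function name, the problem sizes, and the uniform index sampling in the example are our own illustrative choices, not specifications from the paper (Theorem 4.7 below uses norm-weighted probabilities); with S = e_i and P = e_j the scheme reduces to rank-one GRK-type updates.

```python
import numpy as np

def sketch_and_project(A, B, C, G, sample_S, sample_P, iters=500):
    """Minimal sketch of Algorithm 3.1. sample_S / sample_P return the random
    sketching matrices S (p x tau1) and P (q x tau2) at every iteration."""
    m, n = A.shape[1], B.shape[0]
    X = np.zeros((m, n))
    Ginv = np.linalg.inv(G)
    for _ in range(iters):
        S, P = sample_S(), sample_P()
        AS, BP = A.T @ S, B @ P
        T1 = Ginv @ AS @ np.linalg.pinv(S.T @ A @ Ginv @ AS) @ S.T
        T2 = P @ np.linalg.pinv(BP.T @ BP) @ BP.T
        X += T1 @ (C - A @ X @ B) @ T2          # step 6 of Algorithm 3.1
    return X

# Example: recover GRK-type updates with unit coordinate vectors e_i, e_j.
rng = np.random.default_rng(1)
p, m, n, q = 30, 10, 8, 25
A = rng.standard_normal((p, m)); B = rng.standard_normal((n, q))
Xstar = rng.standard_normal((m, n)); C = A @ Xstar @ B
I_p, I_q = np.eye(p), np.eye(q)
X = sketch_and_project(A, B, C, np.eye(m),
                       lambda: I_p[:, [rng.integers(p)]],
                       lambda: I_q[:, [rng.integers(q)]], iters=20000)
print(np.linalg.norm(A @ X @ B - C) / np.linalg.norm(C))  # small residual
```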
Convergence analysis

Hereunder, we detail the convergence analysis for the scheme (3.5). From Lemma 3.2, it is easy to obtain the following relation:

ρ = 1 − λ_min(E[Z_2 ⊗ Ẑ_1]) = 1 − λ_min((I ⊗ G^{-1/2}) E[Z_2 ⊗ Z̄_1] (I ⊗ G^{-1/2})) = 1 − λ_min((I ⊗ G^{-1}) E[Z_2 ⊗ Z̄_1]) = 1 − λ_min(E[Z_2 ⊗ Z′_1]).   (4.1)

Our convergence theorems depend on the above convergence rate ρ.

Convergence theorems

Theorem 4.1. For every X* ∈ R^{m×n} satisfying AX*B = C, we have

E∥X_k − X*∥²_{F(G)} ≤ ρ^k ∥X_0 − X*∥²_{F(G)},

where ρ = 1 − λ_min(E[Z_2 ⊗ Z′_1]). Therefore, the iteration sequence generated by (3.5) converges to X* if 0 ≤ ρ < 1.

Proof. The iteration (3.5) can be rewritten as the simple fixed-point formula

X_{k+1} − X* = X_k − X* − Z′_1 (X_k − X*) Z_2.   (4.2)

By the definition of the weighted Frobenius norm,

∥X_{k+1} − X*∥²_{F(G)} = ∥X_k − X*∥²_{F(G)} − Tr((X_k − X*)^⊤ G Z′_1 (X_k − X*) Z_2) − Tr(Z_2^⊤ (X_k − X*)^⊤ (Z′_1)^⊤ G (X_k − X*)) + ∥Z′_1(X_k − X*)Z_2∥²_{F(G)}.   (4.3)

With Lemma 3.2, Tr(MN) = Tr(NM) and G^⊤ = G, we get

∥Z′_1(X_k − X*)Z_2∥²_{F(G)} = Tr(Z_2^⊤(X_k − X*)^⊤ (Z′_1)^⊤ G Z′_1 (X_k − X*) Z_2) = Tr(Z_2^⊤(X_k − X*)^⊤ (Z′_1)^⊤ Z̄_1 (X_k − X*) Z_2) = Tr((X_k − X*)^⊤ Z̄_1 (X_k − X*) Z_2).

Using the fact that Tr(M^⊤N) = Tr(N^⊤M), we also have

Tr((X_k − X*)^⊤ G Z′_1 (X_k − X*) Z_2) = Tr((X_k − X*)^⊤ Z̄_1 (X_k − X*) Z_2),
Tr(Z_2^⊤ (X_k − X*)^⊤ (Z′_1)^⊤ G (X_k − X*)) = Tr((X_k − X*)^⊤ Z̄_1 (X_k − X*) Z_2).

Substituting these three identities into (4.3) and using the properties of the Kronecker product together with ∥A∥²_{F(G)} = ∥vec(A)∥²_{I⊗G}, we obtain

∥X_{k+1} − X*∥²_{F(G)} = ∥X_k − X*∥²_{F(G)} − ∥Z′_1(X_k − X*)Z_2∥²_{F(G)} = ∥X_k − X*∥²_{F(G)} − ∥(Z_2^⊤ ⊗ Z′_1) vec(X_k − X*)∥²_{I⊗G}.

Taking expectations conditioned on X_k gives

E[∥X_{k+1} − X*∥²_{F(G)} | X_k] = ∥X_k − X*∥²_{F(G)} − E∥(Z_2^⊤ ⊗ Z′_1) vec(X_k − X*)∥²_{I⊗G}.   (4.4)

By Lemmas 2.1 and 3.2, and by the symmetries of Z_2 and Ẑ_1 = G^{-1/2}Z̄_1G^{-1/2} in Lemma 3.2,

E∥(Z_2^⊤ ⊗ Z′_1) vec(X_k − X*)∥²_{I⊗G} = E[vec(X_k − X*)^⊤ (Z_2^⊤ ⊗ Z′_1)^⊤ (I ⊗ G)(Z_2^⊤ ⊗ Z′_1) vec(X_k − X*)] = vec(X_k − X*)^⊤ (I ⊗ G^{1/2}) E[Z_2^⊤ ⊗ Ẑ_1] (I ⊗ G^{1/2}) vec(X_k − X*).   (4.5)

Hence

E∥(Z_2^⊤ ⊗ Z′_1) vec(X_k − X*)∥²_{I⊗G} ≥ λ_min(E[Z_2^⊤ ⊗ Ẑ_1]) ∥(I ⊗ G^{1/2}) vec(X_k − X*)∥²_2 = ρ_c ∥X_k − X*∥²_{F(G)},   (4.6)

where ρ_c = λ_min(E[Z_2^⊤ ⊗ Ẑ_1]); here we used the estimate λ_min(A) = min_{x≠0} x^⊤Ax/(x^⊤x) and ∥(I ⊗ G^{1/2}) vec(X_k − X*)∥²_2 = ∥X_k − X*∥²_{F(G)}. Combining (4.4) and (4.6), we obtain

E[∥X_{k+1} − X*∥²_{F(G)} | X_k] ≤ (1 − ρ_c) ∥X_k − X*∥²_{F(G)}.

Taking the full expectation of both sides gives E∥X_{k+1} − X*∥²_{F(G)} ≤ (1 − ρ_c) E∥X_k − X*∥²_{F(G)}, and induction completes the proof. □
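Theorem 4.1 can be checked empirically. The sketch below (GRK special case with G = I, S = e_i, P = e_j; test data and sizes are our own illustrative assumptions) averages the one-step squared-error contraction over many sampled indices and compares it with ρ; the empirical mean should not exceed the theoretical rate.

```python
import numpy as np

# rho = 1 - lambda_min(A^T A) * lambda_min(B B^T) / (||A||_F^2 ||B||_F^2),
# the Theorem 4.1 rate specialized to GRK with norm-weighted probabilities.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 6)); B = rng.standard_normal((5, 30))
Xstar = rng.standard_normal((6, 5)); C = A @ Xstar @ B

rho = 1 - (np.linalg.eigvalsh(A.T @ A)[0] * np.linalg.eigvalsh(B @ B.T)[0]
           / (np.linalg.norm(A, 'fro')**2 * np.linalg.norm(B, 'fro')**2))

pi = (A**2).sum(axis=1) / (A**2).sum()     # p_i ~ ||A_{i,:}||^2
pj = (B**2).sum(axis=0) / (B**2).sum()     # p_j ~ ||B_{:,j}||^2
X0 = np.zeros_like(Xstar); e0 = np.linalg.norm(X0 - Xstar, 'fro')**2
ratios = []
for _ in range(20000):                     # average one GRK step over samples
    i = rng.choice(A.shape[0], p=pi); j = rng.choice(B.shape[1], p=pj)
    a, b = A[i], B[:, j]
    X1 = X0 + np.outer(a, b) * (C[i, j] - a @ X0 @ b) / (a @ a) / (b @ b)
    ratios.append(np.linalg.norm(X1 - Xstar, 'fro')**2 / e0)
print(f"theory rho = {rho:.4f}, empirical mean ratio = {np.mean(ratios):.4f}")
```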
Theorem 4.2. For every X* ∈ R^{m×n} satisfying AX*B = C, the norm of the expected error satisfies

∥E[X_k − X*]∥_{F(G)} ≤ ρ^k ∥X_0 − X*∥_{F(G)},

where ρ = λ_max(I − E[Z_2 ⊗ Z′_1]) = 1 − λ_min(E[Z_2 ⊗ Z′_1]).

Proof. By the Kronecker product, the iterative formula (3.5) can be written as

vec(X_{k+1} − X*) = (I − Z_2^⊤ ⊗ Z′_1) vec(X_k − X*).   (4.7)

It is evident that the transform X → vec(X) is a linear isomorphism R^{m×n} → R^{mn}. Since Z_2^⊤ = Z_2, taking expectations conditioned on X_k in (4.7) gives

E[vec(X_{k+1} − X*) | vec(X_k)] = (I − E[Z_2 ⊗ Z′_1]) vec(X_k − X*),

and taking expectations again,

E[vec(X_{k+1} − X*)] = (I − E[Z_2 ⊗ Z′_1]) E[vec(X_k − X*)].

Applying norms to both sides, we obtain the estimate

∥E[vec(X_{k+1} − X*)]∥_{I⊗G} ≤ ∥I − E[Z_2 ⊗ Z′_1]∥_{I⊗G} ∥E[vec(X_k − X*)]∥_{I⊗G},

where ∥I − E[Z_2 ⊗ Z′_1]∥_{I⊗G} = max_{∥(I⊗G^{1/2})x∥_2 = 1} ∥(I ⊗ G^{1/2})(I − E[Z_2 ⊗ Z′_1])x∥_2. Substituting y = (I ⊗ G^{1/2})x in the above gives

∥I − E[Z_2 ⊗ Z′_1]∥_{I⊗G} = max_{∥y∥_2=1} ∥(I ⊗ G^{1/2})(I − E[Z_2 ⊗ Z′_1])(I ⊗ G^{-1/2}) y∥_2 = max_{∥y∥_2=1} ∥(I − E[Z_2 ⊗ Ẑ_1]) y∥_2 = λ_max(I − E[Z_2 ⊗ Ẑ_1]),

where in the last equality we used the symmetry of I − E[Z_2 ⊗ Ẑ_1] when passing from the operator norm to the spectral radius; the symmetry of E[Z_2 ⊗ Ẑ_1] derives from the symmetries of Z_2 and Ẑ_1 = G^{-1/2}Z̄_1G^{-1/2} in Lemma 3.2. Considering that the vec operator is an isomorphism and ∥vec(X_k − X*)∥²_{I⊗G} = ∥X_k − X*∥²_{F(G)}, with (4.1) we have

∥E[X_{k+1} − X*]∥_{F(G)} = ∥E[vec(X_{k+1} − X*)]∥_{I⊗G} ≤ ρ ∥E[X_k − X*]∥_{F(G)}.

By induction, the conclusion follows. □

Convergence rate and convergence conditions

To show that the rate ρ is meaningful, in Theorem 4.3 we prove that 0 ≤ ρ ≤ 1; we also provide a meaningful lower bound for ρ.

Theorem 4.3. The quantity ρ = 1 − λ_min(E[Z_2 ⊗ Z′_1]) satisfies

0 ≤ 1 − E[d]/(mn) ≤ ρ ≤ 1,

where d = rank((P^⊤B^⊤) ⊗ (S^⊤A)).

Proof. From Lemma 3.2 we know that Z_2 ⊗ Ẑ_1 is a projection, so

((I ⊗ G^{-1/2})(Z_2 ⊗ Z̄_1)(I ⊗ G^{-1/2}))² = (I ⊗ G^{-1/2})(Z_2 ⊗ Z̄_1)(I ⊗ G^{-1/2}),

whence the spectrum of (I ⊗ G^{-1/2})(Z_2 ⊗ Z̄_1)(I ⊗ G^{-1/2}) is contained in {0, 1}. Using this, combined with the fact that the mapping A → λ_max(A) is convex on the set of symmetric matrices, Jensen's inequality gives

λ_max(E[Z_2 ⊗ Z′_1]) = λ_max((I ⊗ G^{-1/2}) E[Z_2 ⊗ Z̄_1] (I ⊗ G^{-1/2})) ≤ E[λ_max(Z_2 ⊗ Ẑ_1)] ≤ 1.

Analogously, with the convexity of the mapping A → −λ_min(A), λ_min(E[Z_2 ⊗ Z′_1]) ≥ 0. Thus λ_min(E[Z_2 ⊗ Z′_1]) ∈ [0, 1], which implies 0 ≤ ρ ≤ 1. For the lower bound, we use the fact that the trace of a matrix is the sum of its eigenvalues:

E[Tr(Z_2 ⊗ Z′_1)] = Tr(E[Z_2 ⊗ Z′_1]) ≥ mn λ_min(E[Z_2 ⊗ Z′_1]).

Since Z_2 ⊗ Z′_1 projects onto a d-dimensional subspace by Lemma 3.1, Tr(Z_2 ⊗ Z′_1) = d. Thus

ρ = 1 − λ_min(E[Z_2 ⊗ Z′_1]) ≥ 1 − E[d]/(mn). □

Lemma 4.1. If E[Z_2 ⊗ Z̄_1] is invertible, then ρ = 1 − λ_min(E[Z_2 ⊗ Z′_1]) < 1, the matrices B^⊤ ⊗ A and (BP)^⊤ ⊗ (S^⊤A) have full column rank, and X* is unique.
Proof. See Appendix for more details. □

Lemma 4.2 ([17]). If A ∈ R^{n×n} is symmetric positive definite and X ∈ R^{n×k} has rank k, then B = X^⊤AX ∈ R^{k×k} is also symmetric positive definite.

Lemma 4.3. For an arbitrary symmetric positive definite random matrix A, it holds that E[A²] ⪰ E[A]^⊤ E[A].
Proof. See Appendix for more details. □

Lemma 4.4. (BP)^⊤ ⊗ (S^⊤A) has full column rank if and only if E[Z_2 ⊗ Z̄_1] is symmetric positive definite.

Proof. Suppose (BP)^⊤ ⊗ (S^⊤A) has full column rank. Since G is symmetric positive definite, Z_2 ⊗ Z̄_1 is symmetric positive definite by Lemma 4.2, and hence Z_2^⊤ ⊗ Z̄_1 = H^⊤H with H symmetric positive definite. Then, for every y ∈ R^{mn} \ {0},

y^⊤ E[Z_2^⊤ ⊗ Z̄_1] y = y^⊤ E[H^⊤H] y ≥ y^⊤ E[H]^⊤ E[H] y > 0,

where the first inequality is obtained by Lemma 4.3. Indeed, y^⊤E[H]^⊤E[H] y ≥ 0, and if y^⊤E[H]^⊤E[H] y = 0, then y^⊤E[H] y = 0, i.e., y^⊤Hy = 0 in expectation, which contradicts the positive definiteness of H. Thus the necessity holds. The sufficiency is obtained by Lemma 4.1. □
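The objects in Theorem 4.3 and Lemma 4.4 are easy to probe numerically. The sketch below (GRK case, G = I, so d = 1 for every sample; sizes and data are illustrative assumptions) estimates E[Z_2 ⊗ Z′_1] by Monte Carlo averaging of the sampled projectors and checks the lower bound ρ ≥ 1 − E[d]/(mn).

```python
import numpy as np

# Monte Carlo estimate of E[Z2 (x) Z1'] for GRK and check of Theorem 4.3.
rng = np.random.default_rng(3)
A = rng.standard_normal((20, 4)); B = rng.standard_normal((3, 15))
m, n = 4, 3

pi = (A**2).sum(axis=1) / (A**2).sum()
pj = (B**2).sum(axis=0) / (B**2).sum()
EZ1 = np.zeros((m, m)); EZ2 = np.zeros((n, n))
T = 50000
for _ in range(T):                       # average rank-one projectors
    a = A[rng.choice(A.shape[0], p=pi)]; b = B[:, rng.choice(B.shape[1], p=pj)]
    EZ1 += np.outer(a, a) / (a @ a); EZ2 += np.outer(b, b) / (b @ b)
EZ1 /= T; EZ2 /= T                       # -> A^T A/||A||_F^2 and B B^T/||B||_F^2

rho = 1 - np.linalg.eigvalsh(np.kron(EZ2, EZ1))[0]
print(f"rho = {rho:.4f} >= 1 - E[d]/(mn) = {1 - 1/(m*n):.4f}")  # d = 1 here
```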
Lemma 4.5 ([14]). Let Z = Z_2 ⊗ Z̄_1. If E[Z] is symmetric positive definite, then

⟨E[Z] y, y⟩ ≥ (1 − ρ) ∥y∥²_{I⊗G} for all y ∈ R^{mn},

where ρ = 1 − λ_min(E[Z_2 ⊗ Z′_1]) and G is symmetric positive definite.

Proof. Note that E[Z_2 ⊗ Z̄_1] and G are symmetric positive definite; we get

1 − ρ = λ_min((I ⊗ G^{-1/2}) E[Z_2 ⊗ Z̄_1] (I ⊗ G^{-1/2})) = max{t | (I ⊗ G^{-1/2}) E[Z_2 ⊗ Z̄_1] (I ⊗ G^{-1/2}) − tI ⪰ 0} = max{t | E[Z_2 ⊗ Z̄_1] − t(I ⊗ G) ⪰ 0}.

Therefore E[Z_2 ⊗ Z̄_1] ⪰ (1 − ρ)(I ⊗ G), and the conclusion is obtained. □

Theorem 4.4. If (BP)^⊤ ⊗ (S^⊤A) has full column rank, then

E∥X_k − X*∥²_{F(G)} ≤ ρ^k ∥X_0 − X*∥²_{F(G)},

where ρ < 1 is given in Lemma 4.1.

Proof. Since (BP)^⊤ ⊗ (S^⊤A) has full column rank, from Lemma 4.4 we know that E[Z_2 ⊗ Z̄_1] is symmetric positive definite. Let r_k = vec(X_k − X*). From (3.5) we have r_{k+1} = r_k − (Z_2^⊤ ⊗ Z′_1) r_k. Taking the expectation of ∥r_{k+1}∥²_{I⊗G} conditioned on r_k gives

E[∥r_{k+1}∥²_{I⊗G} | r_k] = E[∥(I − Z_2^⊤ ⊗ Z′_1) r_k∥²_{I⊗G} | r_k] = E[⟨((I ⊗ G) − Z_2^⊤ ⊗ GZ′_1) r_k, r_k⟩ | r_k] = ∥r_k∥²_{I⊗G} − ⟨E[Z_2^⊤ ⊗ Z̄_1] r_k, r_k⟩ ≤ ρ ∥r_k∥²_{I⊗G}  (by Lemma 4.5).

Since ∥r_k∥²_{I⊗G} = ∥X_k − X*∥²_{F(G)}, the conclusion follows. □

Remark 4.1. When A has full column rank or x ∈ Range(A^⊤), the estimate ∥Ax∥²_2 ≥ σ²_min(A)∥x∥²_2 holds. For the case of full column rank, Theorem 4.4 gives the convergence of the generalized method. For the other case, some conclusions are given in the following; for special methods, some results have already been obtained (for the GRBK method, see [10]).

Lemma 4.6 ([12]). Let A ∈ R^{p×m} and B ∈ R^{n×q} be given, and denote

M = {M ∈ R^{m×n} | ∃ Y ∈ R^{p×q} s.t. M = A^⊤YB^⊤}.   (4.8)

Then, for any matrix M ∈ M, it holds that ∥AMB∥_F ≥ σ_min(A) σ_min(B) ∥M∥_F.

Lemma 4.7. Let the two sets M_1 and M_2 be defined by

M_1 = {X_1 ∈ R^{m×n} | A^⊤Y_1B^⊤ = X_1 for some Y_1 ∈ R^{p×q}},
M_2 = {X_2 ∈ R^{m×n} | A^⊤AY_2BB^⊤ = X_2 for some Y_2 ∈ R^{m×n}}.

Then M_1 = M_2.
Proof. See Appendix for more details. □

Theorem 4.5. Assume that Z′_1 and Z_2 are independent random variables, and let

M̃ = {M ∈ R^{m×n} | ∃ Y ∈ R^{p×q} s.t. M = E[Z′_1] Y E[Z_2]}.

If (X_k − X*) ∈ M̃, then for every X* ∈ R^{m×n} satisfying AX*B = C we have

E∥X_k − X*∥²_{F(G)} ≤ ρ_σ^k ∥X_0 − X*∥²_{F(G)},

with ρ_σ = 1 − σ²_min(E[Z_2 ⊗ Z′_1]) < 1. Therefore, the iteration sequence generated by (3.6) converges to X*.

Proof. From Lemma 3.2 we get (Z_2^⊤ ⊗ Ẑ_1)^⊤ = Z_2^⊤ ⊗ Ẑ_1 and (Z_2^⊤ ⊗ Ẑ_1)² = Z_2^⊤ ⊗ Ẑ_1. Thus, for every y ∈ R^{mn},

y^⊤ E[Z_2^⊤ ⊗ Ẑ_1] y = y^⊤ E[(Z_2^⊤ ⊗ Ẑ_1)^⊤(Z_2^⊤ ⊗ Ẑ_1)] y ≥ y^⊤ E[Z_2^⊤ ⊗ Ẑ_1]^⊤ E[Z_2^⊤ ⊗ Ẑ_1] y,

where the inequality is obtained from E∥X − E[X]∥² = E∥X∥² − ∥E[X]∥². Using (4.5), we get

E∥(Z_2^⊤ ⊗ Z′_1) vec(X_k − X*)∥²_{I⊗G} ≥ ∥E[Z_2^⊤ ⊗ Ẑ_1](I ⊗ G^{1/2}) vec(X_k − X*)∥²_2 = ∥E[Ẑ_1] R̂_k E[Z_2]∥²_F ≥ σ²_min(E[Z_2]) σ²_min(E[Ẑ_1]) ∥R̂_k∥²_F = σ²_min(E[Z_2 ⊗ Z′_1]) ∥X_k − X*∥²_{F(G)},   (4.9)

where R̂_k = G^{1/2}(X_k − X*). From (X_k − X*) ∈ M̃, Ẑ_1 = Ẑ_1^⊤ and Z_2^⊤ = Z_2, it follows that

R̂_k ∈ M̂ = {M ∈ R^{m×n} | ∃ Y ∈ R^{p×q} s.t. M = E[Ẑ_1]^⊤ Y E[Z_2]^⊤}.

It is easy to see that vec(R̂_k) = (I ⊗ G^{1/2}) vec(X_k − X*) and ∥R̂_k∥²_F = ∥X_k − X*∥²_{F(G)}; then, with the use of Lemma 4.6, the second inequality in (4.9) holds. Since σ_min(E[Ẑ_1]) = σ_min(E[Z′_1]), we have σ²_min(E[Z_2]) σ²_min(E[Ẑ_1]) = σ²_min(E[Z_2 ⊗ Z′_1]). Following the same process as in Theorem 4.1, the conclusion is obtained. □
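Lemma 4.6 underpins the key bound (4.9), and it is cheap to sanity-check numerically. The sketch below (random sizes of our own choosing; σ_min taken as the smallest nonzero singular value) draws random elements of the set M from (4.8) and verifies the inequality.

```python
import numpy as np

# Numerical check of Lemma 4.6: for M = A^T Y B^T,
# ||A M B||_F >= sigma_min(A) * sigma_min(B) * ||M||_F.
rng = np.random.default_rng(4)
A = rng.standard_normal((7, 5)); B = rng.standard_normal((4, 6))
smin = lambda M: np.linalg.svd(M, compute_uv=False)[
    np.linalg.matrix_rank(M) - 1]           # smallest nonzero singular value
for _ in range(1000):
    M = A.T @ rng.standard_normal((7, 6)) @ B.T
    assert (np.linalg.norm(A @ M @ B, 'fro')
            >= smin(A) * smin(B) * np.linalg.norm(M, 'fro') - 1e-9)
print("Lemma 4.6 inequality held on 1000 random samples")
```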
Convergence with convenient probabilities

Definition 4.6 ([14]). Let the random matrices S, P have discrete distributions. (S, P) is called a complete discrete sampling pair if S = S_i ∈ R^{p×τ_{1i}} with probability p¹_i > 0, where S_i^⊤A has full row rank and τ_{1i} ∈ N for i = 1, ..., r_1, and S := [S_1, ..., S_{r_1}] ∈ R^{p×Σ_{i=1}^{r_1}τ_{1i}} is such that A^⊤S has full row rank; meanwhile, P = P_i ∈ R^{q×τ_{2i}} with probability p²_i > 0, where P_i^⊤B^⊤ has full row rank and τ_{2i} ∈ N for i = 1, ..., r_2, and P := [P_1, ..., P_{r_2}] ∈ R^{q×Σ_{i=1}^{r_2}τ_{2i}} is such that BP has full row rank.

Assume that (S, P) is a complete discrete sampling pair. Then S^⊤A and (BP)^⊤ have full row rank and

(S^⊤AA^⊤S)† = (S^⊤AA^⊤S)^{−1}, (P^⊤B^⊤BP)† = (P^⊤B^⊤BP)^{−1},

so we may replace the pseudoinverses in (3.6) by inverses. Define

D_S = diag(√(p¹_1)(S_1^⊤AA^⊤S_1)^{−1/2}, ..., √(p¹_{r_1})(S_{r_1}^⊤AA^⊤S_{r_1})^{−1/2}), (4.10)
D_P = diag(√(p²_1)(P_1^⊤B^⊤BP_1)^{−1/2}, ..., √(p²_{r_2})(P_{r_2}^⊤B^⊤BP_{r_2})^{−1/2}), (4.11)

where D_S and D_P are block diagonal matrices; they are well defined and invertible, since S_i^⊤A has full row rank for i = 1, ..., r_1 and P_j^⊤B^⊤ has full row rank for j = 1, ..., r_2. Taking the expectations of Z_1 and Z_2, we get

E[Z_1] = Σ_{i=1}^{r_1} A^⊤S_i(S_i^⊤AA^⊤S_i)^{−1}S_i^⊤A p¹_i = A^⊤[Σ_{i=1}^{r_1} S_i√(p¹_i)(S_i^⊤AA^⊤S_i)^{−1/2}(S_i^⊤AA^⊤S_i)^{−1/2}√(p¹_i)S_i^⊤]A = (A^⊤SD_S)(D_SS^⊤A), (4.12)

and

E[Z_2] = Σ_{j=1}^{r_2} BP_j(P_j^⊤B^⊤BP_j)^{−1}P_j^⊤B^⊤ p²_j = B[Σ_{j=1}^{r_2} P_j√(p²_j)(P_j^⊤B^⊤BP_j)^{−1/2}(P_j^⊤B^⊤BP_j)^{−1/2}√(p²_j)P_j^⊤]B^⊤ = (BPD_P)(D_PP^⊤B^⊤). (4.13)

Since A^⊤S and BP have full row rank and D_S, D_P are invertible, E[Z_i], i = 1, 2, are symmetric positive definite. So complete discrete sampling pairs guarantee the convergence of the resulting methods.

Next we develop a choice of probability distribution that yields a convergence rate that is easy to interpret. This result is new and covers a wide range of methods, including the randomized Kaczmarz method and the randomized coordinate descent method, as well as their block variants. It is, however, more general and covers many other possible particular algorithms, which arise by choosing particular sets of sample matrices S_i for i = 1, ..., r_1 and P_j for j = 1, ..., r_2.
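As a concrete check of (4.12), here is a minimal NumPy sketch (ours, not from the paper) under the illustrative assumptions that the S_i are column subsets of the identity and that the two blocks are drawn with equal probability:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
blocks = [np.array([0, 1, 2]), np.array([3, 4, 5])]  # S_i = I[:, blocks[i]]
probs = [0.5, 0.5]                                   # any valid discrete distribution

# Direct evaluation of E[Z_1] = sum_i p_i A^T S_i (S_i^T A A^T S_i)^{-1} S_i^T A.
EZ1 = sum(p * A[idx].T @ np.linalg.inv(A[idx] @ A[idx].T) @ A[idx]
          for p, idx in zip(probs, blocks))

# The same quantity written as A^T S (D_S D_S) S^T A, where D_S^2 is the block
# diagonal matrix diag(p_i (S_i^T A A^T S_i)^{-1}) implied by (4.10).
S = np.hstack([np.eye(6)[:, idx] for idx in blocks])
DS2 = np.zeros((6, 6))
DS2[:3, :3] = probs[0] * np.linalg.inv(A[blocks[0]] @ A[blocks[0]].T)
DS2[3:, 3:] = probs[1] * np.linalg.inv(A[blocks[1]] @ A[blocks[1]].T)
print(np.allclose(EZ1, A.T @ S @ DS2 @ S.T @ A))     # True: (4.12) in matrix form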
Theorem 4.7. Assume that (S, P) is a complete discrete sampling pair with the probabilities

p¹_i = Tr(S_i^⊤AA^⊤S_i)/∥A^⊤S∥²_F for i = 1, ..., r_1, p²_j = Tr(P_j^⊤B^⊤BP_j)/∥BP∥²_F for j = 1, ..., r_2. (4.14)

Then the iteration (3.6) satisfies

E∥X_k − X_*∥²_F ≤ ρ^k∥X_0 − X_*∥²_F, (4.15)

where

ρ = 1 − λ_min(S^⊤AA^⊤S)λ_min(P^⊤B^⊤BP)/(∥A^⊤S∥²_F∥BP∥²_F). (4.16)

Proof. Let u_i = Tr(S_i^⊤AA^⊤S_i) and v_j = Tr(P_j^⊤B^⊤BP_j). Substituting (4.14) into (4.10) and (4.11), respectively, we have

D²_S = (1/∥A^⊤S∥²_F) diag(u_1(S_1^⊤AA^⊤S_1)^{−1}, ..., u_{r_1}(S_{r_1}^⊤AA^⊤S_{r_1})^{−1}),
D²_P = (1/∥BP∥²_F) diag(v_1(P_1^⊤B^⊤BP_1)^{−1}, ..., v_{r_2}(P_{r_2}^⊤B^⊤BP_{r_2})^{−1}),

and thus

λ_min(D²_S) = (1/∥A^⊤S∥²_F) min_{1≤i≤r_1} {u_i/λ_max(S_i^⊤AA^⊤S_i)} ≥ 1/∥A^⊤S∥²_F, (4.17)
λ_min(D²_P) = (1/∥BP∥²_F) min_{1≤j≤r_2} {v_j/λ_max(P_j^⊤B^⊤BP_j)} ≥ 1/∥BP∥²_F. (4.18)

Using the fact that λ_min(MN) = λ_min(NM) for arbitrary matrices M, N of appropriate sizes, we have

λ_min(E[Z_1]) = λ_min(A^⊤SD²_SS^⊤A) = λ_min(S^⊤AA^⊤SD²_S).

Then, using the fact that λ_min(MN) ≥ λ_min(M)λ_min(N) when M, N ∈ R^{n×n} are symmetric positive definite, from (4.17) we obtain

λ_min(E[Z_1]) = λ_min(S^⊤AA^⊤SD²_S) ≥ λ_min(S^⊤AA^⊤S)λ_min(D²_S) ≥ λ_min(S^⊤AA^⊤S)/∥A^⊤S∥²_F.

Similarly,

λ_min(E[Z_2]) = λ_min(BPD²_PP^⊤B^⊤) = λ_min(P^⊤B^⊤BPD²_P) ≥ λ_min(P^⊤B^⊤BP)λ_min(D²_P) ≥ λ_min(P^⊤B^⊤BP)/∥BP∥²_F.

Since

ρ_c = λ_min(E[Z_2^⊤ ⊗ Z_1]) = λ_min(E[Z_2^⊤] ⊗ E[Z_1]) ≥ λ_min(E[Z_2])λ_min(E[Z_1]) ≥ λ_min(S^⊤AA^⊤S)λ_min(P^⊤B^⊤BP)/(∥A^⊤S∥²_F∥BP∥²_F),

by Theorem 4.1 we have

E∥X_k − X_*∥²_F ≤ (1 − ρ_c)∥X_{k−1} − X_*∥²_F ≤ (1 − λ_min(S^⊤AA^⊤S)λ_min(P^⊤B^⊤BP)/(∥A^⊤S∥²_F∥BP∥²_F))∥X_{k−1} − X_*∥²_F = ρ∥X_{k−1} − X_*∥²_F.

Finally, taking the full expectation and using induction, we get E∥X_k − X_*∥²_F ≤ ρ^k∥X_0 − X_*∥²_F. Since 0 ≤ ρ < 1, the method is convergent. □

Special cases: Examples

In this section we briefly mention how, by selecting the parameters S and P of our method, we recover several existing methods; furthermore, we propose some similar methods based on discrete sampling pairs. The list is by no means comprehensive and merely serves to illustrate the flexibility of our algorithm.

Global randomized Kaczmarz method

If we choose S_i = e_i (the unit coordinate vector in R^p) and P_j = e_j (the unit coordinate vector in R^q), then, in view of (3.1), this results in

X_{k+1} = argmin_{X∈R^{m×n}} ½∥X − X_k∥²_F subject to A_{i,:}XB_{:,j} = C_{ij}.

With the use of (3.6), the iteration can be calculated as

X_{k+1} = X_k − A_{i,:}^⊤(A_{i,:}X_kB_{:,j} − C_{ij})B_{:,j}^⊤/(∥A_{i,:}∥²_2∥B_{:,j}∥²_2).

This recovers the global randomized Kaczmarz (GRK) method, in which i and j are selected at random. Applying Theorem 4.7 with the probability distributions p_i = ∥A_{i,:}∥²_2/∥A∥²_F and p_j = ∥B_{:,j}∥²_2/∥B∥²_F gives

E∥X_k − X_*∥²_F ≤ (1 − λ_min(A^⊤A)λ_min(BB^⊤)/(∥A∥²_F∥B∥²_F))^k ∥X_0 − X_*∥²_F. (5.1)

For the details of another convergence proof of the GRK method, we refer the reader to [10]. We also provide new convergence results based on the norm of the expected error: applying Theorem 4.2 to the GRK method gives

∥E[X_k − X_*]∥²_F ≤ (1 − λ_min(A^⊤A)λ_min(BB^⊤)/(∥A∥²_F∥B∥²_F))^{2k} ∥X_0 − X_*∥²_F. (5.2)

Although the expectation is moved inside the norm, which is a weaker form of convergence, the convergence rate appears squared, i.e., it is a better rate. Similar results on the convergence of the norm of the expected error hold for all the methods we present; we do not repeat them for the following methods.
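For concreteness, a minimal NumPy sketch of the GRK update above (our illustration; the function name and iteration budget are ours):

import numpy as np

def grk(A, B, C, iters=10_000, seed=0):
    # Global randomized Kaczmarz for AXB = C. Row i of A and column j of B are
    # drawn with probabilities ||A[i,:]||^2/||A||_F^2 and ||B[:,j]||^2/||B||_F^2,
    # and X is projected onto the scalar constraint A[i,:] X B[:,j] = C[i,j].
    rng = np.random.default_rng(seed)
    p = (A**2).sum(axis=1); p = p / p.sum()
    q = (B**2).sum(axis=0); q = q / q.sum()
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(iters):
        i = rng.choice(len(p), p=p)
        j = rng.choice(len(q), p=q)
        r = A[i, :] @ X @ B[:, j] - C[i, j]          # scalar residual
        X -= np.outer(A[i, :], B[:, j]) * (r / ((A[i, :] @ A[i, :]) * (B[:, j] @ B[:, j])))
    return X

Started from X_0 = O on a consistent system, the iterates contract in expectation at the rate given by (5.1).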
Global randomized block Kaczmarz method

Our framework also extends to block formulations of the global randomized Kaczmarz method. Let τ_1 be a random subset of [p] and S = I_{:,τ_1} be the column concatenation of the columns of the p × p identity matrix I indexed by τ_1. Similarly, let τ_2 be a random subset of [q] and P = I_{:,τ_2} be the column concatenation of the columns of the q × q identity matrix indexed by τ_2. Then (3.2) specializes to

X_{k+1} = argmin_{X∈R^{m×n}} ½∥X − X_k∥²_F subject to A_{τ_1,:}XB_{:,τ_2} = C_{τ_1,τ_2}.

In view of (3.6), this can be equivalently written as

X_{k+1} = X_k + A_{τ_1,:}^⊤(A_{τ_1,:}A_{τ_1,:}^⊤)†(C_{τ_1,τ_2} − A_{τ_1,:}X_kB_{:,τ_2})(B_{:,τ_2}^⊤B_{:,τ_2})†B_{:,τ_2}^⊤ = X_k + A_{τ_1,:}†(C_{τ_1,τ_2} − A_{τ_1,:}X_kB_{:,τ_2})B_{:,τ_2}†. (5.3)

This recovers the global randomized block Kaczmarz (GRBK) method of [10].

Remark 5.1. Now, let the sizes of the block index sets be |I_k| = 1 and |J_k| = q, i.e., parameter matrices S = e_i ∈ R^p and P = I ∈ R^{q×q}. In this case, the index i_k ∈ [p] is selected according to the probability distribution P(i_k) = ∥A_{i_k,:}∥²_2/∥A∥²_F. Then the update (5.3) becomes

X_{k+1} = X_k + A_{i_k,:}^⊤(C_{i_k,:} − A_{i_k,:}X_kB)B†/∥A_{i_k,:}∥²_2,

which is called the randomized Kaczmarz method of matrix A (RK-A). Assume that B has full row rank. Using Theorem 4.7, we get the convergence rate in expectation

E∥X_k − X_*∥²_F ≤ (1 − λ_min(A^⊤A)/∥A∥²_F)^k ∥X_0 − X_*∥²_F. (5.4)

Remark 5.2. Similar to the RK-A method, with block index set sizes |I_k| = p and |J_k| = 1, the index j_k ∈ [q] is selected according to the probability distribution P(j_k) = ∥B_{:,j_k}∥²_2/∥B∥²_F, and we have the randomized Kaczmarz method of matrix B (RK-B) update

X_{k+1} = X_k + A†(C_{:,j_k} − AX_kB_{:,j_k})B_{:,j_k}^⊤/∥B_{:,j_k}∥²_2.

Suppose that A has full column rank. Then we also get the convergence rate in expectation

E∥X_k − X_*∥²_F ≤ (1 − λ_min(BB^⊤)/∥B∥²_F)^k ∥X_0 − X_*∥²_F. (5.5)

Comparing (5.4) and (5.5) with the convergence rate (5.1), we find that the convergence factors of the RK-A and RK-B methods are smaller than that of the GRK method.

Randomized coordinate descent method

In this subsection, by choosing different parameters P, S, G, we derive two randomized coordinate descent algorithms. In the following two cases, we assume that B has full row rank.

Positive definite case

If A is symmetric positive definite, then we can choose G = A, P = I, and S = e_i in (3.1) and obtain

X_{k+1} = argmin_{X∈R^{m×n}} ½∥X − X_k∥²_A subject to e_i^⊤AXB = e_i^⊤C,

where we use the symmetry of A to get e_i^⊤A = A_{i,:} = A_{:,i}^⊤. The solution of the above, given by (3.5), is

X_{k+1} = X_k − e_i(A_{:,i}^⊤X_kB − C_{i,:})B^⊤(BB^⊤)†/A_{i,i}.

When i is chosen randomly, this is the randomized coordinate descent (CD-pd) method. Applying Theorem 4.7 with ρ_c = λ_min(E[Z_2 ⊗ Z_1']), we see that the probability distribution p_i = A_{ii}/Tr(A) results in convergence with

E∥X_k − X_*∥²_A ≤ (1 − λ_min(A)/Tr(A))^k ∥X_0 − X_*∥²_A.

Least-squares version

By choosing S = Ae_i = A_{:,i} (the ith column of A), P = I, and G = A^⊤A, the resulting iterative formula (3.5) is

X_{k+1} = X_k − e_iA_{:,i}^⊤(AX_kB − C)B^⊤(BB^⊤)†/∥A_{:,i}∥²_2. (5.6)

When i is selected at random, this is the randomized coordinate descent (RCD) method applied to the least-squares problem min_{X∈R^{m×n}} ∥AXB − C∥²_F. A similar result was established by Kui Du et al. [12]. Applying Theorem 4.7, we see that selecting i with probability proportional to the magnitude of column i of A, that is, p_i = ∥A_{:,i}∥²_2/∥A∥²_F, results in convergence with

E∥X_k − X_*∥²_{A^⊤A} ≤ (1 − λ_min(A^⊤A)/∥A∥²_F)^k ∥X_0 − X_*∥²_{A^⊤A}.
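A minimal sketch of update (5.6) (our illustration, not the authors' implementation), assuming B has full row rank so that B^⊤(BB^⊤)^{−1} = pinv(B) can be precomputed once:

import numpy as np

def rcd_ls(A, B, C, iters=5000, seed=0):
    # Randomized coordinate descent for min ||A X B - C||_F^2, update (5.6).
    # Column i of A is drawn with probability ||A[:,i]||^2 / ||A||_F^2; only
    # row i of X changes per iteration.
    rng = np.random.default_rng(seed)
    p = (A**2).sum(axis=0); p = p / p.sum()
    B_pinv = np.linalg.pinv(B)                 # B^T (B B^T)^(-1) for full-row-rank B
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(iters):
        i = rng.choice(len(p), p=p)
        g = A[:, i] @ (A @ X @ B - C) @ B_pinv # update direction for row i
        X[i, :] -= g / (A[:, i] @ A[:, i])
    return X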
Variants: Gaussian sampling

In this section, we develop a variant of our method in which the parameter matrices S and P are Gaussian vectors with means 0 ∈ R^p, 0 ∈ R^q and positive definite covariance matrices Σ_1 ∈ R^{p×p}, Σ_2 ∈ R^{q×q}, respectively; that is, S = ζ ∼ N(0, Σ_1), P = η ∼ N(0, Σ_2). Applied to (3.6), the iterative formula becomes

X_{k+1} = X_k + A^⊤ζ(ζ^⊤Cη − ζ^⊤AX_kBη)η^⊤B^⊤/(∥ζ^⊤A∥²_2∥Bη∥²_2). (5.7)

Unlike the discrete methods in Section 3, an iteration of (5.7) requires the product of a matrix with a dense vector. However, in our numerical tests in Section 6, the faster convergence of the Gaussian method often pays off for its higher per-iteration cost. Before analyzing the convergence, we introduce some lemmas.

Lemma 5.1 ([14]). Let D ∈ R^{n×n} be a positive definite diagonal matrix, U ∈ R^{n×n} an orthogonal matrix, and Ω = UDU^⊤. If u ∼ N(0, D) and ξ ∼ N(0, Ω), then

E[ξξ^⊤/(ξ^⊤ξ)] = U E[uu^⊤/(u^⊤u)] U^⊤, (5.8)

and E[ξξ^⊤/(ξ^⊤ξ)] ⪰ (2/π) Ω/Tr(Ω).

Lemma 5.2 ([17]). If A, A + B ∈ R^{n×n} are symmetric matrices and we use λ_k(A) to designate the kth largest eigenvalue, i.e.,

λ_n(A) ≤ ··· ≤ λ_2(A) ≤ λ_1(A), (5.9)

then we have

λ_k(A) + λ_n(B) ≤ λ_k(A + B) ≤ λ_k(A) + λ_1(B), k = 1, 2, ..., n. (5.10)

Lemma 5.3. Let A_1, A_2, B_1, B_2 ∈ R^{n×n} be symmetric positive semi-definite matrices. If A_2 − A_1 and B_2 − B_1 are also symmetric positive semi-definite, then

λ_min(A_2^⊤ ⊗ B_2) ≥ λ_min(A_1^⊤ ⊗ B_1). (5.11)

Proof. See Appendix for more details. □

To analyze the complexity of the resulting method, let μ = A^⊤S and ν = BP, which are also Gaussian, distributed as μ ∼ N(0, Ω_1) and ν ∼ N(0, Ω_2), with Ω_1 = A^⊤Σ_1A and Ω_2 = BΣ_2B^⊤. In this section, we assume that A has full column rank and B has full row rank, so that Ω_1 and Ω_2 are always positive definite.

Theorem 5.1. Let μ = A^⊤S, ν = BP be distributed as μ ∼ N(0, Ω_1) and ν ∼ N(0, Ω_2). Then the iterative scheme (5.7) satisfies

E∥X_k − X_*∥²_F ≤ ρ^k∥X_0 − X_*∥²_F, where ρ = 1 − (4/(π²Tr(Ω_1)Tr(Ω_2))) λ_min(Ω_2^⊤ ⊗ Ω_1).

Proof. The complexity of the method can be established through

ρ = 1 − λ_min(E[Z_2 ⊗ Z_1]) = 1 − λ_min(E[(νν^⊤/∥ν∥²_2) ⊗ (μμ^⊤/∥μ∥²_2)]) = 1 − λ_min(E[νν^⊤/∥ν∥²_2] ⊗ E[μμ^⊤/∥μ∥²_2]). (5.12)

Using Lemma 5.1, we have

E[μμ^⊤/∥μ∥²_2] ⪰ (2/π) Ω_1/Tr(Ω_1), E[νν^⊤/∥ν∥²_2] ⪰ (2/π) Ω_2/Tr(Ω_2).

Then, combining Lemma 5.3, (5.12) can be bounded as

ρ ≤ 1 − λ_min((2/π)Ω_2^⊤/Tr(Ω_2) ⊗ (2/π)Ω_1/Tr(Ω_1)) = 1 − (4/(π²Tr(Ω_1)Tr(Ω_2))) λ_min(Ω_2^⊤ ⊗ Ω_1).

Since Ω_1 and Ω_2 are positive definite, ρ < 1, and thus the expected norm of the error of the Gaussian method converges exponentially to zero. □

Remark 5.3. Choosing Σ_1 = Σ_2 = I, so that S = ζ ∼ N(0, I) and P = η ∼ N(0, I), we obtain the Gaussian global randomized Kaczmarz (GaussGRK) method. Letting instead S = ζ ∼ N(0, I) and P = I ∈ R^{q×q}, the update (5.7) becomes

X_{k+1} = X_k + A^⊤ζ(ζ^⊤C − ζ^⊤AX_kB)B†/∥ζ^⊤A∥²_2,

which is called the Gaussian randomized Kaczmarz method for matrix A (GaussRK-A). Thus, at each iteration, a random Gaussian vector ζ is drawn and a search direction is formed by A^⊤ζ. Similarly, letting S = I ∈ R^{p×p} and P = η ∼ N(0, Σ), we have the GaussRK-B method

X_{k+1} = X_k + A†(Cη − AX_kBη)η^⊤B^⊤/∥Bη∥²_2.
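A minimal sketch of the Gaussian update (5.7) with Σ_1 = Σ_2 = I, i.e., the GaussGRK method of Remark 5.3 (our illustration; the function name and defaults are ours):

import numpy as np

def gauss_grk(A, B, C, iters=5000, seed=0):
    # GaussGRK: fresh Gaussian sketches zeta, eta are drawn at each iteration
    # and X is updated along the rank-one direction A^T zeta eta^T B^T.
    rng = np.random.default_rng(seed)
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(iters):
        zeta = rng.standard_normal(A.shape[0])
        eta = rng.standard_normal(B.shape[1])
        u, v = A.T @ zeta, B @ eta                 # sketched row/column directions
        r = u @ X @ v - zeta @ C @ eta             # scalar residual
        X -= np.outer(u, v) * (r / ((u @ u) * (v @ v)))
    return X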
Numerical results

In this section, we present several numerical examples to illustrate the performance of the iteration methods proposed in this paper for solving the matrix equation (1.1). All experiments are carried out using MATLAB (version R2020a) on a personal computer with a 2.50 GHz central processing unit (Intel(R) Core(TM) i5-7200U CPU), 4.00 GB memory, and a Windows operating system (64 bit Windows 10). To construct a matrix equation, we set C = AX_*B, where X_* = ones(m, n) is the exact solution of the matrix equation. All computations are started from the initial guess X_0 = O and terminated once the relative error (RE) of the solution, defined by

RE = ∥X_k − X_*∥²_F/∥X_*∥²_F

at the current iterate X_k, satisfies RE < 10^{−6} or the maximum number of iterations is exceeded. IT and CPU denote the average number of iteration steps and the average CPU time (in seconds) over 10 repeated runs, respectively. The item '−' indicates that the number of iteration steps exceeds the maximum (100000) or that the CPU time exceeds 120 s. We consider the following methods and variants:

• GRK, with complete discrete sampling, as in Section 5.1.
• RCD, with complete discrete sampling, as in Section 5.3.2.
• RK-A: (3.6) with S = e_i and P = I, as in Remark 5.1.
• GaussGRK, with Gaussian sampling, as in Remark 5.3.
• GaussRK-A: (3.6) with S = ζ ∼ N(0, Σ) and P = I, as in Remark 5.3.

For the block method GRBK [10], we assume that {I_1, ..., I_s} and {J_1, ..., J_t} are partitions of [p] and [q], respectively, and that the block samplings have the same sizes |I_k| = τ_1 and |J_k| = τ_2, where

I_i = {(i−1)τ_1 + 1, (i−1)τ_1 + 2, ..., iτ_1} for i = 1, 2, ..., s−1, and I_s = {(s−1)τ_1 + 1, (s−1)τ_1 + 2, ..., p},

and

J_j = {(j−1)τ_2 + 1, (j−1)τ_2 + 2, ..., jτ_2} for j = 1, 2, ..., t−1, and J_t = {(t−1)τ_2 + 1, (t−1)τ_2 + 2, ..., q}.

We divided our tests into three categories: synthetic dense data, real-world sparse data, and CT data.

Example 6.1. Synthetic dense data. Random matrices for this test are generated as follows:

• Type I [10]: For given p, m, and r_1 = rank(A), we construct a matrix A by A = U_1D_1V_1^⊤, where U_1 ∈ R^{p×r_1} and V_1 ∈ R^{m×r_1} have orthonormal columns. The entries of U_1 and V_1 are generated from a standard normal distribution and the columns are then orthogonalized, i.e., [U_1, ∼] = qr(randn(p, r_1), 0) and [V_1, ∼] = qr(randn(m, r_1), 0). The matrix D_1 is an r_1 × r_1 diagonal matrix whose diagonal entries are uniformly distributed in (1, 2), i.e., D_1 = diag(1 + rand(r_1, 1)). Similarly, for given n, q, and r_2 = rank(B), we construct B = U_2D_2V_2^⊤, where U_2 ∈ R^{n×r_2} and V_2 ∈ R^{q×r_2} have orthonormal columns and D_2 is an r_2 × r_2 diagonal matrix.

• Type II: For given p, m, n, q, the entries of A and B are generated from standard normal distributions, i.e., A = randn(p, m), B = randn(n, q).
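The Type I construction mirrors the quoted MATLAB calls; a minimal NumPy equivalent (our sketch, not the paper's code) is:

import numpy as np

def type1_matrix(p, m, r, rng):
    # Type I test matrix of Example 6.1: A = U D V^T with orthonormal p x r and
    # m x r factors and diagonal entries uniform in (1, 2), mirroring
    # qr(randn(p, r), 0) and diag(1 + rand(r, 1)).
    U, _ = np.linalg.qr(rng.standard_normal((p, r)))  # reduced QR: p x r
    V, _ = np.linalg.qr(rng.standard_normal((m, r)))
    d = 1.0 + rng.random(r)                           # uniform in (1, 2)
    return U @ np.diag(d) @ V.T

rng = np.random.default_rng(2)
A = type1_matrix(50, 20, 20, rng)   # a rank-20, 50 x 20 Type I matrix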
In Tables 6.1 and 6.2, we report the average IT and CPU of GRK, GaussGRK, GRBK, RCD, RK-A, and GaussRK-A for solving matrix equations with Types I and II, where A has full column rank (r_1 = m) and B has full row rank (r_2 = n) in Type I. For the GRBK method, we use different block sizes in Table 6.1 and fixed block sizes τ_1 = τ_2 = 10 in Table 6.2. From these two tables, we can see that the GaussGRK method is better than the GRK method in terms of IT and CPU time. The IT and CPU time of both the GRK and GaussGRK methods increase with the matrix dimensions; however, the CPU time of the GaussGRK method grows only mildly. In Fig. 6.1, we plot the relative errors of GRK and GaussGRK for two matrix equations with Type I (A = U_1D_1V_1^⊤ with m = 50, p = 20, r_1 = 20 and B = U_2D_2V_2^⊤ with n = 50, q = 20, r_2 = 20) and Type II (A = randn(50, 20) and B = randn(20, 50)). From Fig. 6.1, we can see more directly that the GaussGRK method outperforms the GRK method in terms of IT and CPU time. However, as the matrix size continues to increase, both methods require significant computational cost, so these two methods are not considered in further experiments.

From Tables 6.1 and 6.2, we observe that the GRBK method vastly outperforms the RCD, RK-A, and GaussRK-A methods in terms of IT and CPU time, because the GRBK method selects multiple rows and columns in each iteration. Among the methods that select a single row or column per iteration, the RK-A and GaussRK-A methods perform slightly better than the RCD method. In detail, from Table 6.1 we observe that the RCD method requires fewer iteration steps, the GaussRK-A method takes less CPU time when the matrix size is small, and the RK-A method is more competitive when the matrix size is large. From Table 6.2, we find that the GaussRK-A method is competitive in terms of IT and CPU time.

To accelerate the convergence, multiple columns (or rows) can be selected in the RCD and RK-A methods instead of a single column (or row) per iteration, i.e., block versions; we do not experiment with them here. Note, however, that the block variant of the GaussRK-A method involves dense matrix computations, so the computational cost increases significantly from vector-matrix products to matrix-matrix products.

In Fig. 6.2, we plot the relative errors of the GRBK method with different block sizes τ_1 = τ_2 = τ for Type I (A = U_1D_1V_1^⊤ with m = 100, p = 50, r_1 = 50 and B = U_2D_2V_2^⊤ with n = 100, q = 50, r_2 = 50). From Fig. 6.2(a), we observe that increasing the block size leads to a better convergence rate of the GRBK method. From Fig. 6.2(b), we find that as the block size τ increases, the IT and CPU time first decrease and then increase after reaching a minimum. From Fig. 6.2(c), it is easy to see that the IT and CPU time reach their minimum at τ = 14, 15, 16. The GRK method is the GRBK method with block index sets of size |I_k| = |J_k| = 1; this again confirms that the GRK method is computationally expensive.

Finally, we also compare these methods with the RBCD method [12]. To give an intuitive demonstration of the advantage, we define the speed-up as

speed-up = CPU of RBCD / CPU of NEW METHOD.

In Fig. 6.3, we plot the relative errors of RBCD, RCD, RK-A, GaussRK-A, and GRBK for a matrix equation with Type II (A = randn(100, 20), B = randn(20, 100)). For the GRBK method, we use the almost optimal block sizes τ_1 = τ_2 = 15. We can see that the RCD, RK-A, GaussRK-A, and GRBK methods are better than the RBCD method in terms of IT and CPU time. From Table 6.3, we see that the RCD, RK-A, GaussRK-A, and GRBK methods outperform the RBCD method in terms of both iteration counts and CPU times, with significant speed-ups.

Example 6.2. Real-world sparse data. The entries of A and B are selected from real-world sparse data [18]. Table 6.4 lists the features of these sparse matrices, where rank(A) denotes the rank of the matrix A and the density is defined as

density = (number of non-zero elements of an m-by-n matrix)/(mn),

which indicates the sparsity of the corresponding matrix. Numerical results are shown in Fig. 6.4 and Table 6.5. In Fig. 6.4, we plot the relative errors of GRBK, RCD, RK-A, and GaussRK-A for the real-world matrix equations. For the GRBK method, we use the block sizes τ_1 = τ_2 = 15. In Table 6.5, we report the average IT and CPU of GRK, GaussGRK, GRBK, RCD, RK-A, and GaussRK-A for solving real-world matrix equations. From them, we observe again that the curves of the GRBK method decrease much more quickly than those of the RCD, RK-A, and GaussRK-A methods with respect to the iteration steps and CPU times. However, as the matrix size increases, the CPU time of the GRBK method grows, because it takes some time to compute the pseudoinverse; in that regime, the RK-A and GaussRK-A methods are more prominent in terms of IT and CPU times.

Example 6.3. CT data. The test problems of two-dimensional tomography are implemented in the functions paralleltomo(N, θ, q) and seismictomo(N, s, p) in the MATLAB package AIR Tools [19], where N represents that a cross-section of the subsurface is divided into N equally spaced intervals in both dimensions, creating N² cells, and s, p, θ, and q denote the number of sources, the number of receivers, the angles of the parallel rays, and the number of parallel rays. We set N = 40, θ = 0 : 200, and q = 100 in the function paralleltomo(N, θ, q), which generates an exact solution x_* of size 1600 × 1 with X_* = reshape(x_*, 40, 40), and we let N = 30, s = 60, and p = 100 in the function seismictomo(N, s, p), which generates an exact solution x_* of size 900 × 1 with X_* = reshape(x_*, 30, 30).
For given p, q, the entries of A and B are generated from standard normal distributions, i.e., A = randn(p, m), B = randn(n, q), and C is obtained by C = AX_*B. All computations start from the initial matrix X_0 = O and run 4000 iterations on the paralleltomo problem and 5000 iterations on the seismictomo problem. In the following experiments, the structural similarity index (SSIM) between two images X and Y is used to evaluate the quality of the recovered images. SSIM is defined as

SSIM = ((2μ_Xμ_Y + C_1)(2δ_{XY} + C_2))/((μ²_X + μ²_Y + C_1)(δ²_X + δ²_Y + C_2)),

where μ_X, μ_Y and δ²_X, δ²_Y are the means and variances of the images X and Y, respectively, δ_{XY} is the covariance of the images X and Y, and C_1 and C_2 are brightness and contrast constants. The mean of an image represents its brightness, and the variance indicates its contrast. As a criterion for judging SSIM: SSIM is a number between 0 and 1, and the larger the SSIM value is, the smaller the difference between the two images is.
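A minimal sketch of this global SSIM formula (our illustration; the constants c1 and c2 below are illustrative placeholders, since the paper does not state its values):

import numpy as np

def ssim(x, y, c1=1e-4, c2=9e-4):
    # Global SSIM over the whole image (single window), following the formula
    # above: means give brightness, variances give contrast, and the
    # covariance measures structural agreement.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))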
Numerical results are shown in Fig. 6.5 and Fig. 6.6. Convergence conclusions similar to those of the previous two groups of experiments are verified again. The maximum number of iterations for all these methods is set to no more than 4000. From Fig. 6.5 and Fig. 6.6, we can see that the RCD, RK-A, GaussRK-A, and GRBK methods recovered from the sketch-and-project framework perform better than the RBCD method in terms of both image quality and CPU time. In Fig. 6.5, the SSIM value of the image recovered by both the GaussRK-A and GRBK methods is around 1; since the GRBK method needs to compute pseudoinverses, it requires more CPU time than the GaussRK-A method. In Fig. 6.6, we can see that all methods have almost recovered the image, except the RBCD method.

Conclusions

In this paper, we have proposed a sketch-and-project method for solving the matrix equation AXB = C. The convergence of the generalized iterative method is explored. Meanwhile, by varying its three parameters, we recover some well-known algorithms as special cases. Numerical experiments show that, among the methods based on vector-matrix products, Gaussian-type methods are competitive in terms of IT and CPU time. Our method allows a much wider selection of the three parameters, which leads to a series of new specific methods. Based on this sketch-and-project method, we will investigate new methods for solving nonlinear matrix equations in future work.

Appendix A. Proofs of Lemmas

Proof of Lemma 4.3

Proof. Since E[A²] = E[A^⊤A] and E[A^⊤]E[A] = (E[A])^⊤E[A], to obtain the conclusion we need to prove E[A^⊤A] ⪰ E[A^⊤]E[A], i.e., E[(A^⊤ − E[A^⊤])(A − E[A])] ⪰ 0. By definition, a matrix is positive semi-definite if c^⊤Ac ≥ 0 for every column vector c ∈ R^n. So we just need to prove that

E[(c^⊤A^⊤ − E[c^⊤A^⊤])(Ac − E[Ac])] ≥ 0.

Since A ∈ R^{n×n}, Ac ∈ R^n is a column vector. For convenience, let c^⊤A^⊤ = (y_1, y_2, ..., y_n) = Y^⊤. We have

E[(Y^⊤ − E[Y^⊤])(Y − E[Y])] = E[Y^⊤Y] − E[Y^⊤]E[Y] − E[Y^⊤]E[Y] + E[Y^⊤]E[Y] = E[Y^⊤Y] − E[Y^⊤]E[Y]
= E[y²_1 + y²_2 + ... + y²_n] − (E[y_1]² + E[y_2]² + ... + E[y_n]²) = Dy_1 + Dy_2 + ... + Dy_n ≥ 0,

where Dy_i is the variance of y_i for i = 1, 2, ..., n. Therefore, E[(A^⊤ − E[A^⊤])(A − E[A])] is positive semi-definite. □

Proof of Lemma 4.7

Proof. For any X_2 ∈ M_2, there exists Y_2 such that A^⊤AY_2BB^⊤ = X_2. Let Y_1 = AY_2B; then A^⊤AY_2BB^⊤ = A^⊤Y_1B^⊤ = X_2, which means X_2 ∈ M_1. Conversely, for any X_1 ∈ M_1, there exists Y_1 such that A^⊤Y_1B^⊤ = X_1, which means that the matrix equation A^⊤XB^⊤ = X_1 has a solution. Hence, we know that A^⊤(A^⊤)†X_1(B^⊤)†B^⊤ = X_1. From the properties of the pseudoinverse, we obtain

(A^⊤)† = (A†)^⊤ = (A†AA†)^⊤ = (AA†)^⊤(A†)^⊤ = AA†(A†)^⊤,
(B^⊤)† = (B†)^⊤ = (B†BB†)^⊤ = (B†)^⊤(B†B)^⊤ = (B†)^⊤B†B.

Thus X_1 can be written as

X_1 = A^⊤AA†(A†)^⊤X_1(B†)^⊤B†BB^⊤ = A^⊤AW_2BB^⊤, where W_2 = A†(A†)^⊤X_1(B†)^⊤B†.

Thus X_1 is also in the set M_2. Therefore, M_1 = M_2. □

Proof of Lemma 3.1

Proof. For any matrix M, the pseudoinverse satisfies the identity M†MM† = M†. Let M = S^⊤AG^{−1}A^⊤S; then, for Z_1' = G^{−1}A^⊤SM†S^⊤A, we get

Z_1'Z_1' = G^{−1}A^⊤SM†(S^⊤AG^{−1}A^⊤S)M†S^⊤A = G^{−1}A^⊤SM†S^⊤A = Z_1',

so Z_1' is idempotent; an analogous computation applies to Z_2, and hence Z_2 ⊗ Z_1' is a projection. □

Proof of Lemma 4.1

Proof. Let Z_1 = GZ_1'. Since E[Z_2 ⊗ Z_1] is invertible and G^{−1/2}Z_1G^{−1/2} is an idempotent matrix, the spectrum of (I ⊗ G^{−1/2})(Z_2 ⊗ Z_1)(I ⊗ G^{−1/2}) is contained in {0, 1}, so (I ⊗ G^{−1/2})E[Z_2 ⊗ Z_1](I ⊗ G^{−1/2}) is positive definite. With

ρ = 1 − λ_min(E[Z_2 ⊗ Z_1']) = 1 − λ_min(E[Z_2 ⊗ G^{−1/2}Z_1G^{−1/2}]),

it follows that ρ < 1. If B^⊤ ⊗ A were not of full column rank, there would be a nonzero x ∈ R^{nm}, x = vec(X), such that (B^⊤ ⊗ A)x = 0. We would then have Z_1XZ_2 = 0 and E[Z_2^⊤ ⊗ Z_1]vec(X) = 0, which contradicts the assumption that E[Z_2^⊤ ⊗ Z_1] is invertible. Analogously, (BP)^⊤ ⊗ (S^⊤A) also has full column rank. Finally, since B^⊤ ⊗ A has full column rank, X_* must be unique (recall that we assume throughout the paper that AXB = C is consistent). □

Proof of Lemma 5.3

Proof. Using the properties of the Kronecker product, and considering that the matrices are symmetric positive semi-definite, we have λ_min(A^⊤ ⊗ B) = λ_min(A)λ_min(B). To prove (5.11), we therefore only need to demonstrate that λ_min(A_2)λ_min(B_2) ≥ λ_min(A_1)λ_min(B_1). Since A_2 and A_2 − A_1 are symmetric positive semi-definite, by Lemma 5.2 we obtain

0 ≤ λ_min(A_2 − A_1) ≤ λ_min(A_2) + λ_max(−A_1).

From the fact that λ_max(−A_1) = −λ_min(A_1), it follows that 0 ≤ λ_min(A_2 − A_1) ≤ λ_min(A_2) − λ_min(A_1). Therefore, we get

λ_min(A_2) ≥ λ_min(A_1) ≥ 0. (A.1)

Similarly, for B_1, B_2, we have

λ_min(B_2) ≥ λ_min(B_1) ≥ 0. (A.2)

Combining (A.1) and (A.2), we obtain λ_min(A_2)λ_min(B_2) ≥ λ_min(A_1)λ_min(B_1), i.e., λ_min(A_2^⊤ ⊗ B_2) ≥ λ_min(A_1^⊤ ⊗ B_1). The proof is completed. □
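As a quick numerical illustration of Lemma 4.3 above (our sketch, not part of the paper): for symmetric random draws, the sample analogue of E[A²] − (E[A])^⊤E[A] equals the average of the squared centered draws, and is therefore positive semi-definite up to floating-point error.

import numpy as np

rng = np.random.default_rng(3)
samples = []
for _ in range(2000):
    M = rng.standard_normal((4, 4))
    samples.append(M + M.T)                       # a symmetric random draw
samples = np.array(samples)
EA = samples.mean(axis=0)
EA2 = np.mean([S @ S for S in samples], axis=0)
# Smallest eigenvalue of E[A^2] - E[A]^T E[A]; nonnegative up to rounding.
print(np.linalg.eigvalsh(EA2 - EA.T @ EA).min())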
Table 6.4 lists the features of these sparse matrices, in which rank(A) denote the rank of the matrix A, respectively, and the density is defined as density = the number of non-zero elements of an m-by-n matrix mn , which indicates the sparsity of the corresponding matrix. Numerical results are shown in Fig. 6.4 and ) A = divorce, B = ash219 ⊤ Fig. 6.4 : Relative errors of GRBK, RCD, RK-A, and GaussRK-A. Example 6.3. CT Data. The test problems of two-dimensional tomography are implemented in the function seismictomo (N, s, p) and the function paralleltomo (N, θ, q) in the MATLAB package AIR TOOLS of the exact figure and five methods for paralleltomo test problem with n = 40. Fig. 6 . 6 : 66Performance of the exact figure and five methods for seismictomo test problem with n = 30. ( A ⊤ ) † = (A † ) ⊤ = (A † AA † ) ⊤ = A † (AA † ) ⊤ = (AA † ) ⊤ (A † ) ⊤ = AA † (A † ) ⊤ , (B ⊤ ) † = (B † ) ⊤ = (B † BB † ) ⊤ = (B † B)B † ⊤ = (B † ) ⊤ (B † B) ⊤ = (B † ) ⊤ B † B. A 2 and A 2 − A 1 are symmetric positive definite matrices, by Lemma 5.2 we can obtain the following inequality0 ≤ λ min (A 2 − A 1 ) ≤ λ min (A 2 ) + λ max (−A 1 ) .From the fact that λ max (−A 1 ) = −λ min (A 1 ), hence it results in 0 ≤ λ min (A 2 − A 1 ) ≤ λ min (A 2 ) − λ min (A 1 ) . Therefore, we get λ min (A 2 ) ≥ λ min (A 1 ) ≥ 0. (A.1) Similarly, for B 1 , B 2 , there exists λ min (B 2 ) ≥ λ min (B 1 ) ≥ 0. (A.2) Table 1 . 11: Our main complexity results. Table 1 . 12: Summary of convergence guarantees of various sampling strategies for the sketch-and-project method.Method Sampling Strategy Convergence Rate Bound Rate Bound Derived From GRK p i = ∥Ai,:∥ 2 2 ∥A∥ 2 F , p j = ∥B:,j∥ 2 2 ∥B∥ 2 F 1 − λ min( A ⊤ A)λ min( BB ⊤ ) ∥A∥ 2 F ∥B∥ 2 F Theorem 4.1 or Theorem 4.7 RK-A p i = ∥Ai,:∥ 2 2 ∥A∥ 2 F 1 − λ min( AA ⊤ ) ∥A∥ 2 F Theorem 4.1 or Theorem 4.7 RCD p j = ∥A:,j∥ 2 2 ∥A∥ 2 F 1 − λ min( AA ⊤ ) ∥A∥ 2 F Theorem 4.1 or Theorem 4.7 GaussGRK Gaussian sampling 1 − 4 π 2 T r(Ω 1 )T r(Ω 2 ) · λ min Ω ⊤ 2 Ω 1 Theorem 5.1 Table 6 . 61: The average IT and CPU of GRK, GaussGRK, GRBK, RCD, RK-A, and GaussRK-A with Type I. m p τ 1 n q τ 2 GRK GaussGRK GRBK RCD RK-A GaussRK-A 50 20 10 20 50 10 IT 8596 8013 112 267 287 291 CPU 0.3760 0.0609 0.0175 0.0122 0.0089 0.003 100 40 20 40 100 20 IT 38985 37349 94 570 663 643 CPU 1.8965 0.5035 0.0294 0.0484 0.0335 0.0172 100 40 20 100 500 100 IT − − 26 606 667 648 CPU − − 0.0573 0.2978 0.1685 0.1465 500 100 50 100 500 50 IT − − 78 1513 1561 1650 CPU − − 0.1442 4.7249 0.6455 0.9307 1000 200 200 100 500 50 IT − − 24 3122 3242 3322 CPU − − 0.2241 19.1592 2.5464 4.2424 Table 6 . 2 : 62The average IT and CPU of GRK, GaussGRK, GRBK, RCD, RK-A, and GaussRK-A with Type II.m p n q GRK GaussGRK GRBK RCD RK-A GaussRK-A 30 10 10 30 IT 50057 11656 1 770 530 256 CPU 2.0871 0.0708 0.0003 0.0213 0.0139 0.0017 50 20 20 50 IT − 74033 88 2694 1541 825 CPU − 0.6492 0.0157 0.1233 0.0475 0.0075 100 40 40 100 IT − − 588 7442 4092 2160 CPU − − 0.0974 0.6245 0.2027 0.0556 100 40 100 500 IT − − 966 5494 3082 1697 CPU − − 0.2509 2.3004 0.5499 0.2739 500 100 100 500 IT − − 1775 8749 5259 2977 CPU − − 0.5908 27.2126 2.0268 1.6648 1000 200 100 500 IT − − 3486 18890 10251 5799 CPU − − 1.4599 116.6466 8.3202 7.4095 1000 200 200 1000 IT − − 7250 − 9998 5896 CPU − − 4.3672 − 31.4897 23.0945 Table 6 . 
Table 6.3: The average IT, CPU, and speed-up of RBCD, RCD, RK-A, GaussRK-A, and GRBK with Type II.

p × m              | 50 × 20 | 100 × 20 | 200 × 20 | 500 × 20 | 1000 × 20
RBCD      IT       | 4204    | 1025     | 376      | 302      | 2.0041
          CPU      | 0.1208  | 0.0454   | 0.0342   | 0.4958   | 0.0655
RCD       IT       | 801     | 353      | 255      | 182      | 169
          CPU      | 0.0338  | 0.0226   | 0.0332   | 0.3916   | 1.9815
          speed-up | 3.5740  | 2.0088   | 1.0301   | 1.2661   | 1.0114
RK-A      IT       | 640     | 375      | 280      | 266      | 269
          CPU      | 0.0181  | 0.0125   | 0.0181   | 0.0622   | 0.7622
          speed-up | 6.674   | 3.6320   | 1.8895   | 7.9711   | 2.6294
GaussRK-A IT       | 590     | 361      | 284      | 258      | 256
          CPU      | 0.0053  | 0.0053   | 0.0117   | 0.0646   | 0.9349
          speed-up | 22.7925 | 8.5660   | 2.9231   | 7.6749   | 2.1437
GRBK      IT       | 70      | 31       | 23       | 19       | 19
          CPU      | 0.0135  | 0.0060   | 0.0046   | 0.0041   | 0.0061
          speed-up | 8.9481  | 7.5667   | 7.4348   | 120.9268 | 328.541

Table 6.4: The detailed features of sparse matrices from [18].

name        | size      | rank | density
ash219      | 219 × 85  | 85   | 2.3529%
ash958      | 958 × 292 | 292  | 0.68493%
divorce     | 50 × 9    | 9    | 50%
Worldcities | 315 × 100 | 100  | 53.625%

Table 6.5: The average IT and CPU of GRK, GaussGRK, GRBK, RCD, RK-A, and GaussRK-A.

A       B              τ_1 τ_2 |     | GRK GaussGRK GRBK    RCD     RK-A   GaussRK-A
ash219  divorce^⊤      15  15  | IT  | −   −        58      1334    1360   1428
                               | CPU | −   −        0.0152  0.1368  0.0713 0.0442
divorce ash219^⊤       15  15  | IT  | −   −        67      2559    506    610
                               | CPU | −   −        0.0167  0.3823  0.0873 0.0301
divorce ash219         15  15  | IT  | −   −        1632    3024    1204   910
                               | CPU | −   −        0.4559  0.3216  0.0684 0.0291
ash958  ash219^⊤       15  14  | IT  | −   −        5038    6105    5782   5265
                               | CPU | −   −        3.2982  20.8641 2.6898 4.2691
ash219  ash958^⊤       15  15  | IT  | −   −        4976    1792    1709   1718
                               | CPU | −   −        3.0925  9.0248  3.7558 3.8908
ash958  Worldcities^⊤  15  15  | IT  | −   −        34572   5978    5756   5353
                               | CPU | −   −        23.6345 26.7466 4.035  6.1356

References

[1] H. Lin, T. Maekawa, and C. Deng. Survey on geometric iterative methods and their applications. Computer Aided Design, 95:40-51, 2018.
[2] P. A. Regalia and S. K. Mitra. Kronecker products, unitary matrices and signal processing applications. SIAM Review, 31:586-613, 1989.
[3] D. Hua. On the symmetric solutions of linear matrix equations. Linear Algebra and its Applications, 131:1-7, 1990.
[4] H. Zha. Comments on Large Least Squares Problems Involving Kronecker Products, volume 16. Society for Industrial and Applied Mathematics, USA, 1995.
[5] F. Ding and T. Chen. Iterative least-squares solutions of coupled Sylvester matrix equations. Systems and Control Letters, 54:95-107, 2005.
[6] X. Wang, Y. Li, and L. Dai. On Hermitian and skew-Hermitian splitting iteration methods for the linear matrix equation AXB=C. Computers and Mathematics with Applications, 65:657-664, 2013.
[7] Z. Tian, M. Tian, Z. Liu, and T. Xu. The Jacobi and Gauss-Seidel-type iteration methods for the matrix equation AXB=C. Applied Mathematics and Computation, 292:63-75, 2017.
[8] D. S. Cvetković-Ilić. Re-nnd solutions of the matrix equation AXB=C. Journal of the Australian Mathematical Society, 84, 2008.
[9] Z. Peng. A matrix LSQR iterative method to solve matrix equation AXB=C. International Journal of Computer Mathematics, 87:1820-1830, 2010.
[10] Y. Niu and B. Zheng. On global randomized block Kaczmarz algorithm for solving large-scale matrix equations. arXiv preprint arXiv:2204.13920, 2022.
[11] N. Wu, C. Liu, and Q. Zuo. On the Kaczmarz methods based on relaxed greedy selection for solving matrix equation AXB=C. Journal of Computational and Applied Mathematics, 413, 2022.
[12] K. Du, C. Ruan, and X. Sun. On the convergence of a randomized block coordinate descent algorithm for a matrix least squares problem. Applied Mathematics Letters, 124, 2022.
[13] S. G. Shafiei and M. Hajarian. Developing Kaczmarz method for solving Sylvester matrix equations. Journal of the Franklin Institute, 359:8991-9005, 2022.
[14] R. M. Gower and P. Richtárik. Randomized iterative methods for linear systems. SIAM Journal on Matrix Analysis and Applications, 36:1660-1690, 2015.
[15] V. Simoncini. Computational methods for linear matrix equations. SIAM Review, 58:377-441, 2016.
[16] A. Graham. Kronecker Products and Matrix Calculus: with Applications. Ellis Horwood Ltd, 1981.
[17] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, 4th edition, 2013.
[18] T. A. Davis and Y. Hu. The University of Florida Sparse Matrix Collection. ACM Transactions on Mathematical Software, 38, 2011.
[19] P. C. Hansen and J. S. Jørgensen. AIR Tools II: algebraic iterative reconstruction methods, improved implementation. Numerical Algorithms, 79:107-137, 2018.
[ "BUILDING HEIGHT PREDICTION WITH INSTANCE SEGMENTATION", "BUILDING HEIGHT PREDICTION WITH INSTANCE SEGMENTATION" ]
[ "Furkan Burak Bagci \nHuawei Turkey R&D Center Istanbul\nTurkey\n", "Ahmet Alp Kindiroglu \nHuawei Turkey R&D Center Istanbul\nTurkey\n", "Metehan Yalçin \nHuawei Turkey R&D Center Istanbul\nTurkey\n", "Ufuk Uyan \nHuawei Turkey R&D Center Istanbul\nTurkey\n", "Mahiye Uluyagmur Öztürk \nHuawei Turkey R&D Center Istanbul\nTurkey\n" ]
[ "Huawei Turkey R&D Center Istanbul\nTurkey", "Huawei Turkey R&D Center Istanbul\nTurkey", "Huawei Turkey R&D Center Istanbul\nTurkey", "Huawei Turkey R&D Center Istanbul\nTurkey", "Huawei Turkey R&D Center Istanbul\nTurkey" ]
[]
Extracting building heights from satellite images is an active research area used in many fields such as telecommunications and city planning. Many studies utilize DSMs (Digital Surface Models), generated with lidar or stereo images, for this purpose. Predicting the height of buildings using only RGB images is challenging due to the insufficient amount of data, low data quality, variations in building types, different angles of light and shadow, and so on. In this study, we present an instance segmentation-based building height extraction method that predicts building masks, together with their respective heights, from a single RGB satellite image. We used satellite images with building height annotations of certain cities, along with an open-source satellite dataset, in a transfer learning approach. On our test set, we reached a bounding box mAP of 59, a mask mAP of 52.6, and an average accuracy of 70% across the height classes.
10.48550/arxiv.2212.09277
[ "https://export.arxiv.org/pdf/2212.09277v1.pdf" ]
254,853,976
2212.09277
f093ea303e6558b9afd417bb4a76f75f30c8fd67
BUILDING HEIGHT PREDICTION WITH INSTANCE SEGMENTATION

Furkan Burak Bagci, Ahmet Alp Kindiroglu, Metehan Yalçin, Ufuk Uyan, Mahiye Uluyagmur Öztürk
Huawei Turkey R&D Center, Istanbul, Turkey

Keywords: Instance Segmentation · Satellite Images · Building Height Prediction

Extracting building heights from satellite images is an active research area used in many fields such as telecommunications and city planning. Many studies utilize DSMs (Digital Surface Models), generated with lidar or stereo images, for this purpose. Predicting the height of buildings using only RGB images is challenging due to the insufficient amount of data, low data quality, variations in building types, different angles of light and shadow, and so on. In this study, we present an instance segmentation-based building height extraction method that predicts building masks, together with their respective heights, from a single RGB satellite image. We used satellite images with building height annotations of certain cities, along with an open-source satellite dataset, in a transfer learning approach. On our test set, we reached a bounding box mAP of 59, a mask mAP of 52.6, and an average accuracy of 70% across the height classes.

Introduction

Problem Definition

Landcover mapping is an important subject used in many fields such as telecommunications, city planning, and transportation. This problem used to be costly, as people had to traverse the target area with measuring instruments. In addition, since some objects change over time (for example, many buildings are demolished and new ones are built), the mapping becomes outdated after a certain period. With the development of remote sensing technologies and deep learning-based computer vision methods, object detection and image segmentation on satellite images have become more accurate, and the landcover mapping problem can therefore be solved quickly and cheaply.

One particular field of landcover mapping is the calculation of height data from remote images. Due to the lack of publicly available height information for buildings, there exist only a handful of studies on problems such as building height prediction. Most height prediction studies use stereo images or DSMs, which are important for depth extraction. It is valuable to extract height information from monocular images when no such other data sources are available. This is a challenging task, as depth information depends on the angle of the satellite and of the light, shadows that change with the time of day, occlusions of smaller objects by larger ones, etc. In this study, our hypothesis is that, given a sufficiently large building height image dataset, a model can learn to predict building heights despite these problems.

Instance Segmentation

Many studies have been carried out on the extraction of buildings from satellite images with semantic segmentation methods. Semantic segmentation tools can predict object masks at the pixel level; however, it is not possible to distinguish between objects of the same class. Some post-processing operations (thresholding, connected component analysis, etc.) are required so that each building can be individually identified and modeled in 3D.
However, connections between the semantic masks of neighboring buildings may cause us to treat them as a single building, which can result in incorrect 3D modeling. Instance segmentation is a challenging computer vision problem that aims to predict the bounding box and mask for each object in the input image at the instance level. Instance segmentation methods consist of components from object detection and semantic segmentation methods, and they provide the separation of individual objects within the same class.

YOLACT [1], a real-time, fully convolutional instance segmentation method, runs at 33.5 fps on a Titan Xp and reaches 29.8 mAP on the MS-COCO dataset. This method treats the instance segmentation problem as two main subtasks: generating a set of prototype masks and estimating the coefficients for each object mask. Final mask estimates are determined by a linear combination of the prototype masks with the coefficients. Mask2Former [2], a transformer-based approach, has achieved new state-of-the-art results on the MS-COCO dataset in panoptic, instance, and semantic segmentation tasks. Its main component is masked attention, which extracts local features by constraining cross-attention to the predicted mask regions. Mask R-CNN [3], which is based on the two-stage Faster R-CNN object detection method, is proposed as a general framework and can be used for instance segmentation and pose estimation. It is a powerful method, and it is easy to adapt to different domains. The main difference between Mask R-CNN and Faster R-CNN is the segmentation branch, added alongside the classification and bounding box regression branches, which runs in parallel on each region proposal.

Literature Review

In our literature review, we did not find any study using instance segmentation for joint building and height prediction. However, some studies detect buildings without height prediction. Chitturi et al. [4] proposed a single-class building instance segmentation model for satellite images with Mask R-CNN. In their study, some TTA (test time augmentation) techniques (horizontal-vertical flips, brightness, contrast) were applied, and the effect of each augmentation on model performance was reported. Fritz [5] compared various semantic (U-Net and FCN) and instance segmentation (Mask R-CNN) architectures for building prediction. In [6], the authors proposed a novel segmentation framework based on Mask R-CNN and histogram thresholding for classifying new and old buildings. Zhou et al. [7] trained Mask R-CNN and FCN (fully convolutional network) models on very high resolution aerial images and compared the results on building segmentation tasks; they reported that Mask R-CNN detects mask edges better and works with 15% higher accuracy than FCN in detecting small objects. Since many studies suggest Mask R-CNN-based methods for the building instance segmentation problem, we base our method on it.

In another group of studies, the authors predict heights from pixel-level height annotations with DSMs. Karatsiolis [8] and Liu [9] both proposed UNET-based custom building height segmentation models; they estimate the height at the pixel level from the RGB image in a regression-like way, with RMSE and MAE losses.

Method

Preprocessing

There are many problems in the raw dataset, such as overlapping annotations, incorrect or missing annotations, and annotation shifts. To overcome these problems, we performed several data preprocessing operations.

Overlapping Annotations

This problem occurs when, for a tall building such as a skyscraper or plaza, a small high-height annotation covering the building's peak overlaps a low-height annotation covering the rest of the building. Training our network with the same part of a building labeled as both high and low decreases the model accuracy. For this reason, we developed an algorithm that merges multiple overlapping annotations into the tallest one.

The IOU (intersection over union) is one of the most widely used metrics for measuring how well model predictions fit ground truth masks in segmentation and bounding boxes in object detection; it is calculated as the ratio of the overlapping area of two objects to their total area. However, IOU does not work well for deciding whether two annotations overlap, because its value is too small for annotations of very different sizes, and the algorithm fails for most of the overlapping buildings. For this reason, we used the intersection over each annotation's own area to decide whether two annotations overlap: the metric computes the ratio of the intersection to each annotation area and takes the maximum. In this way, when a large part of either mask intersects the other, those annotations are merged.
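A minimal sketch of this overlap test (our illustration; the paper does not state the threshold value, so thresh below is an assumption):

import numpy as np

def should_merge(mask_a, mask_b, thresh=0.5):
    # Instead of IOU, take the intersection over EACH mask's own area and
    # compare the maximum of the two ratios against a threshold; this fires
    # even when one annotation is much smaller than the other.
    inter = np.logical_and(mask_a, mask_b).sum()
    if inter == 0:
        return False
    ratio = max(inter / mask_a.sum(), inter / mask_b.sum())
    return ratio >= thresh

When the test fires, the overlapping annotations are merged into the one with the largest height value, matching the merge-with-the-tallest rule described above.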
Misaligned and Partial Annotations

Another problem in the dataset is incorrect and shifted annotations. Considering that there are tens of thousands of images in the dataset, manually filtering the faulty samples is not a realistic solution; therefore, erroneous samples were filtered by an automatic method. In our experiments, the Unet++ semantic segmentation model proved robust against such faulty examples. For this reason, we first trained a single-class Unet++ building segmentation model and then ran inference with it on our whole dataset. After that, we calculated the IOU between each predicted building and each annotation to determine missing predictions and missing annotations. If the ratio of missing predictions or missing annotations in an image was above 50%, we discarded that image.

Over-Under Detections

In our dataset, some small buildings are annotated as a group; if these buildings are detected one by one, the detections are considered incorrect. Similarly, for some groups of buildings, our method may predict a single building. Counting only one-to-one matches would cause us to ignore some model predictions that are acceptable for the business use case. For this reason, over-detection and under-detection cases need to be considered in the evaluation. Over-detection refers to situations where there are multiple predictions for one object, while under-detection occurs when there is one prediction for multiple objects, as in Figure 1. Ozdemir et al. [10] proposed an approach that accounts for over-detection and under-detection in object detection performance measurement; we used this approach when calculating the confusion matrix. First, we identified one-to-one matches. Then we handled the over-detection and under-detection cases. In the under-detection calculation, for a one-to-one mismatch, we collected all annotations of the same class with an IOU value greater than 0, formed the new object obtained by combining these annotations, and finally computed the IOU between this combined object and our prediction. If the IOU value is higher than the threshold, that prediction is also considered correct. A similar scenario was applied for over-detection cases.
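A minimal sketch of the under-detection check just described (our illustration; the helper iou and the 0.5 threshold are illustrative assumptions):

import numpy as np

def iou(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def under_detection_match(pred, gt_masks, thresh=0.5):
    # For a prediction without a one-to-one match, union all same-class
    # ground-truth masks that touch it (IOU > 0) and accept the prediction
    # if the IOU with the combined object clears the threshold.
    touching = [g for g in gt_masks if iou(pred, g) > 0]
    if not touching:
        return False
    combined = np.logical_or.reduce(touching)
    return iou(pred, combined) >= thresh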
Training

In the training phase, we used the open-source mmdetection library [11], which includes PyTorch implementations of many object detection and instance segmentation methods. To apply the transfer learning approach, we first trained a single-class building instance segmentation model on an open-source building segmentation dataset [12] proposed by Miyazaki. We then fine-tuned those weights on our 3-class dataset by initializing our model with them. We used the ResNet-50 backbone for our initial experiments because of its speed. After performing hyperparameter optimization, we ran our experiments with the ResNeXt-101 backbone, which outperformed ResNet-50.

Transfer Learning

Since different geographies contain different types of objects, model generalization is a problem. Our dataset consists of satellite images from certain regions, with buildings and their heights. An instance segmentation model trained only on our dataset performs well in a limited region but generalizes poorly to other regions. To overcome this issue, it is important to exploit building features from images taken under different light conditions and covering different building types across wide geographies. The recently published dataset [12] was used for transfer training because of its good properties: the quality of the building labels, the separation of the labels of different buildings from each other, and the geographical variety of its images.

The images and annotations in the open-source dataset were first divided into 512x512 parts. The original building annotations, generated for semantic segmentation, were then converted for instance segmentation using adaptive thresholding and connected component analysis. In the first step, the Mask R-CNN model was trained with the open-source single-class building dataset, using the MS-COCO weights as a starting point for the encoder. In the pretraining phase, only the first layer of the ResNeXt encoder was frozen and all other layers were trained. After the learning rate and other hyperparameters were optimized, we fine-tuned this model on our 3-class building height dataset, initializing it with the weights obtained in pretraining. In the fine-tuning phase, the encoder was frozen and the other layers of the network were trained. For both pretraining and fine-tuning, we increased the number of anchor boxes and added smaller anchor boxes, because the ratio of building annotations to the image is very small in the satellite dataset compared to the objects in the MS-COCO dataset: we changed the default scales parameter of the region proposal network from [8] to [2, 4, 8, 16, 32].
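In mmdetection, this corresponds to overriding the RPN anchor generator in the model config. A hedged sketch is shown below: the scales value is the change we describe, while the other keys follow mmdetection's standard Mask R-CNN config layout and should be read as illustrative defaults rather than our full training configuration.

# Partial mmdetection config override (sketch): smaller anchor scales for
# small buildings; default scales is [8].
model = dict(
    rpn_head=dict(
        anchor_generator=dict(
            type='AnchorGenerator',
            scales=[2, 4, 8, 16, 32],
            ratios=[0.5, 1.0, 2.0],
            strides=[4, 8, 16, 32, 64])))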
Results

In this section, we first describe the datasets that we used and then provide the experimental results.

Data Set

In our experiments, we make use of two different datasets to train our model. The first one, on which we pretrain our model, is the public dataset proposed by Miyazaki [12]. The second dataset, which we use for fine-tuning, contains building masks and respective heights for certain regions.

Public Data Set

The public aerial dataset [12] has 18127 images of 1024x1024 resolution from 7 different countries, with pixel-level building annotations. We used images from 4 of these countries (Japan, Thailand, Kenya, and Mozambique). We ran our experiments on about 20000 images generated as 512x512 slices of the original full-size images.

Height Data Set

Our private dataset consists of 744 satellite images of 5000x6000 resolution, with segmentation masks for each building and a height value for each building, belonging to 5 different cities. To reduce the computational complexity and increase the training speed and accuracy, we prepared 512x512 grid-shaped slices and performed training and testing on these images. Our dataset has 3 classes: up to 15 meters, between 15 and 40 meters, and over 40 meters. We selected these labels after a histogram analysis of our dataset, considering the balance between the classes. As we were looking for a solution to the building height prediction problem with instance segmentation, we converted our raw input data into the COCO annotation format, which is one of the well-known formats.
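A minimal sketch of the 512x512 grid slicing (our illustration; how border tiles were handled is not stated in the paper, so this version simply drops incomplete tiles):

import numpy as np

def tile_image(img, tile=512):
    # Split a large satellite image into a grid of tile x tile patches.
    h, w = img.shape[:2]
    return [img[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]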
arXiv preprint arXiv:2112.10764, 2021.
Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961-2969, 2017.
Gayatri Chitturi. Building detection in deformed satellite images using Mask R-CNN, 2020.
Karin Fritz. Instance segmentation of buildings in satellite images, 2020.
Ying Li, Weipan Xu, Haohui Chen, Junhao Jiang, and Xun Li. A novel framework based on Mask R-CNN and histogram thresholding for scalable segmentation of new and old rural buildings. Remote Sensing, 13(6):1070, 2021.
K. Zhou, Y. Chen, I. Smal, and R. Lindenbergh. Building segmentation from airborne VHR images using Mask R-CNN. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 42:155-161, 2019.
Savvas Karatsiolis, Andreas Kamilaris, and Ian Cole. Img2nDSM: Height estimation from single airborne RGB images with deep learning. Remote Sensing, 13(12):2417, 2021.
Chao-Jung Liu, Vladimir A. Krylov, Paul Kane, Geraldine Kavanagh, and Rozenn Dahyot. IM2ELEVATION: Building height estimation from single-view aerial imagery. Remote Sensing, 12(17):2719, 2020.
Bahadır Özdemir, Selim Aksoy, Sandra Eckert, Martino Pesaresi, and Daniele Ehrlich. Performance measures for object detection evaluation. Pattern Recognition Letters, 31(10):1128-1137, 2010.
Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli Ouyang, Chen Change Loy, and Dahua Lin. MMDetection: Open MMLab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155, 2019.
Hiroyuki Miyazaki. A dataset for detecting buildings, containers, and cranes in satellite images, 2022.
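To complement the evaluation protocol used in this paper (matching predictions to ground-truth buildings at a fixed IoU threshold and aggregating the matches into the per-class confusion matrix of Table 1), the following is a minimal sketch of one plausible implementation. It assumes greedy one-to-one matching on axis-aligned boxes at IoU 0.5; the paper's actual evaluation code may differ.

```python
import numpy as np

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def confusion_update(gt_boxes, gt_labels, pred_boxes, pred_labels,
                     confusion, n_classes=3, thr=0.5):
    """Greedily match predictions to ground truth at IoU >= thr.

    confusion has shape (n_classes + 1, n_classes + 1); the extra row/column
    plays the role of the 'Background' entries in Table 1 (missed detections
    and false positives, respectively).
    """
    used = set()
    for g_box, g_lab in zip(gt_boxes, gt_labels):
        best_iou, best_j = thr, None
        for j, p_box in enumerate(pred_boxes):
            if j in used:
                continue
            v = iou(g_box, p_box)
            if v >= best_iou:
                best_iou, best_j = v, j
        if best_j is None:
            confusion[g_lab, n_classes] += 1          # missed building
        else:
            used.add(best_j)
            confusion[g_lab, pred_labels[best_j]] += 1
    for j, p_lab in enumerate(pred_labels):
        if j not in used:
            confusion[n_classes, p_lab] += 1          # false positive
    return confusion
```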
[]
[ "Conditions for asymptotic stability of first order scalar differential-difference equation with complex coefficients", "Conditions for asymptotic stability of first order scalar differential-difference equation with complex coefficients" ]
[ "Rafał Kapica ", "Radosław Zawiski " ]
[]
[]
We investigate a scalar characteristic exponential polynomial with complex coefficients associated with a first order scalar differential-difference equation. Our analysis provides necessary and sufficient conditions for the location of the roots in the open left complex half-plane, which guarantees asymptotic stability of the differential-difference equation. The conditions are expressed explicitly in terms of the complex coefficients of the characteristic exponential polynomial, which makes them easy to use in applications. We show examples, including ones for retarded PDEs in an abstract formulation.
null
[ "https://export.arxiv.org/pdf/2204.08729v2.pdf" ]
253,097,701
2204.08729
bb7a85996c317899648c670784bfd78ee79dc4b3
Conditions for asymptotic stability of first order scalar differential-difference equation with complex coefficients. Rafał Kapica, Radosław Zawiski. Keywords: first order differential-difference equation with complex coefficients; stability of differential-difference equation; characteristic exponential polynomial of differential-difference equation; retarded differential-difference equation (DDE). 2020 Subject Classification: 30C15, 34K06, 34K20, 34K41. We investigate a scalar characteristic exponential polynomial with complex coefficients associated with a first order scalar differential-difference equation. Our analysis provides necessary and sufficient conditions for the location of the roots in the open left complex half-plane, which guarantees asymptotic stability of the differential-difference equation. The conditions are expressed explicitly in terms of the complex coefficients of the characteristic exponential polynomial, which makes them easy to use in applications. We show examples, including ones for retarded PDEs in an abstract formulation.
Introduction. In this article we study the asymptotic stability of a scalar linear differential-difference equation (DDE)
$x'(t) = \lambda x(t) + \gamma x(t - \tau)$, $t \geq 0$, (1)
where $\lambda, \gamma \in \mathbb{C}$ and $0 < \tau$, through the analysis of the corresponding characteristic equation
$s - \lambda - \gamma e^{-s\tau} = 0$. (2)
This problem is frequently related to stability analysis of
$x'(t) = f(x(t), x(t - \tau))$, $t \geq 0$, (3)
where $f : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ is a smooth (nonlinear) function about $x_0 \in \mathbb{R}^n$, via its linearization given by
$x'(t) = A x(t) + B x(t - \tau)$, (4)
where $A = J_1 f(x_0, x_0)$ and $B = J_2 f(x_0, x_0)$ are partial Jacobian matrices of $f$ at $(x_0, x_0)$. To be more precise we use the following
Definition 1. An equilibrium solution $x^*(t) \equiv x_0 \in \mathbb{R}^n$ of (3) is exponentially stable if there exist $M, \omega, \delta > 0$ such that $\|x(t) - x_0\| \leq M e^{-\omega t}$ ($t \geq 0$) holds for every solution $x$ of (3) satisfying an initial condition $\|x(t) - x_0\| < \delta$ ($t \in [-\tau, 0]$), with the Euclidean norm $\|\cdot\|$.
By the principle of linearized stability [5] for the case at hand we have [10]
Fact 2. Let the linearization of (3) about an equilibrium solution $x^*(t) \equiv x_0$ be expressed by (4) and let the corresponding characteristic equation be given by
$\det(sI - A - B e^{-s\tau}) = 0$. (5)
Then the following statements hold: (i) $x^*$ is exponentially stable if $\operatorname{Re} s < 0$ for all characteristic roots $s$ of (5); (ii) $x^*$ is unstable if $\operatorname{Re} s > 0$ for some characteristic root $s$ of (5).
In a finite-dimensional setting and under appropriate, though restrictive, conditions, for example if $A$ and $B$ commute (see [9] or [14]), the problem of asymptotic stability of (3) is equivalent to finding conditions on the coefficients of (1) which guarantee that every root $s$ of (2) is such that $\operatorname{Re}(s) < 0$. In this setting the coefficients $\lambda$ and $\gamma$ in (2) are eigenvalues of $A$ and $B$, respectively. Equation (1) is encountered also in an infinite-dimensional setting. Consider (4) on a Hilbert space $X$, where $A$ is a diagonal generator of a strongly continuous semigroup and $B$ is a linear and bounded diagonal operator on $X$ (see Example 4 below, [7] or [13]). Then (1) describes the dynamics of (4) along a single coordinate that corresponds to a given eigenvalue $\lambda$ of $A$. This setting is not a mere generalisation of the previous one. In finite dimensions it is something rather special that linearization of (3) produces commuting $A$ and $B$.
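Since the analysis that follows revolves around the location of the roots of (2), a small numerical experiment can help build intuition. The following Python sketch (our own illustration, not part of the paper) approximates roots of $s - \lambda - \gamma e^{-s\tau} = 0$ by Newton's method started from a grid of points and reports the largest real part found; it is a heuristic check, not a proof of stability.

```python
import cmath

def char_roots(lam, gam, tau, re_range=(-10, 5), im_range=(-40, 40), step=2.0):
    """Heuristically locate roots of f(s) = s - lam - gam*exp(-s*tau).

    Newton iterations are started from a grid of points; converged roots are
    de-duplicated. Root sets of such transcendental equations are infinite,
    so only roots near the grid are found.
    """
    roots = []
    re = re_range[0]
    while re <= re_range[1]:
        im = im_range[0]
        while im <= im_range[1]:
            s = complex(re, im)
            for _ in range(60):                       # Newton iteration
                f = s - lam - gam * cmath.exp(-s * tau)
                df = 1 + tau * gam * cmath.exp(-s * tau)
                if abs(df) < 1e-14:
                    break
                s_new = s - f / df
                if abs(s_new - s) < 1e-12:
                    s = s_new
                    break
                s = s_new
            if abs(s - lam - gam * cmath.exp(-s * tau)) < 1e-8:
                if all(abs(s - r) > 1e-6 for r in roots):
                    roots.append(s)
            im += step
        re += step
    return sorted(roots, key=lambda r: -r.real)

# Illustration: lambda = 20i, gamma = 4, tau = 0.1 (cf. Example 1 below);
# the rightmost computed root should then have negative real part.
rts = char_roots(20j, 4.0, 0.1)
print(max(r.real for r in rts))
```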
In infinite dimensions surprisingly many dynamical system are represented by diagonal generators [15]. Our motivation to investigate (1) with complex coefficients thus stems from the fact that in both cases above these coefficients represent eigenvalues of respective operators. We also note that whenever mentioning stability we refer to a situation when all the roots of (2) have negative real parts. The literature contains two intertwined approaches to stability problem -one based on analysis of some form of (4) in time domain and one based on analysis of (5). In the latter approach the case with λ, γ ∈ R is well understood -see [6], where the author obtained necessary and sufficient conditions for stability of s − a − c e −s = 0 with a, c ∈ R. In a more general form A(s) + B(s) e −sτ = 0, with A(s) and B(s) real polynomials see [12] and references therein. For a thorough exposition of other methods of analysis of the real λ and γ case see [18] and references therein. The case λ, γ ∈ C is less analysed. In particular, some sufficiency results can be found in [1], where the author presents a numerical analysis of (1) and stipulates asymptotic stability for every τ > 0 if − Re λ > |γ|. The authors of [3] provide, based on algorithmic criteria, some sufficiency result for specific values of complex λ and γ, proving also the result in [1] for some cases. Authors of [17] built on [3] and provide additional sufficient conditions for stability. In [8] the author uses a continuous dependence of the roots of (1) on τ and manages to obtain stability conditions for some values of λ ∈ R and γ ∈ C. In [10] the author provides necessary and sufficient conditions for the zeros of (2) to be in the left complex half-plane. The argument there is based on analysis of the Lambert W function, what complicates applications of obtained conditions. In particular, the condition from [10, Theorem 1.2] uses a nested trigonometric functions of Im λ and Arg γ, what makes it difficult to visualise the region in coefficients-plane that ensures stability. To the best of authors' knowledge [11] is the first work providing necessary and sufficient conditions for stability of (2) for λ, γ ∈ C and τ = 1. Results of [11] are, however, based on specific analysis of roots of (2) which is uneasy to trace for different values of τ , even after change of parameters a → aτ and η → ητ (see (12) below). This may explain why, although it precedes many of the works mentioned above, [11] did not receive much recognition. Our approach here combines analysis of roots placement depending on τ , as shown in [12,Proposition 6.2.3], with arguments of algebraic nature in the complex plane. This allowed us to obtain necessary and sufficient conditions for stability of (2) based explicitly on a relation between λ, γ ∈ C and τ > 0. The conditions do not require to calculate any specific roots of a transcendental equation and allow to visualise how the "stability" region changes with parameters in the coefficients-plane. Thus we not only provide a different formulation of stability conditions but our results are based on a new, different proof. Preliminaries The following observation, which can be found e.g. in [3] or [11], is crucial to simplify the problem of analysis of (1). Lemma 3. Let a, b, c, d, τ ∈ R and let {s 0 } be the set of roots of s − (a + ib) − (c + id) e −sτ = 0(6) and {z 0 } be the set of roots of z − a − e −ibτ (c + id) e −zτ = 0.(7) Then Re(s 0 ) < 0 for all s 0 if and only if Re(z 0 ) < 0 for all z 0 . Proof. 
Let z 0 be a root of (7). Then s 0 = z 0 + ib is a root of (6). Conversly, let s 0 be a root of (6). Then z 0 = s 0 − ib is a root of (7). As Re(s 0 ) = Re(z 0 ) the result follows. Remark 4. It is worth to mention that a similar yet different simplification is possible. In [2] the author based his approach on a version of Lemma 3 where the non-delayed coefficient in (7) is complex and the one corresponding to the delay is real. For our approach, however, the current version of Lemma 3 is more convenient. We will also use the following result concerning parameter β ∈ R and real functions L, R : [0, ∞) → R, L(r) := r r 2 + 1 , R(r) := arctan(r) + β.(8) Lemma 5. Let β ∈ R and put A = {r ∈ [0, ∞) : L(r) ≤ R(r)}, where real functions L and R are given by (8). Then: (i) A = [0, ∞) if and only if β ≥ 0, (ii) A = [r 0 , ∞) with r 0 > 0 if and only if β ∈ − π 2 , 0 , wherein the correspondence (0, ∞) r 0 ←→ β ∈ − π 2 , 0 is one-to-one, (iii) set A is empty if and only if β ≤ − π 2 . Proof. Let us consider function ϕ : [0, ∞) → R given by ϕ = R − L. Clearly, we have ϕ (r) = 2r 2 (r 2 + 1) 2 > 0, ϕ(0) = β, ϕ(r) −→ β + π 2 as r → ∞. In particular ϕ is strictly increasing and ϕ([0, ∞)) = [β, β + π 2 ). If β ≥ 0, then ϕ(r) ≥ 0 for r ∈ [0, ∞), i.e. A = [0, ∞). On the other hand if L(0) ≤ R(0), then β ≥ 0. This gives assertion (i). Suppose β ∈ − π 2 , 0 . Hence ϕ(0) < 0 and ϕ(r) > 0 for large enough r > 0. Then there exists a unique r 0 > 0 such that ϕ(r 0 ) = 0. This shows that A = [r 0 , ∞). If now A = [r 0 , ∞) for some r 0 > 0, then β < 0 by (i). To finish the proof it is enough to notice that for β ≤ π 2 we have ϕ(r) < 0 for r ∈ [0, ∞), i.e. A = ∅. Corollary 6. Equation L(r) = R(r) has exactly one solution if and only if β ∈ (− π 2 , 0]. We also make use of the following half-planes C + :={s ∈ C : Re(s) > 0}, C − := {s ∈ C : Re(s) < 0}, Π + :={s ∈ C : Im(s) > 0}, Π − := {s ∈ C : Im(s) < 0}. Main results By Lemma 3 we restrict our attention to (7). Taking η = u+iv = e −ibτ (c+id) the conditions for stability of (2) are given on an (u, iv)-complex plane in terms of regions that depend on a and τ . Remark 7. We take the principal argument of λ to be Arg λ ∈ (−π, π]. Let D r ⊂ C be an open disc centred at 0 with radius r > 0. We shall require the following subset of the complex plane, depending on τ > 0 and a ∈ (−∞, 1 τ ], namely: • for a < 0: Λ τ,a := η ∈ C \ D |a| : Re η + a < 0, |η| < |η π |, |Arg η| > τ |η| 2 − a 2 + arctan − 1 a |η| 2 − a 2 ∪ D |a| ,(9) where η π is such that |η π | 2 − a 2 τ + arctan − 1 a |η π | 2 − a 2 = π; • for a = 0: Λ τ,a := η ∈ C \ {0} : Re η < 0, |η| < π 2τ , |Arg η| > τ |η| + π 2 ; (10) • for 0 < a ≤ 1 τ Λ τ,a := η ∈ C : Re η + a < 0, |η| < |η π |, |Arg η| > τ |η| 2 − a 2 + arctan − 1 a |η| 2 − a 2 + π ,(11) where η π is such that |η π | > a and |η π | 2 − a 2 τ + arctan − 1 a |η π | 2 − a 2 = 0. Figure 4 shows Λ τ,a for fixed τ and varying a. The zeros of (2) are in the left half plane C − , according to Lemma 3, if and only if the roots of (7) belong to C − . Thus for λ = a + ib, γ = c + id and η := e −ibτ γ we have the following Theorem 8. Let τ > 0, a ∈ R and η ∈ C. Then every solution of the equation s − a − η e −sτ = 0(12) belongs to C \ C + if and only if a ≤ 1 τ and η belongs to the closure of Λ τ,a given by (9), (10) or (11). Proof. 1. Denote the closure of Λ τ,a by Λ τ,a . For any τ > 0 and a ≤ 0 there is 0 ∈ Λ τ,a and taking η = 0 the statement of the proposition obviously holds true, while for 0 < a ≤ 1 τ we have 0 ∈ Λ τ,a . 
Thus for the remainder of the proof assume that $\eta \neq 0$. 2. It is known that for $\tau > 0$ equation (12) has infinitely many solutions. By Rouché's theorem (see e.g. [18, Prop. 1.14]) the solutions of (12) vary continuously with $\tau$, except at $\tau = 0$ where only one remains. Let $\eta = u + iv$ and let $s = x + iy$.
Figure 1: Outer boundaries of $\Lambda_{\tau,a}$, defined in (9) with $\eta = u + iv$, for $a = -1.5$ and different values of $\tau$: dotted for $\tau = 0.5$, dash-dotted for $\tau = 1$, dashed for $\tau = 2$. The solid line shows a circle with radius $|a| = 1.5$.
In the limit as $\tau \to 0$ in (12) we obtain $x = a + u$, $y = v$, and the solutions start in $\mathbb{C} \setminus \mathbb{C}^+$, i.e. with $x \leq 0$, if and only if $a + u \leq 0$. Let us establish when at least one of the solutions crosses the imaginary axis for the first time as $\tau$ increases from zero upwards. At the crossing of the imaginary axis there is $s = i\omega$ for some $\omega \in \mathbb{R}$. In view of (12) we can treat $s$ as an implicit function of $\tau$ and check the direction in which its zeros cross the imaginary axis by analysing $\operatorname{sgn} \operatorname{Re} \frac{ds}{d\tau}$ at $s = i\omega$. By calculating the implicit function derivative we have $\frac{ds}{d\tau} = -\frac{s^2 - as}{1 - a\tau + s\tau}$. As $\operatorname{sgn} \operatorname{Re} z = \operatorname{sgn} \operatorname{Re} z^{-1}$, for $s = i\omega$ we have $\operatorname{sgn} \operatorname{Re} \frac{ds}{d\tau} = \operatorname{sgn} \frac{1}{\omega^2 + a^2} > 0$ and the zeros cross from the left to the right half-plane. As the sign of the above does not depend on $\tau$, the direction of the crossing remains the same for every value of $\tau$. Thus with $\eta = u + iv$ a necessary condition for the solutions of (12) to be in $\mathbb{C} \setminus \mathbb{C}^+$ is $a + u \leq 0$. (13)
3. Consider again (12) with fixed $\tau > 0$ and $a \in \mathbb{R}$ and take such $\eta \in \mathbb{C}$ that (13) holds. Let us focus on the crossing point, i.e. let $s = i\omega$ for some $\omega \in \mathbb{R}$. Taking the complex conjugate of (12) at the crossing we obtain $-i\omega - a - \bar{\eta} e^{i\omega\tau} = 0$. (14) Using now (12) for $s = i\omega$ and (14) to eliminate the exponential part we have $\omega^2 = |\eta|^2 - a^2$. From here we see that for a given $a \in \mathbb{R}$ and every $\eta = u + iv$ satisfying both (13) and $|\eta| < |a|$, the crossing does not exist, regardless of $\tau$, and all the solutions of (12) are in $\mathbb{C} \setminus \mathbb{C}^+$.
4. Let us focus on the case when the first crossing happens. To that end consider (12), fix $a \in \mathbb{R}$ and take $\eta$ such that (13) and $|\eta| \geq |a|$ (15) hold. By point 2, as $\tau$ increases the roots of (12) move continuously to the right. Denote by $\tau_0$ the smallest $\tau$ for which the crossing happens. By the assumptions we know that such $\tau_0$ exists. By point 3 the crossing takes place at $s = \pm i\sqrt{|\eta|^2 - a^2}$. Putting $s = i\sqrt{|\eta|^2 - a^2}$ into (12) with $\tau = \tau_0$ gives $\eta = -a e^{i\sqrt{|\eta|^2 - a^2}\,\tau_0} + \sqrt{|\eta|^2 - a^2}\, e^{i(\frac{\pi}{2} + \sqrt{|\eta|^2 - a^2}\,\tau_0)} = e^{i\sqrt{|\eta|^2 - a^2}\,\tau_0}\big({-a} + i\sqrt{|\eta|^2 - a^2}\big)$, (16) while putting $s = -i\sqrt{|\eta|^2 - a^2}$ into (12) gives an equation corresponding to (16), namely $\eta = e^{-i\sqrt{|\eta|^2 - a^2}\,\tau_0}\big({-a} - i\sqrt{|\eta|^2 - a^2}\big)$. (17) Equations (16) and (17) show the relation between all coefficients (or parameters) of (12) in the boundary case of transition between asymptotic stability and instability. Thus we focus on the triple $(\tau_0, a, \eta)$ and how changes within it influence the stability of $s - a - \eta e^{-s\tau_0} = 0$. (18) By point 2, for $a$ and $\eta$ as in (16) and with every $\tau > \tau_0$ equation (18) is unstable, while for $\tau < \tau_0$ it is stable. And so we turn our attention to $\eta$.
5. Let $\eta$ be a solution of (16). Then $\bar{\eta}$ is a solution of (17) and these solutions are obviously symmetric about the real axis. It will be more convenient to use a different notation than the one in (16) or (17). Define $\gamma_+ : [|a|, \infty) \to \mathbb{C}$ and $\gamma_- : [|a|, \infty) \to \mathbb{C}$ as the right sides of (16) and (17), respectively, i.e.
γ + (w) := e i √ w 2 −a 2 τ0 −a + i w 2 − a 2 ,(19)γ − (w) := e −i √ w 2 −a 2 τ0 −a − i w 2 − a 2 .(20) Let Γ + := γ + ([|a|, ∞)) be the image of (19) and Γ − := γ − ([|a|, ∞)) be the image of (20). We easily see that γ + (w) = γ − (w) for every w ≥ |a| and so Γ + is symmetric to Γ − about the real axis. Up to this moment all considerations in points 2-5 were done regardless of the sign of parameter a. In the reminder of the proof, along with (13) and (15), we will consider additional assumptions on a, namely a < 0, a = 0, a ∈ (0, 1 τ0 ] and a > 1 τ0 . 6. Assume additionally that a < 0 and let a function describing a continuous argument increment of (19) be given by ∆γ + : [|a|, ∞) → [0, ∞), ∆γ + (w) = w 2 − a 2 τ 0 + arctan − 1 a w 2 − a 2 .(21) We easily see that it is a strictly increasing, non-negative function. We also define ∆γ − : [|a|, ∞) → (−∞, 0] and ∆γ − (w) = −∆γ + (w) for every w ∈ [|a|, ∞). Looking at (19) note that the first component has modulus 1 and introduces counterclockwise rotation, while the second component is always in the first quadrant, with a positive real part equal to −a, and its modulus is strictly increasing and tends to infinity as w → ∞. Thus Γ + is a curve that is a counter-clockwise outward spiral that begins in −a ∈ C. An exemplary pair of Γ + and Γ − curves is shown in Fig. 5. 7. Let a set {η (2k−1)π } k∈N be such that the argument increment along Γ + as w changes from |a| to |η (2k−1)π | is equal to (2k − 1)π, that is ∆γ + |η (2k−1)π | = (2k − 1)π.(22) Due to constraint (13) we take into account only these parts of Γ + (or Γ − ) that lie to the left of u = −a line, as depicted in Figs. 5 and 6. Let us now focus on the closure of the first part of Γ + that lies in Π + i.e. γ + ([ |a|, |η π | ]). By (21) and (22) for every w ∈ [| a|, |η π | ] we have ∆γ + (w) ∈ [0, π]. For the case of the part of Γ − equal to γ − ([ |a|, |η π | ]) the argument expression gives ∆γ − (w) = −∆γ + (w). Putting both cases together and returning to the notation of (16) and (17) ∈ (|a|, 10). The constraint related to a and expressed by (13) is marked with a dotted line. The crossings of the real negative semi-axis by Γ + (and Γ − ) are at η π and η 3π . The crossings of u = −a by Γ + , as w increases, are at η 1 , η 2 and η 3 . . Note that D η 0 is bounded due to (13). becomes |Arg η| = |η| 2 − a 2 τ 0 + arctan − 1 a |η| 2 − a 2 , |η| ≤ |η π |,(23) where η π is such that ∆γ + (|η π |) = |η π | 2 − a 2 τ 0 + arctan − 1 a |η π | 2 − a 2 = π. 8. The set of all η ∈ C that satisfy (23) is the boundary of the Λ τ0,a region -see Fig. 1 for its shape. To show that for every η inside this boundary the roots of (18) are in C \ C + consider the following. For every η in the half-plane {u + iv ∈ C : a + u < 0} simple geometric considerations show that there exists exactly one η 0 fulfilling (23) and such that Arg η = Arg η 0 . Conversely, let us fix η 0 fulfilling (23) and consider a function τ = τ (|η|) defined on a ray from the origin and passing through η 0 . More precisely, define D η0 := {η = u + iv ∈ C : |η| > |a| and a + u < 0 and Arg η = Arg η 0 } and let D t η0 := {t ≥ 0 : t = |η|, η ∈ D η0 }. Now reformulate the equality in (23) to express τ as a function τ : D t η0 → (0, ∞), τ (t) = arctan 1 a √ t 2 − a 2 + |Arg η 0 | √ t 2 − a 2 .(24) This is a well-defined positive continuous function. 
Indeed, for positivity note that for u ≤ 0 there is | Arg η 0 | ≥ π 2 , while for u ∈ (0, −a) consider the following trigonometric identity arctan 1 a t 2 − a 2 + |Arg η 0 | = arctan 1 a t 2 − a 2 + arctan v u = arctan u a √ t 2 − a 2 + |v| u − 1 a √ t 2 − a 2 |v| and the estimation u a √ t 2 − a 2 > − √ u 2 + v 2 − a 2 > −|v|. The derivative of (24) is given by dτ dt (t) = t t 2 − a 2 a t 2 − τ (t) .(25) As a < 0 we have dτ dt < 0 for every t ∈ D t η0 and τ is a decreasing function. Thus for every η ∈ D η0 such that |η| ≤ |η 0 | we have τ (|η|) ≥ τ (|η 0 |) = τ 0 , that is |Arg η| ≥ |η| 2 − a 2 τ 0 − arctan 1 a |η| 2 − a 2 , |η| ≤ |η π |.(26) As the above is true for every η 0 fulfilling (23), condition (26) is true for every η ∈ Λ τ0,a \ {η ∈ C : |η| ≤ |a|}. Stated otherwise, for a given η ∈ Λ τ0,a \ {η ∈ C : |η| ≤ |a|} the time τ for this η to be such that the first root of (18) reaches the imaginary axis is bigger than τ 0 . This also gives that τ > τ 0 implies Λ τ ,a ⊂ Λ τ0,a , as shown in Fig. 1. 9. Results of the previous point show that the only parts of Γ + and Γ − that we need to consider are the ones already discussed i.e. γ + ([|a|, |η π |]) ∪ γ − ([|a|, |η π |]). Indeed, let η k , k = 1, 2, . . . be consecutive points where Γ + crosses the constraint line u = −a, as depicted in Figs. 5 and 6. Then for every η + ∈ γ + (|η π |, |η 1 |) ∪ γ + ([|η 2k |, |η (2k+1) |]), k ∈ N there exists η 0 ∈ γ + ([|a|, |η π |]) ∪ γ − ([|a|, |η π |]) such that Arg η 0 = Arg η + and |η 0 | < |η + |. The result of point 8 now gives a contradiction as τ 0 cannot be the smallest delay for which the first crossing happens. In fact, although (16) still describes (18) with a root corresponding to η + at the imaginary axis, at least one root of (18) -the one corresponding to η 0 -is already in C + . The same argument holds for Γ − . 10. It is easy to see that for |η| = |a| estimation (26) is true and thus the closed disc {η ∈ C : |η| ≤ |a|} ⊂ Λ τ0,a . Taking into account that for the interior of this disc the roots of (18) are in C \ C + (see point 3), we reach the necessity of the condition η ∈ Λ τ0,a for a < 0. 11. Let now a = 0 and let τ 0 > 0 be as before (considerations in points 1 and 2 remain the same). The crossing takes place at s = ±i|η|. Equation |Arg η| = |η|τ 0 + π 2(27) comes now directly from (16). The analysis of points 5-9 simplifies greatly resulting in a necessity condition of the form |Arg η| ≥ |η|τ 0 + π 2 , |η| < π 2τ 0 .(28) 12. Assume now 0 < a. Equations (19) and (20) have the same form. The difference now is that the second product term in (19) is constantly in the second quadrant, with a negative real part −a and imaginary part tending to +∞ as w → ∞. This changes e.g. the behaviour of the continuous argument increment function ∆γ + , as it is in general no longer strictly increasing. In fact for 0 < a we have ∆γ + : [a, ∞) → [0, ∞), ∆γ + (w) = w 2 − a 2 τ 0 + arctan − 1 a w 2 − a 2 + π(29) and ∆γ − : [a, ∞) → [0, ∞), ∆γ − (w) = −∆γ + (w) for every w ≥ a. As (29) is a differentiable function its derivative is d∆γ + dw (w) = w √ w 2 − a 2 τ 0 − a w 2 .(30) We have d∆γ + dw (w) < 0 if and only if w < w m := a τ 0 , Taking into account the domain of (29) i.e. a ≤ w, we see that for a ∈ (0, 1 τ0 ) function ∆γ + is firstly decreasing, reaching a local minimum ∆γ + (w m ) > π 2 , and then it is increasing to infinity; while for a > 1 τ0 it is strictly increasing. These two cases are analysed separately. 13. Fix 0 < a ≤ 1 τ0 . 
Similarly as in points 7 and 8 we focus initially on a part of Γ + given by γ + ([a, η π ]), as indicated in Fig. 7. Take η 1 that fulfils (19) and with |η 1 | = w 1 < w m . For such η 1 we have ∆γ + (w m ) < Arg η 1 = ∆γ + (w 1 ) ≤ π. Define a ray from the origin and passing through η 1 by D η1 := {η = u + iv ∈ C : |η| > |a| and a + u < 0 and Arg η = ∆γ + (w 1 )} a √ t 2 − a 2 + ∆γ + (w 1 ) − π √ t 2 − a 2 ,(32) where ∆γ + (w 1 ) = Arg η 1 . Note also that as w 1 < w m there exists η 2 , with |η 2 | = w 2 , such that η 2 ∈ D η1 ∩ γ + ([a, η π ]) and w m < w 2 ≤ |η π |. The derivative of (32) is again expressed by (25), namely dτ dt (t) = t t 2 − a 2 a t 2 − τ (t) , but, unlike in point 8, this derivative is in general not negative due to a > 0. In fact, at the intersections {η 1 , η 2 } = D η1 ∩ γ + ([a, η π ]) we find dτ dt (w 1 ) = w 1 w 2 1 − a 2 a w 2 1 − τ (w 1 ) = w 1 w 2 1 − a 2 a w 2 1 − τ 0 = 1 w 2 1 − a 2 − d∆γ + dw (w 1 ) > 0,(33) where the last inequality comes from (31); similarly dτ dt (w 2 ) = 1 w 2 2 − a 2 − d∆γ + dw (w 2 ) < 0.(34) We see that τ is an increasing function in a neighbourhood of t 1 = w 1 and a decreasing one in a neighbourhood of t 2 = w 2 i.e. at the boundaries of the Λ τ0,a region shown in Fig. 7. If we show that τ has only one extreme value -a local maximum -inside Λ τ0,a , that is for some t ∈ (w 1 , w 2 ), then with the reasoning of point 8 we will show that for every η inside Λ τ0,a region the roots of (2) are in C \ C + . We are interested in the number of solutions of dτ dt (t) = 0, what is equivalent to the number of solutions of a t 2 = arctan 1 a √ t 2 − a 2 + β √ t 2 − a 2 ,(35) where β = ∆γ + (w 1 ) − π. Define r := 1 a √ t 2 − a 2 . Then r > 0 is a bijective image of t > a and (35) can be rearranged to r r 2 + 1 = arctan(r) + β. As π 2 < ∆γ + (w m ) < Arg η 1 ≤ π we have β ∈ (− π 2 , 0] and by Corollary 6 we infer that there is only one local extremum i.e. local maximum of τ for t ∈ (w 1 , w 2 ). Hence for every η ∈ D η1 , w 1 ≤ |η| ≤ w 2 we have τ (|η|) ≥ τ 0 i.e. Arg η 1 ≥ |η| 2 − a 2 τ 0 − arctan 1 a |η| 2 − a 2 + π. Thus by the definition of D η1 and symmetry about the real axis we obtain that for every η with |η| ≤ |η π | such that | Arg η| ≥ |η| 2 − a 2 τ 0 − arctan 1 a |η| 2 − a 2 + π(37) the time τ for this η to be such that the first root of (12) reaches the imaginary axis is bigger than or equal to τ 0 . Argument similar to the one in point 8 shows that if | Arg η| ≥ ∆γ + (w m ) then the only region we need to consider is the one given by (37). Thus we distinguish a ray D ηm = {η = u + iv ∈ C : |η| > |a|, a + u < 0, Arg η = ∆γ + (w m )} together with a delay time function based on it, namely τ m : D t ηm → (0, ∞), τ m (t) = arctan 1 a √ t 2 − a 2 + ∆γ + (w m ) − π √ t 2 − a 2 ,(38) where ∆γ + (w m ) = Arg η m , see Fig. 7. The above analysis shows that for τ m we have τ m (t) ≤ τ 0 for every t ∈ D t ηm , where the equality holds only for t = w m . 14. Take now, without loss of generality due to symmetry, η ∈ C − ∩Π + such that Re η < −a and π 2 < Arg η < ∆γ + (w m ) = Arg η m . We claim that for every such η there is τ (|η|) < τ 0 , where τ is defined on a ray containing η. Really, let us fix η as above and assume otherwise i.e. τ (|η|) ≥ τ 0 . Then there exists η 0 that fulfils (16), Arg η 0 = Arg η and ∆γ + (w 0 ) = Arg η + 2π, where w 0 = |η 0 | (see Fig. 7). 
As η ∈ D η0 we have τ : D t η0 → (0, ∞) defined as in (32) but on the ray D η0 , and such that for t = |η| it takes the value τ (t) = arctan 1 a √ t 2 − a 2 + Arg η 0 + π √ t 2 − a 2 ,(39) where we used a fact that ∆γ + (w 0 ) = Arg η 0 + 2π. Note that for a fixed t the above is a continuous function of Arg η 0 ∈ ( π 2 , π). Let us take a sequence {η k 0 } k∈N such that η k 0 fulfils (16), |η k 0 | < |η k+1 0 | for every k ∈ N and η k 0 → η * m as k → ∞, where Arg η * m = Arg η m . Geometry of the problem shows that for every k ∈ N we have Arg η k 0 < Arg η k+1 0 < Arg η m and D t η k 0 ⊂ D t η k 0 +1 ⊂ D t ηm . For the fixed t from (39) consider a continuous, strictly increasing function τ t : [Arg η 0 , π] → (0, ∞), τ t (Arg ξ) = arctan 1 a √ t 2 − a 2 + Arg ξ + π √ t 2 − a 2 . Our hypothesis now gives τ 0 ≤ τ (t) < lim k→∞ τ t (Arg η k 0 ) = τ t (Arg η * m ) = τ m (t) ≤ τ 0 , where we used strict monotonicity and continuity of τ t , continuity of γ + , definition of D ηm and boundedness of τ m given by (38). The above contradiction proves our claim. Thus with 0 < a ≤ 1 τ0 for the roots of (18) to be in C \ C + the region given by (37) is the only allowable one for η. 15. Fix a > 1 τ0 . By (31) and a comment directly below it the continuous argument increment function ∆γ + given by (29) is now strictly increasing with range ∆γ + ([a, ∞)) = [π, ∞). The minimal value of ∆γ + (w) = π for w = |a| and point 14 shows that if the roots of (18) are in C \ C + then η = −a; there is no such η that the roots of (18) are in C − . This finishes the necessity proof for (18) and, by the same argument, for (12). 16. To be able to use previous notation and ease referencing we show sufficiency for (18). Let τ 0 > 0 be given and a ≤ 1 τ0 . The behaviour of the roots described in points 1 and 2 does not change. Every η ∈ Λ τ0,a , where Λ τ0,a is defined accordingly to a, is either inside D |a| or satisfies (26), (28) or (37). Following backwards the reasoning in points 5-13 we reach the boundary condition (23), (27) or equality in (37), for which the roots of (18) are on the imaginary axis, what happens exactly when η is at the boudary of Λ τ0,a . Corollary 9. Let a delay τ > 0, coefficients λ, γ, η ∈ C be such that λ = a + ib with a ≤ 1 τ , b ∈ R, and let the corresponding Λ τ,a ⊂ C be given by Proof. Part (i) follows from the continuity of (24) or (32). Part (ii) follows from (i) and Lemma 3 by defining η = γ e −ibτ for the case of (40), while for the case of (41) by the real-axis symmetry of Λ τ,a we have η ∈ Λ τ,a if and only if η = γ e ibτ ∈ Λ τ,a . Discussion Before going to examples we make some comments concerning previous work of other authors with respect to the proof of Theorem 8. We also comment on practicality of results obtained in this paper. Theorem 8 relies on subsets Λ τ,a of the complex plane that are defined before the theorem itself. Their origin, however, becomes clear after going through points 6 -7 of the proof of Theorem 8. The remainder of the proof is in fact an analysis of what happens inside those regions. It is worth to mention that inequalities in (9)-(11) can be obtained from the result in [10] after suitable simplifications. As noted in the introduction an analysis of τ as a function of coefficients is present also in [8]. The author obtains there an inequality similar to (24), but does so in the context where γ of (1) is a 2 × 2 real matrix of a special form (and with λ ∈ R). 
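The discussion below notes that checking the inequalities (9)-(11) numerically is usually straightforward. As an illustration of that remark, the following Python sketch is our own direct translation of Theorem 8 and Corollary 9 into a membership test $\eta \in \Lambda_{\tau,a}$, with $|\eta_\pi|$ obtained by bisection; boundary points are treated as outside the region for simplicity, and all helper names are ours.

```python
import cmath
import math

def _eta_pi_modulus(tau, a):
    """Modulus |eta_pi| from (9)/(11), via the substitution w = sqrt(|eta|^2 - a^2).

    For a < 0 solve tau*w + atan(w/|a|) = pi (strictly increasing in w);
    for a = 0 this gives pi/(2*tau); for 0 < a <= 1/tau solve
    tau*w - atan(w/a) = 0 for w > 0 (the nontrivial root).
    """
    if a == 0:
        return math.pi / (2 * tau)
    if a < 0:
        f = lambda w: tau * w + math.atan(w / abs(a)) - math.pi
    else:
        f = lambda w: tau * w - math.atan(w / a)
    lo, hi = 1e-12, 1.0
    while f(hi) < 0:              # bracket the root
        hi *= 2
    for _ in range(200):          # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    w = 0.5 * (lo + hi)
    return math.hypot(w, a)       # |eta_pi| = sqrt(w^2 + a^2)

def in_Lambda(eta, tau, a):
    """Open-region membership test eta in Lambda_{tau,a} from (9)-(11)."""
    if a > 1 / tau:
        return False
    if a < 0 and abs(eta) < abs(a):           # the disc D_|a| in (9)
        return True
    if eta == 0:
        return a < 0
    if eta.real + a >= 0 or abs(eta) >= _eta_pi_modulus(tau, a):
        return False
    w = math.sqrt(abs(eta) ** 2 - a ** 2)
    bound = tau * w + (math.atan(-w / a) if a != 0 else math.pi / 2)
    if a > 0:
        bound += math.pi
    return abs(cmath.phase(eta)) > bound

def dde_stable(lam, gam, tau):
    """All roots of s - lam - gam*exp(-s*tau) = 0 in C^- (cf. Corollary 9)."""
    a, b = lam.real, lam.imag
    return in_Lambda(gam * cmath.exp(-1j * b * tau), tau, a)
```

For instance, with $\lambda = 20i$, $\tau = 0.1$ and real $\gamma > 0$ as in Example 1 below, dde_stable(20j, g, 0.1) returns True precisely for g below $20 - 5\pi \approx 4.29$, in agreement with the condition derived there.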
The necessary and sufficient conditions for stability of first order scalar differential-difference equations with complex coefficients characterised by (2) are given by Corollary 9. The condition is based on a mutual implicit relation between the coefficients of the characteristic equation (2) and a subset of $\mathbb{C}$ given by (9)-(11). As the latter is defined by non-linear inequalities, there arises a question whether numerical approximation of the rightmost root isn't a more practical approach than finding numerically (i.e. approximately) the region given by (9)-(11), especially given the abundance of literature on computational techniques to approximate characteristic roots. Analysis of the dependence between $\tau$ and the crossing of the imaginary axis by the first root in points 2-3 of the proof is well-known. In one of the early works [4] the authors discuss (2) with $\lambda, \gamma \in \mathbb{R}$; in [16] the authors show a general approach for the real polynomial case of (4) with multiple delays, which is also shown in [12]. More recently such analysis is also used in [8]. A good exposition of such techniques is in [18, Chapter 5.3.2]. Our calculations in points 2-3 are in fact based on [16] and we decided to include all of its steps for the reader's convenience. The answer to the above question depends, in the authors' opinion, on the purpose of approaching that problem. If the purpose is an analysis of a given differential-difference equation, considered as a delayed dynamical system, fulfilling the assumptions of Corollary 9, then a numerical check, up to a given accuracy, of at most one of the inequalities (9)-(11) is usually a straightforward procedure. If, on the other hand, the purpose is a synthesis of a delayed dynamical system that has some a priori specified properties, as may be the case in controller design for such a system, then a numerical search for the rightmost root may carry more relevant information.
Examples. With the above discussion in mind we present examples concerning only the analysis of given differential-difference equations. These examples illustrate how the necessary and sufficient conditions of Theorem 8 can be compared with, and improve, known literature results. Note initially that the stability condition discussed in [1] and later proved in [3], that is $-\operatorname{Re}\lambda > |\gamma|$, follows immediately from $|\eta| < |a|$ (point 3 in the proof of Theorem 8). Note also that as Corollary 9 concerns the placement of roots of the characteristic equation (2), it also gives a necessary and sufficient condition for stability of (1). With that in mind we give the following examples.
Example 1. Consider a differential-difference equation
$x'(t) = i20\,x(t) + \gamma x(t - 0.1)$, (42)
where $\gamma \in \mathbb{R}$, $\gamma > 0$. Equation (42) is a special case of (1), for which necessary and sufficient conditions of stability were found in [3]. By Corollary 9 equation (42) is stable if and only if $\gamma e^{-i2} \in \Lambda_{0.1,0}$, where $\Lambda_{0.1,0}$ is given by (10). We thus obtain that (42) is stable if and only if $\gamma < 20 - 5\pi$, which is equivalent to the condition given by [3, Theorem 3.1].
Example 2. Consider the differential-difference equation
$x'(t) = \big(\tfrac{1}{4} + i\tfrac{\pi}{4}\big)x(t) - \big(\tfrac{1}{\sqrt{2}} + i\tfrac{1}{\sqrt{2}}\big)x(t - 1)$, (43)
for which the corresponding characteristic equation takes the form
$s - \big(\tfrac{1}{4} + i\tfrac{\pi}{4}\big) - \big({-\tfrac{1}{\sqrt{2}}} - i\tfrac{1}{\sqrt{2}}\big)e^{-s} = 0$. (44)
By Corollary 9 and (11) (or, in fact, by investigating Fig. 4) we see that (44) is stable.
Example 3. In [11] the author considers a semi-linear system version of (3) and, due to the approach method, states results only for a fixed delay $\tau = 1$. The exemplary system analysed there is transformed to the form of (4) with $\tau = 1$, $A = 0$, i.e.
$x'(t) = Bx(t - 1)$, $B = \begin{pmatrix} -1 & \tfrac{1}{8} \\ -1 & -1 \end{pmatrix}$. (45)
As $A = 0$, and thus $\lambda = 0$, we are interested only in the eigenvalues of $B$, which are $-1 \pm i\tfrac{1}{\sqrt{8}}$. The author concludes that the system is stable. With conditions (10) we can improve the results for the exemplary system in [11] by finding the maximal delay $\tau$ for which such a system remains stable. Let $\eta = -1 + i\tfrac{1}{\sqrt{8}}$, so that $\operatorname{Arg} \eta = \pi - \arctan\tfrac{1}{\sqrt{8}}$, and by (10) we obtain that (45) is stable if and only if
$0 < \tau < \frac{1}{|\eta|}\Big(\operatorname{Arg} \eta - \frac{\pi}{2}\Big) = \frac{2\sqrt{2}}{3}\Big(\frac{\pi}{2} - \arctan\frac{1}{\sqrt{8}}\Big)$.
Note that we do not need to consider $\bar{\eta}$ due to the symmetry of $\Lambda_{\tau,0}$ about the real axis.
Example 4. The previous examples relate the current results to ones known from the literature and thus demonstrate the technique. The following example shows how the current results can be used in the case of a retarded partial differential equation in an abstract formulation. Let the representation of our system be
$\dot{z}(t) = Az(t) + A_1 z(t - \tau) + Bu(t)$, $z(0) = x$, (46)
where the state space $X$ is a Hilbert space, $A : D(A) \subset X \to X$ is a closed, densely defined diagonal generator of a $C_0$-semigroup $(T(t))_{t \geq 0}$ on $X$, $A_1 \in \mathcal{L}(X)$ is also a diagonal operator and $0 < \tau < \infty$ is a fixed delay. The input function is $u \in L^2(0, \infty; \mathbb{C})$ and $B$ is the control operator. We assume that $X$ possesses a Riesz basis $(\phi_k)_{k \in \mathbb{N}}$ consisting of eigenvectors of $A$, with a corresponding sequence of eigenvalues $(\lambda_k)_{k \in \mathbb{N}}$. A simplified form of (46) is analysed in [13] from the perspective of admissibility, which, roughly speaking, asserts whether a solution $z$ of (46) follows a required type of trajectory. One of the key elements in the approach to admissibility analysis presented in [13] is to establish when a differential equation associated with the $k$-th component of (46), namely
$\dot{z}_k(t) = \lambda_k z_k(t) + \gamma_k z_k(t - \tau)$, $z_k(0) = x_k$, (47)
is stable, where $\lambda_k \in \mathbb{C}$ is an eigenvalue of $A$, $\gamma_k \in \mathbb{C}$ is an eigenvalue of $A_1$ and $x_k \in \mathbb{C}$ is an initial condition for the $k$-th component of $X$. Then, having stability conditions for every $k \in \mathbb{N}$, one may proceed with the analysis for the whole $X$. Based on Corollary 9 we immediately obtain a genuine method of obtaining these stability conditions, namely
Proposition 10. For a given delay $\tau \in (0, \infty)$ and sequences $(\lambda_k)_{k \in \mathbb{N}}$ and $(\gamma_k)_{k \in \mathbb{N}}$ consider the corresponding set of Cauchy problems of the form (47). For every $k \in \mathbb{N}$ system (47) is stable if and only if
$\lambda_k = a_k + i\beta_k \in \{z \in \mathbb{C} : \operatorname{Re}(z) < \tfrac{1}{\tau}\}$ and $\gamma_k e^{-i\beta_k \tau} \in \Lambda_{\tau, a_k}$ for all $k \in \mathbb{N}$,
with $\Lambda_{\tau, a_k}$ defined in (9)-(11). Notice that Proposition 10 not only extends [13, Proposition 3.5] by adding the necessary condition, but it also allows for the analysis of unbounded $A$, as it includes e.g. the case when $a_k \to -\infty$ as $k \to \infty$. This is in fact exactly the case presented in [7].
Figures 1, 2 and 3 show $\Lambda_{\tau,a}$ for fixed values of $a$ and varying $\tau$, while Figure 4 shows $\Lambda_{\tau,a}$ for fixed $\tau$ and varying $a$.
Figure 2: Outer boundaries of $\Lambda_{\tau,a}$, defined in (10) with $\eta = u + iv$, for $a = 0$ and different values of $\tau$: dotted for $\tau = 0.5$, dash-dotted for $\tau = 1$, dashed for $\tau = 2$.
Figure 3: Outer boundaries of $\Lambda_{\tau,a}$, defined in (11) with $\eta = u + iv$, for $a = 0.25$ and different values of $\tau$: dotted for $\tau = 0.5$, dash-dotted for $\tau = 1$, dashed for $\tau = 2$.
Figure 4: Outer boundaries of $\Lambda_{\tau,a}$, defined in (9)-(11) with $\eta = u + iv$, for $\tau = 1$ and different values of $a$: solid for $a = -1.5$, dashed for $a = 0$ and dotted for $a = 0.25$.
Figure 5: Curves $\Gamma_+$ (solid line) and $\Gamma_-$ (dash-dotted line) drawn for $\tau_0 = 1$ and $a = -1.5$ with $|\eta| = w$
Figure 6: Enlargement of the central part of Fig.
5 with $\gamma_+([|a|, |\eta_\pi|]) \cup \gamma_-([|a|, |\eta_\pi|])$ and the two cases $D_{\eta_0}$ (solid line) and $D_{\bar{\eta}_0}$ (dashed line).
Figure 7: (a): Curves $\Gamma_+$ (solid line) and $\Gamma_-$ (dash-dotted line) drawn for $\tau_0 = 1$ and $a = 0.25$ with $w \in (|a|, 10)$. The constraint related to $a$ and expressed by (13) is marked with a dotted line. The first crossing of the real negative semi-axis by $\Gamma_+$ (and $\Gamma_-$) is at $\eta_\pi$. Auxiliary rays $D_{\eta_m}$ and $D_{\eta_0}$ are indicated in solid and dashed lines, respectively. (b): Enlargement of the central part of (a) with $\gamma_+([|a|, |\eta_\pi|]) \cup \gamma_-([|a|, |\eta_\pi|])$, with $D_{\eta_1}$ (solid line) and $D_{\eta_0}$ (dashed line), $|\eta_1| = w_1$, $|\eta_2| = w_2$. The $D_{\eta_0}$ ray is based on $\eta_0$ such that $\operatorname{Arg} \eta_0 < \Delta\gamma_+(w_m)$; the point $\eta_m = \gamma_+(w_m)$ is indicated with an arrow and a star symbol, $\operatorname{Arg} \eta_m = \operatorname{Arg} \eta_m^*$.
and let $D^t_{\eta_1} := \{t \geq 0 : t = |\eta|,\ \eta \in D_{\eta_1}\}$. To express $\tau$ as a function on this ray, i.e. $\tau : D^t_{\eta_1} \to (0, \infty)$, we now reformulate (29) to obtain $\tau(t) := \dfrac{\arctan\big(\frac{1}{a}\sqrt{t^2 - a^2}\big) + \Delta\gamma_+(w_1) - \pi}{\sqrt{t^2 - a^2}}$. (32)
(9)-(11). Then (i) every solution of the equation $s - a - \eta e^{-s\tau} = 0$ belongs to $\mathbb{C}^-$ if and only if $\eta \in \Lambda_{\tau,a}$; (ii) every solution of $s - \lambda - \gamma e^{-s\tau} = 0$ (40) and of its version with conjugate coefficients $s - \bar{\lambda} - \bar{\gamma} e^{-s\tau} = 0$ (41) belongs to $\mathbb{C}^-$ if and only if $\gamma e^{-ib\tau} \in \Lambda_{\tau,a}$.
Acknowledgements. The authors would like to thank Prof. Yuriy Tomilov for mentioning to them reference [11]. The research of Rafał Kapica was supported by the Faculty of Applied Mathematics AGH UST statutory tasks within the subsidy of the Ministry of Education and Science. The work of Radosław Zawiski was performed when he was a visiting researcher at the Centre for Mathematical Sciences of Lund University, hosted by Sandra Pott, and supported by the Polish National Agency for Academic Exchange (NAWA) within the Bekker programme under the agreement PPN/BEK/2020/1/00226/U/00001/A/00001.
V. K. Barwell, Special stability problems for functional differential equations, BIT 15 (1975), 130-135.
D. Breda, On characteristic roots and stability charts of delay differential equations, International Journal of Robust and Nonlinear Control 22 (2012), 892-917.
B. Cahlon and D. Schmidt, On stability of a first-order complex delay differential equation, Nonlinear Analysis: Real World Applications 3 (2002), 413-429.
K. L. Cooke and Z. Grossman, Discrete delay, distributed delay and stability switches, Journal of Mathematical Analysis and Applications 86 (1982), 592-627.
O. Diekmann, S. A. van Gils, S. M. Verduyn Lunel and H.-O. Walther, Delay Equations: Functional-, Complex-, and Nonlinear Analysis, Applied Mathematical Sciences, vol. 110, Springer-Verlag, New York, 1995.
N. D. Hayes, Roots of the transcendental equation associated with a certain difference-differential equation, Journal of the London Mathematical Society 25 (1950), 226-232.
R. Kapica, J. R. Partington and R. Zawiski, Admissibility of retarded diagonal systems with one dimensional input space, arXiv:2207.00662.
H. Matsunaga, Delay-dependent and delay-independent stability criteria for a delay differential system, Proceedings of the American Mathematical Society 136 (2008), 4305-4312.
T. S. Motzkin and O. Taussky, Pairs of matrices with property L, Transactions of the American Mathematical Society 73 (1952), 108-114.
J. Nishiguchi, On parameter dependence of exponential stability of equilibrium solutions in differential equations with a single constant delay, Discrete and Continuous Dynamical Systems 36 (2016), 5657-5679.
V. W. Noonburg, Roots of a transcendental equation associated with a system of differential-difference equations, SIAM Journal of Applied Mathematics 17 (1969), 198-205.
J. R. Partington, Linear Operators and Linear Systems: An Analytical Approach to Control Theory, London Mathematical Society Student Texts, vol. 60, Cambridge University Press, Cambridge, UK, 2004.
J. R. Partington and R. Zawiski, Admissibility of state delay diagonal systems with one-dimensional input space, Complex Analysis and Operator Theory 13 (2019), 2463-2485.
G. Stépán, Retarded Dynamical Systems: Stability and Characteristic Functions, Longman Scientific and Technical, Harlow, 1989.
M. Tucsnak and G. Weiss, Observation and Control for Operator Semigroups, Birkhäuser Verlag AG, Basel, 2009.
K. Walton and J. E. Marshall, Direct method for TDS stability analysis, IEE Proceedings D - Control Theory and Applications 134 (1987), 101-107.
J. Wei and C. Zhang, Stability analysis in a first-order complex differential equations with delay, Nonlinear Analysis 59 (2004), 657-671.
W. Michiels and S.-I. Niculescu, Stability, Control, and Computation for Time-Delay Systems, SIAM, Philadelphia, 2014.
[]
[ "Adversarial Feature Augmentation for Cross-domain Few-shot Classification", "Adversarial Feature Augmentation for Cross-domain Few-shot Classification", "Adversarial Feature Augmentation for Cross-domain Few-shot Classification", "Adversarial Feature Augmentation for Cross-domain Few-shot Classification" ]
[ "Yanxu Hu \nSchool of Computer Science and Engineering\nSun Yat-sen University\nChina\n", "Andy J Ma \nSchool of Computer Science and Engineering\nSun Yat-sen University\nChina\n\nGuangdong Province Key Laboratory of Information Security Technology\nChina\n\nKey Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education\nChina\n", "Yanxu Hu \nSchool of Computer Science and Engineering\nSun Yat-sen University\nChina\n", "Andy J Ma \nSchool of Computer Science and Engineering\nSun Yat-sen University\nChina\n\nGuangdong Province Key Laboratory of Information Security Technology\nChina\n\nKey Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education\nChina\n" ]
[ "School of Computer Science and Engineering\nSun Yat-sen University\nChina", "School of Computer Science and Engineering\nSun Yat-sen University\nChina", "Guangdong Province Key Laboratory of Information Security Technology\nChina", "Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education\nChina", "School of Computer Science and Engineering\nSun Yat-sen University\nChina", "School of Computer Science and Engineering\nSun Yat-sen University\nChina", "Guangdong Province Key Laboratory of Information Security Technology\nChina", "Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education\nChina" ]
[]
Few-shot classification is a promising approach to solving the problem of classifying novel classes with only limited annotated data for training. Existing methods based on meta-learning predict novel-class labels for (target domain) testing tasks via meta knowledge learned from (source domain) training tasks of base classes. However, most existing works may fail to generalize to novel classes due to the probably large domain discrepancy across domains. To address this issue, we propose a novel adversarial feature augmentation (AFA) method to bridge the domain gap in few-shot learning. The feature augmentation is designed to simulate distribution variations by maximizing the domain discrepancy. During adversarial training, the domain discriminator is learned by distinguishing the augmented features (unseen domain) from the original ones (seen domain), while the domain discrepancy is minimized to obtain the optimal feature encoder. The proposed method is a plug-andplay module that can be easily integrated into existing few-shot learning methods based on meta-learning. Extensive experiments on nine datasets demonstrate the superiority of our method for cross-domain few-shot classification compared with the state of the art. Code is available at https : //github.com/youthhoo/AF A F or F ew shot learning.
10.48550/arxiv.2208.11021
[ "https://export.arxiv.org/pdf/2208.11021v1.pdf" ]
251,741,098
2208.11021
f435ecf2c840c567258b4f0806b66910ca554291
Adversarial Feature Augmentation for Cross-domain Few-shot Classification Yanxu Hu School of Computer Science and Engineering Sun Yat-sen University China Andy J Ma School of Computer Science and Engineering Sun Yat-sen University China Guangdong Province Key Laboratory of Information Security Technology China Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education China Adversarial Feature Augmentation for Cross-domain Few-shot Classification few-shot classificationdomain adaptationadversarial learn- ingmeta-learning Few-shot classification is a promising approach to solving the problem of classifying novel classes with only limited annotated data for training. Existing methods based on meta-learning predict novel-class labels for (target domain) testing tasks via meta knowledge learned from (source domain) training tasks of base classes. However, most existing works may fail to generalize to novel classes due to the probably large domain discrepancy across domains. To address this issue, we propose a novel adversarial feature augmentation (AFA) method to bridge the domain gap in few-shot learning. The feature augmentation is designed to simulate distribution variations by maximizing the domain discrepancy. During adversarial training, the domain discriminator is learned by distinguishing the augmented features (unseen domain) from the original ones (seen domain), while the domain discrepancy is minimized to obtain the optimal feature encoder. The proposed method is a plug-andplay module that can be easily integrated into existing few-shot learning methods based on meta-learning. Extensive experiments on nine datasets demonstrate the superiority of our method for cross-domain few-shot classification compared with the state of the art. Code is available at https : //github.com/youthhoo/AF A F or F ew shot learning. Introduction The development of deep convolutional neural networks (DCNNs) has achieved great success in image/video classification [16,22,39,47]. The impressive performance improvement relies on the continuously upgrading computing devices and manual annotations of large-scale datasets. To ease the heavy annotation burdens for training DCNNs, few-shot classification [21] has been proposed to recognize instances from novel classes with only limited labeled samples. Among various recent methods to address the few-shot learning problem, the meta-learning approach [8,10,24,34,36,38,42,46] have received a lot of attention due to its effectiveness. In general, meta-learning divides the training data into a series of tasks and learns an inductive distribution bias of these tasks to alleviate the negative impact of the imbalance between base and novel classes. Meta-learning is good at generalizing the base-class model to novel classes under the condition that the training distribution of base classes is almost equal to the testing one of novel classes. Nevertheless, when the distributions of the training (source domain) and the testing (target domain) data differ from each other, the performance of the meta-learning model will degrade as justified by existing works [5,15]. Fig. 1 illustrates the domain shift problem in which the target dataset (e.g. CUB) is different from the source domain (e.g. mini-ImageNet). In this scenario, the distribution of the target domain features extracted by the encoder E may greatly deviate from the source domain distribution. 
With the distribution misalignment, the class discriminator D c cannot make a correct decision for classifying novel-class data. Domain adaptation (DA) [43] can learn domain-invariant features by adversarial training [12] to bridge the domain gap. While DA assumes a lot of unlabelled samples are available in the target domain for training, the domain generalization (DG) approach [23] can generalize from source domains to target domain without accessing the target data. Differently, in few-shot learning, novel classes in the target domain do not overlap with base classes in the source domain and only very limited number of training samples are available for each class. As a result, existing DA methods are not applicable for cross-domain few-shot classification. To mitigate the domain shift under the few-shot setting, the adversarial task augmentation (ATA) method [44] is proposed to search for the worst-case problem around the source task distribution. While the task augmentation lacks of the capacity of simulating various feature distributions across domains, the feature-wise transformation (FT) [40] is designed for feature augmentation using affine transforms. With multiple source domains for training, the hyperparameters in the FT are optimized to capture variations of the feature distributions. When there is only single source domain, these hyper-parameters are empirically determined for training. Though the FT achieves convincing performance improvement for both the base and novel classes, the empirical setting of hyper-parameters in the FT is sub-optimal. Consequently, it cannot fully imitate the distribution mismatch under single source domain adaptation. To overcome the limitations in existing works, we propose a novel adversarial feature augmentation (AFA) method for domain-invariant feature learning in cross-domain few-shot classification. Different from the ATA, our method performs data augmentation in features (instead of tasks) to simulate the feature distribution mismatch across domains. Unlike the FT using multiple source domains to determine the optimal solution, the proposed AFA aligns the cross-domain feature distributions by adversarial learning based on single source domain. In our method, we design a feature augmentation module to transform the features extracted by the encoder E according to sufficient statistics of normal distribution. Considering the original and the augmented features as two different domains (seen and unseen respectively), the feature augmentation module is trained by maximizing the domain discrepancy across domains. Moreover, the In meta-learning methods, it consists of an feature encoder E and a prediction head Dc. There may be domain shift between the training (source domain) data of base classes and testing (target domain) data of novel classes. In this case, the distribution of the features extracted by the source domain encoder (blue) differs from the target domain features (red). Due to the distribution misalignment, the metalearned prediction head may not be able to correctly classify samples of novel classes from the target domain. Moreover, the feature distribution in the target domain can hardly be estimated due to the limited number of novel-class sample. In this paper, we propose a novel adversarial feature augmentation (AFA) method to learn domain-invariant features for cross-domain few-shot classification. 
feature augmentation module is inserted into multiple layers of the encoder, such that the difference between the distributions of the seen and unseen domains are enlarged. The distance between the gram matrices of multi-layer features from seen and unseen domains is used to measure the domain discrepancy. During domain adversarial training, both the feature augmentation module and the domain discriminator is trained to distinguish the seen domain from the unseen one, while the encoder is learned by confusing the two different domains. In summary, the contributions of this work are in three folds: 1. We propose a model-agnostic feature augmentation module based on sufficient statistics of normal distribution. The feature augmentation module can generate various feature distributions to better simulate the domain gap by maximizing the domain discrepancy under the cross-domain few-shot setting. 2. We develop a novel adversarial feature augmentation (AFA) method for distribution alignment without accessing to the target domain data. During adversarial training, the domain discriminator is learned by recognizing the augmented features (unseen domain) from the original ones (seen domain). At the same time, the domain discrepancy is maximized to train the feature augmentation module, while it is minimized to obtain the optimal feature encoder. In this way, the domain gap is reduced under the few-shot setting. 3. The proposed AFA is a plug-and-play module which can be easily integrated into existing few-shot learning methods based on meta-learning including matching net (MN) [42], graph neural network (GNN) [31], transductive propagation network (TPN) [25], and so on. We experimentally evaluate the performance on the proposed method combined with the MN, GNN and TPN under the cross-domain few-shot setting. Experimental results demonstrate that our method can improve the classification performance over the few-shot learning baselines and outperform the state-of-the-art cross-domain few-shot classification methods in most cases. Related Work Few-shot classification. Few-shot classification [7,10,15,24,25,46] aims to recognize novel classes objects with few labeled training samples. MatchingNet [42] augments neural networks with external memories via LSTM module and maps a few labelled support samples and an unlabelled query samples to its label, while GNN [31] assimilates generic message-passing inference algorithms with their neural-network counterparts to interact the information between the labelled data and unlabelled data by graph. TPN [25] learns a graph construction module that exploits the manifold structure in the data to propagate labels from labeled support images to unlabelled query instances, which can well alleviate the few-shot classification problem. However, these meta-learning methods fail to generalize to target domains since the distribution of image features may vary largely due to the domain shift. Our work improves the generalization ability of the meta-learning model with the proposed adversarial feature augmentation (ATA) to better recognize target domain samples. Domain adaptation. Existing domain adaptation (DA) methods can be divided into three categories, i.e., discrepancy-based [26,50], reconstruction-based approaches [6,13] and adversarial-based [11,12,19,41]. 
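The exact parameterization of the AFA module appears later in the paper (beyond this excerpt), but the idea stated in contribution 1, perturbing the sufficient statistics of a normal distribution (the per-channel mean and standard deviation of feature maps after batch normalization), can be sketched as follows. This is our own simplified PyTorch illustration with assumed shapes and parameter names, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FeatureAugmentation(nn.Module):
    """Sketch of a feature augmentation layer inserted after batch norm.

    It perturbs the per-channel mean and standard deviation of the incoming
    features with learnable parameters, producing 'unseen domain' features
    F_a from the original F_o. Assumed form; the paper's exact
    parameterization may differ.
    """

    def __init__(self, num_channels):
        super().__init__()
        # Learnable channel-wise perturbations of std (scale) and mean (shift),
        # initialized at the identity transform.
        self.scale = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, f_o):
        mu = f_o.mean(dim=(2, 3), keepdim=True)           # per-sample channel mean
        sigma = f_o.std(dim=(2, 3), keepdim=True) + 1e-6  # per-sample channel std
        normalized = (f_o - mu) / sigma
        # Re-colour with perturbed statistics to simulate a distribution shift.
        return normalized * (sigma * self.scale) + (mu + self.shift)
```

Trained through the gradient reversal mechanism sketched after the Network Architecture subsection below, such a layer is pushed to maximize the discrepancy between F_a and F_o, while the encoder learns to minimize it.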
For the discrepancy-based methods, DAN [26] measures the distance between the distributions of the source and target domains, and the domain discrepancy is further reduced using an optimal multi-kernel selection method for mean embedding matching. The reconstruction-based method DRCN [13] proposes a reconstructor for target domain data: the more similar the original and reconstructed data are, the more effective the features learned by the encoder. The adversarial-based method DANN [12] learns domain-invariant features through an adversarial process between the encoder and domain discriminators. Nevertheless, these DA methods take the unlabelled data in the target domain as inputs for training, which is unavailable in the training stage under the cross-domain few-shot classification setting.

Adversarial training. Adversarial training [14,27,32] is a powerful training paradigm to improve the robustness of deep neural networks. To this end, Madry et al. [27] develop projected gradient descent as a universal "first-order adversary" and use it to train models adversarially. Sinha et al. [33] provide a training procedure that updates model parameters with worst-case perturbations of the training data, which was adopted by ATA [44] to generate virtual 'challenging' tasks to improve the robustness of models. In this work, we generate "bad-case perturbations" at the feature level via adversarial feature augmentation, which can simulate various feature distributions, to improve the generalization ability of various meta-learning methods.

Cross-domain few-shot classification. Different from the few-shot domain adaptation works [30,49], the unlabelled data from the target domain is not used for training, and the categories vary from the training set to the testing set in cross-domain few-shot classification (CDFSC) problems. Compared to standard few-shot classification, in the CDFSC the base classes do not share the same domain with the novel classes. To improve the generalization of meta-learning methods, LRP [37] develops an explanation-guided training strategy that emphasizes the features which are important for the predictions, while Wang et al. [44] focus on elevating robustness to the inductive bias of the training data by enlarging the task distribution space, and the feature-wise transformation (FT) [40] tries to improve the generalization of metric-based meta-learning methods to the target domain by modelling various distributions with a feature-wise transformation layer. The CNAPs-based approaches [29,1,3,4,2] are developed from a different perspective, building on feature-wise linear modulation (FiLM) for efficient adaptation to new tasks at test time. Different from these approaches, we aim to simulate various distributions at the feature level with adversarial training and use this as feature augmentation to learn an encoder that extracts domain-invariant features.

Proposed Method

In this section, the preliminaries and the overall network architecture of our method are first introduced. Then, the feature augmentation module and the adversarial training process are presented.

Preliminaries. Following the few-shot classification setting [28], the novel categories C_test used in the testing stage are different from the base classes C_train used in the training stage, i.e., C_train ∩ C_test = ∅, and the data for training and testing is divided into a series of tasks T. Each task contains a support set T_s and a query set T_q.
In an n-way k-shot support set T_s, the number of categories is n and the number of labelled samples per category is k. The query set T_q consists of samples sharing the same classes as T_s. During meta-learning, the training process on T_s and the testing process on T_q are called meta-training and meta-testing, respectively. For each task, the goal is to correctly classify samples from T_q by learning from T_s. With the domain shift problem in cross-domain few-shot classification, the training dataset (e.g. mini-ImageNet) is different from the testing data (e.g. CUB [45] or Cars [20]). In this work, we focus on adapting the meta model from a single source domain to various target domains. In other words, only one source domain dataset is used for training, while testing can be performed on different datasets. Notice that T_s and T_q of each task are from the same domain. Since the labelled data from the target domain is very limited and the target novel classes do not overlap with the source base classes, we propose to augment the features of each task by adversarial training to bridge the domain gap.

[Fig. 2. Top: Network Architecture. The network architecture of our method consists of a feature encoder E, a class discriminator D_c and a domain discriminator D_d. In the feature encoder E, a novel adversarial feature augmentation (AFA) module is embedded after each batch normalization layer. Bottom: Adversarial Feature Augmentation. The AFA module generates augmented features F_a (unseen domain) to simulate distribution variations. By inserting the gradient reverse layer (GRL) into the AFA module during adversarial training, the discrepancy between the distributions of F_a and the original features F_o (seen domain) is maximized. At the same time, the domain discriminator D_d is learned by distinguishing the seen domain from the unseen one, while the discrepancy is minimized to obtain the optimal feature encoder E. Parameters of the AFA module, D_d and E are denoted as θ_a, θ_d and θ_e, respectively.]

Network Architecture. As shown in Fig. 2, the network architecture of the proposed method contains a feature encoder E and a class discriminator D_c, similar to meta-learning models. Different from the traditional feature encoder, a novel adversarial feature augmentation (AFA) module is embedded after each batch normalization layer to simulate various feature distributions, with details introduced in Sections 3.3 & 3.4. With the augmented features (unseen domain), a domain discriminator D_d is trained to distinguish the unseen domain from the seen one (original features). The training procedure of our method follows the meta-learning approach to learn the inductive bias over the feature distribution from a series of tasks. By doing this, a class discriminator D_c is learned and transferred to target tasks in the testing stage. For meta-training in each task, the base learner B outputs the optimal class discriminator D_c based on the support set T_s and the feature encoder E, i.e., D_c = B(E(T_s; θ_e); θ_c), where θ_c, θ_e denote the learnable parameters of D_c and E, respectively. During meta-testing, the objective is to minimize the classification loss of the query set T_q, i.e.,

min_{θ_c, θ_e} L_c = L_c(Y_q^c, Ŷ_q^c),  Ŷ_q^c = D_c(E(T_q; θ_e); θ_c),  (1)

where Y_q^c and Ŷ_q^c are the sets of ground-truth labels and predictions of the query images, respectively. To mitigate the domain shift, we propose a novel AFA module integrated in the encoder E.
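As a concrete reference point, the episodic objective of Eq. (1) can be sketched as follows. Since the base learner B is method-specific (MN, GNN or TPN), a simple nearest-prototype head stands in for D_c here; this choice, and all names in the snippet, are our illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch of the episodic objective in Eq. (1). A nearest-prototype
# head stands in for the method-specific base learner B, so the query loss
# L_c can be written explicitly.
import torch
import torch.nn.functional as F

def episodic_loss(encoder, xs, ys, xq, yq, n_way):
    zs = encoder(xs)                               # E(T_s; theta_e), shape (n*k, C)
    zq = encoder(xq)                               # E(T_q; theta_e), shape (n_q, C)
    # D_c = B(E(T_s)): one prototype per class (illustrative base learner)
    protos = torch.stack([zs[ys == c].mean(dim=0) for c in range(n_way)])
    logits = -torch.cdist(zq, protos)              # closer prototype -> higher score
    return F.cross_entropy(logits, yq)             # L_c(Y_q^c, Y^_q^c) of Eq. (1)
```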
For each task, the output of E in our method contains the original (seen domain) features F_o ∈ R^{N×C} and the augmented (unseen domain) features F_a ∈ R^{N×C}, where N and C are the batch size and the number of channels, respectively. As shown in the bottom of Fig. 2, F_a, representing a distribution varied from the source domain, is used in the classification loss L_c. When optimizing for the loss function L_c, the learnable parameters θ_a of the AFA module are fixed. Details about how to learn the optimal θ_a and the parameters θ_d of the domain discriminator D_d are given in the following two subsections.

Feature Augmentation. To simulate various feature distributions, we design the feature augmentation function by disturbing the sufficient statistics of the original (seen domain) feature distribution. Given a specified mean and variance, the normal distribution best represents the current state of knowledge, having maximum entropy. As a result, we assume the elements of the feature maps in a training batch follow a normal distribution and are independent and identically distributed. Denote by f any element in the feature map and by f_1, ..., f_N the corresponding observations in a batch. Since the marginal distribution of a multivariate normal distribution is still normal, the probability density of a batch of f can be estimated by

p(f) = ∏_{i=1}^{N} (1/(√(2π) σ)) exp(−(f_i − µ)² / (2σ²)),  (2)

where µ and σ² are the mean and variance of f. The probability density function can then be decomposed into a factor that is independent of the distribution parameters and a factor that depends on them. By simplifying the product on the right-hand side of Eq. (2), we have

p(f) = (2πσ²)^{−N/2} exp(−(1/(2σ²)) ∑_i (f_i² − 2µf_i + µ²)).  (3)

By Eq. (3), the probability density p(f) can be decomposed into the form of a factor which does not depend on the distribution parameters µ, σ multiplied by another factor depending on µ, σ and the statistics ∑_i f_i² and ∑_i f_i. According to the Fisher-Neyman factorization theorem [9,35], ∑_i f_i² and ∑_i f_i are sufficient statistics of the normal distribution. Moreover, the mean and variance are also sufficient statistics of the normal distribution, because the statistics ∑_i f_i² and ∑_i f_i can be calculated from them. The sufficient statistics of the feature distribution include all the information of the distribution. Thus, we propose to simulate various feature distributions by disturbing the mean and variance of the original features. For this purpose, we insert a linear perturbation function with learnable parameters after each batch normalization layer. Denote the original intermediate features from a certain batch normalization layer as m^o ∈ R^{C×H×W}, where H, W are the spatial resolutions of the feature map. We initialize the scaling parameter γ ∈ R^C (for variance perturbation) and the bias term β ∈ R^C (for mean disturbance) by a normal distribution, similar to [40]. Then, the augmented feature m^a ∈ R^{C×H×W} is computed by

m^a_{c,h,w} = γ_c × m^o_{c,h,w} + β_c.  (4)

The learnable parameters γ, β are optimized by adversarial training, which will be elaborated in the next subsection.

Adversarial Feature Augmentation. If γ and β were directly learned by solving the optimization problem in Eq. (1), the augmentation module would not explore the distribution space that would enable the encoder to better handle tasks from a completely different domain. In this case, the AFA module fails to simulate the domain variations for domain-invariant feature learning.
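Before turning to the adversarial objective, Eq. (4) can be made concrete with a short sketch. The class name is ours, the initialization follows the distributions stated later in the experiments section, and we interpret the second argument of N(·,·) there as a standard deviation, which is an assumption.

```python
# Sketch of the AFA perturbation layer of Eq. (4): a learnable per-channel
# affine map inserted after each batch normalization layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AFALayer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        g_std = F.softplus(torch.tensor(0.5))      # gamma ~ N(1, softplus(0.5))
        b_std = F.softplus(torch.tensor(0.3))      # beta  ~ N(0, softplus(0.3))
        self.gamma = nn.Parameter(1.0 + g_std * torch.randn(channels))
        self.beta = nn.Parameter(b_std * torch.randn(channels))

    def forward(self, m_o):                        # m_o: (N, C, H, W) original features
        g = self.gamma.view(1, -1, 1, 1)           # variance disturbance of Eq. (4)
        b = self.beta.view(1, -1, 1, 1)            # mean disturbance of Eq. (4)
        return g * m_o + b                         # augmented features m^a
```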
In our method, we optimize the parameters of the AFA module, the domain discriminator D_d and the feature encoder E by adversarial training. Let us consider the original features F_o and the augmented features F_a as the seen and unseen domain, respectively. Denote the domain label of the input features as y_i^d: if the input features are from the seen domain, then y_i^d = 0; otherwise, y_i^d = 1. As in the DANN [12], the domain discriminator D_d(·): R^C → [0, 1] is defined as a logistic regressor, i.e.,

ŷ^d = D_d(F; w, b) = h(w^T F + b),  w ∈ R^C,  F = F_o or F_a,  (5)

where h(·) is the sigmoid function, w and b are learnable parameters of D_d, and ŷ^d is the predicted domain label of the original or augmented features. Then, the loss function given by the cross-entropy is

L_d(Ŷ^d, Y^d) = (1/(2N)) ∑_i [−y_i^d log(ŷ_i^d) − (1 − y_i^d) log(1 − ŷ_i^d)],  1 ≤ i ≤ 2N,  (6)

where Y^d, Ŷ^d are the sets of all the ground-truth and predicted domain labels. The domain discriminator can be trained by minimizing the loss function in Eq. (6) to distinguish between the seen and unseen domains. Besides the domain similarity measured on the final output features of the encoder in Eq. (6), we also measure the domain discrepancy at each AFA module inserted after a batch normalization layer. The gram matrices representing the domain information of m^o and m^a are calculated as

m̃ = Flatten(m),  m̃ ∈ R^{C×S},  S = HW,  (7)
G(m) = m̃ × m̃^T,  G(m) ∈ R^{C×C},  m = m^o or m^a.  (8)

Then, the domain discrepancy between the intermediate features m^o and m^a is determined by the distance between G(m^o) and G(m^a), i.e.,

L_g = (1/(4S²C²)) ∑_{i,j} (G_{i,j}(m^a) − G_{i,j}(m^o))².  (9)

By maximizing the gram-matrix loss L_g, the AFA module is trained to ensure that the augmented intermediate features in each layer are different from the original ones, to better mimic the target feature distribution. For adversarial training, the domain similarity loss L_d is maximized and the domain discrepancy loss L_g is minimized to learn the feature encoder E for distribution alignment. In summary, the optimization problem for adversarial training is

max_{θ_e} min_{θ_d, θ_a} L_D = L_d − L_g.  (10)

The min-max optimization problem in Eq. (10) can be solved with gradient reverse layers (GRLs) introduced in the DANN [12], which reverse the gradients during back propagation. As shown in the bottom of Fig. 2, the gradients of the domain discriminator D_d, the encoder E and the AFA module are updated by λ∂L_D/∂θ_d, −λ∂L_D/∂θ_e and λ∂L_D/∂θ_a, respectively, where λ is a hyper-parameter set empirically as in the DANN.
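The adversarial pieces above can be sketched as follows: a DANN-style gradient reverse layer, the logistic domain discriminator of Eq. (5), and the gram-matrix discrepancy of Eqs. (7)-(9). Tensor shapes and the batch averaging in the gram loss are our choices, not details from the paper.

```python
# Sketch of the adversarial components: a gradient reverse layer (GRL), the
# logistic domain discriminator of Eq. (5), and the gram loss of Eqs. (7)-(9).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):                 # flip (and scale) the gradient on the
        return -ctx.lam * grad, None         # way back, realizing the min-max of Eq. (10)

class DomainDiscriminator(nn.Module):        # Eq. (5): h(w^T F + b)
    def __init__(self, channels):
        super().__init__()
        self.linear = nn.Linear(channels, 1)

    def forward(self, feats):                # feats: (N, C), original or augmented
        return torch.sigmoid(self.linear(feats)).squeeze(-1)

def gram_loss(m_a, m_o):
    """Eqs. (7)-(9): distance between gram matrices of augmented/original maps."""
    n, c, h, w = m_o.shape
    s = h * w
    fa = m_a.reshape(n, c, s)                # Eq. (7): flatten each map to C x S
    fo = m_o.reshape(n, c, s)
    ga = fa @ fa.transpose(1, 2)             # Eq. (8): G(m) = m~ m~^T, shape (N, C, C)
    go = fo @ fo.transpose(1, 2)
    return ((ga - go) ** 2).sum(dim=(1, 2)).mean() / (4.0 * s**2 * c**2)  # Eq. (9)
```

Placing GradReverse.apply between the encoder and D_d lets ordinary gradient descent on L_D update all three parameter sets with the opposite signs required by Eq. (10).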
Comparison with FT [40]. Both the feature-wise transformation (FT) [40] and our method aim at transforming image features to simulate various feature distributions. Our method takes full advantage of the original and augmented features to explicitly bridge the domain gap by adversarial training. Thus, the distribution variations can be imitated using only a single source domain for training. FT, by contrast, relies on multiple source domains to learn the optimal feature transformation parameters. Under the single source domain setting, the transformation parameters are set as constants, so FT may suffer from a performance drop, as shown in our experiments.

Comparison with ATA [44]. The adversarial task augmentation (ATA) method employs adversarial training to search for the worst-case tasks around the source task distribution. In this way, the space of the source task distribution could be enlarged, so that it may be closer to the task distribution in the target domain. Nevertheless, the perturbation of source tasks degrades the performance on the unseen classes of the source domain compared to other competitive models. Different from task augmentation, we propose feature augmentation with adversarial training via gradient reverse layers to learn domain-invariant features without this performance degradation. Moreover, the ATA may not fully utilize the available information, since only one of the generated tasks or the original tasks is used for training. In our method, both the original and augmented features are used to train the domain discriminator. At the same time, the proposed gram-matrix loss helps to generate unseen augmented features by maximizing the difference to the original features. In addition, ATA is more computationally expensive, as it finds the worst-case tasks via gradient ascent, as shown in the complexity comparison in the supplementary material.

Experiments

In this section, we evaluate the proposed adversarial feature augmentation (AFA) module inserted into the matching network (MN) [42], graph neural network (GNN) [31] and transductive propagation network (TPN) [25]. We compare our method with the feature-wise transformation (FT) [40], explanation-guided training (LRP) [37] and adversarial task augmentation (ATA) [44].

Experimental Setting. Datasets. In this work, nine publicly available benchmarks are used for experiments, i.e., mini-ImageNet [42], CUB [45], Cars [20], Places [51], Plantae [18], CropDiseases, EuroSAT, ISIC and ChestX. Following the experimental setting of previous works [40,44], we split these datasets into train/val/test sets, which are further divided into k-shot n-class support sets and n-class query sets. We use the mini-ImageNet dataset as the source domain and select the models with the best accuracy on the validation set of mini-ImageNet for testing.

Implementation Details. Our model can be integrated into existing meta-learning methods, e.g., MN [42], GNN [31], TPN [25]. In these methods, we use the ResNet-10 [17] with the proposed AFA module as the feature encoder. The scaling term γ ∼ N(1, softplus(0.5)) and the bias term β ∼ N(0, softplus(0.3)) are sampled from normal distributions for initialization. To ensure a fair comparison with the FT [40], LRP [37], ATA [44] and the baseline methods, we follow the training protocol from [5]. Empirically, the proposed model is trained with a learning rate of 0.001 for 40,000 iterations. The performance measure is the average over 2,000 trials with randomly sampled batches. There are 16 query samples and 5-way 5-shot/1-shot support samples for each trial.

Pre-trained feature encoder. Before the few-shot training stage, we apply an additional pre-training strategy as in FT [40], LRP [37] and ATA [44] for fair comparison. The feature encoder is pre-trained by minimizing the standard cross-entropy classification loss on the 64 training categories of the mini-ImageNet dataset (the same categories as in few-shot training).
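For quick reference, the stated setup can be collected into a single sketch configuration; the keys are our names, and the values are the ones quoted above.

```python
# The training setup stated above, gathered into one illustrative config dict.
config = dict(
    backbone="ResNet-10",              # encoder E, AFA after every BatchNorm layer
    source_domain="mini-ImageNet",     # single source domain
    n_way=5, k_shot=(1, 5), n_query=16,
    learning_rate=1e-3, iterations=40_000,
    eval_trials=2_000,                 # accuracy averaged over sampled tasks
    gamma_init="N(1, softplus(0.5))",  # AFA scaling term
    beta_init="N(0, softplus(0.3))",   # AFA bias term
)
```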
Results on Benchmarks. We train each model using mini-ImageNet as the source domain and evaluate it on the other eight target domains, i.e., CUB, Cars, Places, Plantae, CropDiseases, EuroSAT, ISIC and ChestX. In our method, the AFA module is inserted after each batch normalization layer of the feature encoder during the training stage. All the results are shown in Table 1.

[Table 1. Few-shot classification accuracy (%) of the 5-way 5-shot/1-shot setting, trained on the mini-ImageNet dataset and tested on various datasets from target domains. The best results in different settings are in bold.]

We have the following observations from these results: i. Our method outperforms the state of the art for almost all the datasets and shot settings in different meta-learning methods. For 1-shot classification, our method improves the baselines by 3.45% on average over the eight datasets in different models. In the 5-shot setting, the average improvement is 4.25% compared to the baselines. ii. Compared to the competitive ATA [44], our method integrated with the proposed AFA achieves an average improvement of about 1%.

Ablation Experiments

Effect of the domain discriminator. As mentioned in Section 3.1, we apply the domain discriminator to maximize the discrepancy between the augmented and original features. In this experiment, we perform an ablation of the domain discriminator by training the AFA via the classification loss function L_c but without the domain discriminator. The classification accuracy on various datasets is reported in the second line of Table 2. Based on the results, we have the following observations: i. When using the AFA without the domain discriminator, the performance degrades. This indicates that training with the domain discriminator can improve the performance on various datasets under settings with different numbers of shots (e.g., an average 2.48% improvement in the 1-shot setting). ii. Compared to the baseline (MN), the AFA without the domain discriminator also helps to generalize to various domains in most cases. These results demonstrate that adversarial training can alleviate the antagonistic action between the class discriminator and the feature augmentation module. iii. Although training the AFA module with the classification loss can improve the performance on most datasets, it leads to a decline on the CropDiseases dataset compared with the baseline.

Effect of the gram-matrix loss. The gram-matrix loss function measures the difference between the augmented and original features at each AFA module. The accuracy on various datasets is reported in the third line of Table 2. Compared with the results in the last line, we find that the gram-matrix loss brings about a 2.57% improvement. Combining the domain discriminator and the gram-matrix loss leads to the best results for novel classes. The main reason is that these two components contribute complementary improvements on the global and local discrepancy between the augmented and original features.

How about non-linear transformations? In Section 3.2, we introduce a linear perturbation in the AFA module to mimic various feature distributions by disturbing the sufficient statistics of the original feature distribution. Here, we replace the linear transformation with non-linear transformation (convolution) layers to generate unseen feature distributions. The classification accuracy is reported in the fourth line of Table 2. As we can see, the non-linear transformation cannot bring an obvious improvement and even performs worse. This supports the theoretical justification of our method based on sufficient statistics: disturbing the mean and variance is better for generalizing to the target domain.

Results of base classes and novel classes. Since meta-learning methods may show inconsistent results for base and novel classes, we report both novel- and base-class accuracy (%) for comparison in Fig. 3. Here the performance on the novel classes is the average accuracy over the eight datasets, i.e., CUB, Cars, Places, Plantae, CropDiseases, EuroSAT, ISIC and ChestX. The base classes are the remaining categories of the mini-ImageNet dataset, different from the categories used for training.
Our proposed module with GNN performs better than the baseline (GNN) on both the base and novel classes, which indicates that the AFA module does not sacrifice base-class performance to cope with cross-domain few-shot learning. The red dashed line for the base classes in the 1-shot setting shows that the graph neural network (GNN) with feature-wise transformation [40] has a slight improvement over our model on the base classes, but its performance on the novel classes degrades significantly. Moreover, compared to the competitive ATA [44], our method remarkably improves the performance on the base classes. Our method surpasses all the related works in performance on the novel classes and also achieves competitive results on the base classes. All these results demonstrate that our method gives the best balance between base and novel classes and classifies the samples of novel classes well.

Comparison with Fine-tuning. As mentioned by Guo et al. [15], under domain shift, traditional pre-training and fine-tuning methods perform better than meta-learning methods in the few-shot setting. This experiment verifies the superiority of meta-learning methods equipped with our module over traditional pre-training and fine-tuning under the cross-domain few-shot setting. For a fair comparison, we follow Wang et al. [44], i.e., using data augmentation for fine-tuning in target tasks. Given a target task T formed by the k-shot n-way samples as the support set and n × 15 pseudo samples as the query set, the pseudo samples of the query set are generated from the support samples using the data augmentation method from [48]. For pre-training and fine-tuning, we first pre-train the model with the source tasks composed of the mini-ImageNet dataset. Then, the trained feature encoder is used for initialization, and a fully connected layer is used as the discriminator to solve the unseen tasks mentioned above for fine-tuning. We use the SGD optimizer with a learning rate of 0.01, the same as in [15]. For the meta-learning methods with our proposed module, we initialize the parameters of the model by meta-learning on the source tasks and then use the same support and query samples of the target tasks as above. We apply the Adam optimizer with a learning rate of 0.001. Both the fine-tuning and meta-learning methods are fine-tuned for 50 epochs under the 5-shot/1-shot 5-way setting. Since the data used for training are consistent across all models, the comparison is fair. As shown in Table 3, our method consistently outperforms traditional pre-training and fine-tuning.

Conclusions

In this paper, we present a novel method, namely adversarial feature augmentation (AFA), which generates augmented features to simulate domain variations and improve the generalization ability of meta-learning models. Based on sufficient statistics of the normal distribution, the feature augmentation module is designed as a perturbation of the feature mean and variance. In adversarial training, the AFA module is learned by maximizing the domain discrepancy with the domain discriminator, while the feature encoder is optimized by confusing the seen and unseen domains.
Experimental results on nine datasets show that the proposed AFA improves the performance of meta-learning baselines and outperforms existing works for cross-domain few-shot classification in most cases.

[Fig. 3. Accuracy (%) of the baseline (GNN), FT, LRP, ATA and our model for 1/5-shot cross-domain classification on both novel classes and base classes.]

Table 3. Accuracy (%) of fine-tuning with the augmented support dataset from the target domain and of our model for 1/5-shot 5-way classification on the target domains. * means the method is fine-tuned with target tasks generated through data augmentation. Bold indicates the best results.

Method/shot | CUB 1-shot | CUB 5-shot | Cars 1-shot | Cars 5-shot | Places 1-shot | Places 5-shot | Plantae 1-shot | Plantae 5-shot
Fine-tuning | 43.53±0.4 | 63.76±0.4 | 35.12±0.4 | 51.21±0.4 | 50.57±0.4 | 70.68±0.4 | 38.77±0.4 | 56.45±0.4
MN+Ours*   | 43.62±0.4 | 68.73±0.4 | 36.83±0.4 | 52.53±0.4 | 52.82±0.5 | 71.56±0.4 | 38.56±0.4 | 56.50±0.4
GNN+Ours*  | 47.40±0.5 | 70.33±0.5 | 36.50±0.4 | 55.75±0.5 | 55.34±0.6 | 76.92±0.4 | 39.97±0.4 | 59.58±0.5
TPN+Ours*  | 48.05±0.5 | 67.78±0.4 | 38.45±0.4 | 54.89±0.4 | 57.27±0.5 | 73.06±0.4 | 40.85±0.4 | 59.04±0.4

Method/shot | CropDiseases 1-shot | CropDiseases 5-shot | EuroSAT 1-shot | EuroSAT 5-shot | ISIC 1-shot | ISIC 5-shot | ChestX 1-shot | ChestX 5-shot
Fine-tuning | 73.43±0.5 | 89.84±0.3 | 66.17±0.5 | 81.59±0.3 | 34.60±0.3 | 49.51±0.3 | 22.13±0.2 | 25.37±0.2
MN+Ours*   | 74.67±0.4 | 90.53±0.3 | 66.48±0.5 | 82.00±0.3 | 34.58±0.3 | 48.46±0.3 | 22.29±0.2 | 25.80±0.3
GNN+Ours*  | 74.80±0.5 | 95.66±0.2 | 69.64±0.6 | 89.56±0.4 | 35.33±0.4 | 50.44±0.4 | 22.25±0.2 | 24.96±0.2
TPN+Ours*  | 81.89±0.5 | 93.67±0.2 | 70.37±0.5 | 86.68±0.2 | 34.88±0.4 | 50.17±0.3 | 22.65±0.2 | 24.79±0.2

References

[1] Bateni, P., Barber, J., van de Meent, J., Wood, F.: Enhancing few-shot image classification with unlabelled examples. In: WACV (2022)
[2] Bateni, P., Goyal, R., Masrani, V., Wood, F., Sigal, L.: Improved few-shot visual classification. In: CVPR (2020)
[3] Bronskill, J., Gordon, J., Requeima, J., Nowozin, S., Turner, R.E.: Tasknorm: Rethinking batch normalization for meta-learning. In: ICML (2020)
[4] Bronskill, J., Massiceti, D., Patacchiola, M., Hofmann, K., Nowozin, S., Turner, R.: Memory efficient meta-learning with large images. In: NeurIPS (2021)
[5] Chen, W., Liu, Y., Kira, Z., Wang, Y.F., Huang, J.: A closer look at few-shot classification. In: ICLR (2019)
[6] Deng, W., Su, Z., Qiu, Q., Zhao, L., Kuang, G., Pietikäinen, M., Xiao, H., Liu, L.: Deep ladder reconstruction-classification network for unsupervised domain adaptation. Pattern Recognit. Lett. 152, 398-405 (2021)
[7] Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: ICML. pp. 1126-1135 (2017)
[8] Finn, C., Abbeel, P., et al.: Model-agnostic meta-learning for fast adaptation of deep networks. In: ICML. vol. 70, pp. 1126-1135 (2017)
[9] Fisher, R.A.: On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society of London. Series A, containing papers of a mathematical or physical character 222(594-604), 309-368 (1922)
[10] Frikha, A., Krompaß, D., Köpken, H., Tresp, V.: Few-shot one-class classification via meta-learning. In: AAAI. pp. 7448-7456 (2021)
[11] Ganin, Y., Lempitsky, V.S.: Unsupervised domain adaptation by backpropagation. In: Bach, F.R., Blei, D.M. (eds.) ICML. vol. 37, pp. 1180-1189 (2015)
[12] Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., Lempitsky, V.S.: Domain-adversarial training of neural networks. pp. 189-209 (2017)
[13] Ghifary, M., Kleijn, W.B., Zhang, M., Balduzzi, D., Li, W.: Deep reconstruction-classification networks for unsupervised domain adaptation. In: ECCV. vol. 9908, pp. 597-613 (2016)
[14] Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: ICLR (2015)
[15] Guo, Y., Codella, N., Karlinsky, L., Codella, J.V., Smith, J.R., Saenko, K., Rosing, T., Feris, R.: A broader study of cross-domain few-shot learning. In: ECCV. pp. 124-141 (2020)
[16] He, D., Zhou, Z., Gan, C., Li, F., Liu, X., Li, Y., Wang, L., Wen, S.: Stnet: Local and global spatial-temporal modeling for action recognition. In: AAAI. pp. 8401-8408 (2019)
[17] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR. pp. 770-778 (2016)
[18] Horn, G.V., Aodha, O.M., Song, Y., Cui, Y., Sun, C., Shepard, A., Adam, H., Perona, P., Belongie, S.J.: The inaturalist species classification and detection dataset. In: CVPR. pp. 8769-8778 (2018)
[19] Hsu, H., Yao, C., Tsai, Y., Hung, W., Tseng, H., Singh, M.K., Yang, M.: Progressive domain adaptation for object detection. In: WACV. pp. 738-746. IEEE (2020)
[20] Krause, J., Stark, M., Deng, J., Fei-Fei, L.: 3d object representations for fine-grained categorization. In: ICCV. pp. 554-561. IEEE Computer Society (2013)
[21] Lake, B.M., Salakhutdinov, R., Tenenbaum, J.B.: Human-level concept learning through probabilistic program induction. Science 350(6266), 1332-1338 (2015)
[22] Li, X., Wang, W., Hu, X., Yang, J.: Selective kernel networks. In: CVPR. pp. 510-519 (2019)
[23] Li, Y., Yang, Y., Zhou, W., Hospedales, T.M.: Feature-critic networks for heterogeneous domain generalization. In: ICML. vol. 97, pp. 3915-3924 (2019)
[24] Liu, B., Cao, Y., Lin, Y., Li, Q., Zhang, Z., Long, M., Hu, H.: Negative margin matters: Understanding margin in few-shot classification. In: ECCV. vol. 12349, pp. 438-455 (2020)
[25] Liu, Y., Lee, J., Park, M., Kim, S., Yang, E., Hwang, S.J., Yang, Y.: Learning to propagate labels: Transductive propagation network for few-shot learning. In: ICLR (2019)
[26] Long, M., Cao, Y., Wang, J., Jordan, M.I.: Learning transferable features with deep adaptation networks. In: ICML. pp. 97-105 (2015)
[27] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: ICLR (2018)
[28] Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning. In: ICLR (2017)
[29] Requeima, J., Gordon, J., Bronskill, J., Nowozin, S., Turner, R.E.: Fast and flexible multi-task classification using conditional neural adaptive processes. In: NeurIPS (2019)
[30] Saito, K., Kim, D., Sclaroff, S., Darrell, T., Saenko, K.: Semi-supervised domain adaptation via minimax entropy. In: ICCV. pp. 8049-8057. IEEE (2019)
[31] Satorras, V.G., Estrach, J.B.: Few-shot learning with graph neural networks. In: ICLR (2018)
[32] Shafahi, A., Najibi, M., Ghiasi, A., Xu, Z., Dickerson, J.P., Studer, C., Davis, L.S., Taylor, G., Goldstein, T.: Adversarial training for free! In: NeurIPS. pp. 3353-3364 (2019)
[33] Sinha, A., Namkoong, H., Duchi, J.C.: Certifying some distributional robustness with principled adversarial training. In: ICLR (2018)
[34] Snell, J., Swersky, K., Zemel, R.S.: Prototypical networks for few-shot learning. In: NeurIPS. pp. 4077-4087 (2017)
[35] Splawa-Neyman, J., Dabrowska, D.M., Speed, T.: On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Statistical Science, pp. 465-472 (1990)
[36] Sui, D., Chen, Y., Mao, B., Qiu, D., Liu, K., Zhao, J.: Knowledge guided metric learning for few-shot text classification. In: NAACL-HLT. pp. 3266-3271. Association for Computational Linguistics (2021)
[37] Sun, J., Lapuschkin, S., Samek, W., Zhao, Y., Cheung, N., Binder, A.: Explanation-guided training for cross-domain few-shot classification. In: ICPR. pp. 7609-7616 (2020)
[38] Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: CVPR. pp. 1199-1208 (June 2018)
[39] Tan, M., Le, Q.V.: Efficientnet: Rethinking model scaling for convolutional neural networks. In: ICML. vol. 97, pp. 6105-6114 (2019)
[40] Tseng, H., Lee, H., Huang, J., Yang, M.: Cross-domain few-shot classification via learned feature-wise transformation. In: ICLR (2020)
[41] Tzeng, E., Hoffman, J., Saenko, K., Darrell, T.: Adversarial discriminative domain adaptation. In: CVPR. pp. 2962-2971 (2017)
[42] Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., Wierstra, D.: Matching networks for one shot learning. In: NeurIPS. pp. 3630-3638 (2016)
[43] Volpi, R., Namkoong, H., Sener, O., Duchi, J.C., Murino, V., Savarese, S.: Generalizing to unseen domains via adversarial data augmentation. In: NeurIPS. pp. 5339-5349 (2018)
[44] Wang, H., Deng, Z.: Cross-domain few-shot classification via adversarial task augmentation. In: Zhou, Z. (ed.) IJCAI. pp. 1075-1081 (2021)
[45] Welinder, P., Branson, S., Mita, T., Wah, C., Schroff, F., Belongie, S., Perona, P.: Caltech-UCSD Birds 200 (2010)
[46] Wu, F., Smith, J.S., Lu, W., Pang, C., Zhang, B.: Attentive prototype few-shot learning with capsule network-based embedding. In: ECCV. vol. 12373, pp. 237-253 (2020)
[47] Wu, W., He, D., Lin, T., Li, F., Gan, C., Ding, E.: Mvfnet: Multi-view fusion network for efficient video recognition. In: AAAI. pp. 2943-2951 (2021)
[48] Yeh, J., Lee, H., Tsai, B., Chen, Y., Huang, P., Hsu, W.H.: Large margin mechanism and pseudo query set on cross-domain few-shot learning. CoRR abs/2005.09218 (2020)
[49] Yue, X., Zheng, Z., Zhang, S., Gao, Y., Darrell, T., Keutzer, K., Sangiovanni-Vincentelli, A.L.: Prototypical cross-domain self-supervised learning for few-shot unsupervised domain adaptation. In: CVPR. pp. 13834-13844. IEEE (2021)
[50] Zellinger, W., Grubinger, T., Lughofer, E., Natschläger, T., Saminger-Platz, S.: Central moment discrepancy (CMD) for domain-invariant representation learning. In: ICLR (2017)
[51] Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., Torralba, A.: Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 40(6), 1452-1464 (2017)
[]
[ "Maximally and minimally correlated states attainable within a closed evolving system", "Maximally and minimally correlated states attainable within a closed evolving system" ]
[ "Sania Jevtic \nDepartment of Physics\nControlled Quantum Dynamics Theory\nImperial College London\nSW7 2AZLondon\n", "David Jennings \nDepartment of Physics\nControlled Quantum Dynamics Theory\nImperial College London\nSW7 2AZLondon\n", "Terry Rudolph \nDepartment of Physics\nControlled Quantum Dynamics Theory\nImperial College London\nSW7 2AZLondon\n" ]
[ "Department of Physics\nControlled Quantum Dynamics Theory\nImperial College London\nSW7 2AZLondon", "Department of Physics\nControlled Quantum Dynamics Theory\nImperial College London\nSW7 2AZLondon", "Department of Physics\nControlled Quantum Dynamics Theory\nImperial College London\nSW7 2AZLondon" ]
[]
The amount of correlation attainable between the components of a quantum system is constrained if the system is closed. We provide some examples, largely from the field of quantum thermodynamics, where knowing the maximal possible variation in correlations is useful. The optimization problem it raises requires us to search for the maximally and minimally correlated states on a unitary orbit, with and without energy conservation. This is fully solvable for the smallest system of two qubits. For larger systems, the problem is reduced to a manageable, classical optimization.
10.1103/physrevlett.108.110403
[ "https://export.arxiv.org/pdf/1110.2371v4.pdf" ]
6,588,362
1110.2371
6d79cf189914aec9b31dc34889f78a39cbf50484
Maximally and minimally correlated states attainable within a closed evolving system

Sania Jevtic, David Jennings, and Terry Rudolph
Department of Physics, Controlled Quantum Dynamics Theory, Imperial College London, SW7 2AZ London
(Dated: October 12, 2011)
PACS numbers: 03.65.Ta, 03.67.Mn, 05.70.Ln
The amount of correlation attainable between the components of a quantum system is constrained if the system is closed. We provide some examples, largely from the field of quantum thermodynamics, where knowing the maximal possible variation in correlations is useful. The optimization problem it raises requires us to search for the maximally and minimally correlated states on a unitary orbit, with and without energy conservation. This is fully solvable for the smallest system of two qubits. For larger systems, the problem is reduced to a manageable, classical optimization.

The idealized notion of a closed system is central to both classical and quantum mechanics, across scales from the microscopic to the universe itself. Here, we concern ourselves with the quantum mechanical version of a fundamental question: In the interactions between the constituent components of a closed system, to what extent does the closure of the system constrain the correlations attainable?

We focus on the simplest case, where we divide the closed system into two parts and the correlations between these are quantified by the mutual information.
For a given bipartite state of the system we therefore seek the two extremal (minimally and maximally) correlated states under all evolution that does not change the total entropy. We will also consider the case of evolution that obeys the additional restriction of energy conservation, either in a weak sense (the expected energy stays constant) or a strong sense (the interaction commutes with the free Hamiltonians of the two subsystems).

We find that the answer to these problems, particularly for the case of the minimal attainable correlation, has a surprisingly rich mathematical structure. Because of the foundational nature of this result it can be applied to a range of problems. Before turning to our technical results, we present in some detail three such examples from the field of quantum thermodynamics.
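All three examples revolve around the von Neumann entropies of a bipartite state and its marginals. As a common reference for the sketches that accompany them below, here is a minimal numerical toolkit; the helper names are ours, not from the paper, and entropies are computed in nats.

```python
# Minimal numerical helpers for the examples below (names are ours).
# dims = (d_A, d_B) gives the local dimensions of the bipartite system.
import numpy as np

def vn_entropy(rho):
    """von Neumann entropy S(rho) = -tr(rho log rho)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]           # discard numerical zeros
    return float(-(evals * np.log(evals)).sum())

def partial_trace(rho, keep, dims):
    """Reduced state of subsystem A (keep=0) or B (keep=1)."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return r.trace(axis1=1, axis2=3) if keep == 0 else r.trace(axis1=0, axis2=2)

def mutual_info(rho, dims):
    """Quantum mutual information I(rho) = S(rho_A) + S(rho_B) - S(rho)."""
    return (vn_entropy(partial_trace(rho, 0, dims))
            + vn_entropy(partial_trace(rho, 1, dims))
            - vn_entropy(rho))
```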
Example 2: Anomalous heat flow in the presence of correlations. It is known for two subsystems of a closed system, each initially in a thermal state, that the traditional thermodynamic flow of heat from hot to cold can be distorted by the presence of correlations [3, 4]. Indeed, with sufficiently strong correlations, a substantial amount of heat can be made to flow anomalously from the colder to the hotter system. What are the limitations on this process? Again, let $\rho$ be the initial joint state of the two systems, $\mu \in \{A, B\}$. By assumption, each subsystem is initially in a thermal (Gibbs) state $\rho_\mu = \rho^{\mathrm{th}}_\mu = e^{-\beta_\mu H_\mu}/Z_\mu$ at temperature $\beta_\mu^{-1} = kT_\mu$, where $Z_\mu = \mathrm{tr}(e^{-\beta_\mu H_\mu})$ is the partition function. The subsystems interact, either by switching on a known controlled interaction for some finite time or by a scattering process, and the composite state $\rho$ evolves to a final state $\rho'$, which has local states $\rho'_A$, $\rho'_B$. The free energy functional $F_{H,T}[\rho] := \mathrm{tr}(\rho H) - kT\, S(\rho)$ is obtained from the relative entropy function with respect to the Gibbs state and is defined over the full state space. It is minimized by the thermal state $e^{-\beta H}/\mathrm{tr}(e^{-\beta H})$, $\beta^{-1} = kT$, and its value coincides with the usual thermodynamic free energy. Thus each subsystem satisfies the inequality $F_{H_\mu,T_\mu}[\rho'_\mu] - F_{H_\mu,T_\mu}[\rho_\mu] \ge 0$ for any state $\rho'_\mu$ (originating from the positivity of the relative entropy), which when added together yield
$$\beta_A Q_A + \beta_B Q_B \ge \Delta S_A + \Delta S_B, \qquad (1)$$
where $Q_\mu = \mathrm{tr}(\rho'_\mu H_\mu) - \mathrm{tr}(\rho_\mu H_\mu)$ is the heat [20] into system $\mu$. Note that this inequality only demands that an initial temperature be defined, and no further restrictions on $\rho'_\mu$ are needed at this stage. Under the closed system constraints of constant total entropy and constant energy, $Q_A + Q_B = 0$, we can write (1) as
$$Q_A \left( \frac{1}{kT_A} - \frac{1}{kT_B} \right) \ge \Delta I. \qquad (2)$$
This inequality provides directionality for any energy conserving process. It relies on local initial properties but also depends on non-local correlations. Any initial correlations, up to the constraint of thermal marginals, are permitted, and the bound is independent of any assumptions on interaction strength, in contrast to several previous considerations of the thermodynamics of open quantum systems where weak coupling between the system and the bath is required [5-7]. We are interested in the evolution of a closed system which in itself displays thermodynamic behaviour. In standard thermodynamics it is assumed that the interacting systems are initially uncorrelated, rendering the entropy additive: $\rho = \rho_A \otimes \rho_B$ and thus $I(\rho) = 0$. As the interaction cannot decorrelate $A$ and $B$ any further, $I(\rho') \ge I(\rho)$, and it follows that the left hand side of Eq. (2) must be non-negative. This means that when $T_A \le T_B$ it must be the case that $Q_A \ge 0$, and heat flows in the standard manner, from hot to cold. In general, however, systems $A$ and $B$ could initially possess correlations [21], in which case the interaction could lower the QMI. If $\Delta I < 0$ then there is no longer an absolute restriction on the direction of heat flow, and for a suitably chosen interaction we will deterministically observe heat being transferred from the colder to the hotter body. We call this anomalous heat flow (AHF). Even though the local entropies have decreased and negative heat flow has occurred, after the local measurement of the individual energies the system is left uncorrelated, and thus one cannot cause heat to flow from cold to hot in a cyclic process, thus saving the second law. In this sense correlations are a resource.
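The directionality claim can be probed numerically. The following sketch is our own illustration, with $k = 1$, natural logarithms, and an arbitrarily chosen partial-swap interaction (which is strongly energy conserving when the local Hamiltonians are equal). Starting from uncorrelated thermal qubits, it checks both $Q_A \ge 0$ (heat flows hot to cold) and inequality (2).

```python
import numpy as np
from numpy.linalg import eigvalsh

def S(rho):
    """Von Neumann entropy, natural logarithms (thermodynamic convention)."""
    lam = eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

bA, bB = 2.0, 0.5                  # inverse temperatures (k = 1); A is colder
H1 = np.diag([0.0, 1.0])           # local Hamiltonian |1><1| for both qubits

def gibbs(b):
    r = np.diag(np.exp(-b * np.diag(H1)))
    return r / np.trace(r)

rho = np.kron(gibbs(bA), gibbs(bB))        # initially uncorrelated thermal state

SWAP = np.eye(4)[[0, 2, 1, 3]]
theta = 0.7
U = np.cos(theta) * np.eye(4) + 1j * np.sin(theta) * SWAP   # partial swap, unitary
rho2 = U @ rho @ U.conj().T

def margs(r):
    t = r.reshape(2, 2, 2, 2)
    return np.einsum('ijkj->ik', t), np.einsum('ijik->jk', t)

rA, rB = margs(rho)
rA2, rB2 = margs(rho2)
QA = np.trace((rA2 - rA) @ H1).real        # heat into A
dI = (S(rA2) + S(rB2) - S(rho2)) - (S(rA) + S(rB) - S(rho))
print(QA >= 0)                             # uncorrelated start: hot -> cold
print(QA * (bA - bB) >= dI - 1e-12)        # Eq. (2) holds
```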
To observe a large AHF, the initial state of the system would have to be very correlated, possibly entangled. Indeed, AHF constitutes a discriminating feature between quantum and classical thermodynamics, and may be used as an operational indicator of entanglement [4] that does not require knowledge of the joint initial state of the two systems! This is easily seen, since the QMI over separable states is bounded from above by $\log(\min\{d_A, d_B\})$, while for the full quantum state space the bound is twice this. Therefore when $\Delta I > \log(\min\{d_A, d_B\})$ the initial state $\rho$ must be nonseparable, and in turn, any transfer of heat from the colder to the hotter body of an amount greater than $\log(\min\{d_A, d_B\})/|\beta_A - \beta_B|$ indicates the presence of entanglement [4]. Keeping in mind the additional constraint of equal energies for $\rho$ and $\rho'$ included in this example, the quantity of AHF possible in a closed system is bounded by the largest $\Delta I$ that can be obtained reversibly. Once again, the determination of such a fundamental limitation reduces to our general problem.

Example 3: Partovi/Peres collision model of equilibration. In Ref. [8] Partovi proposed a collision model of equilibration, later simplified by Peres [9]. Two ingredients are required in the collision process: firstly an increase in the local entropies, which is achieved by interacting two initially uncorrelated quantum systems via a (strongly) energy conserving unitary, and secondly irreversibility, causing a growth of the total entropy of the system. In the model the latter is enforced by assuming that the two systems decorrelate after interacting. One full collision can be written as $\rho = \rho_A \otimes \rho_B \to \rho' = U\rho U^\dagger \to \rho'_A \otimes \rho'_B$, with $S(\rho'_A) + S(\rho'_B) \ge S(\rho_A) + S(\rho_B)$. This process is reiterated, and it can be shown that the systems reach a stationary state of equal temperature. The second requirement of complete decorrelation to a product state is very stringent, given that physical systems typically dephase (i.e. off-diagonal "coherences" of the density matrix decay) much more rapidly than they completely decorrelate. A natural question therefore is whether the systems can retain some minimal amount of correlation and still reach equilibrium. Part of the solution to Examples 1 and 2 is finding the state which has the minimum QMI on a unitary orbit: when the two interacting particles are qubits, we can use this result to show that, after the unitary part of the collision, if the qubits dephase to this minimally correlated state (which is not a product state) then equilibration is still achieved [10].

Overview of the general solution: Given an $N = d_A d_B$-dimensional bipartite state $\rho$ with spectrum $\Lambda = \{\lambda_i\}$, our goal is to find $\rho_{\min}$ ($\rho_{\max}$), defined, modulo local unitary transformations, as the state for which $I$ is minimal (maximal) over the unitary orbit [11], $\mathcal{O} = \{\tau : \tau = U\rho U^\dagger\}$, for all unitaries $U$ of dimension $N$. For simplicity we do not demand energy conservation for now, but revisit it later when we consider a two qubit system. Finding the maximally correlated state is hard classically [10] but fairly straightforward over the space of quantum states. We can always find a unitary that transforms a state to
$$\rho_{\max} = \sum_{i=1}^{N} \lambda_i\, |\Phi_i\rangle\langle\Phi_i|, \qquad (3)$$
where $\{|\Phi_i\rangle\}$ is any generalized Bell state basis [12] with $N = (\min\{d_A, d_B\})^2$, obtained from the Schmidt decomposition. Since $\mathrm{tr}_A(|\Phi_i\rangle\langle\Phi_i|) \propto \mathbb{I}_B$ for all $i$, we deduce that also $\mathrm{tr}_A(\rho_{\max}) \propto \mathbb{I}_B$ and in turn $I(\rho_{\max}) = 2\log(\min\{d_A, d_B\}) - H(\Lambda)$. This is the maximum attainable value of the QMI over all state space, with a reduction by the amount $H(\Lambda) = -\sum_i \lambda_i \log \lambda_i$, the Shannon entropy, because of the restriction to a unitary orbit.
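For two qubits the construction of $\rho_{\max}$ in Eq. (3) can be made completely explicit. The sketch below is illustrative: the spectrum is an arbitrary example and logarithms are base 2, so $2\log(\min\{d_A, d_B\}) = 2$. It builds the generalized Bell basis $(X^a Z^b \otimes I)|\Phi^+\rangle$, assembles $\rho_{\max}$, and verifies that its marginals are maximally mixed and $I(\rho_{\max}) = 2 - H(\Lambda)$.

```python
import numpy as np
from numpy.linalg import eigvalsh, matrix_power

def H2(p):
    """Shannon entropy of a distribution, base 2."""
    p = np.asarray(p, float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def S(rho):
    return H2(eigvalsh(rho))

# Generalized Bell basis for two qubits: (X^a Z^b (x) I) |Phi+>.
phi = np.array([1, 0, 0, 1.0]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0.0]]); Z = np.diag([1, -1.0]); I2 = np.eye(2)
bells = [np.kron(matrix_power(X, a) @ matrix_power(Z, b), I2) @ phi
         for a in range(2) for b in range(2)]

lam = np.array([0.6, 0.3, 0.1, 0.0])         # sample spectrum, non-increasing
rho_max = sum(l * np.outer(v, v.conj()) for l, v in zip(lam, bells))

t = rho_max.reshape(2, 2, 2, 2)
rA = np.einsum('ijkj->ik', t); rB = np.einsum('ijik->jk', t)
qmi = S(rA) + S(rB) - S(rho_max)
print(np.allclose(rA, I2 / 2), np.allclose(rB, I2 / 2))  # maximally mixed marginals
print(np.isclose(qmi, 2 - H2(lam)))                      # I(rho_max) = 2 - H(Lambda)
```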
Finding the minimally correlated state is considerably harder: because the total spectrum of the state is fixed, given an initial state $\rho$ there does not always exist a unitary transformation that can decorrelate its subsystems. Hence $I(\rho_{\min}) \ge 0$ in general and, unlike for $\rho_{\max}$, the minimum sum of the local entropies depends on $\Lambda$. The challenge is to optimize over the set of reduced states compatible with a composite system having a fixed spectrum $\Lambda$. Finding the set of allowed such reduced states is the highly nontrivial "quantum marginal problem" [13-15]. The initial difficulty is that the optimization problem is not convex. There does not even appear to be a simple argument that the minimally correlated state should be separable, although intuitively it seems reasonable that this should be the case. In fact we are able to prove something stronger: the minimum of the quantum mutual information $I(\rho)$ over the unitary orbit is attained for a classically correlated state
$$\rho_{\min} = \sum_{j,k} \lambda_{jk}\, |e_j\rangle\langle e_j| \otimes |f_k\rangle\langle f_k|, \qquad (4)$$
where $\lambda_{jk}$, $j = 1, \dots, d_A$, $k = 1, \dots, d_B$, is a reindexing of the $\lambda_i$, and $\{|e_j\rangle\}$, $\{|f_k\rangle\}$ are orthonormal basis states for systems $A$ and $B$. That is, the minimum of the QMI over the unitary orbit equals $H(\sum_j \Pi(\lambda_{jk})) + H(\sum_k \Pi(\lambda_{jk})) - H(\Lambda)$, where the first two terms are the Shannon entropies of the marginals of some permutation $\Pi$ of the eigenvalues $\lambda_{jk}$.

To prove this, we consider the function $G[\sigma_A, \sigma_B] = S[\sigma_A] + S[\sigma_B]$ defined over the convex hull $\mathcal{C}$ of the unitary orbit $\mathcal{O}$ of $\rho$. The states in $\mathcal{C}$ take the form $\sigma = \sum_i p_i U_i \rho U_i^\dagger$, with $\sum_i p_i = 1$, $p_i \ge 0$, and $\sigma_A$, $\sigma_B$ are the reduced states of $\sigma$. We then look for the minima of this function $G$. If these happen to occur on the unitary orbit, where $S(\rho)$ is constant, then they will also give us the minima of $I$ over $\mathcal{O}$. Writing the eigenvalues as components of vectors, $\nu = \mathrm{spec}(\sigma)$ and $\lambda = \mathrm{spec}(\rho)$, it can be shown that the reduced states of any $\sigma$ (which include the unitary orbit states) have eigenvalues that are marginals of a probability distribution obeying the majorization relation $\nu \prec \lambda$ [15]. Note that all $\nu$ satisfying this relation form a convex set $\mathcal{P}(\lambda)$. $G$ can be shown to be concave on the set $\mathcal{P}(\lambda)$, and so its minima occur at the extremal points. These extrema are permutations of the components of $\lambda$, whose corresponding states lie on the unitary orbit, and so the minimum QMI occurs at a permutation of the $\{\lambda_i\}$ [22].

However, knowing that the state is classical is not the full solution to the problem. Consider a state with $\mathrm{spec}(\rho) = (1/2, 1/2, 0, 0)$: the two classical states of the form (4), $(|00\rangle\langle 00| + |11\rangle\langle 11|)/2$ and $(|00\rangle\langle 00| + |01\rangle\langle 01|)/2$, have the correct spectrum, but the former is correlated while the latter is not. So the QMI depends on the ordering of the eigenvalues in $\rho_{\min}$. There are $N!$ different permutations of the $\lambda_i$ to consider; however, it is possible [10] to reduce this number down to an irreducible set of Young tableaux [16] in which the minimally correlated state will be found. For the simplest case of $d_A = d_B = 2$ the set has a unique element, which can be compactly represented as
$$[\nu_{jk}] = \begin{array}{cc} \lambda_1 & \lambda_2 \\ \lambda_3 & \lambda_4 \end{array}\,. \qquad (5)$$
Here the eigenvalues $\lambda_i$ are in non-increasing order, and row $j$, column $k$ corresponds to the reindexing element $\lambda_{jk}$ of Eq. (4) above.
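Since the minimum is attained at a permutation of the eigenvalues, for two qubits one can simply search all $4! = 24$ reindexings. The sketch below is illustrative (base-2 logarithms, arbitrary sample spectrum): it performs the brute-force search and confirms that the minimizing arrangement agrees, up to the local-unitary and swap symmetries of the QMI, with tableau (5), giving $I(\rho_{\min}) = H(\lambda_1+\lambda_2) + H(\lambda_1+\lambda_3) - H(\Lambda)$.

```python
import numpy as np
from itertools import permutations

def H(p):
    """Shannon entropy, base 2."""
    p = np.asarray(p, float)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

lam = [0.6, 0.3, 0.1, 0.0]   # sample spectrum, non-increasing

# Minimize the sum of marginal entropies over all 2x2 reindexings.
best = min(permutations(lam),
           key=lambda q: H([q[0] + q[1], q[2] + q[3]])
                       + H([q[0] + q[2], q[1] + q[3]]))
nu = np.array(best).reshape(2, 2)     # the minimizing tableau [nu_jk]
I_min = H(nu.sum(1)) + H(nu.sum(0)) - H(lam)

print(nu)   # matches tableau (5), up to row/column permutations and transpose
print(np.isclose(I_min, H([lam[0] + lam[1], lam[2] + lam[3]])
                      + H([lam[0] + lam[2], lam[1] + lam[3]]) - H(lam)))
```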
For $d_A = 2$, $d_B = 3$ the full set of permutations has $(d_A d_B)! = 720$ elements; however, our analysis [10] reduces this to just 5 tableaux:
$$\begin{array}{ccc} \lambda_1 & \lambda_2 & \lambda_3 \\ \lambda_4 & \lambda_5 & \lambda_6 \end{array}, \quad \begin{array}{ccc} \lambda_1 & \lambda_2 & \lambda_4 \\ \lambda_3 & \lambda_5 & \lambda_6 \end{array}, \quad \begin{array}{ccc} \lambda_1 & \lambda_2 & \lambda_5 \\ \lambda_3 & \lambda_4 & \lambda_6 \end{array}, \quad \begin{array}{ccc} \lambda_1 & \lambda_3 & \lambda_4 \\ \lambda_2 & \lambda_5 & \lambda_6 \end{array}, \quad \begin{array}{ccc} \lambda_1 & \lambda_3 & \lambda_5 \\ \lambda_2 & \lambda_4 & \lambda_6 \end{array}.$$
For the case of two qutrits there are 21 tableaux to consider; for two 4-dimensional systems the irreducible set has approximately 12000 elements. Clearly it would be desirable to have an efficient algorithmic procedure to identify the element of the irreducible set on which the minimum is attained, but it is currently not clear if one exists.

The primitive case of two qubits: As an illustrative example we consider two qubits, in which case the preceding discussion shows that the minimal QMI on a unitary orbit has the value $I(\rho_{\min}) = H(\lambda_1 + \lambda_2) + H(\lambda_1 + \lambda_3) - H(\Lambda)$, where $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \lambda_4$ and we have used the notation for the binary Shannon entropy $H(x) = -x\log x - (1-x)\log(1-x)$ in the first two terms. Therefore the maximum that the QMI can change by for a two qubit system undergoing a global unitary transformation is
$$\Delta I^{\max}_U = 2 - H(\lambda_1 + \lambda_2) - H(\lambda_1 + \lambda_3). \qquad (6)$$
Considerable insight into this case can be gained by doing the optimization more explicitly. This is possible because the quantum marginal problem for a composite system of two qubits has been solved [15], and the results are readily applied to our situation. Examining this also allows us to include the constant energy constraint. Let us denote the two eigenvalues of the reduced state $\rho_\mu$ as $\lambda_\mu$, $1 - \lambda_\mu$, where $\lambda_\mu \le \frac{1}{2}$, $\mu \in \{A, B\}$. There is a set of inequalities that constrain the spectra of these marginals, given $\Lambda$, to a set $R$ [23]. Figure 1 depicts the shape of the set $R$ (shaded) that $\lambda_A$, $\lambda_B$ occupy and gives some representative examples of how it varies according to the rank of $\rho$.

Two qubit correlations with energy conservation: Example 2 above sought the maximal change in the QMI for a bipartite system in a state $\rho$ undergoing unitary evolution to a new state $\rho'$ and constrained to energy conservation $\mathrm{tr}(\rho H) = \mathrm{tr}(\rho' H) := E$, where $H = H_A + H_B$ is the sum of the original local Hamiltonians. The reduced states of $\rho'$ are allowed to be non-diagonal in $H_A$, $H_B$. This divides the set $R$ of allowed reduced states into two regions: ones that could have energy $E$, forming the set $R_E \subseteq R$, and ones that could not. $R_E$ defines an "energy-conserving region". For simplicity, let us pick $H_A = H_B = |1\rangle\langle 1|$, so the energy spacing of $H_A$ equals that of $H_B$. The region $R_E$ is shown in Figure 1(d).
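The set $R$ can be explored directly by sampling the unitary orbit. The sketch below is our own illustration; the spectrum matches Figure 1(d) and the tolerance constants are arbitrary. It draws Haar-random unitaries, extracts the smaller marginal eigenvalues $(\lambda_A, \lambda_B)$, and checks that every sampled point satisfies the inequalities of [23] defining $R$.

```python
import numpy as np
from numpy.linalg import eigvalsh, qr

lam = np.array([0.6, 0.3, 0.1, 0.0])   # joint spectrum, non-increasing
rho0 = np.diag(lam)

def haar_unitary(n, rng):
    """Haar-random unitary via the QR decomposition of a Ginibre matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def min_marg_eigs(rho):
    t = rho.reshape(2, 2, 2, 2)
    rA = np.einsum('ijkj->ik', t); rB = np.einsum('ijik->jk', t)
    return eigvalsh(rA)[0], eigvalsh(rB)[0]    # lambda_A, lambda_B <= 1/2

def in_R(lA, lB, l):
    """Bravyi's inequalities [23] defining the set R."""
    l1, l2, l3, l4 = l
    return (lA >= l3 + l4 - 1e-9 and lB >= l3 + l4 - 1e-9
            and lA + lB >= l2 + l3 + 2 * l4 - 1e-9
            and abs(lA - lB) <= min(l1 - l3, l2 - l4) + 1e-9)

rng = np.random.default_rng(0)
pts = [min_marg_eigs(U @ rho0 @ U.conj().T)
       for U in (haar_unitary(4, rng) for _ in range(2000))]
print(all(in_R(lA, lB, lam) for lA, lB in pts))   # every orbit point lies in R
```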
It is shown in Ref. [10] that the maximal variation of correlations for a two qubit state undergoing an energy conserving unitary transformation is
$$\Delta I^{\max}_E = 2H\!\left(\frac{E}{2}\right) - H(\lambda_1 + \lambda_2) - H(\lambda_1 + \lambda_3),$$
where the maximally and minimally correlated states in $R_E$ are also shown in Figure 1. An interesting observation is that the point $q$ in the figure does not uniquely define a joint state (even up to local unitaries). It can be the case that a strong energy conserving unitary acting on one state at $q$ transforms it only along the solid portion of the line, while it evolves another along the full solid-and-dotted line. This is because these two states have different types of correlation even though they have the same QMI. The details of this appear in Ref. [10]. In any case, the set of states reached in $R_E$ is restricted to the line for strong energy conserving unitary evolutions. These states have minimal variance for energy measurements. Weak energy conserving unitaries can transform the initial state to all other points in $R_E$, which involve intrinsically quantum fluctuations via superpositions.

Conclusions: In this Letter, we have analysed the abstract problem of how correlations vary along unitary orbits for isolated quantum systems, an intricate mathematical task that reveals a complex relationship between the mutual information and the ordering of a bipartite probability distribution. The results of this find application in different thermodynamic scenarios such as equilibration, heat exchange and localisation of free energy. Our work can be extended to understanding the correlation structure of more complicated processes, such as a quantum channel consisting of $k$ unitaries each applied with some probability $p_k$ to the bipartite state. It would also be of interest to explore connections between our work and the recent papers [17, 18] on the resource theory of quantum thermodynamics.

We wish to acknowledge R. Spekkens and J. Anders for their useful comments. This work was supported by EPSRC.

PACS numbers: 03.65.Ta, 03.67.Mn, 05.70.Ln

FIG. 1: Regions $R$ (shaded) of allowed $\lambda_A$ ($y$-axis), $\lambda_B$ ($x$-axis) when the joint state of two qubits, with spectrum $\Lambda = \{\lambda_i\}_{i=1}^4$, has various ranks: $\Lambda =$ (a) $\{1, 0, 0, 0\}$, (b) $\{0.8, 0.2, 0, 0\}$, (c) $\{0.5, 0.5, 0, 0\}$, (d) $\{0.6, 0.3, 0.1, 0\}$; $\lambda_A, \lambda_B \in [0, \frac{1}{2}]$. For each spectrum, the hollow circles correspond to $\rho_{\min}$, the large filled ones to $\rho_{\max}$. In (d), a state with energy $E$ defines the set $R_E$ of states which could also have energy $E$. It is bounded from "above" by the solid-and-dotted line, on which the state itself is situated. The maximally correlated state in $R_E$ is at $q$.

References

[1] J. Oppenheim, M. Horodecki, P. Horodecki, and R. Horodecki, Phys. Rev. Lett. 89, 180402 (2002).
[2] R. Alicki, M. Horodecki, P. Horodecki, and R. Horodecki, Open Sys. & Information Dyn. 11, 205 (2004).
[3] M. H. Partovi, Phys. Rev. E 77, 021110 (2008).
[4] D. Jennings and T. Rudolph, Phys. Rev. E 81, 061130 (2010).
[5] T. M. Nieuwenhuizen, J. Mod. Optics 50, 2433 (2003).
[6] A. E. Allahverdyan and T. M. Nieuwenhuizen, J. Phys. A: Math. Gen. 36, 875 (2003).
[7] A. E. Allahverdyan, R. Balian, and T. M. Nieuwenhuizen, J. Mod. Optics 51, 2703 (2004).
[8] M. H. Partovi, Phys. Lett. A 137, 440 (1989).
[9] A. Peres, Quantum Theory: Concepts and Methods (Springer, 1995).
[10] S. Jevtic, D. Jennings, and T. Rudolph, arXiv:1112.3372 [quant-ph] (2011).
[11] K. Modi, arXiv:0902.0735 [quant-ph] (2009).
[12] D. Sych and G. Leuchs, New J. Phys. 11, 013006 (2009).
[13] A. Klyachko, arXiv:quant-ph/0409113 (2004).
[14] M. Christandl and G. Mitchison, Commun. Math. Phys. 261, 789 (2006).
[15] S. Bravyi, Quant. Inf. and Comp. 4, 012 (2004).
[16] W.-K. Tung, Group Theory in Physics (World Scientific Publishing Company, 1985).
[17] F. G. S. L. Brandão, M. Horodecki, J. Oppenheim, J. M. Renes, and R. W. Spekkens, arXiv:1111.3882 [quant-ph] (2011).
[18] M. Horodecki and J. Oppenheim, arXiv:1111.3834 [quant-ph] (2011).
[19] In particular, the requirement of constant total entropy during the purification is particularly natural given the subtle entropic counting that needs to be performed in any analysis of Szilard engines.
[20] Heat is usually defined as $Q_\mu = \mathrm{tr}(\rho'_\mu H_\mu) - \mathrm{tr}(\rho_\mu H_\mu)$ (see for instance A. Peres, Quantum Theory: Concepts and Methods, Springer (1995)), but it is assumed that the initial and final states are both diagonal in $H_\mu$. In the heat flow model this is true for the initial state; however, for the final state $\rho'_\mu$, $[\rho'_\mu, H_\mu] \neq 0$ is permitted. This is not fundamentally a problem when we remember that the only property being measured is the changes in the observables $H_A$, $H_B$, i.e. the changes in local energies, and this is called "heat" because the local entropies vary and the energies exchanged between $A$ and $B$ are assumed inaccessible for external work. In our model the experimenter is not required to know the initial correlations nor the interaction Hamiltonian, so we need not appeal to a generalized notion of work and heat such as that proposed in H. Weimer, M. J. Henrich, F. Rempp, H. Schröder, and G. Mahler, EPL 83, 30008 (2008).
[21] It is reasonable to assume that $A$ and $B$ are "locally thermal", that is, when one is restricted to doing only local operations on them, they are indistinguishable from uncorrelated thermal states with matching local Hamiltonians and temperatures. Although they are locally thermal they can still be correlated.
[22] The state of minimal correlations $\rho_{\min}$ is not unique, due to symmetries of the QMI, which are local unitary operations and a swap of the $A$ and $B$ states.
[23] Let the eigenvalues $\lambda_i$ in the spectrum $\Lambda$ of the joint state $\rho$ be arranged in non-increasing order $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \lambda_4$, and denote the two eigenvalues of the reduced state $\rho_\mu$ as $\lambda_\mu$, $1 - \lambda_\mu$, where $0 \le \lambda_\mu \le \frac{1}{2}$, $\mu \in \{A, B\}$. Then $\rho_A$, $\rho_B$ are valid marginals of a state in $\mathcal{O}$ if and only if their eigenvalues satisfy the following inequalities: $\lambda_A \ge \lambda_3 + \lambda_4$, $\lambda_B \ge \lambda_3 + \lambda_4$, $\lambda_A + \lambda_B \ge \lambda_2 + \lambda_3 + 2\lambda_4$, $|\lambda_A - \lambda_B| \le \min\{\lambda_1 - \lambda_3, \lambda_2 - \lambda_4\}$. These inequalities define the set $R$; this result is taken from S. Bravyi, Quant. Inf. and Comp. 4, 012 (2004).
[ "Coalgebras, Chu Spaces, and Representations of Physical Systems", "Coalgebras, Chu Spaces, and Representations of Physical Systems" ]
[ "Samson Abramsky \nOxford University Computing Laboratory\n\n" ]
[ "Oxford University Computing Laboratory\n" ]
[]
Abstract. We revisit our earlier work on the representation of quantum systems as Chu spaces, and investigate the use of coalgebra as an alternative framework. On the one hand, coalgebras allow the dynamics of repeated measurement to be captured, and provide mathematical tools such as final coalgebras, bisimulation and coalgebraic logic. However, the standard coalgebraic framework does not accommodate contravariance, and is too rigid to allow physical symmetries to be represented. We introduce a fibrational structure on coalgebras in which contravariance is represented by indexing. We use this structure to give a universal semantics for quantum systems based on a final coalgebra construction. We characterize equality in this semantics as projective equivalence. We also define an analogous indexed structure for Chu spaces, and use this to obtain a novel categorical description of the category of Chu spaces. We use the indexed structures of Chu spaces and coalgebras over a common base to define a truncation functor from coalgebras to Chu spaces. This truncation functor is used to lift the full and faithful representation of the groupoid of physical symmetries on Hilbert spaces into Chu spaces, obtained in our previous work, to the coalgebraic semantics.
Introduction

Chu spaces and universal coalgebra are two general formalisms for systems modelling in a broad sense. Both have been studied quite extensively in Computer Science over the past couple of decades. Recently, we showed how quantum systems with their symmetries have a full and faithful representation as Chu spaces [1]. We had in fact originally intended to use coalgebras as the vehicle for this work. This did not prove satisfactory, for reasons which will be explained later. But coalgebras have many features which make them promising for studies of this kind. Moreover, as we shall show, the problems which arise can in fact be overcome to a considerable degree, in a fashion which brings to light some interesting and novel aspects of these two well-studied models, and in particular of the relationships between them, which have, to the best of our knowledge, not been studied at all previously.

The purpose of the present paper is thus to develop some systematic connections and contrasts between Chu spaces and coalgebras, the modelling issues which arise, what can be done to resolve them, and which problems remain outstanding. The main results of our investigations can be summarized as follows:

• Firstly, at the general level, we look at the comparative strengths and weaknesses of the two formalisms. On our analysis, the key feature that Chu spaces have and coalgebras lack is contravariance; the key feature which coalgebras have and Chu spaces lack is extension in time. There are some interesting secondary issues as well, notably symmetry vs. rigidity.
• Formally, we introduce an indexed structure for coalgebras to compensate for the lack of contravariance, and show how this can be used to represent a wide class of physical systems in coalgebraic terms. In particular, we show how a universal model for quantum systems can be constructed as a final coalgebra. This opens the way to the use of methods such as coalgebraic logic in the study of physical systems. It also suggests how coalgebra can mediate between ontic and epistemic views of the states of physical systems.

• We also define an analogous indexed structure for Chu spaces, and use this to obtain a novel categorical description of the category of Chu spaces. We use the indexed structures of Chu spaces and coalgebras over a common base to define a truncation functor from coalgebras to Chu spaces.

• We use this truncation functor to lift the full and faithful representation of the groupoid of physical symmetries on Hilbert spaces into Chu spaces, obtained in [1], to the coalgebraic semantics.

The further contents of the paper are organized as follows. In Section 2 we review some background on Chu spaces and coalgebras. In Section 3 we make a first comparison of Chu spaces and coalgebras. Then in Section 4 we discuss the modelling issues, the problems which arise, and the strengths and weaknesses of the two approaches. In Section 5 we develop the technical material on indexed structure for coalgebras. A similar development for Chu spaces is carried out in Section 6, and the truncation functor is defined. In Section 7 we show how a universal model for quantum systems can be constructed as a final coalgebra; equality in the coalgebraic semantics is characterized as projective equivalence, and the representation theorem for the symmetry groupoid on Hilbert spaces is lifted from Chu spaces to the coalgebraic category. Section 8 outlines the general scheme of 'bivariant coalgebra' underlying our approach.

Background

Coalgebra

Coalgebra has proved to be a powerful and flexible tool for modelling a wide range of systems. We shall give a very brief introduction; further details may be found e.g. in the excellent presentation in [25].

Category theory allows us to dualize algebras to obtain a notion of coalgebras of an endofunctor. However, while algebras abstract a familiar set of notions, coalgebras open up a new and rather unexpected territory, and provide an effective abstraction and mathematical theory for a central class of computational phenomena:

• Programming over infinite data structures: streams, infinite trees, etc.
• A novel notion of coinduction.
• Modelling state-based computations of all kinds.
• The key notion of bisimulation equivalence between processes.
• A general coalgebraic logic can be read off from the functor, and used to specify and reason about properties of systems.

Let $F : \mathcal{C} \to \mathcal{C}$ be a functor. An $F$-coalgebra is a pair $(A, \alpha)$ where $A$ is an object of $\mathcal{C}$, and $\alpha$ is an arrow $\alpha : A \to FA$. We say that $A$ is the carrier of the coalgebra, while $\alpha$ is the behaviour map. An $F$-coalgebra homomorphism from $(A, \alpha)$ to $(B, \beta)$ is an arrow $h : A \to B$ such that $\beta \circ h = Fh \circ \alpha$, i.e. such that the following square commutes:
$$\begin{array}{ccc} A & \xrightarrow{\;\alpha\;} & FA \\ {\scriptstyle h}\downarrow & & \downarrow{\scriptstyle Fh} \\ B & \xrightarrow{\;\beta\;} & FB \end{array}$$
$F$-coalgebras and their homomorphisms form a category $F\mathrm{-Coalg}$. An $F$-coalgebra $(C, \gamma)$ is final if for every $F$-coalgebra $(A, \alpha)$ there is a unique homomorphism from $(A, \alpha)$ to $(C, \gamma)$, i.e. if it is the terminal object in $F\mathrm{-Coalg}$.

Proposition 2.1. If a final $F$-coalgebra exists, it is unique up to isomorphism.

Proposition 2.2 (Lambek Lemma). If $\gamma : C \to FC$ is final, it is an isomorphism.
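To make the abstract definitions concrete, here is a minimal sketch in Python (not from the paper) for the simplest interesting endofunctor $FX = A \times X$ on Set: a coalgebra is a step function, the final coalgebra is the set of streams over $A$, and the unique homomorphism into it is the unfold that runs the behaviour map forever.

```python
from itertools import islice

# A coalgebra for the functor F X = A x X is a map alpha : X -> A x X.
# The final F-coalgebra is the set of streams over A; the unique
# homomorphism into it is the "unfold" that runs alpha indefinitely.
def unfold(alpha, x):
    while True:
        a, x = alpha(x)
        yield a

# Example: states are integers; alpha observes parity and steps by +1.
alpha = lambda n: (n % 2, n + 1)
print(list(islice(unfold(alpha, 0), 8)))   # [0, 1, 0, 1, 0, 1, 0, 1]

# Two states are identified in the final coalgebra iff they unfold to the
# same stream: behavioural equivalence. Here 0 and 2 are identified.
print(list(islice(unfold(alpha, 2), 8)) == list(islice(unfold(alpha, 0), 8)))
```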
Chu Spaces

Chu spaces are a special case of a construction which originally appeared in [7], written by Po-Hsiang Chu as an appendix to Michael Barr's monograph on $*$-autonomous categories [4]. Chu spaces have several interesting aspects:

• They have a rich type structure, and in particular form models of Linear Logic [9, 26].
• They have a rich representation theory; many concrete categories of interest can be fully embedded into Chu spaces [17, 23].
• There is a natural notion of 'local logic' on Chu spaces [6], and an interesting characterization of information transfer across Chu morphisms [28].

Applications of Chu spaces have been proposed in a number of areas, including concurrency [24], hardware verification [16], game theory [29] and fuzzy systems [21, 19]. Mathematical studies concerning the general Chu construction include [22, 5, 10].

We briefly review the basic definitions. Fix a set $K$. A Chu space over $K$ is a structure $(X, A, e)$, where $X$ is a set of 'points' or 'objects', $A$ is a set of 'attributes', and $e : X \times A \to K$ is an evaluation function. A morphism of Chu spaces $f : (X, A, e) \to (X', A', e')$ is a pair of functions $f = (f_* : X \to X',\ f^* : A' \to A)$ such that, for all $x \in X$ and $a' \in A'$:
$$e(x, f^*(a')) = e'(f_*(x), a').$$
Chu morphisms compose componentwise: if $f : (X_1, A_1, e_1) \to (X_2, A_2, e_2)$ and $g : (X_2, A_2, e_2) \to (X_3, A_3, e_3)$, then $(g \circ f)_* = g_* \circ f_*$ and $(g \circ f)^* = f^* \circ g^*$. Chu spaces over $K$ and their morphisms form a category $\mathbf{Chu}_K$.

Representing Physical Systems

Our basic paradigm for representing physical systems, as laid out in [1], is as follows. We take a system to be specified by its set of states $S$, and the set of questions $Q$ which can be 'asked' of the system. We shall consider only 'yes/no' questions; however, the result of asking a question in a given state will in general be probabilistic. This will be represented by an evaluation function
$$e : S \times Q \to [0, 1],$$
where $e(s, q)$ is the probability that the question $q$ will receive the answer 'yes' when the system is in state $s$. Thus a system is represented directly as a Chu space. In particular, a quantum system with a Hilbert space $\mathcal{H}$ as its state space will be represented as $(\mathcal{H}_\circ, \mathbb{L}(\mathcal{H}), e_{\mathcal{H}})$, where $\mathcal{H}_\circ$ is the set of non-zero vectors of $\mathcal{H}$, $\mathbb{L}(\mathcal{H})$ is the set of closed subspaces of $\mathcal{H}$, and the evaluation function $e_{\mathcal{H}}$ is the basic 'statistical algorithm' of Quantum Mechanics:
$$e_{\mathcal{H}}(\psi, S) = \frac{\langle\psi \mid P_S\psi\rangle}{\langle\psi \mid \psi\rangle} = \frac{\langle P_S\psi \mid P_S\psi\rangle}{\langle\psi \mid \psi\rangle} = \frac{\|P_S\psi\|^2}{\|\psi\|^2}.$$
For a more detailed discussion see [1]. That paper goes on to show that:

• The biextensional collapse of this Chu space yields the usual projective representation of states as rays.
• The Chu morphisms between these spaces are exactly those induced by the unitaries and antiunitaries, yielding a full and faithful functor from the groupoid of physical symmetries on Hilbert spaces to Chu spaces.
• This representation is preserved by collapsing the unit interval to three values, but not by the further collapse by either of the standard 'possibilistic' reductions to two values.
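A small numerical sketch (illustrative, not from [1]) of the quantum Chu space for a single qubit: states are non-zero vectors, questions are projectors onto closed subspaces, and the Chu morphism condition $e(x, f^*(a')) = e'(f_*(x), a')$ is checked for the pair induced by a unitary.

```python
import numpy as np

# e_H(psi, S) = ||P_S psi||^2 / ||psi||^2, with the question S given
# concretely as its projector P.
def e_H(psi, P):
    return float(np.vdot(P @ psi, P @ psi).real / np.vdot(psi, psi).real)

psi = np.array([1.0, 1.0]) / np.sqrt(2)      # the |+> state of a qubit
P0 = np.diag([1.0, 0.0])                     # question: "is the state in span{|0>}?"
print(e_H(psi, P0))                          # 0.5

# A unitary U gives a Chu morphism with f_* = U and f^* the inverse image
# of subspaces, whose projector is U^dagger P U. Check adjointness:
U = np.array([[1, 1], [1, -1.0]]) / np.sqrt(2)   # Hadamard
P_pull = U.conj().T @ P0 @ U                     # pulled-back question
print(np.isclose(e_H(psi, P_pull), e_H(U @ psi, P0)))   # True
```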
Comparison: A First Attempt

We shall begin by showing that a subcategory of Chu spaces can be captured in completely equivalent form as a category of coalgebras. Fix a set $K$. We can define a functor on $\mathbf{Set}$:
$$F_K : X \mapsto K^{\mathcal{P}X}.$$
If we use the contravariant powerset functor, $F_K$ will be covariant. Explicitly, for $f : X \to Y$:
$$F_K f(g)(S) = g(f^{-1}(S)),$$
where $g \in K^{\mathcal{P}X}$ and $S \in \mathcal{P}Y$. A coalgebra for this functor will be a map of the form $\alpha : X \to K^{\mathcal{P}X}$.

Consider a Chu space $C = (X, A, e)$ over $K$. We suppose furthermore that this Chu space is normal (cf. [20] for a related but not identical use of this term), meaning that $A = \mathcal{P}X$. Given this normal Chu space, we can define an $F_K$-coalgebra on $X$ by $\alpha(x)(S) = e(x, S)$. We write $GC = (X, \alpha)$. A coalgebra homomorphism from $(X, \alpha)$ to $(Y, \beta)$ is a function $h : X \to Y$ such that the following square commutes:
$$\begin{array}{ccc} X & \xrightarrow{\;\alpha\;} & K^{\mathcal{P}X} \\ {\scriptstyle h}\downarrow & & \downarrow{\scriptstyle F_K h} \\ Y & \xrightarrow{\;\beta\;} & K^{\mathcal{P}Y} \end{array}$$

Proposition 3.1. Suppose we are given a Chu morphism $f : C \to C'$, where $C$ and $C'$ are normal Chu spaces, such that $f^* = f_*^{-1}$. Then $f_* : GC \to GC'$ is an $F_K$-coalgebra homomorphism. Conversely, given any $F_K$-coalgebra homomorphism $f : GC \to GC'$, $(f, f^{-1}) : C \to C'$ is a Chu morphism.

Proof. Let $(f_*, f_*^{-1}) : C \to C'$ be a Chu space morphism. Then
$$(F_K f_* \circ \alpha)(x)(S) = F_K f_*(\alpha(x))(S) = \alpha(x)(f_*^{-1}S) = e(x, f_*^{-1}S) = e(x, f^*S) = e'(f_*(x), S) = (\beta \circ f_*)(x)(S),$$
so $f_*$ is an $F_K$-coalgebra homomorphism. The converse is verified similarly (in fact by a cyclic permutation of the steps of the above proof).

Let $\mathbf{NChu}_K$ be the category of normal Chu spaces and Chu morphisms of the form $(f, f^{-1})$. Then by the Proposition, $G$ extends to a functor $G : \mathbf{NChu}_K \to F_K\mathrm{-Coalg}$, with $G(f, f^{-1}) = f$. Conversely, given an $F_K$-coalgebra $(X, \alpha)$, we can define a normal Chu space $H(X, \alpha) = (X, \mathcal{P}X, e)$, where $e(x, S) = \alpha(x)(S)$; and given a coalgebra homomorphism $f : (X, \alpha) \to (Y, \beta)$, $Hf = (f, f^{-1}) : H(X, \alpha) \to H(Y, \beta)$ will be a Chu morphism. This is verified in entirely similar fashion to Proposition 3.1. Altogether, we have shown:

Theorem 3.2. $\mathbf{NChu}_K$ and $F_K\mathrm{-Coalg}$ are isomorphic categories, with the isomorphism witnessed by $G$ and $H = G^{-1}$.

Discussion

A Critique of Coalgebras

Normality

Of course, the assumption of normality for Chu spaces is very strong; although it is worth mentioning that we have assumed nothing about either the value set or the evaluation function, in contrast to the notion of normality used in [20] (for quite different purposes), which allows the attributes to be any subset of the powerset, but stipulates that $K = 2$ and that the evaluation function is the characteristic function for set membership. One would like to extend the above correspondence to allow for wider classes of Chu spaces, in which the attributes need not be the full powerset. This is probably best done in an enriched setting of some kind. It should also be said that the use of powersets, full or not, to represent 'questions' is fairly crude and ad hoc. The degree of freedom afforded by Chu spaces to choose both the states and the questions appropriately is a major benefit to conceptually natural and formally adequate modelling of a wide range of situations.

The Type Functor

The experienced coalgebraist will be aware that the functors $F_K$ are problematic from the point of view of coalgebra. In particular, they fail to preserve weak pullbacks, and hence $F_K\mathrm{-Coalg}$ will lack some of the nice structural properties one would like a category of coalgebras to possess. In fact, $F_K$ is a close cousin of the 'double contravariant powerset', which is a standard counter-example for these properties [25]. However, much coalgebra can be done without this property [12], and recent work has achieved interesting results for coalgebras over the double contravariant powerset [14]. A secondary problem is that, as it stands, $F_K\mathrm{-Coalg}$ cannot have a final coalgebra, for mere cardinality reasons. In fact, this issue can be addressed in a standard way: we can replace the contravariant powerset by a bounded version $\mathcal{P}_\kappa$.
We can also replace the function space by the partial function space $\mathrm{Pfn}(X, Y)$. Thinking of partial functions in terms of their graphs, there is a set inclusion $\mathrm{Pfn}(X, Y) \subseteq \mathcal{P}(X \times Y)$. Hence we can use a bounded version of the partial function functor, say $\mathrm{Pfn}_\lambda(X, Y)$, yielding those partial functions whose graphs have cardinality $< \lambda$. The resulting modified version of
$$F_K : X \mapsto \mathrm{Pfn}_\lambda(\mathcal{P}_\kappa(X), K)$$
is bounded, and admits a final coalgebra. Moreover, by choosing $\kappa$ and $\lambda$ sufficiently large, we can still represent a large class of systems whose behaviour involves total functions.

Behaviours vs. Symmetries

However, there is a deeper conceptual problem which militates against the use of coalgebras in our context. An important property of physical theories is that they have rich symmetry groups (and groupoids), in which the key invariants are found, and from which the dynamics can be extracted. The main result of [1] was to recover these symmetries in the case of quantum systems as Chu morphisms. The picture in coalgebra is rather different. One is concerned with behavioural or observational equivalence, as encapsulated by bisimulation, and the final coalgebra gives a 'fully abstract' model of behaviour, in which bisimulation turns into equality. Moreover, every coalgebra morphism is a functional bisimulation. If we consider the class of strongly extensional coalgebras [25], those which have been quotiented by bisimulation, they form a preorder, and essentially correspond to the subcoalgebras of the final coalgebra. Thus in a sense coalgebras are oriented towards maximum rigidity and minimum symmetry. From this point of view, it would seem more desirable to have a universal homogeneous model, with a maximum degree of symmetry, as a universal model for a large class of physical systems, rather than a final coalgebra. Such a model has been constructed for bifinite Chu spaces in [8]. That context is too limited for our purposes here. It remains to be seen if universal homogeneous models can be constructed for larger subcategories of Chu spaces, encompassing those involved in our representation results. In the present paper, we shall develop an alternative resolution of this problem by using a fibred category of coalgebras, in which there is sufficient scope for variation to allow for the representation of symmetries. We shall use this to lift the representation theorem of [1] from Chu spaces to coalgebras.

In Praise of Coalgebras

• The coalgebraic point of view can be described as state-based, but in a way that emphasizes that the meaning of states lies in their observable behaviour. Indeed, in the "universal model" we shall construct, the states are determined exactly as the possible observable behaviours; we actually find a canonical solution for what the state space should be in these terms. States are identified exactly if they have the same observable behaviour. We can see this as a kind of reconciliation between the ontic and epistemic standpoints, in which moreover operational ideas are to the fore.

• Coalgebras allow us to capture the 'dynamics of measurement', what happens after a measurement, in a way that Chu spaces don't. They have extension in time [3]. We explain what we mean by this in more detail below.

Extension in Time

Consider a coalgebraic representation of stochastic transducers:
$$F : X \mapsto \mathrm{Prob}(O \times X)^I,$$
where $I$ is a fixed set of inputs, $O$ a fixed set of outputs, and $\mathrm{Prob}(S)$ is the set of probability distributions of finite support on $S$.
This expresses the behaviour of a state $x \in X$ in terms of how it responds to an input $i \in I$ by producing an output $o \in O$ and evolving into a new state $x' \in X$. Since the automaton is stochastic, what is specified for each input $i$ is a probability distribution over the pairs $(o, x')$ comprising the possible responses. We can think of $I$ as a set of questions, and $O$ as a set of answers (which we could standardize by only considering yes/no questions). Thus we can see such a stochastic automaton as a variant of the representation of physical systems we discussed previously, with the added feature of extension in time: the capacity to represent behaviour under repeated interactions. What we can learn from this observation, incidentally, is that Quantum Mechanics is less nondeterministic/probabilistic than stochastic transducers, since in Quantum Mechanics, if we know the preparation and the outcome of the measurement, we know (by the projection postulate) exactly what the resulting quantum state will be. In automata theory, by contrast, even if we know the current state, the input, and which observable output was produced in response, we still do not know in general what the next state will be. Could there be physical theories of this type?

Semantics In One Country

As a first step to developing a viable coalgebraic approach to representing physical systems, we shall hold a single system fixed, and see how we can represent this coalgebraically. This simple step eliminates most of the problems with coalgebras which we encountered in the previous Section. We will then have to see how variation of the system being represented can be reintroduced.

Coalgebraic Semantics For One System

We fix attention on a single Hilbert space $\mathcal{H}$. This determines a set of questions $Q = \mathbb{L}(\mathcal{H})$. We now define an endofunctor on $\mathbf{Set}$:
$$F_Q : X \mapsto (\{0\} + (0, 1] \times X)^Q.$$
A coalgebra for this functor is then a map
$$\alpha : X \to (\{0\} + (0, 1] \times X)^Q.$$
The interpretation is that $X$ is a set of states; the coalgebra map sends a state to its behaviour, which is a function from questions in $Q$ to the probability that the answer is 'yes'; and, if the probability is not 0, to the successor state following a 'yes' answer. Unlike the functors $F_K$, the functors $F_Q$ are very well-behaved from the point of view of coalgebra (they are in fact polynomial functors [25]). They preserve weak pullbacks, which guarantees a number of nice properties, and they are bounded and admit final coalgebras
$$\gamma_Q : U_Q \to (\{0\} + (0, 1] \times U_Q)^Q.$$
The elements of $U_Q$ can be visualized as '$Q$-branching trees', with the arcs labelled by probabilities. The $F_Q$-coalgebra which is of primary interest to us is
$$a_{\mathcal{H}} : \mathcal{H}_\circ \to (\{0\} + (0, 1] \times \mathcal{H}_\circ)^Q$$
defined by:
$$a_{\mathcal{H}}(\psi)(S) = \begin{cases} 0, & e_{\mathcal{H}}(\psi, S) = 0 \\ (r, P_S\psi), & e_{\mathcal{H}}(\psi, S) = r > 0. \end{cases}$$
The new ingredient compared with the Chu space representation of $\mathcal{H}$ is the state which results in the case of a 'yes' answer to the question, which is computed according to the (unnormalized) Lüders rule. This system will of course have a representation in the final coalgebra $(U_Q, \gamma_Q)$, specified by the unique coalgebra homomorphism $h : (\mathcal{H}_\circ, a_{\mathcal{H}}) \to (U_Q, \gamma_Q)$.
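A sketch of the behaviour map $a_{\mathcal{H}}$ for a qubit (illustrative Python, with questions given concretely as projectors rather than abstract subspaces): it returns the 'yes' probability together with the unnormalized successor state $P_S\psi$, so repeated measurement, the extension in time that Chu spaces lack, is visible directly.

```python
import numpy as np

# Behaviour map: given psi, return for each question (projector P) either
# 0, when the 'yes' probability vanishes, or the pair
# (probability, unnormalized post-measurement state P psi), per the
# Lueders rule used in the definition of a_H.
def a_H(psi):
    def behaviour(P):
        phi = P @ psi
        r = float(np.vdot(phi, phi).real / np.vdot(psi, psi).real)
        return 0 if np.isclose(r, 0.0) else (r, phi)
    return behaviour

psi = np.array([3.0, 4.0]) / 5.0
P0 = np.diag([1.0, 0.0])
r, after = a_H(psi)(P0)
print(r)                    # 0.36 : probability of a 'yes' answer
print(after)                # [0.6, 0.] : the successor state P0 psi
# Asking the same question again now yields 'yes' with probability 1:
print(a_H(after)(P0)[0])    # 1.0 : repeated measurement is captured
```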
Indexed Structure For Coalgebras

Our strategy will now be to externalize contravariance as indexing. This will allow us to alleviate many of the problems we encountered with using coalgebras to represent physical systems, and to access the power of the coalgebraic framework. In particular, we will be able to construct a single universal model for quantum systems.

We shall define a functor $\mathcal{F} : \mathbf{Set}^{\mathrm{op}} \to \mathbf{CAT}$, where $\mathbf{CAT}$ is the 'superlarge'¹ category of categories and functors. $\mathcal{F}$ is defined on objects by $Q \mapsto F_Q\mathrm{-Coalg}$. For a function $f : Q' \to Q$, we define
$$t^f_X : F_Q(X) \to F_{Q'}(X) :: \Theta \mapsto \Theta \circ f$$
and $\mathcal{F}(f) = f^* : F_Q\mathrm{-Coalg} \to F_{Q'}\mathrm{-Coalg}$, where
$$f^* : (X, \alpha) \mapsto (X, t^f_X \circ \alpha), \qquad f^* : (h : (X, \alpha) \to (Y, \beta)) \mapsto h.$$

Proposition 5.1. For each $f : Q' \to Q$, $t^f$ is a natural transformation, and $f^*$ is a functor.

Proof. The naturality of $t^f$ is the commutativity, for each $g : X \to Y$, of the square
$$\begin{array}{ccc} F_Q X & \xrightarrow{\;t^f_X\;} & F_{Q'} X \\ {\scriptstyle F_Q g}\downarrow & & \downarrow{\scriptstyle F_{Q'} g} \\ F_Q Y & \xrightarrow{\;t^f_Y\;} & F_{Q'} Y \end{array}$$
This diagram commutes because $t^f$ acts by pre-composition and $F_Q$, $F_{Q'}$ by post-composition: for any $\Theta \in F_Q X$, we obtain the common value $(1 + (1 \times g)) \circ \Theta \circ f$. It is a general fact [25] that a natural transformation $t : F \to G$ induces a functor between the coalgebra categories in the manner specified above. The fact that the coalgebra homomorphism condition is preserved follows from the commutativity of
$$\begin{array}{ccccc} X & \xrightarrow{\;\alpha\;} & FX & \xrightarrow{\;t_X\;} & GX \\ {\scriptstyle h}\downarrow & & \downarrow{\scriptstyle Fh} & & \downarrow{\scriptstyle Gh} \\ Y & \xrightarrow{\;\beta\;} & FY & \xrightarrow{\;t_Y\;} & GY \end{array}$$
The left hand square commutes because $h$ is an $F$-coalgebra homomorphism; the right hand square is naturality of $t$.

Thus we get a strict indexed category of coalgebra categories, with contravariant indexing.

The Grothendieck Construction

We now recall an important general construction. Given a functor $\mathbb{I} : \mathcal{C}^{\mathrm{op}} \to \mathbf{CAT}$, we define the Grothendieck category $\int\mathbb{I}$ with objects $(A, a)$, where $A$ is an object of $\mathcal{C}$ and $a$ is an object of $\mathbb{I}(A)$. Arrows are $(G, g) : (A, a) \to (B, b)$, where $G : B \to A$ and $g : \mathbb{I}(G)(a) \to b$. Composition of $(G, g) : (A, a) \to (B, b)$ and $(H, h) : (B, b) \to (C, c)$ is given by $(G \circ H, h \circ \mathbb{I}(H)(g)) : (A, a) \to (C, c)$. Where we have an indexed category, the Grothendieck construction [11] thus glues all the fibres together (and gives a fibration). Applying it to $\mathcal{F}$, we can now put all our categories of coalgebras, indexed by the sets of questions, together in one category $\int\mathcal{F}$. We will use this to get our universal model for quantum systems. Before turning to this, we will consider an analogous indexed structure for Chu spaces, which will allow us to define a comparison functor between the two models.

Indexed Comparison With Chu Spaces

Slicing and Dicing Chu

For each $Q$, we define $\mathbf{Chu}^Q_K$ to be the subcategory of $\mathbf{Chu}_K$ of Chu spaces $(X, Q, e)$ and morphisms of the form $(f_*, \mathrm{id}_Q)$. This doesn't look too exciting. In fact, it is just the comma category $(- \times Q, \hat{K})$, where $\hat{K} : \mathbf{1} \to \mathbf{Set}$ picks out the object $K$. Given $f : Q' \to Q$, we define a functor
$$f^* : \mathbf{Chu}^Q_K \to \mathbf{Chu}^{Q'}_K :: (X, Q, e) \mapsto (X, Q', e \circ (1 \times f)),$$
which is the identity on morphisms. To verify functoriality, we only need to check that the Chu morphism condition is preserved. That is, we must show, for any morphism $(f_*, \mathrm{id}_Q) : (X, Q, e) \to (X', Q, e')$, $x \in X$, and $q' \in Q'$, that
$$e(x, f(q')) = e'(f_*(x), f(q')),$$
which follows from the Chu morphism condition on $(f_*, \mathrm{id}_Q)$. This gives an indexed category $\mathcal{C}hu_K : \mathbf{Set}^{\mathrm{op}} \to \mathbf{CAT}$.

Grothendieck Puts Chu Back Together Again

The fibre categories $\mathbf{Chu}^Q_K$ are pale reflections of the full category of Chu spaces, trivialising the contravariant component of morphisms. However, the Grothendieck construction gives us back the full category.

Proposition 6.1. $\int\mathcal{C}hu_K \cong \mathbf{Chu}_K$.

Proof. Expanding the definitions, we see that objects in $\int\mathcal{C}hu_K$ have the form $(Q, (X, Q, e : X \times Q \to K))$, while morphisms have the form $(f, (f_*, \mathrm{id}_{Q'})) : (Q, (X, Q, e)) \to (Q', (X', Q', e'))$, where $f : Q' \to Q$ and $(f_*, \mathrm{id}_{Q'}) : (X, Q', e \circ (1 \times f)) \to (X', Q', e')$ is a morphism in $\mathbf{Chu}^{Q'}_K$. The morphism condition is
$$e(x, f(q')) = e'(f_*(x), q').$$
This is exactly the Chu morphism condition for $(f_*, f) : (X, Q, e) \to (X', Q', e')$. Composition of $(f, (f_*, \mathrm{id}_{Q'}))$ with $(g, (g_*, \mathrm{id}_{Q''}))$ is given by $(f \circ g, (g_* \circ f_*, \mathrm{id}_{Q''}))$. The isomorphism with $\mathbf{Chu}_K$ is immediate from this description.
The Truncation Functor

The relationship between coalgebras and Chu spaces is further clarified by an indexed truncation functor $T : \mathcal{F} \to \mathcal{C}hu_K$. For each set $Q$ there is a functor $T_Q : F_Q\mathrm{-Coalg} \to \mathbf{Chu}^Q_K$, defined on objects by $T_Q(X, \alpha) = (X, Q, e)$, where
$$e(x, q) = \begin{cases} 0, & \alpha(x)(q) = 0 \\ r, & \alpha(x)(q) = (r, x'). \end{cases}$$
The action on morphisms is trivial: $T_Q : (h : (X, \alpha) \to (Y, \beta)) \mapsto (h, \mathrm{id}_Q)$. The verification that coalgebra homomorphisms are taken to Chu morphisms is straightforward. The fact that each $T_Q$ is a faithful functor is then immediate. For each $f : Q' \to Q$, we have the naturality square
$$\begin{array}{ccc} F_Q\mathrm{-Coalg} & \xrightarrow{\;T_Q\;} & \mathbf{Chu}^Q_K \\ {\scriptstyle \mathcal{F}(f)}\downarrow & & \downarrow{\scriptstyle \mathcal{C}hu_K(f)} \\ F_{Q'}\mathrm{-Coalg} & \xrightarrow{\;T_{Q'}\;} & \mathbf{Chu}^{Q'}_K \end{array}$$
On objects, both paths around the diagram carry a coalgebra $(X, \alpha)$ to the Chu space $(X, Q', e)$, where
$$e(x, q') = \begin{cases} 0, & \alpha(x)(f(q')) = 0 \\ r, & \alpha(x)(f(q')) = (r, x'). \end{cases}$$
The action on morphisms in both cases is trivial: a coalgebra homomorphism $h$ is sent to the Chu morphism $(h, \mathrm{id}_{Q'})$. We can summarize this as follows:

Proposition 6.2. $T : \mathcal{F} \to \mathcal{C}hu_K$ is a strict indexed functor, which is faithful on each fibre.

As an immediate corollary, we obtain:

Proposition 6.3. There is a faithful functor $\int T : \int\mathcal{F} \to \int\mathcal{C}hu_K \cong \mathbf{Chu}_K$.

We can also refine the isomorphism of Theorem 3.2. We say that an $F_Q$-coalgebra $(X, \alpha)$ is static if for all $x \in X$: $\alpha(x)(q) = (r, x') \Rightarrow x' = x$. Thus in a static coalgebra, observing an answer to a question has no effect on the state. We write $S_Q\mathrm{-Coalg}$ for the full subcategory of $F_Q\mathrm{-Coalg}$ determined by the static coalgebras. This extends to an indexed subcategory $\mathcal{S}$ of $\mathcal{F}$, since the functors $f^*$, for $f : Q' \to Q$, carry $S_Q\mathrm{-Coalg}$ into $S_{Q'}\mathrm{-Coalg}$.

Proposition 6.4. For each set $Q$, $\mathbf{Chu}^Q_K$ is isomorphic to $S_Q\mathrm{-Coalg}$. Moreover this is an isomorphism of strict indexed categories.

Proof. We can define an indexed functor $E_Q : \mathbf{Chu}^Q_K \to S_Q\mathrm{-Coalg}$, $E_Q : (X, Q, e) \mapsto (X, \alpha)$, where
$$\alpha(x)(q) = \begin{cases} 0, & e(x, q) = 0 \\ (r, x), & e(x, q) = r > 0. \end{cases}$$
$E_Q$ takes a Chu morphism $(f, \mathrm{id}_Q)$ to $f$. It is straightforward to verify that this is an indexed functor, and inverse to the restriction of $T$ to $\mathcal{S}$.

We can combine this with Proposition 6.1 to obtain:

Theorem 6.5. The category of Chu spaces $\mathbf{Chu}_K$ is isomorphic to a full subcategory of $\int\mathcal{F}$, the Grothendieck category of an indexed category of coalgebras.

This gives a clear picture of how coalgebras extend Chu spaces with some 'observational dynamics'.
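The truncation $T_Q$ is easy to phrase operationally: keep the probability, forget the successor. A toy sketch (our own names; $Q$ is a one-element question set) showing how the dynamics is discarded:

```python
# A minimal sketch of the truncation T_Q: it keeps the probability that a
# question is answered 'yes' and forgets the successor state, turning an
# F_Q-coalgebra (X, alpha) into the Chu space evaluation e : X x Q -> [0,1].
def truncate(alpha):
    def e(x, q):
        out = alpha(x)(q)
        return 0.0 if out == 0 else out[0]
    return e

# A two-state toy coalgebra over Q = {'q'}: asking 'q' in state 'a'
# succeeds with probability 1/2 and moves the system to state 'b'.
alpha = lambda x: (lambda q: (0.5, 'b') if x == 'a' else 0)
e = truncate(alpha)
print(e('a', 'q'), e('b', 'q'))   # 0.5 0.0 : the successor state is forgotten
```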
A Universal Model

We can now define a single coalgebra which is universal for quantum systems. We proceed in a number of steps:

1. Fix a countably-infinite-dimensional Hilbert space, e.g. $\mathcal{H}_U = \ell^2(\mathbb{N})$, with its standard orthonormal basis $\{e_n\}_{n\in\mathbb{N}}$. Take $Q = \mathbb{L}(\mathcal{H}_U)$, and let $(U_Q, \gamma_Q)$ be the final coalgebra for $F_Q$.

2. Any quantum system is described by a separable Hilbert space $\mathcal{K}$. In practice, the Hilbert space chosen to represent a given system will come with a preferred orthonormal basis $\{\psi_n\}$. This basis will induce an isometric embedding $i : \mathcal{K} \rightarrowtail \mathcal{H}_U :: \psi_n \mapsto e_n$. Taking $Q' = \mathbb{L}(\mathcal{K})$, this induces a map $f = i^{-1} : Q \to Q'$, which in turn induces a functor $f^* : F_{Q'}\mathrm{-Coalg} \to F_Q\mathrm{-Coalg}$.

3. This functor can be applied to the coalgebra $(\mathcal{K}_\circ, a_{\mathcal{K}})$ corresponding to the Hilbert space $\mathcal{K}$ to yield a coalgebra in $F_Q\mathrm{-Coalg}$.

4. Since $(U_Q, \gamma_Q)$ is the final coalgebra in $F_Q\mathrm{-Coalg}$, there is a unique coalgebra homomorphism $[\![\cdot]\!]_{\mathcal{K}_\circ} : f^*(\mathcal{K}_\circ, a_{\mathcal{K}}) \to (U_Q, \gamma_Q)$.

5. This homomorphism maps the quantum system $(\mathcal{K}_\circ, a_{\mathcal{K}})$ into $(U_Q, \gamma_Q)$ in a fully abstract fashion, i.e. identifying states precisely according to observational equivalence.

6. This homomorphism is an arrow in the Grothendieck category $\int\mathcal{F}$.

7. This works for all quantum systems, with respect to a single final coalgebra. This is a 'Big Toy Model' in the sense of [1].

We shall now investigate the nature of this coalgebraic semantics for physical systems in more detail.

Bisimilarity and Projectivity

Our first aim is to characterize when two states of a physical system are sent to the same element of the final coalgebra by the semantic map $[\![\cdot]\!]$. We can call on some general coalgebraic notions for this purpose. We shall begin with one of the key ideas in the theory of coalgebra, bisimilarity. This can be defined in generality for coalgebras over any endofunctor [25], but we shall just give the concrete definition as it pertains to $F_Q\mathrm{-Coalg}$. Given $F_Q$-coalgebras $(X, \alpha)$ and $(Y, \beta)$, a bisimulation is a relation $R \subseteq X \times Y$ such that:
$$xRy \;\Rightarrow\; \forall q \in Q.\;\; \big(\alpha(x)(q) = 0 \Rightarrow \beta(y)(q) = 0\big) \;\wedge\; \big(\alpha(x)(q) = (r, x') \Rightarrow \beta(y)(q) = (r, y') \wedge x'Ry'\big).$$
We say that $x$ and $y$ are bisimilar, and write $x \sim_b y$, if there is some bisimulation $R$ with $xRy$. Note that bisimilarity can hold between elements of different coalgebras. This means that states of different systems can be compared in terms of a common notion of observable behaviour. The above definition is given in an apparently asymmetric form, but $\sim_b$ is easily seen to be a symmetric relation, since the cases $\alpha(x)(q) = 0$ and $\alpha(x)(q) = (r, x')$ are mutually exclusive and exhaustive.

Proposition 7.1. Bisimilarity is an equivalence relation.

Proof. The main point is transitivity, which follows automatically since the polynomial functor $F_Q$ preserves pullbacks [25].

The key feature of bisimilarity is given by the following proposition, which is also standard for functors preserving weak pullbacks [25]. We consider coalgebras for such a functor $F$ for which a final coalgebra exists. Given an $F$-coalgebra $(X, \alpha)$ and $x \in X$, we write $[\![x]\!]$ for the denotation of $x$ in the final coalgebra.

Proposition 7.2. For any $F$-coalgebras $(X, \alpha)$ and $(Y, \beta)$, and $x \in X$, $y \in Y$:
$$[\![x]\!] = [\![y]\!] \iff x \sim_b y.$$

Thus bisimilarity characterizes equality of denotation in the final coalgebra semantics. We begin by characterizing bisimilarity in the coalgebra $(\mathcal{K}_\circ, a_{\mathcal{K}})$ arising from the Hilbert space $\mathcal{K}$, for the functor $F_Q$, where $Q = \mathbb{L}(\mathcal{K})$. We define the usual projective equivalence on the non-zero vectors $\mathcal{K}_\circ$ of a Hilbert space by:
$$\psi \sim_p \phi \iff \exists \lambda \in \mathbb{C}.\; \psi = \lambda\phi.$$
Thus two vectors are projectively equivalent if they belong to the same ray or one-dimensional subspace.

Proposition 7.3. For any vectors $\psi, \phi \in \mathcal{K}_\circ$: $\psi \sim_p \phi \iff \psi \sim_b \phi$.

Proof. Firstly, recall the definition of $e_{\mathcal{K}}$ from Section 2.3. We can describe the bisimilarity condition on a relation $R \subseteq \mathcal{K}_\circ^2$ for the coalgebra $(\mathcal{K}_\circ, a_{\mathcal{K}})$ more directly as follows:
$$\psi R \phi \;\Rightarrow\; \forall S \in \mathbb{L}(\mathcal{K}).\;\; e_{\mathcal{K}}(\psi, S) = e_{\mathcal{K}}(\phi, S) \;\wedge\; (P_S\psi)\, R\, (P_S\phi).$$
Thus if $\psi \sim_b \phi$, then for all $S \in \mathbb{L}(\mathcal{K})$, $e_{\mathcal{K}}(\psi, S) = e_{\mathcal{K}}(\phi, S)$, and hence $\psi \sim_p \phi$ by Proposition 3.2 of [1]. For the converse, it suffices to show that the relation $\sim_p \subseteq \mathcal{K}_\circ^2$ is a bisimulation. If $\psi = \lambda\phi$, then for all $S$, $e_{\mathcal{K}}(\psi, S) = e_{\mathcal{K}}(\phi, S)$ by Proposition 3.2 of [1], and $P_S\psi = \lambda P_S\phi$, so $\sim_p$ is a bisimulation as required.
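For finite carriers, the greatest bisimulation can be computed by the obvious refinement loop: start from the full relation and delete pairs that violate the bisimulation condition until nothing changes. A small illustrative sketch (not from the paper; the three-state example is ours):

```python
# Naive greatest-bisimulation computation for a finite F_Q-coalgebra:
# refine the relation R until the bisimulation condition stabilizes.
def bisimilar(states, questions, alpha):
    R = {(x, y) for x in states for y in states}
    changed = True
    while changed:
        changed = False
        for (x, y) in list(R):
            for q in questions:
                ax, ay = alpha(x)(q), alpha(y)(q)
                ok = (ax == 0 and ay == 0) or (
                    ax != 0 and ay != 0 and ax[0] == ay[0]
                    and (ax[1], ay[1]) in R)
                if not ok:
                    R.discard((x, y)); changed = True; break
    return R

# Toy example: x and y agree on probabilities and (related) successors;
# z answers with a different probability.
tbl = {'x': {'q': (0.5, 'x')}, 'y': {'q': (0.5, 'y')}, 'z': {'q': (0.3, 'z')}}
alpha = lambda s: (lambda q: tbl[s][q])
R = bisimilar(['x', 'y', 'z'], ['q'], alpha)
print(('x', 'y') in R, ('x', 'z') in R)   # True False
```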
We now show that bisimilarity in Hilbert spaces is stable under transport across fibres by isometries. Firstly, we have a general property of fibred coalgebras.

Proposition 7.4. If $f : Q' \to Q$ is surjective, then bisimulation on the $F_{Q'}$-coalgebra $f^*(X, \alpha)$ coincides with bisimulation on the $F_Q$-coalgebra $(X, \alpha)$.

Proof. Unwinding the definitions of the two bisimulation conditions on relations, the only difference is that one quantifies over questions $q \in Q$, and the other over questions $f(q')$, for $q' \in Q'$. If $f$ is surjective, these are equivalent.

Given a Hilbert space $\mathcal{K}$ and an isometric embedding $i : \mathcal{K} \rightarrowtail \mathcal{H}_U$, let $Q = \mathbb{L}(\mathcal{H}_U)$, $Q' = \mathbb{L}(\mathcal{K})$, $f = i^{-1} : Q \to Q'$. Then the $F_Q$-coalgebra $f^*(\mathcal{K}_\circ, a_{\mathcal{K}})$ is $(\mathcal{K}_\circ, \beta)$, where $\beta(\psi)(S) = a_{\mathcal{K}}(\psi)(i^{-1}(S))$.

Proposition 7.5. Bisimulation on the elements of the $F_Q$-coalgebra $(\mathcal{K}_\circ, \beta)$ coincides with bisimulation on the $F_{Q'}$-coalgebra $(\mathcal{K}_\circ, a_{\mathcal{K}})$. If we identify $\mathcal{K}$ with the subspace $\mathcal{H}' \subset \mathcal{H}_U$ determined by the image of $i$, it also coincides with bisimulation on $\mathcal{H}'$, and it is the restriction of bisimulation on $\mathcal{H}_U$.

Proof. Since $i$ is an isometry, the direct image $i(S)$ of a closed subspace of $\mathcal{K}$ is a closed subspace of $\mathcal{H}_U$, and since $i$ is injective, $i^{-1}(i(S)) = S$. Thus $i^{-1}$ is surjective, yielding the first statement by Proposition 7.4. The fact that $i$ is an isometric embedding also guarantees that $e_{\mathcal{K}}(\psi, S) = e_{\mathcal{H}_U}(\psi, S)$ for $\psi \in \mathcal{H}'$, $S \in \mathbb{L}(\mathcal{H}')$. Finally, by Proposition 7.3, bisimulation on Hilbert spaces coincides with projective equivalence, and projective equivalence on $\mathcal{H}'$ is the restriction of projective equivalence on $\mathcal{H}_U$.

Putting these results together, we have the following:

Theorem 7.6. Let $[\![\cdot]\!]_{\mathcal{K}_\circ} : f^*(\mathcal{K}_\circ, a_{\mathcal{K}}) \to (U_Q, \gamma_Q)$ be the final coalgebra semantics for $\mathcal{K}_\circ$ with respect to the isometric embedding $i : \mathcal{K} \rightarrowtail \mathcal{H}_U$. Then for any $\psi, \phi \in \mathcal{K}_\circ$:
$$[\![\psi]\!]_{\mathcal{K}_\circ} = [\![\phi]\!]_{\mathcal{K}_\circ} \iff \psi \sim_b \phi \iff \psi \sim_p \phi.$$

Thus the strongly extensional quotient [25] of the coalgebra $(\mathcal{K}_\circ, a_{\mathcal{K}})$ is the projective coalgebra $(\mathbb{P}(\mathcal{K}), \bar{a}_{\mathcal{K}})$, where $\mathbb{P}(\mathcal{K})$ is the set of rays or one-dimensional subspaces of $\mathcal{K}$, and $\bar{a}_{\mathcal{K}}$ is defined by:
$$\bar{a}_{\mathcal{K}}(\bar\psi)(S) = \begin{cases} 0, & a_{\mathcal{K}}(\psi)(S) = 0 \\ (r, \bar\phi), & a_{\mathcal{K}}(\psi)(S) = (r, \phi). \end{cases}$$
Here $\bar\psi = \{\lambda\psi \mid \lambda \in \mathbb{C}\}$ is the ray generated by $\psi$.

Remark. There is a subtlety lurking here, which is worthy of comment. When we consider an extension of a Hilbert space to a larger one, $\mathcal{H}' \subset \mathcal{H}$, the characteristic quantum phenomenon of incompatibility can arise; a subspace $S$ of $\mathcal{H}$ may be incompatible with the subspace $\mathcal{H}'$ (so that e.g. the corresponding projectors do not commute). The characterization of bisimulation as projective equivalence shows that this notion is nevertheless stable under such extensions. However, we can expect incompatibility to be reflected in some fashion in the coalgebraic approach, in particular in the development of a suitable coalgebraic logic.

Representing Physical Symmetries

We shall now show that the passage to the Grothendieck category of coalgebras does succeed in alleviating the problem of excessive rigidity of coalgebras as discussed in Section 3.1.1. Our strategy will be to lift the Representation Theorem 3.15 from [1] from Chu spaces to coalgebras, using the results of Section 6.3.

We consider a morphism in $\int\mathcal{F}$ between representations of Hilbert spaces. Such a morphism has the form $h : f^*(\mathcal{H}_\circ, a_{\mathcal{H}}) \to (\mathcal{K}_\circ, a_{\mathcal{K}})$, where $\mathcal{H}$ and $\mathcal{K}$ are any Hilbert spaces, and writing $Q = \mathbb{L}(\mathcal{H})$, $Q' = \mathbb{L}(\mathcal{K})$, the functor $f^*$ is induced by a map $f : Q' \to Q$, and $h$ is a homomorphism of $F_{Q'}$-coalgebras. By Proposition 6.3, $(h, f) : (\mathcal{H}_\circ, \mathbb{L}(\mathcal{H}), e_{\mathcal{H}}) \to (\mathcal{K}_\circ, \mathbb{L}(\mathcal{K}), e_{\mathcal{K}})$ is a Chu morphism. By Proposition 3.2 and the remark following Theorem 3.10 of [1], the Chu morphism induced by the biextensional collapse of these Chu spaces is $(\mathbb{P}h, f) : (\mathbb{P}(\mathcal{H}), \mathbb{L}(\mathcal{H}), \bar{e}_{\mathcal{H}}) \to (\mathbb{P}(\mathcal{K}), \mathbb{L}(\mathcal{K}), \bar{e}_{\mathcal{K}})$, where $\mathbb{P}h(\bar\psi) = \overline{h(\psi)}$. By Theorem 7.6, the induced coalgebra homomorphism on the strongly extensional quotients of the corresponding coalgebras is $\mathbb{P}h : f^*(\mathbb{P}(\mathcal{H}), \bar{a}_{\mathcal{H}}) \to (\mathbb{P}(\mathcal{K}), \bar{a}_{\mathcal{K}})$. We can now use Theorem 3.12 of [1]:

Theorem 7.7. Let $\mathcal{H}$, $\mathcal{K}$ be Hilbert spaces of dimension greater than 2. Consider a Chu morphism $(f_*, f^*) : (\mathbb{P}(\mathcal{H}), \mathbb{L}(\mathcal{H}), \bar{e}_{\mathcal{H}}) \to (\mathbb{P}(\mathcal{K}), \mathbb{L}(\mathcal{K}), \bar{e}_{\mathcal{K}})$, where $f_*$ is injective. Then there is a semiunitary (i.e. a unitary or antiunitary) $U : \mathcal{H} \to \mathcal{K}$ such that $f_* = \mathbb{P}(U)$. $U$ is unique up to a phase. Moreover, $f^*$ is then uniquely determined as $U^{-1}$.

Since any coalgebra homomorphism gives rise to a Chu morphism, this will allow us to lift fullness of the representation in Chu spaces to the coalgebraic setting.

Proposition 7.8. If $U : \mathcal{H} \to \mathcal{K}$ is a semiunitary, then $U_\circ : f^*(\mathcal{H}_\circ, a_{\mathcal{H}}) \to (\mathcal{K}_\circ, a_{\mathcal{K}})$ is a coalgebra homomorphism, where the reindexing map is $f = U^{-1}$.

Proof. This follows by the same argument as Proposition 3.13 of [1]. In particular, the fact that $U_\circ$ is a coalgebra homomorphism follows from the relation $P_S(U\psi) = U(P_{U^{-1}(S)}\psi)$, which is shown there.

We must now account for the injectivity hypothesis in Theorem 7.7. The following properties of coalgebras and Chu spaces respectively are standard.

Proposition 7.9. If $F$ preserves weak pullbacks, the kernel of an $F$-coalgebra homomorphism is a bisimulation.
Representing Physical Symmetries

We shall now show that the passage to the Grothendieck category of coalgebras does succeed in alleviating the problem of excessive rigidity of coalgebras as discussed in Section 3.1.1. Our strategy will be to lift the Representation Theorem 3.15 of [1] from Chu spaces to coalgebras, using the results of Section 6.3. We consider a morphism in F between representations of Hilbert spaces. Such a morphism has the form $h : f^*(H_\bullet, a_H) \to (K_\bullet, a_K)$, where H and K are any Hilbert spaces, and writing $Q = L(H)$, $Q' = L(K)$, the functor $f^*$ is induced by a map $f : Q' \to Q$, and h is a homomorphism of $F_{Q'}$-coalgebras. By Proposition 6.3, $(h, f) : (H_\bullet, L(H), e_H) \to (K_\bullet, L(K), e_K)$ is a Chu morphism. By Proposition 3.2 and the remark following Theorem 3.10 of [1], the Chu morphism induced by the biextensional collapse of these Chu spaces is $(\mathbb{P}h, f) : (\mathbb{P}(H_\bullet), L(H), \bar e_H) \to (\mathbb{P}(K_\bullet), L(K), \bar e_K)$, where $\mathbb{P}(h)(\bar\psi) = \overline{h(\psi)}$. By Theorem 7.6, the induced coalgebra homomorphism on the strongly extensional quotients of the corresponding coalgebras is $\mathbb{P}h : f^*(\mathbb{P}(H), \bar a_H) \to (\mathbb{P}(K), \bar a_K)$. We can now use Theorem 3.12 of [1]:

Theorem 7.7 Let H, K be Hilbert spaces of dimension greater than 2. Consider a Chu morphism $(f_*, f^*) : (\mathbb{P}(H), L(H), \bar e_H) \to (\mathbb{P}(K), L(K), \bar e_K)$ where $f_*$ is injective. Then there is a semiunitary (i.e. a unitary or antiunitary) $U : H \to K$ such that $f_* = \mathbb{P}(U)$. U is unique up to a phase. Moreover, $f^*$ is then uniquely determined as $U^{-1}$.

Since any coalgebra homomorphism gives rise to a Chu morphism, this will allow us to lift fullness of the representation in Chu spaces to the coalgebraic setting.

Proposition 7.8 If $U : H \to K$ is a semiunitary, then $U_\bullet : f^*(H_\bullet, a_H) \to (K_\bullet, a_K)$ is a coalgebra homomorphism, where $f = U^{-1}$.

Proof This follows by the same argument as Proposition 3.13 of [1]. In particular, the fact that $U_\bullet$ is a coalgebra homomorphism follows from the relation $P_S(U\psi) = U(P_{U^{-1}(S)}\psi)$ which is shown there.

We must now account for the injectivity hypothesis in Theorem 7.7. The following properties of coalgebras and Chu spaces respectively are standard.

Proposition 7.9 If F preserves weak pullbacks, the kernel of an F-coalgebra homomorphism is a bisimulation. Hence if $(A, \alpha)$ is a strongly extensional F-coalgebra, on which bisimilarity is equality, then any homomorphism with $(A, \alpha)$ as domain must be injective.

Proposition 7.10 If $f : C_1 \to C_2$ is a morphism of separated Chu spaces, and $f^*$ is surjective, then $f_*$ is injective.

We shall write sF for the restriction of F to sSet, the category of sets and surjective maps. Similarly, we write sChu for the restriction of Chu to sSet. Clearly T cuts down to these restrictions. Moreover, the isomorphism of $\mathrm{Chu}_K$ with Chu of Proposition 6.1 (under which objects take the form $(Q, (X, Q, e : X \times Q \to K))$ and morphisms the form $(f, (f_*, \mathrm{id}_{Q'})) : (Q, (X, Q, e)) \to (Q', (X, Q', e'))$) cuts down to an isomorphism of sChu with $\mathrm{sChu}_K$, the subcategory of Chu spaces and morphisms f with $f^*$ surjective. Thus if we define the category PSymmH as in [1], with objects Hilbert spaces of dimension > 2, and morphisms semiunitaries quotiented by phases, we obtain the following result:

Theorem 7.11 There is a full and faithful functor PC : PSymmH $\to$ sF. Moreover, the following diagram commutes: the composite of PC with T : sF $\to$ sChu agrees, up to the isomorphism sChu $\cong$ $\mathrm{sChu}_{[0,1]}$, with PR : PSymmH $\to$ $\mathrm{sChu}_{[0,1]}$. Here PR is the full and faithful functor of Theorem 3.15 of [1].

This result confirms that our approach of expressing contravariance through indexing over a base does succeed in allowing sufficient scope for the representation of physical symmetries, while also allowing for the construction of a universal model as a final coalgebra, and for the expression of the dynamics of repeated measurements.

Bivariant Coalgebra

Our development of 'coalgebra with contravariance' can be carried out quite generally. We shall briefly sketch this general development. Suppose we have a functor $G : \mathbf{C}^{op} \times \mathbf{C} \to \mathbf{C}$. Since CAT is cartesian closed, we can curry G to obtain $\hat G : \mathbf{C}^{op} \to [\mathbf{C}, \mathbf{C}]$, where $[\mathbf{C}, \mathbf{C}]$ is the (superlarge) functor category on C. (For those concerned with set-theoretic foundations: we refer on a couple of occasions to 'superlarge' categories such as CAT, the category of 'large categories' such as Set. If we think of large categories as based on classes, superlarge categories are based on entities 'one size up', 'conglomerates' in the terminology of [15]. This can be formalized in set theory with a couple of Grothendieck universes.) There is also a functor $[\mathbf{C}, \mathbf{C}] \to \mathbf{CAT}$ which sends a functor F to its category of coalgebras, and a natural transformation $t : F \to G$ to the corresponding functor between the categories of coalgebras, as in Proposition 5.1. Composing these two functors, we obtain a strict indexed category $\mathcal{G} : \mathbf{C}^{op} \to \mathbf{CAT}$. We can then form the Grothendieck category G. Recall that, given a functor $I : \mathbf{C}^{op} \to \mathbf{CAT}$, the Grothendieck category has objects $(A, a)$, where A is an object of C and a is an object of $I(A)$. Arrows are $(G, g) : (A, a) \to (B, b)$, where $G : B \to A$ and $g : I(G)(a) \to b$. Composition of $(G, g) : (A, a) \to (B, b)$ and $(H, h) : (B, b) \to (C, c)$ is given by $(G \circ H, h \circ I(H)(g)) : (A, a) \to (C, c)$. The indexed category F arises in exactly this way, from the functor

\[ G : \mathbf{Set}^{op} \times \mathbf{Set} \to \mathbf{Set} :: (Q, X) \mapsto (\{0\} + (0, 1] \times X)^Q. \]

We have found this combination of fibrational and coalgebraic structure a convenient one for our objective in the present paper of representing physical systems. In particular, the fibrational approach to contravariance allows enough 'elbow room' for the representation of symmetries. We also used the fibrational structure in formulating the connection to Chu spaces, which proved to be both technically useful and conceptually enlightening. A natural follow-up would be to develop a fibred version of coalgebraic logic, which we plan to do in a sequel.

We note that a quite different, and in some sense more direct approach to coalgebra for bivariant functors has been developed by Tews [27]. A viable approach is developed in [27] only for a limited class of functors, the 'extended polynomial functors'. Moreover, the issues of rigidity vs. symmetry which we have been concerned with are not addressed in this approach, which is also technically fairly complex. Of course, there is a beautiful theory of the solution of reflexive equations for mixed-variance functors provided by Domain theory [13, 2]. The value of coalgebras, in our view, is that they provide a simpler setting in which a great deal can be very effectively accomplished, without the need for the introduction of partial elements and the like.
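Concretely, for the functor G above, the contravariant reindexing along $f : Q' \to Q$ simply precomposes a system's answer function with f. The following Python fragment is our own illustration of this $f^*$ action on question-indexed coalgebras; the names are hypothetical and not from the paper.

```python
# Illustrative sketch (ours): reindexing a coalgebra
#   alpha : X -> ({0} + (0,1] x X)^Q
# along f : Q' -> Q, giving f*(alpha) : X -> ({0} + (0,1] x X)^(Q').
def reindex(f, alpha):
    # alpha(x) is a function q -> None | (r, x'); reindexing precomposes with f.
    return lambda x: (lambda q_prime: alpha(x)(f(q_prime)))

# Example: a one-state system over questions Q = {"S", "T"}.
alpha = lambda x: (lambda q: (0.5, x) if q == "S" else None)
f = lambda q_prime: "S"            # every Q'-question is sent to "S"
beta = reindex(f, alpha)
print(beta("x0")("any question"))  # (0.5, 'x0')
```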
The need for contravariance in our context, motivated by the representation of physical systems, appears to be of a different nature, and hence better met by the fibrational methods we have introduced in the present paper. A deeper understanding of the issues here will, we hope, shed interesting light on each of the topics we have touched on in this paper: foundations of physics, computational models, and the mathematics of coalgebras.

Proposition 2.1 If a final F-coalgebra exists, it is unique up to isomorphism.

Proposition 2.2 (Lambek Lemma) If $\gamma : C \to FC$ is final, it is an isomorphism.

The functor $f^*$ can be applied to the coalgebra $(K_\bullet, a_K)$ corresponding to the Hilbert space K to yield a coalgebra in $F_Q$-Coalg. Since $(U_Q, \gamma_Q)$ is the final coalgebra in $F_Q$-Coalg, there is a unique coalgebra homomorphism $[\![\cdot]\!]_{K_\bullet} : f^*(K_\bullet, a_K) \to (U_Q, \gamma_Q)$. This homomorphism maps the quantum system $(K_\bullet, a_K)$ into $(U_Q, \gamma_Q)$ in a fully abstract fashion, i.e. identifying states precisely according to observational equivalence.

References

[1] S. Abramsky. Big toy models: Representing physical systems as Chu spaces. Technical Report RR-09-08, Oxford University Computing Laboratory, 2009. arXiv:0910.2393.
[2] S. Abramsky and A. Jung. Domain theory. In S. Abramsky, D. Gabbay, and T. S. E. Maibaum, editors, Handbook of Logic in Computer Science, pages 1-168. Oxford University Press, 1994.
[3] Samson Abramsky, Simon J. Gay, and Rajagopal Nagarajan. Interaction categories and the foundations of typed concurrent programming. In Manfred Broy, editor, NATO ASI DPD, pages 35-113, 1996.
[4] Michael Barr. *-Autonomous Categories, volume 752 of Lecture Notes in Mathematics. Springer, 1979.
[5] Michael Barr. The separated extensional Chu category. Theory and Applications of Categories, 4(6):137-147, 1998.
[6] Jon Barwise and Jerry Seligman. Information Flow: The Logic of Distributed Systems. Cambridge University Press, 1997.
[7] Po-Hsiang Chu. Constructing *-autonomous categories, pages 103-137. Volume 752 of Lecture Notes in Mathematics [4], 1979.
[8] Manfred Droste and Guo-Qiang Zhang. Bifinite Chu spaces. In Mossakowski et al. [18], pages 179-193.
[9] Jean-Yves Girard. Linear logic. Theoretical Computer Science, 50:1-102, 1987.
[10] Eraldo Giuli and Walter Tholen. A topologist's view of Chu spaces. Applied Categorical Structures, 15(5-6):573-598, 2007.
[11] A. Grothendieck. Catégories fibrées et descente (exposé VI). In A. Grothendieck, editor, Revêtement Etales et Groupe Fondamental (SGA 1), volume 224 of Lecture Notes in Mathematics, pages 145-194. Springer, 1970.
[12] H. Peter Gumm and Tobias Schröder. Types and coalgebraic structure. Algebra Universalis, 53:229-252, 2005.
[13] Carl A. Gunter and Dana S. Scott. Semantic domains. In Handbook of Theoretical Computer Science, Volume B: Formal Models and Semantics, pages 633-674. Elsevier, 1990.
[14] Helle Hvid Hansen, Clemens Kupke, and Eric Pacuit. Bisimulation for neighbourhood structures. In Proceedings of the 2nd Conference on Algebra and Coalgebra in Computer Science (CALCO 2007), Bergen, Norway, volume 4624 of Springer LNCS, pages 279-293. Springer, 2007.
[15] H. Herrlich and G. Strecker. Category Theory: An Introduction. Allyn and Bacon, 1973.
[16] Lubomir Ivanov. Modeling non-iterated system behavior with Chu spaces. In Hamid R. Arabnia, editor, CDES, pages 145-150. CSREA Press, 2008.
[17] Yves Lafont and Thomas Streicher. Games semantics for linear logic. In LICS, pages 43-50. IEEE Computer Society, 1991.
[18] Till Mossakowski, Ugo Montanari, and Magne Haveraaen, editors. Algebra and Coalgebra in Computer Science, Second International Conference, CALCO 2007, Bergen, Norway, August 20-24, 2007, Proceedings, volume 4624 of Lecture Notes in Computer Science. Springer, 2007.
[19] Nhu Nguyen, Hung T. Nguyen, Berlin Wu, and Vladik Kreinovich. Chu spaces: Towards new foundations for fuzzy logic and fuzzy control, with applications to information flow on the world wide web. JACIII, 5(3):149-156, 2001.
[20] Alessandra Palmigiano and Yde Venema. Nabla algebras and Chu spaces. In Mossakowski et al. [18], pages 394-408.
[21] Basil K. Papadopoulos and Apostolos Syropoulos. Fuzzy sets and fuzzy relational structures as Chu spaces. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 8(4):471-479, 2000.
[22] Dusko Pavlovic. Chu I: Cofree equivalences, dualities and *-autonomous categories. Mathematical Structures in Computer Science, 7(1):49-73, 1997.
[23] Vaughan R. Pratt. The Stone gamut: A coordinatization of mathematics. In LICS, pages 444-454. IEEE Computer Society, 1995.
[24] Vaughan R. Pratt. Transition and cancellation in concurrency and branching time. Mathematical Structures in Computer Science, 13(4):485-529, 2003.
[25] Jan J. M. M. Rutten. Universal coalgebra: a theory of systems. Theoretical Computer Science, 249(1):3-80, 2000.
[26] R. A. G. Seely. Linear logic, *-autonomous categories and cofree coalgebras. In Categories in Computer Science and Logic, volume 92 of Contemporary Mathematics, pages 371-382. American Mathematical Society, 1989.
[27] Hendrik Tews. Coalgebras for binary methods. Electronic Notes in Theoretical Computer Science, 33, 2000.
[28] Johan van Benthem. Information transfer across Chu spaces. Logic Journal of the IGPL, 8(6), 2000.
[29] Stefano Vannucci. On game formats and Chu spaces. Department of Economics Working Paper 417, University of Siena, January 2004.
[]
[ "EXISTENCE RESULT OF THE GLOBAL ATTRACTOR FOR A TRIPLY NONLINEAR THERMISTOR PROBLEM", "EXISTENCE RESULT OF THE GLOBAL ATTRACTOR FOR A TRIPLY NONLINEAR THERMISTOR PROBLEM" ]
[ "Moulay Rchid ", "Sidi Ammi ", "Ibrahim Dahi ", "ANDAbderrahmane El Hachimi ", "Delfim F M Torres " ]
[]
[]
We study the existence and uniqueness of a bounded weak solution for a triply nonlinear thermistor problem in Sobolev spaces. Furthermore, we prove the existence of an absorbing set and, consequently, the universal attractor. 2010 Mathematics Subject Classification. 35A01, 35A02, 46E35.
10.2478/mjpaa-2023-0002
[ "https://export.arxiv.org/pdf/2301.08561v1.pdf" ]
256,080,382
2301.08561
71107e46a9e9f6597d8a060ed217a12e5e64052e
EXISTENCE RESULT OF THE GLOBAL ATTRACTOR FOR A TRIPLY NONLINEAR THERMISTOR PROBLEM 8 Jan 2023 Moulay Rchid Sidi Ammi, Ibrahim Dahi, Abderrahmane El Hachimi, and Delfim F. M. Torres

We study the existence and uniqueness of a bounded weak solution for a triply nonlinear thermistor problem in Sobolev spaces. Furthermore, we prove the existence of an absorbing set and, consequently, the universal attractor. 2010 Mathematics Subject Classification. 35A01, 35A02, 46E35.

1. Introduction

The thermistor was discovered by Michael Faraday in 1833, who noticed that the temperature increases when the silver sulfides resistance decreases. A lot of studies of the thermistor problem can be found in [1, 9, 10, 15, 17]. A thermistor is a circuit component that may be used as a current limiter or as a temperature sensor. It is, typically, a tiny cylinder, constructed of a ceramic substance whose electrical conductivity is highly dependent on temperature. The thermistor regulates the heat created by an electrical current traveling through a conductor device. Thermistor problems have received a lot of attention. We refer the reader to [4, 7, 10, 12, 17, 19] and references therein. Thermistors are commonly used as temperature control devices in a wide variety of industrial equipment, ranging from space vehicles to air conditioning controllers. They are also often used in the medical field, for localized and general body temperature measurement, in meteorology, for weather forecasting, and in chemical industries as process temperature sensors. A detailed description of thermistors and their applications in electronics and other industries can be found in [23].

There are two types of thermistors: NTC and PTC, which have a negative and a positive temperature coefficient, respectively. An NTC thermistor is a temperature sensor that measures temperature using the resistance qualities of ceramic and metal composites. NTC sensors provide a number of benefits in terms of temperature sensing, including small size, great long-term stability, and high accuracy and precision. The operation of a PTC electric surge device is as follows: when the circuit's current is suddenly increased, the device heats up, causing a dramatic decline in its electrical conductivity, effectively shutting off the circuit.

In this paper, we consider the following general nonlocal thermistor problem:

\[ \begin{cases} \dfrac{\partial \alpha(v)}{\partial s} - \Delta_m v = \kappa \dfrac{f(v)}{\left(\int_\Omega f(v)\,dx\right)^2}, & \text{in } Q, \\ \alpha(v(x, 0)) = \alpha(v_0), & \text{in } \Omega, \\ v = 0, & \text{on } \Gamma \times \,]0, M[, \end{cases} \tag{1.1} \]

which models the diffusion of the temperature produced when an electric current flows crossing a material, where $f(v)$ is the electrical resistance of the conductor and $f(v)/(\int_\Omega f(v)dx)^2$ represents the non-local term of (1.1). Here, $Q = \Omega \times [0, M]$, where Ω is an open bounded subset of $\mathbb{R}^N$, $N \geq 1$, and M is a positive constant.

Problem (1.1) is a generalization of the problem appearing in the work of Kavallaris and Nadzieja [16]. For $\alpha(v) = v$ and $m = 2$, one gets the classical model of the thermistor problem appearing in the work of Lacey [17], which is a transformation of the following problem:

\[ \frac{\partial v}{\partial s} = \nabla \cdot (\kappa(v)\nabla v) + \rho(v)|\nabla\psi|^2, \qquad \nabla \cdot (\rho(v)\nabla\psi) = 0, \tag{1.2} \]

where κ is the thermal conductivity, ψ is the electrical potential, and ρ(v) represents the electrical conductivity, which is normally a positive function supposed to drop sharply by several orders of magnitude at some critical temperature, and remains essentially zero for larger temperatures. This feature is essential for the intended functioning of thermistors as thermoelectric switches. In the case $\alpha(v) = v$ and $m = 2$, existence and uniqueness results of bounded weak solutions to problem (1.1) were established in [10]. Existence of an optimal control has been obtained by many authors with different assumptions on f and m. We refer, for instance, to [14].
On the other hand, numerical computations of (1.1) and (1.2) have been carried out by other authors, see for example [6, 21, 22, 28], in which the chosen parameters correspond to actual devices. Moreover, a study of (1.2) in the case N = 1 can be found in [13]. Here, we extend the existing literature of the nonlocal thermistor problem to a triply nonlinear case. Let B be the area of Ω, I the current such that $\kappa = I^2/B^2$, and $\Delta_m$ be defined by $\Delta_m v = \mathrm{div}(|\nabla v|^{m-2}\nabla v)$ for all $m \geq 2$. We further specify the terms in (1.1). We assume:

(H1) $v_0 \in L^\infty(\Omega)$;
(H2) $\alpha : \mathbb{R} \to \mathbb{R}$ is a Lipschitz continuous increasing function such that $\alpha(0) = 0$ and $\alpha'(s) \geq \lambda > 0$ for all $s \in \mathbb{R}$;
(H3) f is a Lipschitz continuous function, with compact support, verifying $\sigma \leq f(s)$ for all $s \in \mathbb{R}$, for a positive constant σ.

The rest of the paper is organized as follows. In Section 2, we collect some basic concepts and a few known results that are useful to our development. Section 3 is devoted to the existence of a classical solution to the regularized problem of (1.1). In Section 4, existence of a bounded weak solution to the regularized problem is proved. Then, in Section 5, we provide sufficient conditions under which the solution is unique. Existence of an absorbing set, as well as the global attractor, are proved in Section 6. Finally, we present some concluding remarks in Section 7.

2. Preliminaries

In this section we collect a few known results that are useful to us.

Definition 1 (See [5]). Let α be a continuous increasing function with $\alpha(0) = 0$. For $s \in \mathbb{R}$ we define $\Psi(s) = \int_0^s \alpha(t)\,dt$. The Legendre transform $\Psi^*$ of Ψ is defined by

\[ \Psi^*(t) = \sup_{r \in \mathbb{R}} \{ rt - \Psi(r) \}. \tag{2.1} \]

In particular, we get

\[ \Psi^*(\alpha(t)) = t\alpha(t) - \Psi(t). \tag{2.2} \]

Remark 2. If $v \in L^\infty(Q)$, then $\alpha(v) \in L^\infty(Q)$. It turns out, from equality (2.2), that $\Psi^*(\alpha(v))$ is also bounded.

Lemma 3 (See [26]). Assume that z is a non-negative, absolutely continuous function satisfying the inequality

\[ z'(s) \leq h(s)\,z(s) + g(s), \quad \text{for } s \geq s_0, \]

where h and g are two non-negative integrable functions on [0, M]. Then, for each $s \in [0, M]$,

\[ z(s) \leq z(s_0)\, e^{\int_{s_0}^{s} h(\tau)\,d\tau} + \int_{s_0}^{s} g(\tau)\, e^{\int_{\tau}^{s} h(\sigma)\,d\sigma}\, d\tau. \]

Lemma 4 (Ghidaglia lemma [26]). Let z be a positive and absolutely continuous function on $]0, \infty[$ such that the inequality

\[ z' + \delta z^q \leq \eta \]

holds, where $q > 1$, $\delta > 0$, $\eta \geq 0$. Then,

\[ z(s) \leq \left(\frac{\eta}{\delta}\right)^{1/q} + \left(\delta(q-1)s\right)^{-1/(q-1)} \quad \text{for all } s \geq 0. \]

Lemma 5 (See [2]). If $v \in L^m(0, M; W^{1,m}(\Omega))$ with $\frac{\partial \alpha(v)}{\partial s} \in L^{m'}(0, M; W^{-1,m'}(\Omega))$, then

\[ \left\langle \frac{\partial \alpha(v)}{\partial s}, v \right\rangle_{W^{-1,m'}(\Omega), W^{1,m}(\Omega)} = \frac{d}{ds} \int_\Omega \Psi^*(\alpha(v)). \]

In order to study the existence of the global (universal) attractor, we introduce the following definitions.

Definition 6 (See [26]). Let us consider $B \subset F$ and U an open bounded set such that $B \subset U$. Then B is an absorbing set in U if the orbit of each bounded set of U enters into B after a given period of time (which may depend on the set):

\[ \forall B_0 \subset U,\ B_0 \text{ bounded},\ \exists s_0(B_0) \text{ such that } S(s)B_0 \subset B,\ \forall s \geq s_0(B_0). \]

Definition 7 (See [26]). The set $A \subset F$ is said to be an universal attractor for the semigroup $(S(s))_{s\geq 0}$, if the following conditions hold: (1) $A \subset F$ is a nonempty invariant compact set; (2) the set A attracts any bounded set $B \subset F$, that is, $\mathrm{dist}(S(s)B, A) \to 0$ as $s \to +\infty$, where $\mathrm{dist}(D, B) = \sup_{a \in D} \inf_{b \in B} \|a - b\|_F$.
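As a quick sanity check of Definition 1, here is our own worked example, not taken from the paper, with the simplest admissible nonlinearity $\alpha(t) = t$:

\[ \Psi(s) = \int_0^s t\,dt = \frac{s^2}{2}, \qquad \Psi^*(t) = \sup_{r \in \mathbb{R}}\left\{ rt - \frac{r^2}{2} \right\} = \frac{t^2}{2}, \]

where the supremum is attained at $r = t$. Then $\Psi^*(\alpha(t)) = t^2/2 = t\cdot t - t^2/2 = t\alpha(t) - \Psi(t)$, in agreement with identity (2.2).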
3. Regularized problems

In this section, we first present our approximation scheme. Then we proceed to prove the existence of a weak solution to our regularized problem. To design our regularized scheme, we consider

\[ \begin{cases} \alpha_r \text{ is of class } C^1(\mathbb{R}) \text{ where } 0 < \lambda < \alpha_r',\ \alpha_r(0) = 0,\ \alpha_r \to \alpha \text{ in } C_{loc}(\mathbb{R}) \text{ and } |\alpha_r| \leq |\alpha|, \\ f_r \text{ is of class } C^\infty(\mathbb{R}),\ f_r \to f \text{ in } L^1(Q) \text{ and a.e. in } Q,\ f_r \text{ satisfies (H3)}. \end{cases} \tag{3.1} \]

The initial condition is regularized as in the proof of [11, Proposition 3, p. 761], that is, $v_{r,0} \in C_c^\infty(\Omega)$ such that

\[ v_{r,0} \to v_0 \text{ in } L^\infty(\Omega), \qquad \|v_{r,0}\|_{L^\infty(\Omega)} \leq \|v_0\|_{L^\infty(\Omega)} + 1. \tag{3.2} \]

Our regularized problems are then given by

\[ \begin{cases} \dfrac{\partial \alpha_r(v_r)}{\partial s} - \Delta_m^r v_r = \kappa \dfrac{f_r(v_r)}{\left(\int_\Omega f_r(v_r)dx\right)^2}, & \text{in } Q = \Omega \times [0, M], \\ \alpha_r(v_{r,x}(0)) = \alpha_r(v_{r,0}), & \text{in } \Omega, \\ v_r = 0, & \text{on } \Gamma \times \,]0, M[, \end{cases} \tag{3.3} \]

where $\Delta_m^r v = \mathrm{div}\left( (|\nabla v|^2 + r)^{\frac{m-2}{2}} \nabla v \right)$, $m \geq 2$.

Theorem 8. Assume that hypotheses (H1)-(H3) hold. Then there exists a solution to problem (3.3).

The following lemma plays a key role in the proof of Theorem 8.

Lemma 9. For all $r > 0$, we have $\|v_r\|_{L^\infty(Q)} \leq C(M, \|v_0\|_{L^\infty(\Omega)})$, where $C(M, \|v_0\|_{L^\infty(\Omega)})$ is a positive constant.

Proof. Multiplying the first equation of problem (3.3) by $[(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1}$ ($s_0$ is a positive constant where $|v_r| > s_0$) and integrating over Ω, we get

\[ \int_\Omega \frac{\partial \alpha_r(v_r)}{\partial s} [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1} - \int_\Omega \Delta_m^r v_r\, [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1} = \int_\Omega \kappa \frac{f_r(v_r)}{(\int_\Omega f_r(v_r)dx)^2} [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1}, \]

so that

\[ \frac{1}{p+2} \frac{\partial}{\partial s} \int_\Omega [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+2} = \int_\Omega \Delta_m^r v_r\, [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1} + \int_\Omega \kappa \frac{f_r(v_r)}{(\int_\Omega f_r(v_r)dx)^2} [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1}. \tag{3.4} \]

On the other hand, we have

\[ \int_\Omega \Delta_m^r v_r\, [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1} = -(p+1) \int_\Omega (|\nabla v_r|^2 + r)^{\frac{m-2}{2}} |\nabla v_r|^2\, \alpha_r'(v_r)\, [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p} + \int_{\partial\Omega} (|\nabla v_r|^2 + r)^{\frac{m-2}{2}} \frac{\partial v_r}{\partial \nu}\, [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1}. \]

Since $(|\nabla v_r|^2 + r)^{\frac{m-2}{2}} |\nabla v_r|^2 \geq 0$ and $\alpha_r' > 0$, we get

\[ \frac{1}{p+2} \frac{\partial}{\partial s} \int_\Omega [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+2} \leq \int_{\partial\Omega} (|\nabla v_r|^2 + r)^{\frac{m-2}{2}} \frac{\partial v_r}{\partial \nu}\, [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1} + \int_\Omega \kappa \frac{f_r(v_r)}{(\int_\Omega f_r(v_r)dx)^2} [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1}. \tag{3.5} \]

By using (H3), we have

\[ \int_\Omega \kappa \frac{f_r(v_r)}{(\int_\Omega f_r(v_r)dx)^2} [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1} \leq \frac{\kappa}{(\sigma\, \mathrm{meas}(\Omega))^2} \int_\Omega f_r(v_r)\, [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1}. \]

Since $f_r$ satisfies (H3), it yields

\[ f_r(v_r(x,t)) = f_r(v_r(x,t))\,\chi_{\{v_r(x,t) \in \mathrm{supp}(f)\}} + f_r(v_r(x,t))\,\chi_{\{v_r(x,t) \notin \mathrm{supp}(f)\}} \leq f_r(v_r(x,t))\,\chi_{\{v_r(x,t) \in \mathrm{supp}(f)\}}. \]

If $v_r(x,t) \in \mathrm{supp}(f)$, then it follows that $(v_r(x,t))_r$ is bounded. Thus, there exists a positive constant $C_0$ such that

\[ \int_\Omega f_r(v_r)\, [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1} \leq C_0 \int_\Omega [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1}. \]

Keeping that in mind, we have, for a positive constant $C_1$, that

\[ \frac{1}{p+2} \frac{\partial}{\partial s} \int_\Omega [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+2} \leq C_1 \int_\Omega [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1}. \tag{3.6} \]

From Hölder's inequality, there exist positive constants $C_j$, $j = 2, 3, 4$, such that

\[ \int_\Omega [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+1} \leq (\mathrm{meas}(\Omega))^{\frac{1}{p+1}} \left( \int_\Omega [(\alpha_r(v_r) - \alpha_r(s_0))^+]^{p+2} \right)^{\frac{p+1}{p+2}} \leq C_2\, [z_p(s)]^{p+1}, \]

where $z_p(s) := \|(\alpha_r(v_r) - \alpha_r(s_0))^+\|_{L^{p+2}(\Omega)}$. In view of (3.6), we have

\[ \frac{1}{p+2} \frac{\partial}{\partial s} [z_p(s)]^{p+2} \leq C_3\, [z_p(s)]^{p+1}, \tag{3.7} \]

and hence $\frac{\partial}{\partial s} z_p(s) \leq C_3$, from which it follows that $z_p(s) - z_p(0) \leq C_3 M$, which implies $z_p(s) \leq z_p(0) + C_3 M$. Letting p go to infinity, we obtain that

\[ \|(\alpha_r(v_r) - \alpha_r(s_0))^+\|_{L^\infty(\Omega)} \leq C_4. \tag{3.8} \]

Now, let $u_r = -v_r$, and consider the following problem:

\[ \begin{cases} \dfrac{\partial \tilde\alpha_r(u_r)}{\partial s} - \Delta_m^r u_r = \kappa \dfrac{\tilde f_r(u_r)}{\left(\int_\Omega \tilde f_r(v_r)dx\right)^2} =: \tilde g(u_r), & \text{in } Q, \\ \tilde\alpha_r(u_{x,r}(0)) = \tilde\alpha_r(u_0), & \text{in } \Omega, \\ u_r = 0, & \text{on } \Gamma \times \,]0, M[, \end{cases} \tag{3.9} \]

where $\tilde\alpha_r(\tau) = -\alpha_r(-\tau)$, $\tilde g_r(\tau) = -g_r(-\tau)$ and $\tilde f_r(\tau) = -f_r(-\tau)$. Those functions satisfy the same conditions verified by α, g and f, respectively. The same reasoning done to get (3.8) shows that

\[ \|(\tilde\alpha_r(u_r) - \tilde\alpha_r(s_0))^+\|_{L^\infty(\Omega)} \leq C_5, \tag{3.10} \]

which is equivalent to $\|(-\alpha_r(-v_r(s)) + \alpha_r(-s_0))^+\|_{L^\infty(\Omega)} \leq C_5$. From (3.8) and (3.10), we deduce that there exists a positive constant C such that

\[ \|v_r(s)\|_{L^\infty(\Omega)} \leq C(M, \|v_0\|_{L^\infty(\Omega)}), \quad \text{for all } s \in [0, M]. \]

The lemma is proved.

Proof of Theorem 8. From Lemma 9 and hypotheses (H1)-(H3), we conclude, from the classical results of Ladyzenskaya (see [18, pp. 457-459]), with the existence of a classical solution to the regularized problem (3.3).
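To make the regularization concrete, the following short Python script is our own illustration, not from the paper and not one of the schemes of [6, 21, 22, 28]: an explicit finite-difference step for a one-dimensional analogue of (3.3) with $\alpha_r(v) = v$, $\Omega = (0, 1)$, and a hypothetical smooth $f_r$. It is only a sketch under these simplifying assumptions; no stability or convergence analysis is claimed.

```python
# Illustrative sketch (ours): explicit Euler step for a 1-D analogue of (3.3),
#   dv/ds = d/dx[ (|v_x|^2 + r)^((m-2)/2) v_x ] + kappa * f(v) / (int f(v) dx)^2,
# with homogeneous Dirichlet boundary conditions on (0, 1).
import numpy as np

n, m, r, kappa = 100, 3.0, 1e-3, 1.0
dx, ds = 1.0 / n, 2e-6
x = np.linspace(0.0, 1.0, n + 1)
v = np.sin(np.pi * x)                      # hypothetical initial temperature
f = lambda w: 1.0 + np.exp(-w**2)          # hypothetical smooth f_r >= sigma = 1

for _ in range(5000):
    vx = np.diff(v) / dx                   # gradients on cell interfaces
    flux = (vx**2 + r) ** ((m - 2) / 2) * vx
    div = np.diff(flux) / dx               # regularized m-Laplacian at interior nodes
    fv = f(v)
    integral = np.sum(0.5 * (fv[:-1] + fv[1:]) * dx)   # trapezoidal int of f(v)
    v[1:-1] += ds * (div + kappa * f(v[1:-1]) / integral**2)
    v[0] = v[-1] = 0.0                     # Dirichlet boundary condition

print(float(v.max()))                      # stays bounded, in the spirit of Lemma 9
```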
4. Existence of a weak solution

Definition 10. We say that $v \in L^\infty(Q) \cap L^m(0, M; W^{1,m}(\Omega)) \cap L^\infty(t, M; W^{1,m}(\Omega))$, $t > 0$, is a bounded weak solution of problem (1.1) if it satisfies the following identity:

\[ \int_0^M \left\langle \frac{\partial \alpha(v)}{\partial s}, u \right\rangle - \int_Q |\nabla v|^{m-2} \nabla v\, \nabla u = \kappa \int_Q \frac{f(v)}{(\int_\Omega f(v)dx)^2}\, u, \tag{4.1} \]

for all $u \in L^m(0, M; W^{1,m}(\Omega)) \cap L^\infty(Q)$. Furthermore, if we have $u \in W^{1,1}(0, M; L^1(\Omega)) \cap L^m(0, M; W^{1,m}(\Omega))$ with $u(\cdot, M) = 0$, then

\[ \int_0^M \left\langle \frac{\partial \alpha(v)}{\partial s}, u \right\rangle = -\int_0^M \int_\Omega [\alpha(v) - \alpha(v_0)]\, \partial_s u, \]

where the duality product is defined by $\langle \cdot, \cdot \rangle = \langle \cdot, \cdot \rangle_{W^{-1,m'}(\Omega), W^{1,m}(\Omega)}$.

Remark 11. Since $\alpha_r$ is an increasing function and $|\alpha_r| \leq |\alpha|$, then, by using Lemma 9, we also have that $(\alpha_r(v_r))_r$ is bounded.

Our plan is to derive now enough a priori estimates needed in the sequel.

Lemma 12. For all $r > 0$, we have

\[ \|v_r\|_{L^m(0,M;W^{1,m}(\Omega))} \leq C_6, \tag{4.2} \]

where $C_6$ is a positive constant independent of r.

Proof. Multiplying the first equation of (3.3) by $v_r$ and integrating, we get

\[ \int_\Omega \frac{\partial \alpha_r(v_r)}{\partial s} v_r - \int_\Omega \Delta_m^r v_r\, v_r = \int_\Omega \kappa \frac{f_r(v_r)}{(\int_\Omega f_r(v_r)dx)^2} v_r. \tag{4.3} \]

Applying (2.2), we obtain that

\[ \int_\Omega \frac{\partial \alpha_r(v_r)}{\partial s} v_r = \int_\Omega \frac{\partial [\Psi^*(\alpha_r(v_r))]}{\partial s}. \]

On another hand, by using Green's formula, we get

\[ \int_\Omega \Delta_m^r v_r\, v_r = -\int_\Omega (|\nabla v_r|^2 + r)^{\frac{m-2}{2}} \nabla v_r\, \nabla v_r + \int_{\partial\Omega} (|\nabla v_r|^2 + r)^{\frac{m-2}{2}} \frac{\partial v_r}{\partial \nu}\, v_r. \]

Substituting into (4.3) and using the boundary conditions, we get

\[ \int_0^M \int_\Omega (|\nabla v_r|^2 + r)^{\frac{m-2}{2}} |\nabla v_r|^2 = \int_0^M \int_\Omega \kappa \frac{f_r(v_r)}{(\int_\Omega f_r(v_r)dx)^2} v_r - \int_0^M \int_\Omega \frac{\partial [\Psi^*(\alpha_r(v_r))]}{\partial s}. \tag{4.4} \]

From Remark 2, we know that $(\Psi^*(\alpha_r(v_r)))_r$ is bounded. With the aid of hypothesis (H3) and Lemma 9, there exists a positive constant $C_7$ such that

\[ \int_0^M \int_\Omega \kappa \frac{f_r(v_r)}{(\int_\Omega f_r(v_r)dx)^2} v_r - \int_0^M \int_\Omega \frac{\partial [\Psi^*(\alpha_r(v_r))]}{\partial s} \leq \frac{\kappa}{(\sigma\,\mathrm{meas}(\Omega))^2} \int_0^M \int_\Omega f_r(v_r)\,|v_r| + 2\max\left( \int_\Omega \Psi^*(\alpha_r(v_r(\cdot, M))),\ \int_\Omega \Psi^*(\alpha_r(v_r(\cdot, 0))) \right) \leq C_7. \]

It yields that

\[ \int_0^M \int_\Omega |\nabla v_r|^m \leq \int_0^M \int_\Omega (|\nabla v_r|^2 + r)^{\frac{m-2}{2}} |\nabla v_r|^2 \leq C_7. \]

We deduce that $v_r \in L^m(0, M; W^{1,m}(\Omega))$.

Remark 13. Inequality (4.2), combined with Young's inequality, implies that $\left( (|\nabla v_r|^2 + r)^{\frac{m-2}{2}} \nabla v_r \right)_r$ is bounded in $L^{m'}(0, M; W^{1,m'}(\Omega))$.

A further upper bound for $v_r$ is established in the following lemma.

Lemma 14. For all $r, s > 0$, there exist positive constants $C(t)$, $C(t, M)$, and $C_1(t, M)$, such that the following inequalities hold:

\[ \|v_r(s)\|_{W^{1,m}(\Omega)} \leq C(t), \quad \text{for all } s \geq t, \tag{4.5} \]
\[ \int_t^M \int_\Omega \alpha_r'(v_r)\left| \frac{\partial v_r}{\partial s} \right|^2 \leq C(t, M), \tag{4.6} \]
\[ \int_t^M \int_\Omega \left| \frac{\partial \alpha_r(v_r)}{\partial s} \right|^2 \leq C_1(t, M). \tag{4.7} \]

Proof. Multiplying the first equation of problem (3.3) by $\frac{\partial v_r}{\partial s}$ and integrating, we obtain that

\[ \int_\Omega \frac{\partial \alpha_r(v_r)}{\partial s} \frac{\partial v_r}{\partial s} - \int_\Omega \Delta_m^r v_r\, \frac{\partial v_r}{\partial s} = \int_\Omega \kappa \frac{f_r(v_r)}{(\int_\Omega f_r(v_r)dx)^2} \frac{\partial v_r}{\partial s}. \tag{4.8} \]

Since $\int_\Omega \frac{\partial \alpha_r(v_r)}{\partial s} \frac{\partial v_r}{\partial s} = \int_\Omega \alpha_r'(v_r) \left|\frac{\partial v_r}{\partial s}\right|^2$, equality (4.8) becomes, after applying Green's formula,

\[ \int_\Omega \alpha_r'(v_r) \left|\frac{\partial v_r}{\partial s}\right|^2 + \frac{1}{m} \frac{\partial}{\partial s} \int_\Omega (|\nabla v_r|^2 + r)^{\frac{m}{2}} = \int_\Omega \kappa \frac{f_r(v_r)}{(\int_\Omega f_r(v_r)dx)^2} \frac{\partial v_r}{\partial s}. \]

Set $G_r(s) := \int_0^s g_r(\tau)\,d\tau$ and $g_r(s) := \kappa f_r(s)/(\int_\Omega f_r(s)dx)^2$. By using the boundedness of $v_r$ and (3.1), we have $\frac{\partial G_r(v_r)}{\partial s} \leq C_8$. Then, it yields that

\[ \int_\Omega g_r(v_r)\, \frac{\partial v_r}{\partial s} \leq C_8\,\mathrm{meas}(\Omega). \]
With this in mind, we derive

\[ \int_\Omega \alpha_r'(v_r) \left|\frac{\partial v_r}{\partial s}\right|^2 + \frac{1}{m} \frac{\partial}{\partial s} \int_\Omega (|\nabla v_r|^2 + r)^{\frac{m}{2}} \leq C_9, \tag{4.10} \]

and, by using Gronwall's Lemma 3, we get

\[ \int_\Omega |\nabla v_r|^m \leq \frac{1}{m} \int_\Omega (|\nabla v_r|^2 + r)^{\frac{m}{2}} \leq C_{10}. \tag{4.12} \]

According to Poincaré's inequality, it follows that $\|v_r(s)\|_{W^{1,m}(\Omega)} \leq C(t)$, for all $s \geq t$. This, combined with inequality (4.10), yields

\[ \int_t^M \int_\Omega \alpha_r'(v_r) \left|\frac{\partial v_r}{\partial s}\right|^2 + \frac{1}{m} \int_\Omega (|\nabla v_r(\cdot, M)|^2 + r)^{\frac{m}{2}} \leq \frac{1}{m} \int_\Omega (|\nabla v_r(\cdot, t)|^2 + r)^{\frac{m}{2}} + C_9 (M - t) \leq 2C(t, M). \]

As a consequence, we have

\[ \int_t^M \int_\Omega \alpha_r'(v_r) \left|\frac{\partial v_r}{\partial s}\right|^2 \leq C(t, M). \]

Since α is a locally Lipschitzian function, there exists a positive constant L such that $\alpha_r' \leq L$. Hence, we get

\[ \int_t^M \int_\Omega \left|\frac{\partial \alpha_r(v_r)}{\partial s}\right|^2 \leq L \int_t^M \int_\Omega \alpha_r'(v_r) \left|\frac{\partial v_r}{\partial s}\right|^2 \leq C_1(t, M). \]

The proof is complete.

Theorem 15. Assume that hypotheses (H1)-(H3) hold. Then there exists a weak bounded solution to problem (3.3).

Proof. To achieve the proof of Theorem 15, we need to pass to the limit in problem (3.3). By virtue of Lemma 9, there exists a subsequence, still denoted $(v_r)_r$, such that $v_r \to v$ weakly star in $L^\infty(Q)$. Note from estimate (4.2) that $v_r \to v$ weakly in $L^m(0, M; W^{1,m}(\Omega))$. Since $(v_r)_r$ is bounded in $L^\infty(t, M; W^{1,m}(\Omega))$, then $v_r \to v$ weakly star in $L^\infty(t, M; W^{1,m}_0(\Omega))$. Under the hypotheses on $f_r$, we have $f_r \to f$ a.e. This, together with Vitali's theorem (see [20]), implies the convergence to $f(v)$ in $L^1(Q)$. Applying Green's formula and using Remark 13, the corresponding right-hand side is bounded. Then there exists $\vartheta \in L^{m'}(0, M; W^{-1,m'}(\Omega))$ such that $\Delta_m^r v_r \to \vartheta$ weakly in $L^{m'}(0, M; W^{-1,m'}(\Omega))$. A classical argument (see [5]) asserts that $\vartheta = \Delta_m v$. Combining (4.5) and the smoothness of the function $\alpha_r$ yields the boundedness of the sequence $(\alpha_r(v_r))_r$ in $L^\infty(t, M; W^{1,m}(\Omega))$. On the other hand, by using (4.7), we deduce that $\left(\frac{\partial \alpha_r(v_r)}{\partial s}\right)_r$ is bounded in $L^2(t, M; L^2(\Omega))$, for all $t > 0$. Aubin's lemma (see [25]) allows us to claim that $(\alpha_r(v_r))_r$ is relatively compact in $C(]0, M[; L^1(\Omega))$. Therefore, $\alpha_r(v_r) \to \delta$ strongly in $C(]0, M[; L^1(\Omega))$. Hence, in an entirely similar manner as in [5, p. 1048], it can be shown that $\delta = \alpha(v)$.

For the continuity of the solution at the point $s = 0$, we proceed as in [3]. From Lemma 14, we deduce that $\alpha_r(v_r) \to \alpha(v)$ strongly in $C([0, M]; L^1(\Omega))$. Let us consider $v_0 \in L^\infty(\Omega)$ and take a smooth sequence $(v_{r,0})$ satisfying (3.2). Hence, $(v_{r,0})$ is bounded and convergent to $v_0$ in $L^1(\Omega)$. Then, thanks to the dominated convergence theorem, we have $\alpha(v_{r,0}) \to \alpha(v_0)$ in $L^1(\Omega)$. Now, we deal with initial data $v_0 \in C^1(\Omega)$. Choosing the sequence $(v_{r,0})$ bounded in the space $W^{1,m}(\Omega)$ and verifying hypothesis (3.2), the corresponding $\alpha(v_r)$ are continuous at $s = 0$. Furthermore, we have

\[ \|\alpha(v(s)) - \alpha(v(0))\|_{L^1(\Omega)} \leq \|\alpha(v(s)) - \alpha(v_r(s))\|_{L^1(\Omega)} + \|\alpha(v_r(s)) - \alpha(v_{r,0})\|_{L^1(\Omega)} + \|\alpha(v_{r,0}) - \alpha(v_0)\|_{L^1(\Omega)}. \tag{4.14} \]

In view of Lemma 16, we have

\[ \|\alpha(v(s)) - \alpha(v(0))\|_{L^1(\Omega)} \leq e^{Ks} \|\alpha(v_0) - \alpha(v_{r,0})\|_{L^1(\Omega)} + \|\alpha(v_r(s)) - \alpha(v_{r,0})\|_{L^1(\Omega)} + \|\alpha(v_{r,0}) - \alpha(v_0)\|_{L^1(\Omega)}. \tag{4.15} \]

As s goes to 0, all terms of the right-hand side of (4.15) tend to 0. Then we deduce that $\alpha(v) \in C([0, M]; L^1(\Omega))$. Finally, letting $r \to 0$ in (3.3), we obtain the existence of a weak bounded solution.

5. Uniqueness of solution

To prove the uniqueness of the solution, we need to impose some further hypothesis. We assume that there exists a positive constant $L_2$ such that

\[ |f(u) - f(v)| \leq L_2\, |\alpha(u) - \alpha(v)|. \tag{5.1} \]

Lemma 16. Let v and u be two solutions of problem (1.1) with initial data $v_0$ and $u_0$, respectively.
Then the following inequality holds:

\[ \|\alpha(v(s)) - \alpha(u(s))\|_{L^1(\Omega)} \leq e^{Ks} \|\alpha(v_0) - \alpha(u_0)\|_{L^1(\Omega)}, \tag{5.2} \]

where K is a positive constant.

Proof. The proof is similar to the one in [8].

For the proof of our next result, we need the following lemma.

Lemma 17 (Tartar's inequality [24]). If $a, b \in \mathbb{R}^N$, then

\[ \left( |a|^{m-2}a - |b|^{m-2}b \right) \cdot (a - b) \geq \begin{cases} C(m)\,|a - b|^m, & \text{if } m \geq 2, \\ C(m)\,\dfrac{|a-b|^2}{(|a|+|b|)^{2-m}}, & \text{if } 1 < m < 2, \end{cases} \tag{5.3} \]

for all $m > 1$, where $C(m) = 2^{2-m}$ when $m \geq 2$ and $C(m) = m - 1$ when $1 < m < 2$.

Lemma 18. Let us consider two solutions v and u of problem (1.1) with initial data $v_0$ and $u_0$, respectively, such that $v_0 = u_0$. Then $v = u$ in Q.
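Tartar's inequality is easy to probe numerically before it is used in the proof below. The following check is our own illustration, not part of the paper: it samples random vector pairs and verifies the $m \geq 2$ branch of Lemma 17 with $C(m) = 2^{2-m}$.

```python
# Illustrative check (ours) of Tartar's inequality, m >= 2 branch:
#   (|a|^(m-2) a - |b|^(m-2) b) . (a - b) >= 2^(2-m) |a - b|^m.
import numpy as np

rng = np.random.default_rng(0)
m, N = 3.0, 4
for _ in range(10_000):
    a, b = rng.normal(size=N), rng.normal(size=N)
    lhs = np.dot(np.linalg.norm(a)**(m - 2) * a - np.linalg.norm(b)**(m - 2) * b, a - b)
    rhs = 2.0**(2 - m) * np.linalg.norm(a - b)**m
    assert lhs >= rhs - 1e-12
print("Tartar's inequality held on all sampled pairs")
```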
Interchanging the role of v and u, the proof of uniqueness is finished. 6. Existence of an absorbing set and the universal attractor In this section we prove the existence of an universal attractor by first proving the existence of an absorbing set. To this end, let us consider (S(s)) s≥0 a continuous semigroup generated by problem (1.1) such that S(s) : L ∞ (Ω) → L ∞ (Ω) v 0 → α(v(s)), (6.1) where v is the bounded weak solution of problem (1.1). By using Theorem 8, the map (6.1) is well defined. Now, let us formulate the second main result in this paper. Theorem 19. For m > 2, (S(s)) s≥0 possesses an universal attractor, which is bounded in W 1,m 0 (Ω). In order to prove Theorem 19, we first show the following result. By using Poincaré's inequality, we derive that Ω |∇v| m α ′ (v) |α(v)| p ≥ λ · C 13 Ω |v| m+p , for a positive constant C 13 . The smoothness of the function α implies Ω |∇v| m α ′ (v) |α(v)| p ≥ λ · C 13 L 1 Ω |α(v)| m+p , (6.4) where L 1 is the Lipshitzity constant of function α. Recall from (6.2) − (6.4) that 1 p + 2 ∂ ∂s Ω |α(v)| p+2 + min λ · C 13 L 1 , λ · Ω |α(v)| m+p ≤ κ Ω f (v) Ω f (v)dx 2 |α(v)| p α(v). It is easy to check that 1 p + 2 ∂ ∂s Ω |α(v)| p+2 + min λ · C 13 L 1 , λ · Ω |α(v)| m+p ≤ C 14 Ω |α(v)| p+1 , for a positive constant C 14 . Set z p (s) := α(v) L p+2 (Ω) and C 15 := min λ · C 13 L 1 , λ . Making use of Hölder's inequality and the continuous embedding of L m+p (Ω) in L p+2 (Ω), we obtain that ∂z p (s) ∂s (z p (s)) p+1 + C 15 (z p (s)) m+p ≤ C 14 (z p (s)) p+1 . It follows that ∂z p (s) ∂s + C 15 (z p (s)) m−1 ≤ C 14 . Letting p going to infinity, we obtain that α(v) L ∞ (Ω) ≤ C(η) for all s ≥ η > 0. This implies v(s) L ∞ (Ω) ≤ max | α −1 (C(η)) |, | α −1 (−C(η)) | . (6.7) Let us consider ρ := max | α −1 (C(η)) |, | α −1 (−C(η)) | as the radius of the ball centered at 0. This ball is an absorbing set in L ∞ (Ω). Remark 21. Existence of an absorbing set in W 1,m (Ω) is obtained due to inequality (4.5) together with the lower semi-continuity of the norm. It yields that ||v (s)|| W 1,m (Ω) ≤ C(t) := ρ t , for all s ≥ t. Then the ball B (0, ρ t ) is an absorbing set in W 1,m (Ω). Now, in order to prove Lemma 23 below, we show that the solution of problem (1.1) is Hölder continuous. To this end, we set α(v) := w and we add the following assumptions: (H4) α is a strict increasing function and α −1 ∈ C 1 (R); (H5) i) α −1 (w) ′ is degenerate in the neighborhood of zero and there exists z ∈ [−η 0 , η 0 ], η 0 a positive constant, such that β 0 |z| k0 ≤ α −1 (w) ′ ≤ β 1 |z| k1 (6.8) for positive constants β j and k j , j = 0, 1; ii) there exists two positive constants e 0 and e 1 such that e 0 ≤ α −1 (w) ′ ≤ e 1 , (6.9) ∂w ∂s − div α −1 (w) ′ m−2 · α −1 (w) ′ |∇w| m−2 ∇w = κ f (α −1 (w)) Ω f (α −1 (w))dx 2 , (6.10) w = 0, (6.11) for all z ∈] − ∞, −η 0 [ ]η 0 , +∞[. Identifying (6.10) with (1) in the paper [27], and using hypotheses (H3)-(H5), we can apply the following theorem. In the following Lemma we prove that the operator (S(s)) s≥0 is uniformly compact for s large enough. Proof. We can derive from Lemma 9 that the set s≥s0 S(s)B is bounded in L ∞ (Ω). Furthermore, the approximation solution is uniformly bounded. We are in position to invoke Theorem 22 and, consequently, we deduce, by Ascoli-Arzelà theorem, that the set s≥s0 S(s)B is relatively compact. Proof of Theorem 19. We have to prove that (S(s)) s≥0 related to problem (1.1) possesses an universal attractor. 
Now, in order to prove Lemma 23 below, we show that the solution of problem (1.1) is Hölder continuous. To this end, we set $\alpha(v) := w$ and we add the following assumptions:

(H4) α is a strictly increasing function and $\alpha^{-1} \in C^1(\mathbb{R})$;
(H5) (i) $(\alpha^{-1}(w))'$ is degenerate in the neighborhood of zero and there exists $z \in [-\eta_0, \eta_0]$, $\eta_0$ a positive constant, such that

\[ \beta_0 |z|^{k_0} \leq (\alpha^{-1}(w))' \leq \beta_1 |z|^{k_1}, \tag{6.8} \]

for positive constants $\beta_j$ and $k_j$, $j = 0, 1$; (ii) there exist two positive constants $e_0$ and $e_1$ such that

\[ e_0 \leq (\alpha^{-1}(w))' \leq e_1, \tag{6.9} \]

for all $z \in\, ]-\infty, -\eta_0[\, \cup\, ]\eta_0, +\infty[$. In terms of w, the first equation of (1.1) reads

\[ \frac{\partial w}{\partial s} - \mathrm{div}\left( \left[(\alpha^{-1}(w))'\right]^{m-2} (\alpha^{-1}(w))'\, |\nabla w|^{m-2} \nabla w \right) = \kappa \frac{f(\alpha^{-1}(w))}{\left(\int_\Omega f(\alpha^{-1}(w))dx\right)^2}, \tag{6.10} \]

\[ w = 0 \quad \text{on the boundary.} \tag{6.11} \]

Identifying (6.10) with (1) in the paper [27], and using hypotheses (H3)-(H5), we can apply the following theorem.

Theorem 22 (See [27]). Suppose that Theorem 8 holds. Then, under assumptions (H3)-(H5), the solution of problem (1.1) is Hölder continuous.

In the following lemma we prove that the operator $(S(s))_{s\geq 0}$ is uniformly compact for s large enough.

Lemma 23. If B is a bounded set, then $\bigcup_{s\geq s_0} S(s)B$ is relatively compact for any $s \geq s_0$.

Proof. We can derive from Lemma 9 that the set $\bigcup_{s\geq s_0} S(s)B$ is bounded in $L^\infty(\Omega)$. Furthermore, the approximation solution is uniformly bounded. We are in position to invoke Theorem 22 and, consequently, we deduce, by the Ascoli-Arzelà theorem, that the set $\bigcup_{s\geq s_0} S(s)B$ is relatively compact.

Proof of Theorem 19. We have to prove that $(S(s))_{s\geq 0}$ related to problem (1.1) possesses a universal attractor. We consider the following ω-limit:

\[ \omega(B_0) := \{ v \in L^\infty(\Omega) : \exists s_n \to +\infty,\ \exists v_n \in B_0 \text{ such that } S(s_n)v_n \to v \text{ in } L^\infty(\Omega) \}, \]

where $B_0 := \overline{S(t)B}^{\,L^\infty(\Omega)}$ for some $t > 0$. We apply Lemma 1.1 in [26] to get that $\omega(B_0)$ is a nonempty compact invariant set. Then the first condition of Definition 7 holds. For the second condition of Definition 7, we proceed by contradiction. Assume that A does not attract each bounded set in $L^\infty(\Omega)$. Then there exists a bounded set B, not attracted by A, and there exist $s_n \to \infty$ and $\epsilon > 0$ such that

\[ \mathrm{dist}(S(s_n)B, A) \geq \frac{\epsilon}{2}, \tag{6.12} \]

from whence it follows that, for every n, there exists $d_n \in B$ such that

\[ \mathrm{dist}(S(s_n)d_n, A) \geq \frac{\epsilon}{2}. \tag{6.13} \]

Knowing that $B_0$ is an absorbing set for B (a bounded set), there exists $s \geq s_1$, where $s_1$ is a positive constant, such that $S(s)B \subset B_0$. Since $s_n \to \infty$, then $s_n \geq s_1$ for large enough n and $S(s_n)B \subset B_0$. As a consequence, we have

\[ S(s_n)d_n \in B_0. \tag{6.14} \]

On the other hand, recall from Lemma 23 that $\bigcup_{s\geq s_0} S(s)B_0$ is relatively compact. Consequently, the sequence $(S(s_n)d_n)_n$ is also relatively compact. So there exists a subsequence such that

\[ S(s_n)d_n \to \ell \in L^\infty(\Omega), \text{ as } s_n \to \infty. \tag{6.15} \]

With the semigroup property, setting $s_n' := s_n - s_1$ and $d_n' := S(s_1)d_n$, we infer that

\[ \omega(B_0) := \{ v : \exists s_n, d_n \text{ such that } S(s_n)d_n \to v \}. \tag{6.16} \]

In view of the fact that $d_n' \in B_0$, then $s_n'$ and $d_n'$ play the role of $s_n$ and $d_n$, respectively, in (6.16). Keeping this and (6.15) in mind, we obtain that $\ell \in \omega(B_0) = A$. Then $\mathrm{dist}(\ell, A) = 0 < \frac{\epsilon}{2}$. This is in contradiction with inequality (6.13). Hence, A is the universal attractor.

7. Conclusions and perspectives

In this paper, we proved existence and uniqueness of a bounded weak solution in Sobolev spaces for a non-local thermistor problem in the presence of triply nonlinear terms. We also proved the existence of the global attractor. As future work, we plan to study the regularity of the global attractor, the stability of the solution, and the optimal control for the thermistor problem (1.1).
Acknowledgments. Torres was supported by FCT through CIDMA and project UIDB/04106/2020.

References

[1] P. Agarwal, M. R. Sidi Ammi, and J. Asad. Existence and uniqueness results on time scales for fractional nonlocal thermistor problem in the conformable sense. Advances in Difference Equations, 2021(1):1-11, 2021.
[2] H. W. Alt and S. Luckhaus. Quasilinear elliptic-parabolic differential equations. Mathematische Zeitschrift, 183(3):311-341, 1983.
[3] F. Andreu, J. M. Mazón, F. Simondon, and J. Toledo. Attractor for a degenerate nonlinear diffusion problem with nonlinear boundary condition. Journal of Dynamics and Differential Equations, 10(3):347-377, 1998.
[4] S. N. Antontsev and M. Chipot. The thermistor problem: existence, smoothness uniqueness, blowup. SIAM Journal on Mathematical Analysis, 25(4):1128-1156, 1994.
[5] D. Blanchard and G. Francfort. Study of a doubly nonlinear heat equation with no growth assumptions on the parabolic term. SIAM Journal on Mathematical Analysis, 19(5):1032-1056, 1988.
[6] S. A. Çatal. Numerical solution of the thermistor problem. Applied Mathematics and Computation, 152(3):743-757, 2004.
[7] G. Cimatti. Existence of weak solutions for the nonstationary problem of the Joule heating of a conductor. Annali di Matematica Pura ed Applicata, 162(1):33-42, 1992.
[8] J. Diaz and F. De Thelin. On a nonlinear parabolic problem arising in some models related to turbulent flows. SIAM Journal on Mathematical Analysis, 25(4):1085-1111, 1994.
[9] A. El Hachimi and M. R. Sidi Ammi. Thermistor problem: a nonlocal parabolic problem. In Proceedings of the 2004-Fez Conference on Differential Equations and Mechanics, Electron. J. Differ. Equ. Conf., volume 11, pages 117-128, 2004.
[10] A. El Hachimi, M. R. Sidi Ammi and D. F. M. Torres. Existence and uniqueness of solutions for a nonlocal parabolic thermistor-type problem. Int. J. Tomogr. Stat., 5(W07):150-154, 2007. arXiv:math/0512629.
[11] J. Filo and P. de Mottoni. Global existence and decay of solutions of the porous medium equation with nonlinear boundary conditions. Communications in Partial Differential Equations, 17(5-6):737-765, 1992.
[12] A. Glitzky, M. Liero, and G. Nika. Dimension reduction of thermistor models for large-area organic light-emitting diodes. Discrete & Continuous Dynamical Systems - S, 14(11):3953, 2021.
[13] M. T. González Montesinos and F. Ortegón Gallego. The evolution thermistor problem with degenerate thermal conductivity. Communications on Pure & Applied Analysis, 1(3):313, 2002.
[14] D. Hömberg, C. Meyer, J. Rehberg, and W. Ring. Optimal control for the thermistor problem. SIAM Journal on Control and Optimization, 48(5):3449-3481, 2010.
[15] V. Hrynkiv and S. Koshkin. Optimal control of a thermistor problem with vanishing conductivity. Applied Mathematics & Optimization, 81(2):563-590, 2020.
[16] N. I. Kavallaris and T. Nadzieja. On the blow-up of the non-local thermistor problem. Proc. Edinb. Math. Soc. (2), 50(2):389-409, 2007.
[17] A. A. Lacey. Thermal runaway in a non-local problem modelling ohmic heating: Part I: Model derivation and some special cases. European Journal of Applied Mathematics, 6(2):127-144, 1995.
[18] O. A. Ladyženskaja, V. A. Solonnikov, and N. N. Ural'ceva. Linear and Quasi-linear Equations of Parabolic Type. Izdat. "Nauka", Moscow, 1967.
[19] A. A. Nanwate and S. P. Bhairat. On well-posedness of generalized thermistor-type problem. AIP Conf. Proc., 2435(1):Art. 020018, 2022.
[20] R. Reynolds and C. Swartz. The Vitali convergence theorem for the vector-valued McShane integral. Mathematica Bohemica, 129(2):159-176, 2004.
[21] M. R. Sidi Ammi and D. F. M. Torres. Numerical analysis of a nonlocal parabolic problem resulting from thermistor problem. Math. Comput. Simulation, 77(2-3):291-300, 2008. arXiv:0709.0129.
[22] M. R. Sidi Ammi and D. F. M. Torres. Optimal control of nonlocal thermistor equations. Internat. J. Control, 85(11):1789-1801, 2012. arXiv:1206.2873.
[23] M. R. Sidi Ammi and D. F. M. Torres. Galerkin spectral method for the fractional nonlocal thermistor problem. Comput. Math. Appl., 73(6):1077-1086, 2017. arXiv:1605.07804.
[24] J. Simon. Régularité de la solution d'un problème aux limites non linéaires. Ann. Fac. Sci. Toulouse Math., 3(3-4):247-274, 1981.
[25] J. Simon. Compact sets in the space L^p(0, T; B). Ann. Mat. Pura Appl. (4), 146:65-96, 1987.
[26] R. Temam. Infinite-Dimensional Dynamical Systems in Mechanics and Physics, volume 68 of Applied Mathematical Sciences. Springer, 1988.
[27] V. Vespri. On the local behaviour of solutions of a certain class of doubly nonlinear parabolic equations. Manuscripta Mathematica, 75(1):65-80, 1992.
[28] S. Zhou and D. R. Westbrook. Numerical solutions of the thermistor equations. Journal of Computational and Applied Mathematics, 79(1):101-118, 1997.

Moulay Rchid Sidi Ammi (corresponding author), Department of Mathematics, AMNEA Group, MAIS Laboratory, Faculty of Sciences and Technics, Moulay Ismail University, B.P. 509, Errachidia, Morocco. Email address: [email protected]
[]
[ "String Diagrams with Factorized Densities", "String Diagrams with Factorized Densities" ]
[ "Eli Sennesh [email protected] \nAmsterdam Machine Learning Lab (AMLab)\nKhoury College of Computer Science Northeastern University Boston\nMassachusettsUnited States of America\n", "Jan-Willem Van De Meent [email protected] \nUniversity of Amsterdam Amsterdam\nthe Netherlands\n" ]
[ "Amsterdam Machine Learning Lab (AMLab)\nKhoury College of Computer Science Northeastern University Boston\nMassachusettsUnited States of America", "University of Amsterdam Amsterdam\nthe Netherlands" ]
[]
A growing body of research on probabilistic programs and causal models has highlighted the need to reason compositionally about model classes that extend directed graphical models. Both probabilistic programs and causal models define a joint probability density over a set of random variables, and exhibit sparse structure that can be used to reason about causation and conditional independence. This work builds on recent work on Markov categories of probabilistic mappings to define a category whose morphisms combine a joint density, factorized over each sample space, with a deterministic mapping from samples to return values. This is a step towards closing the gap between recent category-theoretic descriptions of probability measures, and the operational definitions of factorized densities that are commonly employed in probabilistic programming and causal inference.
null
[ "https://export.arxiv.org/pdf/2305.02506v1.pdf" ]
258,479,897
2305.02506
9fcb34dfa8d563221484984c08204ecf11f426b1
String Diagrams with Factorized Densities 4 May 2023 Eli Sennesh [email protected] (Khoury College of Computer Science, Northeastern University, Boston, Massachusetts, United States of America) and Jan-Willem van de Meent [email protected] (Amsterdam Machine Learning Lab (AMLab), University of Amsterdam, Amsterdam, the Netherlands)

Submitted to: ACT 2023

A growing body of research on probabilistic programs and causal models has highlighted the need to reason compositionally about model classes that extend directed graphical models. Both probabilistic programs and causal models define a joint probability density over a set of random variables, and exhibit sparse structure that can be used to reason about causation and conditional independence. This work builds on recent work on Markov categories of probabilistic mappings to define a category whose morphisms combine a joint density, factorized over each sample space, with a deterministic mapping from samples to return values. This is a step towards closing the gap between recent category-theoretic descriptions of probability measures, and the operational definitions of factorized densities that are commonly employed in probabilistic programming and causal inference.

1 Introduction

Statisticians and machine learners analyze observed data by synthesizing models of those data. These models take a variety of forms, with several of the most widely used being directed graphical models, probabilistic programs, and structural causal models (SCMs). Applications of these frameworks have included concept construction [14], epidemiological estimation [12], and reinforcement learning [6, 15]. Unfortunately, the richer the model class, the weaker the mathematical tools available to reason rigorously about it: SCMs built on linear equations with Gaussian noise admit easy inference, while graphical models have a clear meaning and a wide array of inference algorithms but encode a limited family of models. Probabilistic programs can encode any distribution admitting computable sampling, but the definition of their densities commonly relies on operational analogies with directed graphical models.

In recent years, category theorists have developed increasingly sophisticated ways to reason diagrammatically about a variety of complex systems. These include (co)parameterized categories of systems that may modify their parameters [4] and hierarchical string diagrams for rewriting higher-order computations [1]. Recent work on Markov categories of probabilistic mappings has provided denotational semantics to probabilistic programs [23, 13], abstract categorical descriptions of conditioning, disintegration, sufficient statistics, and conditional independence [5, 7], and generalized causal models [8, 9].

In this work, our goal is to close the gap between recent category-theoretic innovations and operational practice in probabilistic programming. Denotational semantics for probabilistic programs define a measure over return values of a program given its inputs [23, 13]. To reason about inference methods, practitioners need to consider the joint distribution over all random variables, as well as the factorization of its density into conditionals. To facilitate such reasoning, we develop a category whose morphisms admit densities with respect to a finite-dimensional base measure. We then show that generalized causal models can factorize these densities and admit interventional and counterfactual queries.

This paper is structured as follows.
Section 2 reviews basic definitions of probability and measure theory, then gives a first category of probabilistic mappings and defines the abstract, categorical setting that encompasses that concrete category. Section 3 uses a category M from that abstract setting to define a category Joint(M) whose morphisms map from a parameter into an internal joint distribution and then deterministically to an output; a concrete subcategory of Joint(M) will support probability density functions over such internal joint distributions. Section 4 reviews recent work on generalized causal models and interprets them into morphisms with joint densities. Section 5 summarizes our findings. Appendix A documents a useful property of probability kernels. Appendix B extends our results to the category of quasi-Borel spaces [23]. Appendix C extends our results to unnormalized densities.

Notation. We write compositions from right to left as $g \circ f$ (or equivalently from left to right as $f \fatsemi g$) and $(X^*, \odot, ())$ for the finite list monoid on $X$'s. We draw string diagrams from the top (domain) to the bottom (codomain), showing products from left to right. We nest brackets with parentheses ([]) equivalently.

Setting. Unless otherwise mentioned, M will range over strict, causal Markov categories in the sense of Fritz [7], containing at least the standard Borel spaces Sbs ⊂ Meas and their Markov kernels, so that BorelStoch ⊆ M. M_det will denote the deterministic subcategory within M, such that M_det ⊂ M. When we can unambiguously leave implicit the ambient category and σ-algebras, $f : Z ⇝ X$ will abbreviate $f : M((Z, \Sigma_Z), (X, \Sigma_X))$. C will range over strict copy/delete or symmetric monoidal categories.

2 Background: measure theory and categorical probability

This section reviews the background on which the rest of the paper builds. Section 2.1 begins with standard definitions for a number of common terms from measure theory and probability, before building up to a first category of probabilistic mappings. Section 2.2 abstracts away the details of measure theory to define categories of nondeterministic (including probabilistic) processes from first principles.

2.1 Measure theory and probability

Measure theory studies ways of assigning a "size" to a set (beyond its cardinality); these can include count, length, volume, and probability. Proposition 1 gives a basic categorical setting for measure theory.

Proposition 1 (Measurable spaces and functions form a category [24]). Measurable spaces and functions form a category Meas with objects $(X, \Sigma_X) \in \mathrm{Ob}(\mathbf{Meas})$ consisting of sets $X \in \mathrm{Ob}(\mathbf{Set})$ and their σ-algebras¹ $\Sigma_X$, and morphisms
$$\mathbf{Meas}((Z, \Sigma_Z), (X, \Sigma_X)) = \{ f \in X^Z \mid \forall \sigma_X \in \Sigma_X,\ f^{-1}(\sigma_X) \in \Sigma_Z \}$$
consisting of measurable functions between measurable spaces (having measurable preimages of measurable sets).

Definition 1 begins with a class of measurable spaces having desirable properties.

Definition 1 (Standard Borel space). Let $(X, T_X) \in \mathrm{Ob}(\mathbf{Top})$ be a separable complete metric space or homeomorphic to one. Equipping $X$ with its Borel σ-algebra $\mathcal{B}(X)$, generated by complements, countable unions, and countable intersections of open subsets $U \in T_X$, yields a standard Borel space $(X, \mathcal{B}(X)) \in \mathrm{Ob}(\mathbf{Sbs})$, which is also a measurable space since Sbs ⊂ Meas. Example 1 is such a space.

Example 1 (The unit interval). The closed unit interval $[0, 1]$ with its Borel σ-algebra $\mathcal{B}([0, 1])$ forms a standard Borel space $([0, 1], \mathcal{B}([0, 1]))$.
Having a category of measurable spaces and some nice examples, Definition 2 formally defines what it means to assign a "size" to a measurable set.

Definition 2 (Measure). A measure $\mu : \mathcal{M}(Z)$ on a measurable space $(Z, \Sigma_Z) \in \mathrm{Ob}(\mathbf{Meas})$ is a function $\mu : \Sigma_Z \to [0, \infty]$ that is null on the empty set ($\mu(\emptyset) = 0$) and countably additive over pairwise disjoint sets $\{\sigma_k \in \Sigma_Z\}_{k \in \mathbb{N}}$:
$$\left(\forall k, n \in \mathbb{N},\ n \neq k \implies \sigma_k \cap \sigma_n = \emptyset\right) \implies \mu\Big(\bigcup_{k \in \mathbb{N}} \sigma_k\Big) = \sum_{k \in \mathbb{N}} \mu(\sigma_k).$$

Reasoning compositionally about measure requires a class of maps between a domain and a codomain that form measures. The Giry monad [11] sends a measurable space $(X, \Sigma_X)$ to its space of measures $\mathcal{M}(X)$ and probability measures $\mathcal{P}(X) \subset \mathcal{M}(X)$. Definition 3 defines maps into those spaces, treating the domain as a parameter space for a measure over the codomain.

Definition 3 (Measure kernel). A measure kernel between measurable spaces $(Z, \Sigma_Z), (X, \Sigma_X) \in \mathrm{Ob}(\mathbf{Meas})$ is a function $f : Z \times \Sigma_X \to [0, \infty]$ such that $\forall z \in Z$, $f(z, \cdot) : \mathcal{M}(X)$ is a measure, and $\forall \sigma_X \in \Sigma_X$, $f(\cdot, \sigma_X) : \mathbf{Meas}((Z, \Sigma_Z), ([0, \infty], \Sigma_{[0,\infty]}))$ is measurable.

Definition 4 specializes to measure kernels yielding only normalized probability measures.

Definition 4 (Markov kernel). A Markov kernel is a measure kernel $f : Z \times \Sigma_X \to [0, \infty]$ whose measure is a probability measure, so that $\forall z \in Z$, $f(z, \cdot) : \mathcal{P}(X)$ and $\forall z \in Z$, $f(z, X) = 1$.

The Giry monad, restricted to probability spaces, yields Markov kernels as its Kleisli morphisms $\mathbf{Meas}((Z, \Sigma_Z), \mathcal{P}(X))$, forming a category Stoch. Example 2 forms a category of Markov kernels with somewhat nicer properties than Stoch and provides a common setting for the rest of the paper.

Example 2 (Category of Markov kernels between standard Borel spaces). The category BorelStoch has as objects the standard Borel spaces $(X, \Sigma_X) \in \mathbf{Sbs}$ and as morphisms the Markov kernels $f : \mathbf{Meas}((Z, \Sigma_Z), \mathcal{P}(X))$ between those spaces, with composition of $f : Z ⇝ X$ and $g : X ⇝ Y$ given by integration over the intermediate space:
$$f \fatsemi g = z \mapsto \int_{x \in X} g(x)(\cdot)\ f(z)(dx).$$

Section 2.2 defines the setting for the paper and shows that BorelStoch forms an instance of this setting.

2.2 Categories of nondeterministic processes

Categorical probability begins from an abstract notion of nondeterminism: processes with a notion of "independent copies". This subsection makes that notion rigorous and general, first with a categorical setting in which nondeterministic processes "happen" whether observed or not, and then with a refined setting in which processes only "happen" when they affect an observed output. Categories of probability kernels will form a concrete instance of the abstract setting.

Definition 5 represents nondeterministic processes abstractly. A copy/delete category is a symmetric monoidal category whose morphisms generate nondeterministic information; this information can then be copied or deleted freely but not modified or replaced.

Definition 5 (Copy/delete category). A copy-delete or CD-category is a symmetric monoidal category $(\mathcal{C}, \otimes, I)$ in which every object $X \in \mathrm{Ob}(\mathcal{C})$ has a commutative comonoid structure $\mathrm{copy}_X : \mathcal{C}(X, X \otimes X)$ and $\mathrm{del}_X : \mathcal{C}(X, I)$ which commutes with the monoidal product structure.

Definition 6 refines the abstract setting of CD-categories to require that deleting the only result of a nondeterministic process is equivalent to deleting the process itself. Fritz [7] showed BorelStoch to be a Markov category; the rest of the paper will use a Markov category M containing BorelStoch as a subcategory BorelStoch ⊂ M.
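As a concrete warm-up, here is a minimal sketch (not from the paper) of Kleisli composition in Stoch restricted to finite sets, where a Markov kernel is a row-stochastic matrix and the integral over the intermediate space reduces to matrix multiplication. All numerical values are illustrative.

```python
import numpy as np

# A Markov kernel between finite sets is a row-stochastic matrix with
# f[z, x] = f(z)({x}). Kleisli composition integrates over the intermediate
# space; for finite sets the integral is a sum, i.e. matrix multiplication.

f = np.array([[0.7, 0.3],           # f : Z -> P(X), |Z| = |X| = 2
              [0.2, 0.8]])
g = np.array([[0.5, 0.5, 0.0],      # g : X -> P(Y), |Y| = 3
              [0.1, 0.6, 0.3]])

fg = f @ g                          # (f ; g)(z)({y}) = sum_x g(x)({y}) f(z)({x})
assert np.allclose(fg.sum(axis=1), 1.0)  # composite rows remain probability measures
print(fg)
```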
While Markov categories provide a compositional setting for nondeterministic processes, Markov kernels in these categories only provide probability measures for their outputs given their inputs. By design, they "forget" (i.e. marginalize over) all intermediate randomness in long chains of composition. Section 3 will build up a novel setting that "remembers" (i.e. does not marginalize over) joint distributions over all intermediate random variables through long chains of composition, and will show when there exist probability densities with respect to the joint distributions thus formed.

3 Joint distributions and densities for string diagrams

Statisticians cannot make use of input-output (parameter to distribution) mappings alone, except for maximum likelihood estimation. Instead, these typically appear as conditional probability distributions in a larger probability model. This larger model necessarily encodes a joint distribution over all relevant random variables, both those observed as data and the latent variables that give rise to observations. Practical probabilistic reasoning then consists of applying the laws of probability (product law for conjunctions, sum law for disjunctions, marginalization for unconditional events, Bayesian inversion) to numerical densities representing the joint distribution.

This section will model the algebra of joint probability densities in a novel Markov category Joint(M) defined in terms of an underlying Markov category M. Section 3.1 will first review an abstraction for categories in which morphisms act by "pushing forward" an internal "parameter space", and then instantiate that abstraction on a Markov category to yield a Markov category Joint(M) of joint distributions. Section 3.2 will give the conditions for a concrete Markov kernel to admit a density and to be a pushforward of a distribution with a density; for standard Borel and quasi-Borel spaces these will admit categories whose morphisms act by pushing forward a joint probability density. Section C.1 will import an additional tool from applied probability, representing unnormalized densities for distributions with analytically intractable normalizing constants.

3.1 Accumulating random variables into joint distributions

Structural graphical models and probabilistic programs separate the functions and variables they allow into deterministic and random ones [17]. Representing deterministic mechanisms categorically requires assuming that each nondeterministic process consists of a deterministic mechanism and a (potentially conditional) distribution over a random variable. Composing such mechanisms together then requires combining both their mechanistic and noisy components. Definition 7 gives a construction from categorical cybernetics [4] for doing just that. Given a Markov category M, we will represent causal mechanisms with deterministic morphisms $\mathbf{Para}_\otimes(M_{\det})$ and noise variables with their residuals $M \in \mathrm{Ob}(M)$. Definition 8 will build Markov kernels from causal mechanisms and conditional distributions over residuals (Equation 1).

As implied by the hom-set notation above, joint Markov kernels form a category of nondeterministic processes. Since the residuals of joint distributions only contribute to downstream processes through their local outputs, this will be a Markov category.
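A minimal operational sketch of Para-style composition (Definition 7), assuming Python functions stand in for deterministic morphisms; the names `scale` and `shift` and all values are illustrative, not from the paper. A parametric morphism (M, k) is a parameter space together with k : M ⊗ A → B, and composites pair the parameters as M′ ⊗ M.

```python
def compose_para(mk1, mk2):
    (M1, k1), (M2, k2) = mk1, mk2
    # Composite parameter space is the pair (M2, M1); the composite map
    # applies k1 with the inner parameter, then k2 with the outer one.
    return ((M2, M1), lambda m, a: k2(m[0], k1(m[1], a)))

scale = ("R", lambda m, a: m * a)    # k : R x A -> B, scaling by m
shift = ("R", lambda m, b: m + b)    # k' : R x B -> C, shifting by m

Mc, kc = compose_para(scale, shift)
print(Mc, kc((10.0, 2.0), 3.0))      # ('R', 'R'), 10 + (2 * 3) = 16.0
```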
Theorem 1 (Joint Markov kernels form a Markov category). Given a strict Markov category M, Joint(M) is a strict Markov category having $\mathrm{Ob}(\mathrm{Joint}(M)) = \mathrm{Ob}(M)$ and joint Markov kernels as morphisms.

Proof. Joint(M) must admit the typical requirements of a category as well as deterministic, copy/delete symmetric monoidal structure. We can demonstrate the necessary deterministic structure by exhibiting joint kernels for any noiseless causal mechanism $(I, k) : \mathbf{Para}_\otimes(M_{\det})(Z, X)$, namely $([I, k], \mathrm{del}_Z) : \mathrm{Joint}(M)(Z, X)$. Setting $k = \mathrm{copy}_X$ or $k = \mathrm{del}_Z$ yields the necessary copy and delete maps. Setting $k = \mathrm{swap}_{Z \otimes X}$ gives the necessary symmetry of the monoidal product. It remains to show that Joint(M) has a monoidal product over morphisms and that its hom-sets are closed under composition.

Given two joint Markov kernels $([M_1, k_1], f_1) : \mathrm{Joint}(M)(Z, X)$ and $([M_2, k_2], f_2) : \mathrm{Joint}(M)(W, Y)$, their monoidal product is formed by pairing their causal mechanisms and noise distributions:
$$([M_1, k_1], f_1) \otimes ([M_2, k_2], f_2) := \big([M_1, k_1] \otimes_{\mathbf{Para}_\otimes(M_{\det})} [M_2, k_2],\ f_1 \otimes_M f_2\big) : \mathrm{Joint}(M)(Z \otimes W, X \otimes Y).$$

Composing two joint Markov kernels $([M_1, k_1], f_1) : \mathrm{Joint}(M)(Z, X)$ and $([M_2, k_2], f_2) : \mathrm{Joint}(M)(X, Y)$ along their intermediate object involves composing their parametric maps and taking a conditional product of their stochastic kernels to form the composite joint distribution over the residual $M_1 \otimes M_2$ (Equation 2; in the original this composite is rendered as a string diagram in which $f_1$'s residual feeds both the mechanism $k_1$ and, through $f_2$, the accumulated joint noise). The composite is a morphism in $\mathrm{Joint}(M)(Z, Y)$. ∎

Of course, given a joint Markov kernel that explicitly separates a causal mechanism from the noise making it nondeterministic or uncertain, simply integrating over the noise and taking the pushforward measure will recover an ordinary Markov kernel.

Corollary 2 (Joint Markov kernels yield Markov kernels). Given a strict Markov category M, there exists an identity-on-objects faithful functor $F : \mathrm{Joint}(M) \to M$ which acts on morphisms $F(([M, k], f)) : \mathrm{Joint}(M)(Z, X) \to M(Z, X)$ by marginalizing the residual via Equation 1.

This subsection has considered arbitrary, unstructured joint distributions Joint(M). Section 3.2 will examine the special case in which the residual object is a standard Borel space and the conditional distribution into it meets the necessary conditions to admit a probability density.
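The following sketch (a hypothetical operational reading, not the paper's construction) models a joint Markov kernel ([M, k], f) as a residual sampler plus a deterministic mechanism, and composes two of them by pairing residuals rather than marginalizing the intermediate value, mirroring Equation 2. All distributions and names are illustrative.

```python
import random
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class JointKernel:
    noise: Callable[[Any, random.Random], Any]   # z -> m ~ f(z)
    mech: Callable[[Any, Any], Any]              # (m, z) -> x = k(m, z)

    def run(self, z, rng):
        m = self.noise(z, rng)
        return m, self.mech(m, z)

def compose(jk1: JointKernel, jk2: JointKernel) -> JointKernel:
    def noise(z, rng):
        m1 = jk1.noise(z, rng)
        x = jk1.mech(m1, z)
        m2 = jk2.noise(x, rng)
        return (m1, m2)                          # residuals accumulate: M1 ⊗ M2
    def mech(m, z):
        m1, m2 = m
        return jk2.mech(m2, jk1.mech(m1, z))
    return JointKernel(noise, mech)

rng = random.Random(0)
f = JointKernel(lambda z, r: r.gauss(0, 1), lambda m, z: z + m)        # X = Z + noise
g = JointKernel(lambda x, r: r.gauss(0, 0.1), lambda m, x: 2 * x + m)  # Y = 2X + noise
print(compose(f, g).run(1.0, rng))  # ((m1, m2), y): the joint sample is remembered
```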
3.2 Base measures and densities over standard Borel spaces

Applied probability typically works not with probability measures but with probability densities: functions over a finite-dimensional sample space giving the "derivative" of a probability measure at a point. However, probability densities only exist for measures that meet the conditions of the Radon-Nikodym Theorem, and only relative to a specified base measure over the sample space. This section will restrict the residual objects or internal noises of joint Markov kernels to standard Borel sample spaces admitting probability densities, and then show that this restriction still admits a broad class of joint Markov kernels.

Standard Borel spaces are measurably isomorphic at each cardinality: they consist of finite sets, countable sets (such as $\mathbb{N}$ and $\mathbb{Z}$), and $\mathbb{R}$ with the appropriate Borel σ-algebras. Products and coproducts of standard Borel spaces can then be used to build up Euclidean vector spaces, product spaces, and disjoint union spaces. Definition 9 inductively defines a standardized base measure over all of these.

Definition 9 (Finite-dimensional base measure). For an object $(X, \Sigma_X) \in \mathrm{Ob}(\mathbf{Sbs})$ given by a finite product or coproduct of standard Borel spaces, we define a base measure $\mu_X : \Sigma_X \to [0, \infty]$ by cases
$$\mu_X(\sigma_X) = \begin{cases} \mu_\#(\sigma_X) & |X| \leq \aleph_0 \\ \lambda^n(\sigma_X) & X = \mathbb{R}^n,\ n \in \mathbb{N}^+ \\ \mu_{X_1}(\sigma_{X_1})\,\mu_{X_2}(\sigma_{X_2}) & X = X_1 \otimes X_2,\ \sigma_{X_1} = \{x_1 \mid x \in \sigma_X\},\ \sigma_{X_2} = \{x_2 \mid x \in \sigma_X\} \\ \mu_{X_1}(\sigma_{X_1}) + \mu_{X_2}(\sigma_{X_2}) & X = X_1 \oplus X_2,\ \sigma_{X_1} = \{x_1 \mid \mathrm{inl}(x_1) \in \sigma_X\},\ \sigma_{X_2} = \{x_2 \mid \mathrm{inr}(x_2) \in \sigma_X\} \end{cases}$$
where $\mu_\#$ denotes the counting measure on countable sets and $\lambda^n$ the Lebesgue measure on $\mathbb{R}^n$.

The Radon-Nikodym Theorem for the existence of probability densities requires that both measures under consideration consist of sums over countable partitions of the underlying space, or equivalently, allow some function to have a finite integral. Definition 10 states this condition formally.

Definition 10 (σ-finite measure kernel). A σ-finite measure kernel $f : Z \times \Sigma_X \to [0, \infty]$ is a measure kernel which at every parameter $z \in Z$ splits its codomain into countably many measurable sets $X = \bigcup_{n \in \mathbb{N}} X_n$, $X_n \in \Sigma_X$, each of which has finite measure $f(z)(X_n) < \infty$.

Proposition 2 then shows that our standardized base measure on standard Borel spaces has this property and so can serve as a base measure for densities.

Proposition 2 ($\mu_X$ is σ-finite). For an object $(X, \Sigma_X) \in \mathrm{Ob}(\mathbf{Sbs})$ given by a finite product or coproduct of standard Borel spaces, $\mu_X$ is σ-finite.

Proof. By induction on the structure of $(X, \Sigma_X)$ and the resulting definition of $\mu_X$. The base cases are countable sets and the reals. If $X$ is countable, $\mu_X = \mu_\#$, which is σ-finite on countable sets. If $X = \mathbb{R}^n$ for some finite $n$, $\mu_X = \lambda^n$, the $n$-dimensional Lebesgue measure, which is σ-finite. If $X = X_1 \otimes X_2$, $\mu_X$ is a product measure. By the inductive hypothesis, $\mu_{X_1}$ and $\mu_{X_2}$ are σ-finite, and so by taking the Cartesian product of the countable coverings of $X_1$ and $X_2$ we obtain a covering of the product $X_1 \otimes X_2$ in which the product of measures for each pair of sets is finite. If $X = X_1 \oplus X_2$, $\mu_X$ is a sum of measures over disjoint sets. A coproduct space can be covered by its inclusions' images on its components. The inductive hypothesis then gives coverings by further disjoint subsets of $X_1$ and $X_2$ in which each covering set has finite measure, so their measures' sum is finite. Any other uncountable standard Borel space is measurably isomorphic to $\mathbb{R}$ and can count as $\mathbb{R}^1$. ∎

Having a suitable base measure, Definition 11 gives the class of Markov kernels which admit densities with respect to that measure, and Proposition 3 verifies that they do so.

Definition 11 (Density kernel). A density kernel is a σ-finite Markov kernel $f : Z ⇝ X$ into a standard Borel space $(X, \Sigma_X) \in \mathrm{Ob}(\mathbf{Sbs})$ that is everywhere absolutely continuous ($f(z) \ll \mu_X$) with respect to $\mu_X$, so that $\forall z \in Z, \forall \sigma_X \in \Sigma_X,\ \mu_X(\sigma_X) = 0 \implies f(z)(\sigma_X) = 0$.

Proposition 3 (Density kernels admit densities). Every density kernel $f : Z ⇝ X$ into a standard Borel space (Definition 11) admits a density with respect to the base measure $\mu_X$.

Proof. σ-finiteness of the kernel $f$ and the base measure $\mu_X$, plus absolute continuity, give the necessary conditions for the classical Radon-Nikodym theorem: a Radon-Nikodym derivative therefore exists,
$$\frac{df(z)}{d\mu_X} : \mathbf{Meas}(X, \mathbb{R}_{\geq 0}), \qquad f(z)(\sigma_X) = \int_{x \in \sigma_X} \frac{df(z)}{d\mu_X}(x)\ \mu_X(dx).$$
The Radon-Nikodym derivative is the measure-theoretic notion of a probability density function:
$$\frac{df}{d\mu_X} : \mathbf{Meas}(Z \times X, \mathbb{R}_{\geq 0}), \qquad p_f(\cdot \mid \cdot) : \mathbf{Meas}(X \times Z, \mathbb{R}_{\geq 0}), \qquad p_f(x \mid z) := \frac{df(z)}{d\mu_X}(x).$$ ∎
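As a hypothetical concrete instance of Definitions 10-11 (not an example from the paper): the kernel f(z) = Normal(z, 1) on the reals is σ-finite and absolutely continuous with respect to the Lebesgue base measure, so its Radon-Nikodym derivative is the familiar Gaussian density.

```python
import math

# Density kernel f : R ⇝ R with f(z) = Normal(z, 1); p_f(x | z) is the
# Radon-Nikodym derivative df(z)/d(Lebesgue) evaluated at x.

def p_f(x: float, z: float) -> float:
    return math.exp(-0.5 * (x - z) ** 2) / math.sqrt(2 * math.pi)

# Sanity check: the density integrates to (approximately) 1 over a wide grid.
dx = 1e-3
total = sum(p_f(-10 + i * dx, z=0.0) for i in range(20_000)) * dx
assert abs(total - 1.0) < 1e-4
```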
The conditions on density kernels are therefore sufficient to yield probability densities. While composing two density kernels yields a density kernel, so does precomposing a density kernel with any other morphism. There is no identity density kernel, and for these reasons density kernels do not form a category. Theorem 3 shows they instead form a sieve: a subset of morphisms into certain objects (here, the standard Borel spaces) closed under arbitrary precomposition.

Theorem 3 (Density kernels form sieves in any BorelStoch ⊆ M). Given a strict causal Markov category M such that BorelStoch ⊆ M, density kernels in M form sieves $S_{(Y, \Sigma_Y)} \subset \mathrm{Ob}(M / (Y, \Sigma_Y))$ in the slice category $M / (Y, \Sigma_Y)$ on their codomains $(Y, \Sigma_Y) \in \mathrm{Ob}(\mathbf{Sbs})$.

Proof. Consider a $g : X ⇝ Y$ admitting the density
$$g(x, \sigma_Y) = \int_{y \in \sigma_Y} p_g(y \mid x)\ \mu_Y(dy),$$
and consider any $f : Z ⇝ X$. The composition $f \fatsemi g$ is the marginalization
$$f \fatsemi g = (z, \sigma_Y) \mapsto \int_{x \in X} g(x, \sigma_Y)\ f(z)(dx) = (z, \sigma_Y) \mapsto \int_{x \in X} \int_{y \in \sigma_Y} p_g(y \mid x)\ \mu_Y(dy)\ f(z)(dx) = (z, \sigma_Y) \mapsto \int_{y \in \sigma_Y} \int_{x \in X} p_g(y \mid x)\ f(z)(dx)\ \mu_Y(dy),$$
so that the density can be given with respect to the same base measure by integrating
$$p_{f \fatsemi g}(y \mid z) = \int_{x \in X} p_g(y \mid x)\ f(z)(dx).$$
We thereby see that $f \fatsemi g \in S_{(Y, \Sigma_Y)}$. ∎

With some abuse of notation we will write $S_{(X, \Sigma_X), (Y, \Sigma_Y)} = \{ f \in S_{(Y, \Sigma_Y)} \mid \mathrm{dom}(f) = (X, \Sigma_X) \}$.

Since density kernels form a sieve but not a category, Proposition 4 demonstrates that the sieve is closed under finite products (conditionally independent conjunctions) and coproducts (mixture distributions).

Proposition 4 (Density kernels are closed under finite products and coproducts). Density kernels into standard Borel spaces $(X, \Sigma_X), (Y, \Sigma_Y)$ admit finite products $S_{(X, \Sigma_X) \otimes (Y, \Sigma_Y)}$ and coproducts $S_{(X, \Sigma_X) \oplus (Y, \Sigma_Y)}$.

Proof. Given $f \in S_{\theta_1, (X, \Sigma_X)}$ and $g \in S_{\theta_2, (Y, \Sigma_Y)}$, their monoidal product $f \otimes g : S_{\theta_1 \otimes \theta_2, (X, \Sigma_X) \otimes (Y, \Sigma_Y)}$,
$$(f \otimes g)(\theta, \sigma_{X \otimes Y}) = \int_{(x, y) \in \sigma_{X \otimes Y}} p_f(x \mid \theta_1)\, p_g(y \mid \theta_2)\ \mu_X(dx)\, \mu_Y(dy),$$
admits a density with respect to the product measure $\mu_{X \otimes Y} = \mu_X \mu_Y$. Likewise, given two density kernels $f \in S_{(Z, \Sigma_Z), (X, \Sigma_X)}$ and $g \in S_{(Z, \Sigma_Z), (Y, \Sigma_Y)}$ with a shared domain, their coproduct $f \oplus g \in S_{(Z, \Sigma_Z), (X, \Sigma_X) \oplus (Y, \Sigma_Y)}$,
$$(f \oplus g)(z, \sigma_{X \oplus Y}) = \int_{\{x \mid \mathrm{inl}(x) \in \sigma_{X \oplus Y}\}} p_f(x \mid z)\ \mu_X(dx) + \int_{\{y \mid \mathrm{inr}(y) \in \sigma_{X \oplus Y}\}} p_g(y \mid z)\ \mu_Y(dy),$$
admits a density with respect to the disjoint sum measure $\mu_{X \oplus Y} = \mu_X + \mu_Y$. ∎

Corollary 4 shows that this closure extends to products of joint distributions with conditionals.

Corollary 4 (Density kernels are closed under joint distributions). A pair of density kernels $f \in S_{(X, \Sigma_X)}$, $g \in S_{(X, \Sigma_X), (Y, \Sigma_Y)}$ yields a joint density kernel $f \fatsemi \mathrm{copy}_{(X, \Sigma_X)} \fatsemi (\mathrm{id}_X \otimes g) \in S_{(X, \Sigma_X) \otimes (Y, \Sigma_Y)}$.

Proof. $f \fatsemi \mathrm{copy}_{(X, \Sigma_X)} \fatsemi (\mathrm{id}_X \otimes g)$ is a density kernel in both its outputs: the codomain is a standard Borel space, it is a σ-finite product measure, and it is absolutely continuous with respect to $\mu_{X \otimes Y}$. ∎

Density kernels give σ-finite probability measures over residuals, but they are not closed under arbitrary post-compositions. Section 3.3 will remedy this issue with a categorical setting for joint densities.
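A minimal numerical sketch of the sieve property (my illustration, with assumed conjugate Gaussians): precomposing a density kernel g with an arbitrary kernel f yields the marginal density p_{f;g}(y|z) = ∫ p_g(y|x) f(z)(dx), which we can estimate by Monte Carlo and compare against the closed form available for this particular pair.

```python
import math, random

def normal_pdf(y, mean, sd=1.0):
    return math.exp(-0.5 * ((y - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def p_fg(y, z, n=100_000, seed=0):
    # f(z) = Normal(z, 1), sampled; g has density p_g(y | x) = Normal(y; 2x, 1).
    rng = random.Random(seed)
    return sum(normal_pdf(y, 2 * rng.gauss(z, 1)) for _ in range(n)) / n

# Exact marginal for this pair: y = 2x + eps, so y | z ~ Normal(2z, sqrt(5)).
print(p_fg(0.0, 0.0), normal_pdf(0.0, 0.0, sd=math.sqrt(5)))
```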
3.3 Joint densities over joint distributions

Density kernels are not closed under pushforwards, they do not form a category, and Joint(M) cannot be specialized directly to density kernels. Definition 12 gives a class of measure kernels (including probability kernels) that contains density kernels but is closed under pushforwards.

Definition 12 (s-finite measure kernel). An s-finite measure kernel $f : Z \times \Sigma_X \to [0, \infty]$ is a measure kernel (as in Definition 3 above) which decomposes into a sum of finite kernels $f = \sum_{n \in \mathbb{N}} f_n$ such that $\forall n \in \mathbb{N}$, $f_n : Z \times \Sigma_X \to [0, \infty]$ and $\forall n \in \mathbb{N}, \exists r_n \in \mathbb{R}_{\geq 0}, \forall z \in Z,\ f_n(z, X) \leq r_n$.

Proposition 5 will imply that every density kernel is an s-finite kernel (Definition 12).

Proposition 5 (s-finite kernels are pushforwards of σ-finite kernels [25, 23]). A measure kernel $f : Z \times \Sigma_X \to [0, \infty]$ is s-finite if and only if it is a pushforward $f = \mathrm{copy}_Z \fatsemi (p \otimes \mathrm{id}_Z) \fatsemi k$ of a σ-finite measure kernel $p$ through a deterministic $k$.

The above proposition includes trivial pushforwards, so every σ-finite (Definition 10) measure kernel is s-finite (Definition 12), but not the other way around. Proposition 6 shows that such measure kernels form a copy/delete category, as stated by Cho and Jacobs [5] as their Example 7.2.

Proposition 6 (s-finite measure kernels form a CD-category [5]). s-finite measure kernels (Definition 12) between measurable spaces form a CD-category sfKrn with $\mathrm{Ob}(\mathbf{sfKrn}) = \mathrm{Ob}(\mathbf{Meas})$ and hom-sets given by $\mathbf{sfKrn}((Z, \Sigma_Z), (X, \Sigma_X)) = \{ f : Z \times \Sigma_X \to [0, \infty] \mid f \text{ is s-finite} \}$.

sfKrn only forms a copy/delete category, not a Markov category, since different measure kernels may have different normalizing constants, or no normalizing constant at all. Corollary 5 shows that restricting to probability kernels forms a Markov category.

Corollary 5 (s-finite probability kernels form a Markov category). The s-finite probability kernels $f : \mathbf{sfKrn}((Z, \Sigma_Z), (X, \Sigma_X))$, for which $\forall z \in Z,\ f(z, X) = 1$, form a Markov category sfStoch ⊆ Stoch.

Proof. The restriction of all kernels to normalize to measure 1 renders every map $\mathrm{del}_Z$ unique, making $I$ a terminal object and the resulting subcategory sfStoch a Markov category. ∎

The remainder of this paper will interpret morphisms in a variety of categories as s-finite Markov kernels $\mathbf{sfStoch}(Z, X)$ with densities $\mathbf{sfKrn}(Z \otimes X, I)$.

Proof (of Theorem 6, stated below). First we show the joint density kernels form a subcategory, then show that subcategory is wide. Corollary 4 shows that density kernels are closed under the composition of Joint(M) (Equation 2). Proposition 4 shows them to be closed under products and coproducts as well, so this category inherits the symmetric monoidal structure of Joint(M). The structure morphisms in Joint(M) all have the unit $I$ for their residual, which admits a trivial density as a finite standard Borel space; ∂Joint(M) therefore inherits the copy/delete structure of Joint(M). This gives a subcategory ∂Joint(M) ⊂ Joint(M). Objects and structure morphisms are inherited from Joint(M), so the subcategory is wide. ∎

Finally, Theorem 7 shows that the joint Markov kernels of ∂Joint(M) are s-finite and admit densities jointly measurable in the parameter and the residual. Being s-finite, joint density kernels admit the required probability kernels $p : \mathbf{sfStoch}((Z, \Sigma_Z), (X, \Sigma_X))$ with $p(z, \sigma_X) = f(z, k(z)^{-1}(\sigma_X))$, and densities $p_f(\cdot \mid z) : M \times \Sigma_I \to [0, \infty]$, measurable in $z$ and $m$, defined by means of the Radon-Nikodym derivative (see Proposition 3) as
$$p_f(m \mid z)(\{*\}) = \frac{df(z)}{d\mu_M}(m).$$

Theorem 6 and Theorem 7 finally give us the categorical setting we have sought: one which supports composition, products, and coproducts as a copy/delete category should, while decomposing into a deterministic causal mechanism applied to a random variable with a joint density as a structural causal model should.
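The following tiny sketch (my illustration, with assumed distributions) shows why density kernels are not closed under pushforward, motivating the move to s-finite kernels: pushing f(z) = Normal(z, 1) through the constant mechanism k(x) = 0 yields the point mass δ₀, which is s-finite but has no density with respect to Lebesgue measure, since it charges the Lebesgue-null set {0}.

```python
import random

# Pushforward of a Gaussian through k(x) = 0 * x: every sample lands on the
# Lebesgue-null set {0}, so the image measure admits no Lebesgue density.
rng = random.Random(0)
samples = [0.0 * rng.gauss(0.5, 1.0) for _ in range(5)]
print(samples)  # all exactly 0.0: the pushforward is the point mass at 0
```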
Section 4 will put together the machinery in this section with existing work on factorizing string diagrams syntactically, interpreting those factorizations as generalizing directed graphical models. For statistics applications needing them, Appendix C.1 extends this setting with unnormalized densities.

4 Diagrams as causal factorizations of joint distributions and densities

This section demonstrates that string diagrams with factorized densities support the full "ladder of causation" as probabilistic models: posterior distributions, interventions, and counterfactual queries. Section 3 presented the ∂Joint(M) construction for building up joint densities while still expressing arbitrary pushforward measures over them. Reasoning about directed graphical models or probabilistic programs compositionally requires providing a graphical syntax interpretable into ∂Joint(M). Recent work [8, 9] treated a combinatorial syntax of string diagrams as generalized causal models; this section applies that syntax to our constructions. Section 4.1 reviews the definitions involved in constructing a causal model as a morphism in a freely generated Markov category and interpreting it into a target Markov or copy/delete category. Section 4.2 then interprets generalized causal models into joint density kernels and shows how to represent interventions and counterfactuals.

4.1 Generalized causal models in copy/delete categories

Generalized causal models [8] allow for global inputs and outputs to a model, while making explicit the grouping of "edges" into Markov kernels. They employ hypergraphs, which "flip" the status of nodes and edges relative to ordinary graphs: "hypernodes" are drawn as wires and "hyperedges" connecting them as boxes. These hypergraphs represent string diagrams combinatorially; restricting hypergraphs to conditions matching certain kinds of categories defines "free" categories of those kinds. This subsection will build up free copy/delete and Markov categories with generalized causal models as morphisms. Definition 14 defines hypergraphs in terms of sets [10]; Bonchi et al. [3] provide categorical intuition.

Definition 14 (Hypergraph). A hypergraph is a 4-tuple $(W, B, \mathrm{dom}, \mathrm{cod})$ consisting of a set of vertices, nodes, or "wires" $W$; a set of hyperedges or "boxes" $B$; a function $\mathrm{dom} : B \to W^*$ assigning a domain to each box; and a function $\mathrm{cod} : B \to W^*$ assigning a codomain to each box. We abuse notation and write individual boxes $b \in B : \mathrm{dom}(b) \to \mathrm{cod}(b)$.

Definition 15 specifies relabelings of one hypergraph's wires and boxes with those of another.

Definition 15 (Hypergraph morphism). Given hypergraphs $G, H$, a hypergraph morphism $\alpha : G \to H$ is a pair of functions assigning wires to wires and boxes to boxes, the latter respecting the former:
$$\mathbf{Hyp}(G, H) := \left\{ (\alpha_W, \alpha_B) \in W(H)^{W(G)} \times B(H)^{B(G)} \mid \forall b \in B(G),\ \alpha_B(b) : \alpha_W(\mathrm{dom}(b)) \to \alpha_W(\mathrm{cod}(b)) \right\}.$$

As implied by the hom-set notation, hypergraphs and their morphisms form a category Hyp [3], and our application will employ the full subcategory FinHyp in which $W$ and $B$ both have finite cardinality. Finally, a hypergraph $H$ is discrete when $B(H) = \emptyset$; $\mathbf{n}$ denotes a discrete hypergraph with $n \in \mathbb{N}$ wires. Any monoidal category has a (potentially infinite) underlying hypergraph, which we denote $\mathrm{hyp}(\cdot) : \mathbf{MonCat} \to \mathbf{Hyp}$, following Fritz and Liang [9]. Often a finite hypergraph $\Sigma \in \mathbf{FinHyp}$ denotes the generating objects and morphisms of a free monoidal category, or the primitive types and functions of a domain-specific programming language. We call such a finite hypergraph a monoidal signature.
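A minimal sketch of Definition 14 as plain data (my illustration; wire and box names are hypothetical): each box is assigned lists of wires for its domain and codomain.

```python
wires = {"z", "x", "y"}
boxes = {"f": (["z"], ["x"]),   # box f : z -> x
         "g": (["x"], ["y"])}   # box g : x -> y
dom = {b: d for b, (d, c) in boxes.items()}
cod = {b: c for b, (d, c) in boxes.items()}

# The induced string diagram z -f-> x -g-> y is acyclic, and each wire has
# at most one "starting place", as Definition 16 below will require.
print(dom, cod)
```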
Definition 16 formally defines the copy/delete category freely generated by a signature Σ, which Definition 17 will restrict to free Markov categories.

Definition 16 (Free copy/delete category for the signature Σ [9]). The free CD-category $\mathbf{FreeCD}_\Sigma$ for $\Sigma \in \mathbf{FinHyp}$ is a subcategory $\mathbf{FreeCD}_\Sigma \subseteq \mathrm{cospan}(\mathbf{FinHyp}/\Sigma)$ where

• Objects are the pairs $(n, \sigma) \in \mathbb{N} \times (\mathbf{n} \to \Sigma)$ assigning outer wires of a string diagram to wires in Σ;

• Morphisms are isomorphism classes of cospans, given combinatorially as
$$\mathbf{FreeCD}_\Sigma((n, \sigma_n), (m, \sigma_m)) = \{ p \to \mathrm{dom}(\tau) \leftarrow q \in \mathbf{FinHyp}(\mathbf{n}, \mathrm{dom}(\tau)) \times \mathrm{Ob}(\mathbf{FinHyp}/\Sigma) \times \mathbf{FinHyp}(\mathbf{m}, \mathrm{dom}(\tau)) \},$$
such that $\tau : G \to \Sigma \in \mathrm{Ob}(\mathbf{FinHyp}/\Sigma)$ is a hypergraph morphism from an acyclic $G$ and every wire $w \in W(G)$ has at most one "starting place" as the diagram's input or a box's output:
$$|p^{-1}(w)| + \sum_{b \in B(G)} \sum_{w' \in \mathrm{cod}(b)} \mathbb{I}[w' = w] \leq 1.$$

Intuitively, a morphism in $\mathbf{FreeCD}_\Sigma$ is syntax specifying a string diagram with no looping or merging wires, whose boxes and wires are labeled by Σ. Definition 17 passes to the free Markov category $\mathbf{FreeMarkov}_\Sigma$ just by syntactically enforcing the naturality of $\mathrm{del}_Z$.

Definition 17 (Free Markov category for the signature Σ). The free Markov category $\mathbf{FreeMarkov}_\Sigma$ for $\Sigma \in \mathbf{FinHyp}$ is the wide subcategory of $\mathbf{FreeCD}_\Sigma$ restricted to morphisms in which every output from every box connects to somewhere else:
$$\mathrm{connects}(w, G, q) := \mathbb{I}[\exists b \in B(G) : w \in \mathrm{cod}(b) \implies q^{-1}(w) \neq \emptyset \lor \exists b' \in B(G) : w \in \mathrm{dom}(b')]$$
$$\mathbf{FreeMarkov}_\Sigma(n, m) := \{ p \to \mathrm{dom}(\tau) \leftarrow q \in \mathbf{FreeCD}_\Sigma(n, m) \mid \forall w \in W(\mathrm{dom}(\tau)),\ \mathrm{connects}(w, \mathrm{dom}(\tau), q) \},$$
with composition redefined to syntactically enforce this by iterating the deletion of discarded boxes to a fixed point after composition in $\mathbf{FreeCD}_\Sigma$.

Having free copy/delete and Markov categories for $\Sigma \in \mathbf{FinHyp}$ as syntax, we can now define generalized causal models in those categories. Definition 18 describes a Markov generalized causal model.

Definition 18 (Generalized causal model [9]). Given a monoidal signature $\Sigma \in \mathbf{FinHyp}$, a generalized causal model $\varphi$ is a free Markov string diagram $p \to \mathrm{dom}(\tau) \leftarrow q : \mathbf{FreeMarkov}_\Sigma(n, m)$, for some $n, m \in \mathbb{N}$, with $q$ one-to-one on wires.

Any generalized causal model $p \to \mathrm{dom}(\tau) \leftarrow q$ is equivalent to a morphism [8]
$$\varphi : \mathbf{FreeMarkov}_\Sigma\Big(\bigotimes_{i=1}^{n} \tau(p(i)),\ \bigotimes_{j=1}^{m} \tau(q(j))\Big),$$
so Definition 19 will capture factorization of a Markov kernel by a generalized causal model. Having a way to connect the combinatorial syntax of generalized causal models with actual Markov categories, Section 4.2 will apply factorization to the constructions of Section 3.

4.2 Factorizations of joint density kernels by generalized causal models

The joint density kernels ∂Joint(M)(Z, X) have an important difference from the simple Markov kernels factorized by generalized causal models in Definition 19: the density we want to factorize is not over $x \in X$ but over the extra structure of the residual $m \in M$. This subsection will show how to add this extra structure to a factorization, then show how to access that structure to show that generalized causal models over joint density kernels support causal inference as such: interventions and counterfactual reasoning. Definition 20 will require a factorization to label each box's residual in order to apply to joint Markov kernels. Joint factorizations label residuals in the signature and also map to joint density kernels. Theorem 8 shows they factorize the implied joint density of a causal model.
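To make the factorization concrete, here is a sketch under the assumption of the two-box chain signature used in the earlier hypergraph sketch (boxes f : z → x and g : x → y); a joint factorization functor into ∂Joint(M) then yields, in the spirit of Theorem 8, the familiar factorized joint density of a two-step directed model:
$$p(x, y \mid z) \;=\; p_f(x \mid z)\, p_g(y \mid x), \qquad p(y \mid z) \;=\; \int_X p_g(y \mid x)\, p_f(x \mid z)\ \mu_X(dx).$$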
Theorem 8 (Joint density kernels admit factorized densities). Given a signature $\Sigma \in \mathbf{FinHyp}$, a strict Markov functor $F : \mathbf{FreeMarkov}_\Sigma \to \partial\mathrm{Joint}(M)$ gives a joint density $p_f(\cdot \mid \cdot \in F(\mathrm{dom}(\varphi)))$ for every causal model $\varphi : \mathbf{FreeMarkov}_\Sigma(n, m)$.

Proof. Definition 20 requires that for any sub-diagram $\varphi' \subseteq \varphi$ there will be some $F(\varphi') = ([M, k], f)$. Theorem 7 then gives a density over the residual, while the functoriality of $F$ and Corollary 4 together imply that products of individual joint densities yield the complete joint density. ∎

Theorem 9 then shows that by assigning boxes optional points in their codomains, joint factorizations also admit interventional distributions.

Proof (of Theorem 9, stated below). Any single-box free string diagram has an image $F(b)$. We define the required functor $\mathrm{Int} : \mathbf{FreeMarkov}_\Sigma \to \mathrm{Joint}(M)$ by extension of a hypergraph morphism $\alpha : \Sigma \to \mathrm{hyp}(\mathrm{Joint}(M))$, following Fritz and Liang [9] (see their Remark 4.3). $\alpha$ is the identity on wires and intervenes on boxes $\alpha(b) : B(\Sigma) \to B(\mathrm{hyp}(\mathrm{Joint}(M)))$:
$$\alpha(b) = \begin{cases} \mathrm{hyp}(([I, \mathrm{del}_{\mathrm{dom}(b)}],\ \mathrm{del}_{\mathrm{dom}(b)} \fatsemi x)) & \mathrm{do}(b) = \mathrm{inr}(x) \\ \mathrm{hyp}(F(b)) & \mathrm{do}(b) = \mathrm{inl}(I). \end{cases}$$ ∎

Finally, Theorem 10 employs similar reasoning to model counterfactual queries over jointly factorized causal models, given fixed values for random variables and an intervention, via the hypergraph morphism
$$\alpha(b) = \begin{cases} \mathrm{hyp}(([I, \mathrm{del}_{\mathrm{dom}(b)}],\ \mathrm{del}_{\mathrm{dom}(b)} \fatsemi x)) & \mathrm{do}(b) = \mathrm{inr}(x) \\ \mathrm{hyp}(\delta_{U(b)} \fatsemi g(b, \cdot)) & \mathrm{do}(b) = \mathrm{inl}(I),\ F(b)_2 \simeq \int_{u \in [0,1]} g(u, \cdot)\ \mathcal{U}(du). \end{cases}$$

Together, Theorems 8, 9, and 10 demonstrate that joint density kernels, jointly factorized by a generalized causal model, support the properties that have made directed graphical models so widely useful.

5 Discussion

This paper started from the existing work on copy/delete categories, Markov categories, and the factorization of morphisms in those categories by generalized causal models. From there, Section 3 constructed a novel Markov category Joint(M) whose morphisms keep internal track of the joint distribution they denote, and defined a subcategory ∂Joint(M) ⊂ Joint(M) whose morphisms support only joint densities over standard Borel spaces as their internal distributions. Section 4 then demonstrated that Joint(M) supports factorization by generalized causal models, that these factorize the joint densities of ∂Joint(M), and that they support the interventional and counterfactual reasoning necessary for causal inference.

Future work can extend causal factorization to hierarchical diagrams [1] in closed Markov categories such as QBS. Instantiating our constructions in a Markov category with randomness pushback would also transform any causal factorization of a joint (density) kernel into a structural causal model [17]. Furthermore, most applications for factorizations of Markov kernels depend on efficient, accurate approximate inference. This paper's authors intend to pursue two paths towards that goal in upcoming work. We aim to apply the ∂Joint(M) construction alongside recent work on unique name generation [19] to model heterogeneous tracing in probabilistic programming. Recent work on free string diagrams [26] has also suggested ways to map from free string diagrams to free diagrams of optics; equipping joint density kernels with optic structure would follow up on the work of Smithe [22] and Schauer [20].

A Randomness pushback and probability kernels

This appendix describes randomness pushback and demonstrates that the Markov category BorelStoch admits randomness pushback in a standardized form.
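Before the formal statements, a concrete preview (my illustration, with an assumed exponential target): randomness pushback says every standard Borel Markov kernel is a deterministic pushforward of uniform randomness on [0, 1], which in practice is exactly inverse-CDF sampling.

```python
import math, random

# f(z) = Exponential(rate = z), realized as the pushforward of Uniform[0, 1)
# through the inverse CDF k(u, z) = -ln(1 - u) / z.
def k(u: float, z: float) -> float:
    return -math.log1p(-u) / z

rng = random.Random(0)
z = 2.0
draws = [k(rng.random(), z) for _ in range(100_000)]
print(sum(draws) / len(draws))  # ≈ 1/z = 0.5, the Exponential(z) mean
```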
Definition 21 (Randomness pushback). A Markov category M has randomness pushback [7] if and only if every $f : M(Z, X)$ is isomorphic to a triple $(\Upsilon, p, k) \in \mathrm{Ob}(M) \times M(I, \Upsilon) \times M_{\det}(\Upsilon \otimes Z, X)$, so that $f \simeq (p \otimes \mathrm{id}_Z) \fatsemi k$ (rendered as a string diagram in the original).

Definition 21 says that in a Markov category with randomness pushback, every morphism is equivalent to some deterministic pushforward of an independent source of randomness. This paper will work with $(\Omega, \Sigma_\Omega) = ([0, 1], \Sigma_{[0,1]}) \in \mathrm{Ob}(\mathbf{Sbs})$, which the next proposition will demonstrate can serve as a randomness pushback object for all of BorelStoch.

Lemma 1 (Randomization Lemma for Markov kernels into standard Borel spaces). Markov kernels between standard Borel spaces $Z, X \in \mathrm{Ob}(\mathbf{BorelStoch})$ have randomness pushback (Definition 21): every $f : Z ⇝ X$ satisfies $f \simeq (\mathcal{U} \otimes \mathrm{id}_Z) \fatsemi k$ for the uniform distribution $\mathcal{U}$ on $\Omega$ and some deterministic $k$.

Proof. We first observe that $\mathbf{BorelStoch}(Z, X) = \mathbf{Sbs}(Z, \mathcal{P}(X))$, and so $f(z) : \mathcal{P}(X)$ is a standard Borel probability measure for all $z \in Z$. Proposition 10.7.6 in Bogachev [2] then tells us precisely that there exists a $k : \mathbf{Sbs}(Z \otimes [0, 1], X)$ such that $f(z)(\sigma_X) = k(z)_*(\mathcal{U})(\sigma_X) = \mathcal{U}(k(z)^{-1}(\sigma_X))$. ∎

B Extension to Cartesian closed categories of Markov kernels

This appendix defines a Cartesian closed category of Markov kernels Q and shows how to extend the constructions in the paper to that setting. The definitions in this appendix assume a fixed standard Borel space $(\Omega, \Sigma_\Omega) \in \mathrm{Ob}(\mathbf{Meas})$, which we have taken in this paper to be $(\Omega, \Sigma_\Omega) = ([0, 1], \mathcal{B}([0, 1]))$.

Definition 22 (Quasi-Borel space [13]). A quasi-Borel space consists of a pair $(X, M_X)$ where $X \in \mathrm{Ob}(\mathbf{Set})$ and $M_X \subseteq X^\Omega \in \mathrm{Ob}(\mathbf{Set})$, such that the set $M_X$ of "random variables" satisfies the following conditions:

• It is closed under measurable endomorphisms: for all $f : \mathbf{Meas}(\Omega, \Omega)$ and $\rho \in M_X$, $\rho \circ f \in M_X$;

• It contains all constant random variables: if $\rho \in X^\Omega$ factors as $\rho = g \circ f$ for some $f \in I^\Omega$ and $g \in X^I$, then $\rho \in M_X$;

• Countable Borel partitions of the sample space $\Omega = \coprod_{i \in \mathbb{N}} S_i$ give countable coproduct random variables: if $\rho_i \in M_X$ for all $i \in \mathbb{N}$, then the coproduct $\beta = \coprod_{i \in \mathbb{N}} \rho_i \in M_X$.

The definition of "random variables" given above implies that for every object $(X, M_X)$, $M_X = \mathbf{QBS}(\Omega, (X, M_X))$.

Proposition 7 (Quasi-Borel spaces and morphisms preserving random variables form a category [13]). Given a standard Borel space $(\Omega, \Sigma_\Omega)$, the functions between quasi-Borel spaces $(Z, M_Z)$ and $(X, M_X)$ preserving membership in random variable sets,
$$\mathbf{QBS}((Z, M_Z), (X, M_X)) := \{ f \in X^Z \mid \forall \rho \in M_Z,\ f \circ \rho \in M_X \},$$
form a Cartesian closed Markov category of quasi-Borel spaces QBS.

The category QBS of quasi-Borel spaces then admits its own version of the Giry monad, defining measures [21] and probability measures [13]. Definition 23 gives the endofunctor for measures.

B.1 Randomness pushback for quasi-Borel Markov kernels

Probability measures on quasi-Borel spaces have randomness pushback by construction.

Proposition 9 (Markov kernels into quasi-Borel spaces have probabilistic sections). Every quasi-Borel Markov kernel $f : \mathbf{QBS}((Z, M_Z), \mathcal{P}(X, M_X))$ has a section $\rho^{-1} : \mathbf{QBS}((Z, M_Z) \times (X, M_X), \mathcal{P}(\Omega))$.

Proof. A point $z \in Z$ determines a probability measure $(\mu, \rho) = f(z)$, and so the right-inverse is
$$f(z)^{-1}(x) = \begin{cases} \delta_{u^*}(\cdot) & \exists u \in \Omega : \rho(u) = x,\ u^* = \arg\min\,\{u \in \Omega \mid \rho(u) = x\} \\ \mu & \text{otherwise.} \end{cases}$$ ∎

C Extension to unnormalized distributions

C.1 Unnormalized densities via a writer monad of weights

Many real-world structural causal models do not employ conjugate densities for every pairwise connection between random variables, and will not admit Bayesian inversions or complete conditionals in closed form.
Applied probabilists therefore often calculate unnormalized densities in closed form and then try to approximate their normalizing constants to perform inference. This section will show how to define unnormalized densities for joint density kernels in terms of a probability density and a weight, potentially dependent on the exact residual at which it is evaluated, which makes the density unnormalized. Weights will be produced as an extra deterministic output from each joint density kernel via a writer monad. Perrone [18] described the writer monad for arbitrary monoids $(A, \odot, 1)$ in his Example 5.1.7, so Proposition 10 will only quickly review its construction.

Proposition 10 (Weights form a writer monad). Consider a strict monoidal category $(\mathcal{C}, \otimes, I)$ with a multiplication monoid of nonnegative reals $(\mathbb{R}_{\geq 0}, \cdot, 1)$ for $\mathbb{R}_{\geq 0} \in \mathrm{Ob}(\mathcal{C})$. These nonnegative weights give a commutative, but not affine, writer monad $W : \mathcal{C} \to \mathcal{C}$.

Proof. A monoid object results in a writer monad $W$ sending each object $Z \in \mathrm{Ob}(\mathcal{C})$ to $\mathbb{R}_{\geq 0} \otimes Z \in \mathrm{Ob}(\mathcal{C})$, with 1 for its unit $\eta$ and real multiplication for its multiplier $\mu$. The writer monad is not affine because $W(I) = \mathbb{R}_{\geq 0} \otimes I \ncong I$. ∎

In order to construct unnormalized densities for approximate inference, the weights must not only themselves be finite nonnegative reals; their integral over the entire probability space must be finite as well, letting the unnormalized density have a normalizing constant. Definition 25 enforces this property. Theorem 12 verifies that these morphisms form a copy/delete category just like joint density kernels.

Theorem 12 (Category of s-finite joint measure kernels). Given a strict, causal Markov category such that BorelStoch ⊆ C, s-finite joint measure kernels form a copy/delete category given by the weight monad's Kleisli category over joint density kernels, $\mathbf{MeasKer}(C) \subset \mathrm{Kl}_W(\partial\mathrm{Joint}(C))$.

Proof. Typical Kleisli categories are premonoidal, but for a commutative monad they are in fact monoidal. The other necessary structure is thus inherited from ∂Joint(C) through $W : \partial\mathrm{Joint}(C) \to \partial\mathrm{Joint}(C)$, and the only morphisms left out by the restriction are those whose weights have infinite expectation. ∎

MeasKer(C) morphisms can interpret weighted sampling procedures as arbitrary measure kernels with unnormalized densities. Definition 26 defines these. Definition 27 hints at applications of Theorem 12 throughout approximate inference [16].

Definition 27 (Strict proper weighting). A joint distribution $p : I ⇝ \mathbb{R}_{\geq 0} \otimes X$ over weights and variates is strictly properly weighted with respect to $\mu : \mathcal{M}((X, \Sigma_X))$ when for all $h : \mathbf{Meas}((X, \Sigma_X), \mathbb{R})$,
$$\int_{\mathbb{R}_{\geq 0} \times X} w\, h(x)\ p(d(w, x)) = \int_X h(x)\ \mu(dx).$$
We write $p \approx \mu$ when $p$ is strictly properly weighted (S.P.W.) with respect to $\mu$.

In conclusion, this section built up a categorical setting in which morphisms accumulate joint unnormalized densities over standard Borel spaces, which are then pushed forward by deterministic causal mechanisms. Section 3.1 gave the general construction for accumulating joint distributions; Section 3.2 defined appropriate base measures and densities for measure kernels into standard Borel spaces; Section 3.3 specialized the joint distribution construction to push forward joint densities; and Section C.1 extended the logic to unnormalized joint densities. Section 4 shows how to factorize s-finite joint measure kernels according to generalized causal models and interpret them as structural causal models.
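Strict proper weighting (Definition 27) is exactly the guarantee behind importance sampling. A minimal sketch (my illustration; the target and proposal are assumed, not from the paper): draw x from a proposal q, weight by w = p(x)/q(x), and then E[w · h(x)] recovers the integral of h against the target.

```python
import math, random

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

rng = random.Random(0)
def sample():
    # Target p = Normal(0, 1); proposal q = Normal(1, 2) (heavier-tailed).
    x = rng.gauss(1.0, 2.0)
    w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 1.0, 2.0)
    return w, x

h = lambda x: x * x  # E_p[h] = 1 for the standard normal
n = 200_000
est = sum(w * h(x) for w, x in (sample() for _ in range(n))) / n
print(est)  # ≈ 1.0: the weighted sampler is S.P.W. for the target
```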
C.2 Factorizations of joint unnormalized densities as structural causal models

Definition 28 (Unnormalized causal model). Given a signature $\Sigma \in \mathbf{FinHyp}$, an unnormalized causal model is a free CD string diagram $p \to G \leftarrow q : \mathbf{FreeCD}_\Sigma(n, m)$, for some $n, m \in \mathbb{N}$, with $q$ one-to-one.

Definition 29 (Factorization of a measure kernel by a signature). Given a strict copy/delete category C, a factorization $(f, \varphi, F)$ is a triple of a measure kernel with decomposed domain and codomain, an unnormalized causal model $\varphi : \mathbf{FreeCD}_\Sigma(n, m)$, and a strict CD functor $F : \mathbf{FreeCD}_\Sigma \to C$ respecting the decompositions $\forall i \in [1..n],\ D_i = F(\mathrm{dom}(\varphi)_i)$ and $\forall j \in [1..m],\ C_j = F(\mathrm{cod}(\varphi)_j)$, such that $f = F(\varphi)$.

Theorem 15 (Factorizations induce an endofunctor converting causal models to pure bloom). Consider a strict CD-category C, a signature $\Sigma \in \mathbf{FinHyp}$, a joint factorization functor $F : \mathbf{FreeCD}_\Sigma \to \mathrm{Joint}(M)$, and a section functor $F^{-1} : \mathrm{Joint}(M) \to \mathbf{FreeCD}_\Sigma$ such that $F^{-1} F = \mathrm{Id}$. Then there exists a $P : \mathbf{FreeCD}_\Sigma \to \mathbf{FreeCD}_\Sigma$ whose image consists of pure-bloom causal models.

Proof. We make use of the faithful functor $F' : \mathrm{Joint}(M) \to \mathbf{CoPara}_\otimes(M)$ from Corollary 14. Then each of the right-hand kernels in Equation 3 forms a string diagram exposing the residual, which upon discarding the output becomes a pure-bloom string diagram $P(\varphi) = F^{-1}(F'(F(\varphi))_2 \fatsemi (\mathrm{id}_M \otimes \mathrm{del}_X))$. ∎

Definition 6 (Markov category). A Markov category is a semicartesian CD-category $(M, \otimes, I)$, so that the comonoidal counit is natural ($\forall f : M(Z, X),\ f \fatsemi \mathrm{del}_X = \mathrm{del}_Z$) and makes $I \in \mathrm{Ob}(M)$ a terminal object.

Definition 7 (Parametric categories [4]). Let $(\mathcal{C}, \otimes, I)$ be a strict symmetric monoidal category. Then the parametric category $\mathbf{Para}_\otimes(\mathcal{C})$ has as objects those of $\mathcal{C}$ and as morphisms the pairs $\mathbf{Para}_\otimes(\mathcal{C})(A, B) = \{(M, k) \in \mathrm{Ob}(\mathcal{C}) \times \mathcal{C}(M \otimes A, B)\}$. Composition for morphisms $(M, k) : \mathbf{Para}_\otimes(\mathcal{C})(A, B)$ and $(M', k') : \mathbf{Para}_\otimes(\mathcal{C})(B, C)$ consists of $(M' \otimes M,\ k' \circ (\mathrm{id}_{M'} \otimes k))$, while identities on objects $A$ consist of $(I, \mathrm{id}_A)$.

Definition 8 (Joint Markov kernel). Given a strict Markov category M, a joint Markov kernel is an element of $\mathrm{Joint}(M)(Z, X) := \{([M, k], f) : \mathbf{Para}_\otimes(M_{\det})(Z, X) \times M(Z, M)\}$. These have an intuitive interpretation as diagrams sending the parameter $z \in Z$ through the stochastic (diagrammed as round) $f$ to generate a random variable $m \in M$ that the deterministic (diagrammed as square) $k$ combines with $z$; the underlying kernel is $\mathrm{copy}_Z \fatsemi (f \otimes \mathrm{id}_Z) \fatsemi k$ (Equation 1).

Definition 13 (Joint density kernel). Given a strict, causal Markov category BorelStoch ⊆ M, a joint density kernel between objects $Z, X \in \mathrm{Ob}(M)$ is an element of $\partial\mathrm{Joint}(M)(Z, X) := \{([M, k], f) \in \mathbf{Para}_\otimes(M_{\det})(Z, X) \times S_{Z, M} \mid M \in \mathrm{Ob}(\mathbf{Sbs})\}$. Hom-set notation again implies these kernels form a category, which Theorem 6 characterizes.

Theorem 6 (Joint density kernels form a subcategory of Joint(M)). Given a strict, causal Markov category BorelStoch ⊆ M, joint density kernels form a wide subcategory ∂Joint(M) ⊂ Joint(M).

Theorem 7 (Joint density kernels give s-finite probability kernels and densities). Given a strict, causal Markov category BorelStoch ⊆ M and measurable spaces $(Z, \Sigma_Z), (X, \Sigma_X)$, joint density kernels $([M, k], f) : \partial\mathrm{Joint}(M)((Z, \Sigma_Z), (X, \Sigma_X))$ admit probability kernels $p : \mathbf{sfStoch}((Z, \Sigma_Z), (X, \Sigma_X))$ marginalizing out their randomness, and probability densities $p_f(\cdot \mid \cdot) : \mathbf{sfKrn}((Z, \Sigma_Z) \otimes (M, \Sigma_M), I)$.

Proof. Any density kernel $f \in S_{Z, (M, \Sigma_M)}$ is σ-finite, and any $(M, k) : \mathbf{Para}_\otimes(M_{\det})(Z, X)$ pushes it forward. Proposition 5 thus shows that ∂Joint(M) consists entirely of s-finite joint Markov kernels. ∎

Definition 19 (Factorization of a Markov kernel by a causal model [8]).
A factorization $(f, \varphi, F)$ consists of a morphism with decomposed domain and codomain $f : M\big(\bigotimes_{i=1}^{n} D_i,\ \bigotimes_{j=1}^{m} C_j\big)$, a causal model $\varphi : \mathbf{FreeMarkov}_\Sigma(n, m)$, and a strict Markov functor $F : \mathbf{FreeMarkov}_\Sigma \to M$ such that $f = F(\varphi)$, $\forall i \in [1..n],\ D_i = F(\mathrm{dom}(\varphi)_i)$, and $\forall j \in [1..m],\ C_j = F(\mathrm{cod}(\varphi)_j)$.

Definition 20 (Joint factorization functor). A joint factorization functor for a signature $\Sigma \in \mathbf{FinHyp}$ is a labeling of boxes with residual wires $r : B(\Sigma) \to W(\Sigma)^*$ and a strict Markov functor $F : \mathbf{FreeMarkov}_\Sigma \to \mathrm{Joint}(M)$ respecting $\forall b \in B(\Sigma),\ F(b) = ([\bigotimes_{w \in r(b)} F(w), k], f) : \mathrm{Joint}(M)(F(\mathrm{dom}(b)), F(\mathrm{cod}(b)))$.

Theorem 9 (Joint factorizations admit interventional distributions). Consider a joint factorization $(f, \varphi, F)$ over a signature Σ. Then any intervention $\mathrm{do} : \prod_{b : B(\Sigma)} I \oplus M_{\det}(I, F(\mathrm{cod}(b)))$ induces a functor $\mathrm{Int} : \mathbf{FreeMarkov}_\Sigma \to \mathrm{Joint}(M)$ and an interventional distribution $\mathrm{Int}(\varphi)$.

Theorem 10 (Joint factorizations give counterfactuals). Consider a signature $\Sigma \in \mathbf{FinHyp}$ and a joint factorization $(f, \varphi, F)$. Then any intervention $\mathrm{do} : \prod_{b : B(\Sigma)} I \oplus M_{\det}(I, F(\mathrm{cod}(b)))$ and any assignment $U : B(\Sigma) \to [0, 1]$ of uniform random variates to boxes induces a functor $\mathrm{If} : \mathbf{FreeMarkov}_\Sigma \to \mathrm{Joint}(M)$ and a counterfactual distribution $\mathrm{If}(\varphi)$.

Proof. We work as above, but this time explicitly consider the structure of the image $F(b) = ([M, k], f)$. $f$ gives a standard Borel probability measure, so Proposition 10.7.6 in Bogachev [2] says $f(\cdot) \simeq \int_{u \in [0,1]} g(u, \cdot)\ \mathcal{U}(du)$ is isomorphic to a pushforward of the uniform distribution. Our hypergraph morphism utilizes that fact. ∎

Theorem 11 (Randomness pushback parameterizes Markov kernels). Given a strict Markov category M with randomness pushback, there exists a faithful, identity-on-objects functor $F : M \to \mathbf{Para}_\otimes(M_{\det})$.

Proof. Randomness pushback (Definition 21) guarantees that each $f \simeq (\Upsilon, p, k) : M(Z, X)$ is isomorphic to a triple $(\Upsilon, p, k) \in \mathrm{Ob}(M) \times M(I, \Upsilon) \times M_{\det}(\Upsilon \otimes Z, X)$. $F$ then acts on morphisms $F(f) : M(Z, X) \to \mathbf{Para}_\otimes(M_{\det})(Z, X)$ by simply discarding the randomness source, so that $F(f \simeq (\Upsilon, p, k)) = (\Upsilon, k)$. ∎

Definition 23 (Measure on a quasi-Borel space). Given a quasi-Borel space $(X, M_X)$, a measure $\mathcal{M}(X)$ on that space consists of a σ-finite measure on the base space Ω and a random variable from $M_X$:
$$\mathcal{M}(\cdot) : \mathrm{Ob}(\mathbf{QBS}) \to \mathrm{Ob}(\mathbf{QBS}), \quad \mathcal{M}(X) := \mathcal{M}(\Omega) \times M_X, \qquad \mathcal{M}(\cdot) : \mathbf{QBS}(Z, X) \to \mathbf{QBS}(\mathcal{M}(Z), \mathcal{M}(X)), \quad \mathcal{M}(f) := (\mu, \rho) \mapsto (\mu, f \circ \rho).$$

Definition 24 then specializes the above to probability measures.

Definition 24 (Probability measure on a quasi-Borel space). Given a quasi-Borel space $(X, M_X)$, a probability measure on that space consists of a probability measure on the base space Ω and a random variable from $M_X$:
$$\mathcal{P}(\cdot) : \mathrm{Ob}(\mathbf{QBS}) \to \mathrm{Ob}(\mathbf{QBS}), \quad \mathcal{P}(X) := \mathcal{P}(\Omega) \times M_X, \qquad \mathcal{P}(\cdot) : \mathbf{QBS}(Z, X) \to \mathbf{QBS}(\mathcal{P}(Z), \mathcal{P}(X)), \quad \mathcal{P}(f) := (\mu, \rho) \mapsto (\mu, f \circ \rho).$$

Proposition 8 (Quasi-Borel Markov kernels form a Kleisli category). Markov kernels $\mathbf{QBS}(Z, \mathcal{P}(X))$ form a category Q of Markov kernels.

Definition 25 (s-finite joint measure kernel). Given a strict, causal Markov category BorelStoch ⊆ C, a joint measure kernel is a morphism of the Kleisli category $\mathrm{Kl}_W(\partial\mathrm{Joint}(C))$ with finite expected weight:
$$\mathbf{MeasKer}(C)(Z, X) = \Big\{ ((M, k), f) : \partial\mathrm{Joint}(C)(Z, \mathbb{R}_{\geq 0} \otimes X) \ \Big|\ \forall z \in Z,\ \int_{m \in M} k(m, z)_1\ f(z)(dm) < \infty \Big\}.$$

Definition 26 (Unnormalized density of an s-finite joint measure kernel).
Given an s-finite joint measure kernel $((M, k), f) : \mathbf{MeasKer}(C)((Z, \Sigma_Z), (X, \Sigma_X))$, its conditional unnormalized density is the product of the deterministic weight and its Radon-Nikodym derivative at a residual:
$$\gamma_f(\cdot\,; \cdot) : Z \times M \times \Sigma_I \to [0, \infty], \qquad \gamma_f(m; z)(\{*\}) = k(m, z)_1\, \frac{df(z)}{d\mu_M}(m).$$

Example 3 gives the trivial application of strict proper weighting.

Example 3 (s-finite joint measure kernels are S.P.W. for their unnormalized densities). Every s-finite joint measure kernel $((M, k), f) : \mathbf{MeasKer}(C)((Z, \Sigma_Z), (X, \Sigma_X))$ is S.P.W. for its unnormalized density: $(\mathrm{copy}_Z \fatsemi (f \otimes \mathrm{id}_Z) \fatsemi k)(z) \approx \gamma_f(\cdot\,; z)$, that is,
$$\int_{\mathbb{R}_{\geq 0} \times X} w\, h(x)\ [\mathrm{copy}_Z \fatsemi (f \otimes \mathrm{id}_Z) \fatsemi k](z)(d(w, x)) = \int_{m \in M} h(x)\ \gamma_f(m; z)\ dm.$$

¹ Collections of "measurable subsets" closed under complements, countable unions, and countable intersections.

Proof (of Proposition 8). This is the Kleisli category $Q = \mathrm{Kl}_{\mathcal{P}(\cdot)}(\mathbf{QBS})$ of the probability monad $\mathcal{P}(\cdot)$ (Definition 24). ∎

Corollary 13 (Factorizations extend to unnormalized densities). Given a signature $\Sigma \in \mathbf{FinHyp}$, Theorems 8, 15 and 9 extend to strict copy/delete functors $F : \mathbf{FreeCD}_\Sigma \to \mathbf{MeasKer}(C)$.

Proof. Construction of a Kleisli category with a strong affine monad preserves Markov category structure [7], but in fact the weights monad (Proposition 10) is not affine and therefore only preserves copy/delete structure. However, the above theorems about factorization by a generalized causal model only rely on copy/delete structure, so the overall copy/delete structure extends to the Kleisli category $\mathbf{MeasKer}(C) = \mathrm{Kl}_W(\partial\mathrm{Joint}(C))$. ∎

D Tools for using factorized joint densities

The next construction will model the reusability of random samples from a standard Borel probability distribution as a way to inject a point in a sample space "backwards" into the randomness pushback Ω. The construction will generalize to joint distributions but not necessarily form a category.

Definition 30 (Probabilistic sections of Markov kernels). Consider a strict Markov category M with randomness pushback and a Markov kernel $f \simeq (\Upsilon, p, k) : Z ⇝ X$; any specific $z \in Z$ induces a support $\mathrm{supp}(f(z)) \subseteq X$. Then a section or right-inverse for a Markov kernel is a morphism $k^{-1} : \ldots$

Proof. The right-inverse looks at $x \in X$ and places all probability on the smallest $u \in \Omega$ which produces it by pushforward, or if none exists simply yields the uniform distribution. ∎

Definition 31 (Coparametric categories [4]). Let $(\mathcal{C}, \otimes, I)$ be a strict symmetric monoidal category. Then the coparametric category $\mathbf{CoPara}_\otimes(\mathcal{C})$ has as objects those of $\mathcal{C}$ and as morphisms $\mathbf{CoPara}\ldots$ The coparametric category construction generalizes the idea of a writer monad to more than one object, and represents morphisms that "log" or "leave behind" some sort of cumulative effect. In fact, joint Markov kernels form a coparametric category.

References
[1] Mario Alvarez-Picallo, Dan Ghica, David Sprunger & Fabio Zanasi (2022): Rewriting for Monoidal Closed Categories. In: 7th International Conference on Formal Structures for Computation and Deduction (FSCD 2022), LIPIcs 228, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany, pp. 29:1-29:0, doi:10.4230/LIPIcs.FSCD.2022.29.

[2] V. I. Bogachev (2007): Measure theory. Springer, Berlin; New York.

[3] Filippo Bonchi, Fabio Gadducci, Aleks Kissinger, Paweł Sobociński & Fabio Zanasi (2016): Rewriting modulo symmetric monoidal structure. In: Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science, ACM, New York NY USA, pp. 710-719, doi:10.1145/2933575.2935316.

[4] Matteo Capucci, Bruno Gavranović, Jules Hedges & Eigil Fjeldgren Rischel (2021): Towards foundations of categorical cybernetics. In: Applied Category Theory Conference (ACT 2021), EPTCS. arXiv:2105.06332.

[5] Kenta Cho & Bart Jacobs (2018): Disintegration and Bayesian Inversion via String Diagrams. Mathematical Structures in Computer Science. Available at http://arxiv.org/abs/1709.00322.

[6] Oriol Corcoll & Raul Vicente (2022): Disentangling Controlled Effects for Hierarchical Reinforcement Learning. In Bernhard Schölkopf, Caroline Uhler & Kun Zhang, editors: Proceedings of the First Conference on Causal Learning and Reasoning, Proceedings of Machine Learning Research 177, PMLR, pp. 178-200. Available at https://proceedings.mlr.press/v177/corcoll22a.html.

[7] Tobias Fritz (2020): A synthetic approach to Markov kernels, conditional independence and theorems on sufficient statistics. Advances in Mathematics 370, p. 107239.

[8] Tobias Fritz & Andreas Klingler (2023): The d-Separation Criterion in Categorical Probability. Journal of Machine Learning Research 24(46), pp. 1-49.

[9] Tobias Fritz & Wendong Liang (2023): Free gs-Monoidal Categories and Free Markov Categories. Applied Categorical Structures 31(2), p. 21, doi:10.1007/s10485-023-09717-0.

[10] Giorgio Gallo, Giustino Longo, Stefano Pallottino & Sang Nguyen (1993): Directed hypergraphs and applications. Discrete Applied Mathematics 42(2-3), pp. 177-201, doi:10.1016/0166-218X(93)90045-P.
[11] Michèle Giry (1982): A categorical approach to probability theory. In B. Banaschewski, editor: Categorical Aspects of Topology and Analysis, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 68-85.

[12] Sander Greenland, Judea Pearl & James M. Robins (1999): Causal diagrams for epidemiologic research. Epidemiology, pp. 37-48.

[13] Chris Heunen, Ohad Kammar, Sam Staton & Hongseok Yang (2017): A convenient category for higher-order probability theory. In: Proceedings - Symposium on Logic in Computer Science, doi:10.1109/LICS.2017.8005137. arXiv:1701.02547.

[14] Brenden M. Lake, Ruslan Salakhutdinov & Joshua B. Tenenbaum (2015): Human-level concept learning through probabilistic program induction. Science 350(6266), pp. 1332-1338.

[15] Sergey Levine (2018): Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv preprint arXiv:1805.00909.

[16] Christian Naesseth, Fredrik Lindsten & Thomas Schon (2015): Nested Sequential Monte Carlo Methods. In Francis Bach & David Blei, editors: Proceedings of the 32nd International Conference on Machine Learning, Proceedings of Machine Learning Research 37, PMLR, Lille, France, pp. 1292-1301. Available at https://proceedings.mlr.press/v37/naesseth15.html.

[17] Judea Pearl (2012): The causal foundations of structural equation modeling. In: Handbook of structural equation modeling, pp. 68-91.

[18] Paolo Perrone (2019): Notes on Category Theory with examples from basic mathematics. arXiv preprint arXiv:1912.10642.

[19] Marcin Sabok, Sam Staton, Dario Stein & Michael Wolman (2021): Probabilistic programming semantics for name generation. Proceedings of the ACM on Programming Languages 5(POPL), pp. 1-29.

[20] Moritz Schauer & Frank van der Meulen (2023): Compositionality in algorithms for smoothing. arXiv preprint arXiv:2303.13865.
Ohad Adamścibior, Matthijs Kammar, Sam Vákár, Hongseok Staton, Yufei Yang, Klaus Cai, Sean K Ostermann, Moss, 10.1145/3158148Proc. ACM Program. Lang. 2(POPL). ACM Program. Lang. 2(POPL)AdamŚcibior, Ohad Kammar, Matthijs Vákár, Sam Staton, Hongseok Yang, Yufei Cai, Klaus Ostermann, Sean K. Moss, Chris Heunen & Zoubin Ghahramani (2017): Denotational Validation of Higher-Order Bayesian Inference. Proc. ACM Program. Lang. 2(POPL), doi:10.1145/3158148. Available at https:// doi.org/10.1145/3158148. Toby St, Clere Smithe, arXiv:2006.01631Bayesian updates compose optically. arXiv preprintToby St Clere Smithe (2020): Bayesian updates compose optically. arXiv preprint arXiv:2006.01631. Lecture Notes in Computer Science 10201. Sam Staton, https:/link.springer.com/10.1007/978-3-662-54434-1_32doi:10. 1007/978-3-662-54434-1_32Commutative Semantics for Probabilistic Programming. Berlin Heidelberg, Berlin, HeidelbergSpringerSam Staton (2017): Commutative Semantics for Probabilistic Programming, p. 855-879. Lec- ture Notes in Computer Science 10201, Springer Berlin Heidelberg, Berlin, Heidelberg, doi:10. 1007/978-3-662-54434-1_32. Available at https://link.springer.com/10.1007/ 978-3-662-54434-1_32. Graduate studies in mathematics 126. Terence Tao, American Mathematical SocietyProvidence, R.IAn introduction to measure theoryTerence Tao (2011): An introduction to measure theory. Graduate studies in mathematics 126, American Mathematical Society, Providence, R.I. Matthijs Vákár, &amp; Luke Ong, arXiv:1810.01837ArXiv:1810.01837On S-Finite Measures and Kernels. mathMatthijs Vákár & Luke Ong (2018): On S-Finite Measures and Kernels (arXiv:1810.01837). Available at http://arxiv.org/abs/1810.01837. ArXiv:1810.01837 [math]. Corollary 13 (Factorizations extend to unnormalized densities). Given a signature Σ ∈ FinHyp, Theorems 8, 15 and 9 extend to strict copy/delete functors F : FreeCD Σ → MeasKer(C ). Paul Wilson, &amp; Fabio Zanasi, arXiv:2305.01041Data-Parallel Algorithms for String Diagrams. Paul Wilson & Fabio Zanasi (2023): Data-Parallel Algorithms for String Diagrams. arXiv:2305.01041. Corollary 13 (Factorizations extend to unnormalized densities). Given a signature Σ ∈ FinHyp, Theo- rems 8, 15 and 9 extend to strict copy/delete functors F : FreeCD Σ → MeasKer(C ).
[ "INVOLUTIONS ON THE PRODUCT OF QUATERNIONIC PROJECTIVE SPACE AND SPHERE", "INVOLUTIONS ON THE PRODUCT OF QUATERNIONIC PROJECTIVE SPACE AND SPHERE" ]
[ "Dimpi And ", "Hemant Kumar Singh " ]
[]
[]
Let G = Z_2 act on a finite CW-complex X having mod 2 cohomology isomorphic to the product of quaternionic projective space and sphere HP^n × S^m, n, m ≥ 1. This paper is concerned with the connected fixed point sets and the orbit spaces of free involutions on X.
null
[ "https://export.arxiv.org/pdf/2305.02611v1.pdf" ]
258480048
2305.02611
5830db373ddd26dd165b34e2d05d351c03f0f94c
INVOLUTIONS ON THE PRODUCT OF QUATERNIONIC PROJECTIVE SPACE AND SPHERE

Dimpi and Hemant Kumar Singh

May 2023

Abstract. Let G = Z_2 act on a finite CW-complex X having mod 2 cohomology isomorphic to the product of quaternionic projective space and sphere HP^n × S^m, n, m ≥ 1. This paper is concerned with the connected fixed point sets and the orbit spaces of free involutions on X.

1. Introduction

Let (G, X) be a transformation group, where G is a compact Lie group and X is a finite CW-complex, with fixed point set F. The study of the cohomological structure of the fixed point set and the orbit space has been an interesting problem in transformation groups. Smith [11] proved that the fixed point sets of G = Z_p, p a prime, acting on a finite-dimensional polyhedron X having the mod p cohomology of an n-sphere are mod p cohomology r-spheres, where −1 ≤ r ≤ n. He also proved that if G = Z_2 acts effectively on real projective space, then the fixed point set is either empty or has two components, each having the mod 2 cohomology of a real projective space [12]. Bredon [2] generalized this result to G = Z_p, p a prime, actions on cohomology projective spaces. Bredon [2] also proved that if a finitistic space X satisfies Poincaré duality with respect to Čech cohomology with Z_p-coefficients, then each component of the fixed point set also satisfies Poincaré duality. Puppe [14] proved Bredon's conjecture, which states that if X is totally nonhomologous to zero in X_G (the Borel space), then the number of generators of the cohomology ring of each component of the fixed point set with Z_p-coefficients is at most the number of generators of H^*(X). The fixed point sets of involutions on the product of projective spaces and on Dold manifolds have been determined in [4, 8]. On the other hand, it is well known that the orbit spaces of free actions of Z_2, S^1 and S^3 on S^n, S^{2n+1} and S^{4n+3} are the projective spaces P^n(q), where q = 1, 2 and 4, respectively. Recently, the orbit spaces of free involutions on real Milnor manifolds, Dold manifolds, and the product of two projective spaces have been discussed in [5, 7, 9]. The possibilities for connected fixed point sets of involutions, and for orbit spaces of free involutions, on the product of projective spaces and spheres FP^n × S^m, F = R or C, have been determined in [6]. Continuing this line of work, in this paper we determine the possibilities for the connected fixed point sets of involutions on X ∼_2 HP^n × S^m and discuss the orbit spaces of free involutions on X.

2. Preliminaries

In this section we recall some known facts that will be used in this paper. Let G = Z_p act on a finite CW-complex X. Let G ↪ E_G → B_G be the universal G-bundle, where E_G is a contractible space and B_G is a finite CW-complex. The projection map X × E_G → E_G is a G-equivariant map and gives a fibration X ↪ X_G → B_G (called the Borel fibration), where X_G = (X × E_G)/G is the Borel space obtained by the diagonal action of G on X × E_G. Suppose F ≠ ∅, let x ∈ F, and let η_x : B_G ↪ X_G be a cross section of the projection map π : X_G → B_G, where B_G ≈ ({x} × E_G)/G; then H^*(X_G) ≅ ker η_x^* ⊕ im π^*. The induced homomorphism η_x^* depends on the component F_0 of the fixed point set F in which x lies. If α ∈ H^n(X_G) is such that α ∈ ker η_x^*, then the image of α under the restriction of j : (F_G, x_G) ↪ (X_G, x_G) to (F_0)_G does not involve the elements of H^0(F_0, x_G) [3].
Recall that a space X is said to be totally nonhomologous to zero (TNHZ) in X_G if the inclusion map i : X ↪ X_G induces a surjection in cohomology, i^* : H^*(X_G) → H^*(X). We use the following propositions:

Proposition 2.1 ([1]). Let G = Z_2 act on a finite CW-complex X with rk H^i(X; Z_2) < ∞. Then the following statements are equivalent: (a) X is TNHZ (mod 2) in X_G; (b) Σ_i rk H^i(F; Z_2) = Σ_i rk H^i(X; Z_2); (c) G acts trivially on H^*(X; Z_2) and the spectral sequence E_2^{r,q} of X_G → B_G degenerates.

Proposition 2.2 ([2]). Let X be TNHZ in X_G and let {γ_j} be a set of homogeneous elements in H^*(X_G; Z_p) such that {i^*(γ_j)} forms a Z_p-basis of H^*(X; Z_p). Then H^*(X_G; Z_p) is the free H^*(B_G)-module generated by {γ_j}.

Proposition 2.3 ([1]). Let G = Z_2 act on a finite CW-complex X and let A ⊂ X be a closed invariant subspace. Suppose that H^i(X, A; Z_2) = 0 for i > n. Then the homomorphism j^* : H^k(X_G, A_G; Z_2) → H^k(F_G, F_G ∩ A_G; Z_2) is an isomorphism for k > n. If (X, A) is TNHZ (mod 2) in (X_G, A_G), then j^* is a monomorphism for all k.

Proposition 2.4 ([1]). Let G = Z_2 act on a finite CW-complex X and let X be TNHZ in X_G. Then a|F is a nontrivial element of the fixed point set F for any class a ∈ H^n(X; Z_2) such that a^2 ≠ 0.

We know that H^*(HP^n × S^m; Z_2) = Z_2[a, b]/<a^{n+1}, b^2>, where deg a = 4 and deg b = m. Throughout the paper, H^*(X) denotes the Čech cohomology of a space X, and X ∼_2 Y means H^*(X; Z_2) ≅ H^*(Y; Z_2).

3. Main Theorems

Let G = Z_2 act on a finite CW-complex X ∼_2 HP^n × S^m, where n, m ≥ 1. In this section we determine the possibilities for the connected fixed point sets of involutions on X, and we discuss the orbit spaces of free involutions on X. First, we determine the fixed point sets of involutions on X.

Theorem 3.1. Let G = Z_2 act on a finite CW-complex X ∼_2 HP^n × S^m, n, m ≥ 1. If X is TNHZ in X_G and the fixed point set F is nonempty and connected, then F must be one of the following:
(1) F ∼_2 S^3 × S^q or F ∼_2 FP^n × S^q, where F = R, C or H, 1 ≤ q ≤ m.
(2) F ∼_2 FP^{n+1} # FP^{n+1}, where F = R, C or H.
(3) H^*(F) is generated by c and d, with c^{n+1} = d^2 + c^s = d^{2l+2} = 0, where deg c = 2, deg d = q and l = [n/s], with s = q/2 if q is even and s = q if q is odd. Moreover, for q = 1, F ∼_2 RP^{2n+1}, and for q = 2, F ∼_2 CP^{2n+1}.
(4) H^*(F) is generated by c and d, with c^{r/s+1} = d^{r/q+1} = c^{r/s} + d^{r/q} = cd = 0, where deg c = s, s = 1, 2, deg d = q, q = 1, 2, 4 or 8, r = sq(2n+2)/(q+s), and n = (q+s)k/2 − 1 for some k ∈ N.
(5) H^*(F) is generated by c and d, with c^{r/s+1} = c^{qj/s} + d^j = c^{(r−qj)/s+1} d = 0, where deg c = s, s = 1, 2, deg d = q, r = s(2n+2)/j + qj − (q+s), and either n+1 = jk for some k ∈ N, or j = 2k, k = 1 or 2, and n > (q+1)j/(2s) − 1.

Proof. Let x ∈ F and let {a, · · · , a^n, b, ab, · · · , a^n b} be a generating set of H^*(X, x), where deg a = 4 and deg b = m. Since X is TNHZ in X_G, we get rk H^*(F) = 2n+2, π_1(B_G) acts trivially on H^*(X, x), and the E_2-term E_2^{p,q} = H^p(B_G) ⊗ H^q(X) of the Leray-Serre spectral sequence of the Borel fibration X ↪ X_G → B_G equals E_∞^{p,q}. So the elements {1 ⊗ a, 1 ⊗ a^2, · · · , 1 ⊗ a^n, 1 ⊗ b, 1 ⊗ ab, · · · , 1 ⊗ a^n b} are permanent cocycles. Assume that α ∈ H^4(X_G, x_G) represents the generator a ∈ H^4(X, x) and β ∈ H^m(X_G, x_G) represents the generator b ∈ H^m(X, x), such that η_x^*(α) = η_x^*(β) = 0, where η_x : ({x} × E_G)/G ↪ (X × E_G)/G is the inclusion map. By Proposition 2.2, {α, α^2, α^3, · · · , α^n, β, αβ, α^2β, · · · , α^n β} is a generating set of H^*(X_G, x_G) as an H^*(B_G)-module.
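As an aside, the count rk H^*(F) = 2n+2 invoked here can be verified directly from the ring structure recorded above. The following LaTeX fragment is an added illustrative check, not part of the original argument:

% A Z_2-basis of H^*(HP^n x S^m; Z_2), read off from the ring
% Z_2[a,b]/<a^{n+1}, b^2> with deg a = 4 and deg b = m:
\[
  \{\, 1,\ a,\ a^2,\ \dots,\ a^n,\ b,\ ab,\ a^2 b,\ \dots,\ a^n b \,\},
  \qquad
  \operatorname{rk} H^{*}(X;\mathbb{Z}_2) = (n+1) + (n+1) = 2n+2 .
\]
% When X is TNHZ in X_G, Proposition 2.1(b) transfers this count to the
% fixed point set, giving rk H^*(F; Z_2) = 2n+2, the bound used repeatedly below.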
As H^m(F_G, x_G) = ⊕_{i=0}^{m} H^{m−i}(B_G) ⊗ H^i(F, x) and η_x^*(β) = 0, we may assume that j^*(β) = 1 ⊗ d_m + t ⊗ d_{m−1} + · · · + t^k ⊗ d_{m−k} + · · · + t^{m−2} ⊗ d_2 + t^{m−1} ⊗ d_1, where d_i ∈ H^i(F, x), and j^*(α) = B_1 t^3 ⊗ c_1 + B_2 t^2 ⊗ c_2 + B_3 t ⊗ c_3 + B_4 (1 ⊗ c_4), where c_i ∈ H^i(F, x) and B_i ∈ Z_2, 1 ≤ i ≤ 4. We know that i_1^* j^* = j_1^* i^*, where i_1 : F ↪ F_G and j_1 : F ↪ X are the inclusion maps. So we get c_4 = a|F. If c_4 ≠ 0, then B_4 = 1. Clearly, c_4^{n+1} = 0. Thus j^*(α) = 1 ⊗ c_4 + Σ_{i=1}^{3} B_i t^{4−i} ⊗ c_i, where B_i ∈ Z_2, 1 ≤ i ≤ 3. So we consider eight cases according as B_1, B_2 and B_3 are zero or nonzero.

Case (1): If B_1 = B_2 = B_3 = 0, then j^*(α) = 1 ⊗ c_4. In this case, c_4^i ≠ 0 for 1 ≤ i ≤ n. As j^* is injective, some d_j is not a power of c_4. Suppose d_q = d_{m−k} = d is the least-degree such element, where deg d = q. As j^* is onto in high degrees, for sufficiently large values of r we can write t^{k+r} ⊗ d = j^*(A_1 t^{r+m−4} α + · · · + A_n t^{r+m−4n} α^n + A_m t^r β + · · · + A_{m+n} t^{r−4n} α^n β), where the A_i are in Z_2. After comparing the coefficients of t^{k+r} ⊗ d, we get A_m = 1. So we have t^r ⊗ d_m + · · · + t^{r+k−1} ⊗ d_{m−(k−1)} + t^{r+k+1} ⊗ d_{m−(k+1)} + · · · + t^{r+m−1} ⊗ d_1 = −j^*(A_1 t^{r+m−4} α + · · · + A_n t^{r+m−4n} α^n + A_{m+1} t^{r−4} αβ + · · · + A_{m+n} t^{r−4n} α^n β). From the above equation we get that if q ≡ 1, 2, 3 (mod 4), then d_{4i} = c_4^i and d_{4i+q} = c_4^i d, 1 ≤ i ≤ n; and if q ≡ 0 (mod 4), then d_{4k} = c_4^k + c_4^{k−1} d, 1 ≤ k ≤ n; the remaining components vanish. Thus we get j^*(α^n β) = t^k ⊗ c_4^n d. As α^n β ≠ 0, we get c_4^n d ≠ 0, and hence c_4^i d ≠ 0 for 1 ≤ i ≤ n. Clearly, if d^2 = 0, then F ∼_2 HP^n × S^q, 1 ≤ q ≤ m. If d^2 ≠ 0, then either q ≡ 0 (mod 4) or q ≡ 2 (mod 4). First, suppose that q ≡ 2 (mod 4). Then we have d^2 = c_4^{q/2}. Consequently, d^{2l+1} = c_4^{lq/2} d, where l = [2n/q], and d^{2l+2} = 0; thus we get c_4^{n+1} = d^{2l+2} = d^2 + c_4^{q/2} = 0. In particular, for q = 2 we get F ∼_2 CP^{2n+1}. This realizes possibility (3). Next, suppose that q ≡ 0 (mod 4); then either d^2 = c_4^{q/2} or d^2 = c_4^{q/4} d. If d^2 = c_4^{q/2}, then by a suitable change of basis we get F ∼_2 HP^n × S^q, 1 ≤ q ≤ m. If d^2 = c_4^{q/4} d, then q must be 4, and by the change of basis d′ = d + c_4 we get d′^{n+2} = d^{n+2} = d′^{n+1} + d^{n+1} = dd′ = 0. Thus F ∼_2 HP^{n+1} # HP^{n+1}. This realizes possibility (2) for F = H.

Case (2): If B_1 = B_2 = 0 and B_3 = 1, then j^*(α) = 1 ⊗ c_4 + t ⊗ c_3. Assume that H^1(F) ≠ 0. We consider cases according as c_3 = c_1^3 or c_3 ≠ c_1^3. First, consider c_3 = c_1^3. Suppose H^*(F) has one generator. Then c_4 = c_1^4 and j^*(α^n) = Σ_{r=0}^{n} t^r ⊗ c_1^{4n−r}. By the injectivity of j^*, we get c_1^{3n} ≠ 0. Clearly, rk H^*(F) > 2n+2, a contradiction. Suppose H^*(F) has two generators. Then either c_4 = c_1^4 or c_4 ≠ c_1^4. Let c_4 = c_1^4. Then we again have rk H^*(F) > 2n+2, a contradiction. Let c_4 ≠ c_1^4. Further, if c_1^4 = 0, then j^*(α^n) = 1 ⊗ c_4^n if n is even, and 1 ⊗ c_4^n + t ⊗ c_4^{n−1} c_1^3 if n is odd. As j^*(α^n) ≠ 0, c_4^n ≠ 0 for n even. If n is odd and c_4^n = 0, then rk H^*(F) = 4n > 2n+2 for n > 1, a contradiction; clearly, this case is not possible for n = 1. Thus we have c_4^n ≠ 0. As F is a Poincaré duality space, again we get rk H^*(F) > 2n+2, a contradiction. Now, if c_4 ≠ c_1^4 and c_1^4 ≠ 0, then we must have j^*(α^n β) = Σ_{r=0}^{n} Σ_{i=0}^{m−1} C(n,r) t^{r+i} ⊗ (⊕_{j+4l=m−i} c_4^{n−r+l} c_1^{j+3r}). If the cup product c_1 c_4 = 0, then c_1^{3n+1} ≠ 0, which gives rk H^*(F) > 2n+2, a contradiction. If the cup product c_1 c_4 ≠ 0, then the rank of H^*(F) increases further, which is not possible. Next, consider c_3 ≠ c_1^3. As H^*(F) has at most two generators, we get c_4 = c_1^4. Thus j^*(α^n β) = Σ_{r=0}^{n} Σ_{i=0}^{m−1} C(n,r) t^{r+i} ⊗ (⊕_{j+3l=m−i} c_1^{4n−4r+j} c_3^{r+l}).
We get that c_1^{4n+1}, c_1^{4n−3} c_3, · · · , c_1 c_3^n are the least-degree elements for r = 0, 1, · · · , n, respectively. In any case it is easy to observe that rk H^*(F) > 2n+2, a contradiction. Now suppose that H^1(F) = 0. Further, assume that H^2(F) ≠ 0. Then we must have c_4 = c_2^2, and j^*(α^n β) = Σ_{r=0}^{n} Σ_{i=0}^{m−1} C(n,r) t^{r+i} ⊗ (⊕_{2l+3j=m−i} c_2^{2n−2r+l} c_3^{r+j}). Note that c_2^{2n+1}, c_2^{2n−1} c_3, c_2^{2n−3} c_3^2, · · · , c_2^2 c_3^{n−1}, c_2 c_3^n are the least-degree elements for r = 0, 1, 2, · · · , n, respectively. So we always have rk H^*(F) > 2n+2, a contradiction. Next, assume that H^2(F) = 0. Then j^*(α^n β) = Σ_{r=0}^{n} Σ_{i=0}^{m−1} C(n,r) t^{r+i} ⊗ (⊕_{3l+4j=m−i} c_4^{n−r+j} c_3^{r+l}). The least-degree elements in the above expression are c_4^n c_3, c_4^{n−1} c_3^2, · · · , c_4 c_3^n and c_3^{n+1}. Let c_3 c_4 ≠ 0. If c_3^2 = 0, then c_4^n c_3 ≠ 0; thus F ∼_2 HP^n × S^3. If c_4^2 = 0 and c_3^{n+1} ≠ 0, then F ∼_2 X × S^4, where X has the truncated polynomial ring Z_2[x]/<x^{n+1}>, deg x = 3. By Theorem 4.5 in [10], this is not possible. If c_4^2 ≠ 0 and c_3^{n+1} ≠ 0, then rk H^*(F) > 2n+2, a contradiction. Now let c_3 c_4 = 0. Then c_3^{r/3} = c_4^{r/4} is the generator of H^r(F) and c_3^{i/3} ≠ c_4^{i/4} for i < r. As rk H^*(F) = 2n+2, we get r = (24n+24)/7, so n = (7k−2)/2 with k even. Thus c_3^{4k+1} = c_4^{3k+1} = c_3 c_4 = c_3^{4k} + c_4^{3k} = 0, and F ∼_2 Y # Z, where Y and Z are truncated polynomial rings on the generators c_3 and c_4, respectively. But by Theorem 4.5 in [10], this is not possible.

Case (3): If B_1 = B_3 = 0 and B_2 = 1, then j^*(α) = 1 ⊗ c_4 + t^2 ⊗ c_2. If H^*(F) has one generator, then F ∼_2 RP^{2n+1} or F ∼_2 CP^{2n+1} according as H^1(F) ≠ 0 or H^1(F) = 0. This realizes possibility (3) for q = 1 and q = 2, respectively. Now assume that H^*(F) has two generators. We consider two subcases according as c_4 ≠ c_2^2 or c_4 = c_2^2.

Subcase (i): Assume that c_4 ≠ c_2^2. First, assume that H^1(F) = 0. If c_2^2 = 0, then j^*(α^n) = 1 ⊗ c_4^n if n is even, and 1 ⊗ c_4^n + t^2 ⊗ c_4^{n−1} c_2 if n is odd. By the injectivity of j^*, we get c_4^n ≠ 0. If c_4^{n+1} = 0, then F ∼_2 HP^n × S^2. If c_4^{n+1} ≠ 0, then rk H^*(F) > 2n+2, a contradiction. If c_2^2 ≠ 0, then we get j^*(α^n β) = Σ_{r=0}^{n} Σ_{i=0}^{m−1} C(n,r) t^{2r+i} ⊗ (⊕_{2l+4j=m−i} c_4^{n−r+j} c_2^{r+l}). The least-degree elements are c_4^n c_2, c_4^{n−1} c_2^2, c_4^{n−2} c_2^3, · · · , c_4^2 c_2^{n−1}, c_4 c_2^n and c_2^{n+1}. Note that if c_4^{n−k} c_2^{k+1} ≠ 0 for any 0 ≤ k ≤ n−1, then rk H^*(F) > 2n+2, a contradiction. So at least one of c_2^{n+1} or c_2^n c_4 must be nonzero. Let c_2 c_4 = 0; then we must have c_2^{n+1} ≠ 0. Thus c_2^{r/2} = c_4^{r/4} is the generator of H^r(F). This implies rk H^*(F) = r/2 + r/4 = 2n+2; thus r = 8k, where n = 3k−1, k ∈ N. Hence F ∼_2 CP^{4k} # HP^{2k}, k ∈ N. This realizes possibility (4) for s = 2 and q = 4. If c_2 c_4 ≠ 0, then for c_2^{n+1} = 0 and c_4^2 = 0, clearly, F ∼_2 CP^n × S^4. For c_4^2 ≠ 0, we must have c_4^2 = c_2^4. By the change of basis d′ = c_2^2 + c_4, we get that the cohomology ring is given by c_2^{n+1} = d′^2 = 0. This realizes possibility (1) for F = C and q = 4. If c_2^{n+1} ≠ 0 and c_2^n c_4 ≠ 0, then rk H^*(F) > 2n+2, a contradiction. If c_2^{n+1} ≠ 0, then c_2^{r/2} = c_2^{(r−4j)/2} c_4^j, j > 1, forms the generator of H^r(F), which implies that c_2^{2j} = c_4^j is a generator of H^{4j}(F). We get rk H^*(F) = j(r−4j)/2 + 2j + j, which must be 2n+2, so r = (4n+4)/j + 4j − 6. We must have either n+1 = jk for some k ∈ N, or j = 2k, k = 1, 2.
Subcase (ii): Assume that c_4 = c_2^2. If H^1(F) = 0, then j^*(α^n) = Σ_{r=0}^{n} C(n,r) t^{2r} ⊗ c_2^{2n−r}, which implies that c_2^n ≠ 0. For c_2^{n+1} = 0, we get from j^*(α^n β) that c_2^n d ≠ 0, where deg d = q and d ≠ c_2^i, 1 ≤ i ≤ n. If d^2 = 0, then F ∼_2 CP^n × S^q, 1 ≤ q ≤ m. If d^2 ≠ 0, then either d^2 = c_2^q or d^2 = c_2^{q/2} d. Suppose that d^2 = c_2^q; if q is even a change of basis removes d^2, and if q is odd this leads to possibility (3), as detailed below. If instead d^2 = c_2^{q/2} d, then q must be 2, and by the change of basis d′ = d + c_2 we get d′^{n+2} = d^{n+2} = d′^{n+1} + d^{n+1} = dd′ = 0. Thus F ∼_2 CP^{n+1} # CP^{n+1}. This realizes possibility (2) for F = C. For c_2^{n+1} ≠ 0, we get j^*(α^n β) = Σ_{r=0}^{n} Σ_{i=1}^{m−1} C(n,r) t^{2r+i} ⊗ (⊕_{2l+qj=m−i, j∈{0,1}} c_2^{2n−r+l} d^j) = Σ_{r=0}^{n} Σ_{i=1}^{m−1} C(n,r) t^{2r+i} ⊗ c_2^{2n−r+(m−i)/2} + Σ_{r=0}^{n} Σ_{i=q}^{m−1} A_{l,j} C(n,r) t^{2r+i} ⊗ (⊕_{2l+q=m−i} c_2^{2n−r+l} d). From the above expression we get c_2^{qj/2} = d^j, where qj < r. We get rk H^*(F) = j(r−qj)/2 + qj/2 + j, which must be 2n+2, so r = (4n+4)/j + qj − (q+2). We must have either n+1 = jk for some k ∈ N, or j = 2k, k = 1, 2. Hence the cohomology ring H^*(F) is generated by c_2 and d.

In this case, if H^*(F) has one generator, then F ∼_2 RP^{2n+1}. Suppose that H^*(F) has two generators. We consider two subcases according as c_4 = c_1^4 or c_4 ≠ c_1^4. Subcase (i): Assume that c_4 = c_1^4. We have j^*(α^n) = Σ_{r=0}^{n} C(n,r) t^{3r} ⊗ c_1^{4n−3r}, which implies that c_1^n must be nonzero. Let d, with deg d = q and d not a power of c_1, be the second generator of H^*(F). We get j^*(α^n β) = Σ_{r=0}^{n} Σ_{i=1}^{m−1} C(n,r) t^{3r+i} ⊗ c_1^{4n−3r+m−i} + Σ_{r=0}^{n} Σ_{i=q}^{m−1} A_{l,j} C(n,r) t^{3r+i} ⊗ c_1^{4n−3r+(m−i−q)} d. After expanding the above expression, the rank count gives rk H^*(F) = j(r−qj) + qj + j, which must be 2n+2, so r = (2n+2)/j + qj − (q+1). We must have either n+1 = jk for some k ∈ N, or j = 2. Hence the cohomology ring H^*(F) is generated by c_1 and d, with c_1^{r+1} = c_1^{qj} + d^j = c_1^{r−qj+1} d = 0. If instead c_2^{(r−3j)/2} c_3^j, j > 1, generates H^r(F), then r and j must be even; since rk H^*(F) = 2n+2, we must have j = 2 and r = 2n+4, and hence the cohomology ring H^*(F) is generated by c_2 and c_3. In this subcase we must have H^1(F) = 0, and it is easy to observe that rk H^*(F) > 2n+2 whether c_2^2 = 0 or c_2^2 ≠ 0. Moreover, by the change of basis d′ = c + d we get d′^{n+2} = d^{n+2} = d′^{n+1} + d^{n+1} = dd′ = 0; thus F ∼_2 RP^{n+1} # RP^{n+1}.

Case (6): If B_2 = 0 and B_1 = B_3 = 1, then j^*(α) = 1 ⊗ c_4 + t ⊗ c_3 + t^3 ⊗ c_1. In this case, if H^*(F) has one generator, then F ∼_2 RP^{2n+1}. Suppose that H^*(F) has two generators. We consider two subcases: (i) c_3 = c_1^3, (ii) c_3 ≠ c_1^3. Subcase (i): c_3 = c_1^3. Here c_1^{r+1} = d^{r/q+1} = c_1 d = c_1^r + d^{r/q} = 0, where r = q(2n+2)/(q+1). So (q+1) | (2n+2), and hence n = (q+1)k − 1 for q even and n = ((q+1)/2)k − 1 for q odd, k ∈ N. This realizes possibility (4) for s = 1. Further, if q = 1, then F ∼_2 RP^{n+1} # RP^{n+1}; if q = 2, then F ∼_2 RP^{4k} # CP^{2k}; and if q = 4, then F ∼_2 RP^{8k} # HP^{2k}. If c_1 d ≠ 0, then we get c_1^r = c_1^{r−qj} d^j, j > 1, which generates H^r(F). Thus c_1^{qi} = d^i for 1 ≤ i ≤ j−1 and c_1^{qj} = d^j, where qj < r. We get rk H^*(F) = j(r−qj) + qj + j, which must be 2n+2, so r = (2n+2)/j + qj − (q+1).

If H^*(F) has one generator, then F ∼_2 RP^{2n+1}. Suppose that H^*(F) has two generators. We consider two subcases: (i) c_2 = c_1^2, (ii) c_2 ≠ c_1^2. Subcase (i): c_2 = c_1^2. First, assume that c_4 = c_1^4. We have from j^*(α^n) that c_1^n d must be nonzero. Thus, for d^2 = 0, we get F ∼_2 RP^n × S^q, 1 ≤ q ≤ m. This realizes possibility (1) for F = R.
And for d^2 ≠ 0, we get F ∼_2 RP^n × S^q, 1 ≤ q ≤ m, and F ∼_2 RP^{n+1} # RP^{n+1}, when d^2 = c_1^{2q} and d^2 = c_1^q d, respectively.

Example 3.5. Bredon ([2]) constructed an example showing that P^2(q) # P^2(q) (a connected sum of projective spaces) is the fixed point set of an involution on S^4 × S^{q+k}, where k ≥ 4. This example realizes possibility (2) of Theorem 3.1 for n = 1. In the same paper, Bredon also gave examples of involutions on X ∼_2 S^n × S^m, n ≤ m, and X ∼_2 S^4 × S^m, 4 < m, with the fixed point sets F = RP^3 and F ∼_2 S^7, respectively. These examples realize possibility (3) of Theorem 3.1 for n = 1, and the case when X ∼_2 HP^1 × S^m is not TNHZ in X_G, respectively.

Next, we discuss the cohomology ring of the orbit spaces of free involutions on a space X having the mod 2 cohomology of the product of quaternionic projective space and sphere HP^n × S^m. For the existence of free involutions on HP^n × S^m, consider the diagonal action on HP^n × S^m obtained by taking any involution on HP^n and the antipodal action on S^m. First, we consider the case when π_1(B_G) acts trivially on H^*(X), under some assumptions on the associated Leray-Serre spectral sequence of the Borel fibration X ↪ X_G → B_G. Note that [6] if G = Z_2 acts freely on X ∼_2 HP^n × S^m, then π_1(B_G) acts trivially on H^*(X) whenever one of the following holds: (1) 4n ≤ m; (2) 4 = m < 4n, n even; (3) 4 < m < 2m ≤ 4n, m ≢ 0 (mod 4); and (4) m ≡ 0 (mod 4).

Theorem 3.6. Let G = Z_2 act freely on a finite CW-complex X ∼_2 HP^n × S^m, where n, m ≥ 1. Assume that π_1(B_G) acts trivially on H^*(X) and that the differentials satisfy d_r(1 ⊗ b) = 0 for all r ≤ m. Then the cohomology ring of the orbit space H^*(X/G) is isomorphic to one of the following graded commutative algebras:
(1) Z_2[x, y, z]/I, where I is a homogeneous ideal with deg x = 1, deg y = 8 and deg z = m, whose coefficients satisfy: a_0 = 0 if m ≡ 0 (mod 8) or m > 4n+4; a_1 = 0 if m ≡ 0 (mod 8) or m > 4n; a_2 = 0 if m = 4(n+1); a_3 = 0 if m ≡ i (mod 4) or {i = 0 and 2m > 4(n−1)}, 0 ≤ 2i ≤ 4; and a_4 = 0 if m ≡ i′ (mod 8) or m > 4n, 0 ≤ i′ ≤ 4; here a_k ∈ Z_2, 0 ≤ k ≤ 4, and n is odd;
(2) Z_2[x, y, z]/<x^5, y^{n/2+1}, z^2 + a_0 y + a_1 x^4 z>, where deg x = 1, deg y = 8 and deg z = 4, a_0, a_1 ∈ Z_2, n even; and
(3) Z_2[x, y]/<x^{m+1}, y^{n+1} + …>, where deg x = 1, deg y = 4 and a_i ∈ Z_2.

Finally, we consider the case when π_1(B_G) acts nontrivially on H^*(X).

Theorem 3.7. Let G = Z_2 act freely on a finite CW-complex X ∼_2 HP^n × S^m, where n, m ≥ 1. Assume that π_1(B_G) acts nontrivially on H^*(X). Then H^*(X/G) is isomorphic to one of the following graded commutative algebras:
(1) Z_2[x, y, z]/<x^9, y^2 + a_0 z + a_1 x^8, z^2, xy>, where deg x = 1, deg y = 4 and deg z = 8, a_i ∈ Z_2, 0 ≤ i ≤ 2, m = 4 < 4n, n odd; and
(2) Z_2[x, y, z, w_k]/<x^5, y^{m/8} + a_0 w_1, z^2, x w_k, w_k w_{k+i} + a_{k,i} x^{4d} y^{(2m−4n+4q)/8} z>, where deg x = 1, deg y = 8, deg z = 4n+4 and deg w_k = m + 4(k−1), with m < 4n < 2m, m ≡ 0 (mod 8), and a_{k,i} = 0 if (4(n+2)−m)/4 < 2k + i; a_0 and the a_{k,i} are in Z_2. If d = 0, then i is even and q = 2k + i − 3; if d = 1, then i is odd and q = 2k + i − 4.
The proofs of the above theorems are similar to the proofs of Theorem 4.2 and Theorem 4.5 in [6], respectively.

The remaining case realizes possibility (5) for s = 2 and q = 4. Now suppose that H^1(F) ≠ 0; then we consider two possibilities according as c_2 = c_1^2 or c_2 ≠ c_1^2. If c_2 = c_1^2, then we get c_1^{2n} ≠ 0, which leads to a contradiction. If c_2 ≠ c_1^2, then we must have c_4 = c_1^4, and it is easy to observe that, whether c_2^2 is zero or nonzero, we get rk H^*(F) > 2n+2, a contradiction. Subcase (ii): Assume that c_4 ≠ c_2^2. If H^1(F) ≠ 0, then for c_2 = c_1^2 we get that c_1^{2n+1} must be nonzero; thus rk H^*(F) > 2n+2, a contradiction. Now, for c_2 ≠ c_1^2, we get from j^*(α^n) that F ∼_2 CP^n × S^1.
This realizes possibility (1) for F = C and q = 1. If c_2^{n+1} ≠ 0, then the rank of H^*(F) exceeds 2n+2, a contradiction. Now suppose that c_1^2 ≠ 0; this case is possible only when n = 2 and the cup product c_1 c_2 ≠ 0, otherwise rk H^*(F) > 2n+2. For n = 2, we get that c_1^4 = c_2^2 is the generator of H^4(F). Thus F ∼_2 RP^4 # CP^2. This realizes possibility (4) for n = 2, q = 2 and s = 1.

Returning to the case d^2 = c_2^q with q even: by the change of basis d′ = d + c_2^{q/2} we get d′^2 = 0 and c_2^i d′ ≠ 0 for 1 ≤ i ≤ n. Thus F ∼_2 CP^n × S^q, 1 ≤ q ≤ m. This realizes possibility (1) for F = C. If q is odd, then d^{2l+1} = c_2^{lq} d, where l = [n/q], and d^{2n+2} = 0. This realizes possibility (3) for s = q, q odd. Moreover, if q = 1, then clearly F ∼_2 RP^{2n+1}. If d^2 = c_2^{q/2} d, then q must be 2, since for 2 < q (even) ≤ n the space F would not satisfy Poincaré duality.

Here c_2^{n+1} and c_2^n d are the least-degree elements. Clearly, if both c_2^{n+1} and c_2^n d are nonzero, then rk H^*(F) > 2n+2, a contradiction. Now suppose that c_2^n d ≠ 0 and c_2^{n+1} = 0. If c_2 d = 0, then we must have d^2 = 0, since otherwise we cannot have a Poincaré dual of d. It is easy to observe that c_2^{(r−q)/2} d is the generator of H^r(F), where r is the formal dimension of H^*(F). As rk H^*(F) = 2n+2, we get r = 4q(n+1)/(q+2). So (q+2) | (4n+4), and hence n = (q+2)k − 1 for q ≡ 0, 1 or 3 (mod 4), and n = ((q+2)/2)k − 1 for q ≡ 2 (mod 4); here c_2 d = 0. This realizes possibility (4) for s = 2. If c_2 d ≠ 0 and c_2^n d ≠ 0, let r be the formal dimension of F. In this case the generator of H^r(F) must be c_2^{(r−qj)/2} d^j with j > 1, since otherwise rk H^*(F) ≥ 2n+4 > 2n+2, which contradicts our hypothesis. As c_2^n d ≠ 0, we must have q ≤ n, c_2^{qi/2} = d^i for 1 ≤ i ≤ j−1, and c_2^{qj/2} = d^j. As qj < r, we get (q+1)j/4 − 1 < n. This realizes possibility (5) for s = 2.

Case (4): If B_2 = B_3 = 0 and B_1 = 1, then j^*(α) = 1 ⊗ c_4 + t^3 ⊗ c_1. Then c_1^n d must be nonzero. Thus, for d^2 = 0, we get F ∼_2 RP^n × S^q, 1 ≤ q ≤ m. This realizes possibility (1) for F = R. And for d^2 ≠ 0, we have two possibilities: either d^2 = c_1^{2q} or d^2 = c_1^q d. If d^2 = c_1^{2q}, then by the change of basis d′ = d + c_1^q we again realize possibility (1) for F = R. If d^2 = c_1^q d, then for 1 < q ≤ n, F does not satisfy Poincaré duality, so we must have q = 1. Again, by a change of basis, this realizes possibility (2) for F = R. Now assume that c_1^{n+1} ≠ 0; then c_1^n d is either zero or nonzero. Obviously, c_1^n d = 0 is not possible. Suppose that c_1^n d ≠ 0. If c_1 d = 0, then we get the relations of possibility (4) with r = q(2n+2)/(q+1); this realizes possibility (4) for s = 1. If c_1 d ≠ 0, then we get c_1^r = c_1^{r−qj} d^j, j > 1, which generates H^r(F). Thus c_1^{qi} = d^i for 1 ≤ i ≤ j−1 and c_1^{qj} = d^j, where qj < r.

If n is odd, clearly c_4^n must be nonzero; thus F ∼_2 HP^n × S^1. This realizes possibility (1) for F = H and q = 1. If c_1^2 ≠ 0 and c_1^3 = 0, then j^*(α^n) = Σ_{r=0}^{2} C(n,r) (1 ⊗ c_4)^{n−r} (t^3 ⊗ c_1)^r and j^*(α^n β) contains terms t^{i+3r} ⊗ (⊕_{4l+j=m−i} · · · ); a generator of the formal dimension is possible only when n = 3 and c_4^2 = 0. Thus we get F ∼_2 RP^3 × S^4. Now suppose that c_1^4 ≠ 0. We have from j^*(α^n β) that c_1^n c_4 must be nonzero. Clearly, when c_4^2 = 0, F ∼_2 RP^n × S^4, and when c_4^2 ≠ 0, we must have c_4^2 = c_1^8. After the change of basis d′ = c_1^4 + c_4 we get F ∼_2 RP^n × S^4. This realizes possibility (1) for F = R and q = 4. Now suppose that c_1^{n+1} ≠ 0; then c_1^n c_4 is either zero or nonzero. Obviously, c_1^n c_4 = 0 is not possible.
And if c_1^n c_4 ≠ 0, then, when the cup product c_1 c_4 is zero, we get the relations of possibility (4); this realizes possibility (4) for s = 1 and q = 4. When the cup product is nonzero, we get c_1^r = c_1^{r−4j} c_4^j, j > 1, which generates H^r(F). Thus c_1^{4i} = c_4^i for 1 ≤ i ≤ j−1 and c_1^{4j} = c_4^j, where 4j < r. We get rk H^*(F) = j(r−4j) + 4j + j, which must be 2n+2, so r = (2n+2)/j + 4j − 5, where either n+1 = jk for some k ∈ N or j = 2. Hence the cohomology ring H^*(F) is generated by c_1 and c_4, with c_1^{r+1} = c_1^{4j} + c_4^j = c_1^{r−4j+1} c_4 = 0 and (5j/2) − 1 < n. This realizes possibility (5) for s = 1 and q = 4.

Case (5): If B_1 = 0 and B_2 = B_3 = 1, then j^*(α) = 1 ⊗ c_4 + t ⊗ c_3 + t^2 ⊗ c_2. In this case, if H^*(F) has one generator, then F ∼_2 RP^{2n+1}. Suppose that H^*(F) has two generators. We consider two subcases: (i) c_4 = c_2^2, (ii) c_4 ≠ c_2^2. If c_2 c_3 = 0, then c_2^{r/2} = c_3^{r/3} is the generator of H^r(F); thus rk H^*(F) = r/2 + r/3 = 2n+2, so r = 12k and n = 5k−1, k ∈ N. This realizes possibility (4) for s = 2 and q = 3. If c_2 c_3 ≠ 0, then c_2^{n+1} and c_2^n c_3 are the possible least-degree elements. If c_2^{n+1} = 0, then we must have c_2^n c_3 ≠ 0. Clearly, for c_3^2 = 0 we get F ∼_2 CP^n × S^3, and for c_3^2 ≠ 0 we must have c_3^2 = c_2^3. Thus c_2^{n+1} = c_3^2 + c_2^3 = c_3^{2l+2} = 0, where l = [n/3]. This realizes possibility (3) for q = 3. Clearly, c_2^{n+1} = 0 and c_2^n c_3 = 0 simultaneously is not possible. The remaining case realizes possibility (5) for s = 2 and q = 3. If H^1(F) ≠ 0, then rk H^*(F) > 2n+2 for either c_2 = c_1^2 or c_2 ≠ c_1^2.

Subcase (ii): c_4 ≠ c_2^2. First, suppose that c_4 = c_1^4. Then we have from j^*(α^n) that c_1^n d must be nonzero. Thus, for d^2 = 0, we get F ∼_2 RP^n × S^q, 1 ≤ q ≤ m. This realizes possibility (1) for F = R. And for d^2 ≠ 0, we have two possibilities: either d^2 = c_1^{2q} or d^2 = c_1^q d. If d^2 = c_1^{2q}, then by the change of basis d′ = d + c_1^q we again realize possibility (1) for F = R. If d^2 = c_1^q d, then for 1 < q ≤ n, F does not satisfy Poincaré duality, so we must have q = 1. Again, by the change of basis d′ = c_1 + d, we get d′^{n+2} = d^{n+2} = d′^{n+1} + d^{n+1} = dd′ = 0. Thus F ∼_2 RP^{n+1} # RP^{n+1}. This realizes possibility (2) for F = R. Now, if c_1^{n+1} ≠ 0, then either c_1^n d = 0 or c_1^n d ≠ 0. Obviously, c_1^n d = 0 is not possible. Suppose that c_1^n d ≠ 0. If c_1 d ≠ 0, then, with r = (2n+2)/j + qj − (q+1), the cohomology ring H^*(F) is generated by c_1 and d, with c_1^{r+1} = c_1^{qj} + d^j = c_1^{r−qj+1} d = 0 and (q+1)j/2 − 1 < n. This realizes possibility (5) for s = 1.

From the above expression we get that c_1^{n+1} and c_1^n c_4 are the possible least-degree elements. If c_1 c_4 = 0, then we get c_1^r = c_4^{r/4}, where r is the formal dimension. So r = 8k, where n = 5k−1, k ∈ N. Thus F ∼_2 RP^{8k} # HP^{2k}. This realizes possibility (4) for s = 1 and q = 4. Now suppose that c_1 c_4 ≠ 0. If c_1^{n+1} = 0, then we must have c_1^n c_4 ≠ 0. Clearly, for c_4^2 = 0 we get F ∼_2 RP^n × S^4, and for c_4^2 ≠ 0 we must have c_4^2 = c_1^8; by the change of basis d′ = c_1^4 + c_4 we realize possibility (1) for F = R and q = 4. Otherwise c_1^{r−4j} c_4^j, j > 1, generates H^r(F); thus c_1^{4i} = c_4^i for 1 ≤ i ≤ j−1 and c_1^{4j} = c_4^j, where 4j < r. We get rk H^*(F) = j(r−4j) + 5j, which must be 2n+2. Thus r = (2n+2)/j + 4j − 5, and hence the cohomology ring H^*(F) is generated by c_1 and c_4, with c_1^{r+1} = c_1^{4j} + c_4^j = c_1^{r−4j+1} c_4 = 0. This realizes possibility (5) for s = 1 and q = 4.

Subcase (ii): c_3 ≠ c_1^3. As H^*(F) has at most two generators, we must have c_4 = c_1^4. Thus, from j^*(α^n β), c_1^{n+1} and c_1^n c_3 are the possible least-degree elements.
If c_1 c_3 = 0, then we get c_1^r = c_3^{r/3}, where r is the formal dimension. So r = 3k and n = 2k−1, k ∈ N. Thus c_1^{r+1} = c_3^{k+1} = c_1 c_3 = c_1^{3k} + c_3^k = 0, k ∈ N. This realizes possibility (4) for s = 1 and q = 3. Now suppose that c_1 c_3 ≠ 0. If c_1^{n+1} = 0, then we must have c_1^n c_3 ≠ 0. Clearly, for c_3^2 = 0 we have F ∼_2 RP^n × S^3, and for c_3^2 ≠ 0 we must have c_3^2 = c_1^6; by the change of basis d′ = c_1^3 + c_3 we realize possibility (1) for F = R and q = 3. Otherwise c_1^{r−3j} c_3^j, j > 1, generates H^r(F); thus c_1^{3i} = c_3^i for 1 ≤ i ≤ j−1 and c_1^{3j} = c_3^j, where 3j < r. Clearly, r = (2n+2)/j + 3j − 4, where either n+1 = jk for some k ∈ N or j = 2. Hence the cohomology ring H^*(F) is generated by c_1 and c_3, with c_1^{r+1} = c_1^{3j} + c_3^j = c_1^{r−3j+1} c_3 = 0. This realizes possibility (5) for s = 1 and q = 3.

Case (7): If B_3 = 0 and B_1 = B_2 = 1, then j^*(α) = 1 ⊗ c_4 + t^2 ⊗ c_2 + t^3 ⊗ c_1. In this case, if H^*(F) has one generator, then F ∼_2 RP^{2n+1}. Now suppose that H^*(F) has two generators; we consider two subcases as before. For n = 1, if H^*(F) has one generator, then F ∼_2 FP^3, F = R or C; if H^*(F) has two generators, then clearly F ∼_2 S^r × S^q, 1 ≤ r ≤ 3 and 1 ≤ q ≤ m, or F ∼_2 FP^2 # FP^2, F = R or C. This realizes possibilities (1), (2) and (3) for F = R or C and n = 1.

Remark 3.2. For n = 1 we get X ∼_2 S^4 × S^m. By Theorem 3.1, the possibilities for connected fixed point sets of involutions on X are S^r × S^q, 1 ≤ r ≤ 4 and 1 ≤ q ≤ m, FP^3, F = R or C, or FP^2 # FP^2, F = R, C or H. These possibilities have also been realized in [Theorem 3.11, [13]].

Remark 3.8. If a_i = 0, 0 ≤ i ≤ 2, in possibility (1) of Theorem 3.7, then X/G ∼_2 (RP^8 ∨ S^4) × P^{(n−1)/2}(8). If a_i = 0 for all 0 ≤ i ≤ 4 in possibility (1) of Theorem 3.6, then X/G ∼_2 RP^4 × P^{j/2}(8) × S^m, with j = n−1 for n odd and j = n for n even. If a_i = 0 for all 0 < i ≡ 0 (mod 4) ≤ min{4(n+1), m} in possibility (3), then X/G ∼_2 RP^m × HP^n.

Example 3.9. Let T : HP^n × S^m → HP^n × S^m be the map defined by ([z], x) → ([z], −x). This gives a free involution on HP^n × S^m. The orbit space of this action is (HP^n × S^m)/Z_2 ∼_2 HP^n × RP^m. This realizes possibility (3) of Theorem 3.6 for a_i = 0 for all i.
Remark 3.3. It is easy to observe that the fixed point sets of involutions on X ∼_2 HP^n × S^m, when X is not TNHZ in X_G, have the mod 2 cohomology of a q-sphere, where −1 ≤ q ≤ 4n + m, under the assumptions that the associated Leray-Serre spectral sequence of the Borel fibration X ↪ X_G → B_G is nondegenerate and the differentials d_r of the spectral sequence satisfy d_r(1 ⊗ b) = 0 for all r ≤ m (see Theorem 3.5 in [6]).

Example 3.4. Now we give examples realizing the above theorem. Consider the action of G on S^m defined by (x_0, x_1, · · · , x_m) → (x_0, x_1, · · · , x_q, −x_{q+1}, · · · , −x_m). If we consider the trivial action of G on HP^n, then, after taking the diagonal action of G on HP^n × S^m, the fixed point set is HP^n × S^q, where 1 ≤ q ≤ m. If we take the conjugation action of G on HP^n, i.e. (z_0, z_1, · · · , z_n) → (z̄_0, z̄_1, · · · , z̄_n), then the fixed point set of the diagonal action of G on HP^n × S^m is RP^n × S^q, 1 ≤ q ≤ m. If G acts on HP^n by (z_0, z_1, · · · , z_n) → (iz_0, iz_1, · · · , iz_n), then the fixed point set of the diagonal action of G on HP^n × S^m is CP^n × S^q, 1 ≤ q ≤ m. These examples realize possibility (1) of Theorem 3.1. Now, consider the action of G on S^4 defined by (x_0, x_1, x_2, x_3, x_4) → (x_0, x_1, x_2, x_3, −x_4); then the fixed point set of the diagonal action of G on S^4 …

References

[1] G. E. Bredon, Introduction to Compact Transformation Groups, Academic Press, New York, USA (1972).
[2] G. E. Bredon, The cohomology ring structure of a fixed point set, Ann. of Math. 80, 524-537 (1964).
[3] G. E. Bredon, Cohomological aspects of transformation groups, in: Proceedings of the Conference on Transformation Groups (New Orleans, 1967), Springer-Verlag, New York, 245-280 (1968).
[4] C. N. Chang and J. C. Su, Group actions on a product of two projective spaces, Amer. J. Math. 101(5), 1063-1081 (1979).
[5] P. Dey and M. Singh, Free actions of some compact groups on Milnor manifolds, Glasg. Math. J. 61, 727-742 (2019).
[6] Dimpi and H. K. Singh, Involution in the product of projective space and sphere, arXiv:2303.16478v1 (2023).
[7] A. M. M. Morita, D. de Mattos and P. L. Q. Pergher, The cohomology ring of orbit spaces of free Z_2-actions on some Dold manifolds, Bull. Aust. Math. Soc. 97, 340-348 (2018).
[8] C. F. Peltier and R. P. Beem, Involutions on Dold manifolds, Proc. Amer. Math. Soc. 85, 457-460 (1982).
[9] M. Singh, Orbit spaces of free involutions on the product of two projective spaces, Results Math. 57(1), 53-67 (2010).
[10] N. E. Steenrod, Cohomology Operations, Annals of Mathematics Studies, No. 50, Princeton University Press, Princeton, N.J. (1962).
[11] P. A. Smith, Fixed-point theorems for periodic transformations, Amer. J. Math. 63, 1-8 (1941).
[12] P. A. Smith, New results and old problems in finite transformation groups, Bull. Amer. Math. Soc. 66, 401-415 (1960).
[13] J. C. Su, Periodic transformations on the product of two spheres, Trans. Amer. Math. Soc. 112, 369-380 (1964).
[14] V. Puppe, On a conjecture of Bredon, Manuscripta Math. 12, 11-16 (1974).
[]
[ "Diagnostic of stellar magnetic fields with cumulative circular polarisation profiles", "Diagnostic of stellar magnetic fields with cumulative circular polarisation profiles" ]
[ "O Kochukhov \nDepartment of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden\n" ]
[ "Department of Physics and Astronomy\nUppsala University\nBox 51675120UppsalaSweden" ]
[]
Information about stellar magnetic field topologies is obtained primarily from high-resolution circular polarisation (Stokes V) observations. Due to their generally complex morphologies, the stellar Stokes V profiles are usually interpreted with elaborate inversion techniques such as Zeeman Doppler imaging (ZDI). Here we further develop a new method of interpretation of circular polarisation signatures in spectral lines using cumulative Stokes V profiles (the anti-derivative of Stokes V). This method is complementary to ZDI and can be applied for validation of the inversion results or when the available observational data are insufficient for an inversion. Based on a rigorous treatment of polarised line formation in the weak-field regime, we show that, for rapidly rotating stars, the cumulative Stokes V profiles contain information about the spatially resolved longitudinal magnetic field density. Rotational modulation of these profiles can be employed for a simple, qualitative characterisation of stellar magnetic field topologies. We apply this diagnostic method to archival observations of the weak-line T Tauri star V410 Tau and the Bp He-strong star HD 37776. We show that the magnetic field of V410 Tau is dominated by an azimuthal component, in agreement with the ZDI map that we recover from the same data set. For HD 37776 the cumulative Stokes V profile variation indicates the presence of multiple regions of positive and negative field polarity. This behaviour agrees with the ZDI results but contradicts the popular hypothesis that the magnetic field of this star is dominated by an axisymmetric quadrupolar component.
10.1051/0004-6361/201526318
[ "https://arxiv.org/pdf/1505.07266v1.pdf" ]
54942830
1505.07266
8614db3ea135bc414afe12dfd5a75aa8f0a4ccf2
Diagnostic of stellar magnetic fields with cumulative circular polarisation profiles

O. Kochukhov
Department of Physics and Astronomy, Uppsala University, Box 516, 75120 Uppsala, Sweden

Astronomy & Astrophysics manuscript no. 26318, © ESO 2015. Received 15 April 2015 / Accepted 22 May 2015. May 28, 2015.

Key words: Polarization - Magnetic fields - Stars: activity - Stars: magnetic field - Stars: individual: V410 Tau, HD 37776

Abstract. Information about stellar magnetic field topologies is obtained primarily from high-resolution circular polarisation (Stokes V) observations. Due to their generally complex morphologies, the stellar Stokes V profiles are usually interpreted with elaborate inversion techniques such as Zeeman Doppler imaging (ZDI). Here we further develop a new method of interpretation of circular polarisation signatures in spectral lines using cumulative Stokes V profiles (the anti-derivative of Stokes V). This method is complementary to ZDI and can be applied for validation of the inversion results or when the available observational data are insufficient for an inversion. Based on a rigorous treatment of polarised line formation in the weak-field regime, we show that, for rapidly rotating stars, the cumulative Stokes V profiles contain information about the spatially resolved longitudinal magnetic field density. Rotational modulation of these profiles can be employed for a simple, qualitative characterisation of stellar magnetic field topologies. We apply this diagnostic method to archival observations of the weak-line T Tauri star V410 Tau and the Bp He-strong star HD 37776. We show that the magnetic field of V410 Tau is dominated by an azimuthal component, in agreement with the ZDI map that we recover from the same data set. For HD 37776 the cumulative Stokes V profile variation indicates the presence of multiple regions of positive and negative field polarity. This behaviour agrees with the ZDI results but contradicts the popular hypothesis that the magnetic field of this star is dominated by an axisymmetric quadrupolar component.

1. Introduction

Stellar magnetism represents an important, though not fully understood and poorly constrained, ingredient of the theories of stellar formation and evolution. Although magnetic fields are believed to play an important role in many astrophysical situations, their direct detection and characterisation is often very challenging. Stellar surface magnetic fields, with typical strengths ranging from a few Gauss to several kilo-Gauss, are commonly diagnosed with the help of the Zeeman effect. The broadening and splitting of magnetically sensitive spectral lines becomes apparent in high-resolution spectra when the field strength exceeds ∼1 kG, allowing one to measure the mean field modulus (Mathys et al. 1997; Kochukhov et al. 2006; Reiners & Basri 2007) and to identify different field components (Johns-Krull & Valenti 1996; Shulyak et al. 2014) in strongly magnetised objects. However, this Zeeman broadening analysis is limited to slow rotators and provides little information about the magnetic field geometry. On the other hand, analysis of circular (and more recently linear) polarisation in spectral lines enables detection of much weaker magnetic fields (Wade et al. 2000; Petit et al. 2011; Marsden et al. 2014), especially when polarisation signals can be enhanced with line-addition methods.
Reconstruction of detailed surface magnetic field vector maps with the Zeeman Doppler imaging (ZDI) technique heavily relies on the interpretation of polarisation in spectral line profiles (Brown et al. 1991; Carroll et al. 2012). For early-type magnetic stars, which typically host strong, globally organised, dipolar-like fossil fields, one frequently observes morphologically simple, e.g. two-lobe, S-shaped, circular polarisation (Stokes V) signatures. Several integral magnetic observables, for example the mean longitudinal magnetic field (Mathys 1991) and the crossover (Mathys 1995), can be derived by computing the wavelength moments of such simple Stokes V profiles. These observables are interpreted by fitting their rotational phase curves with low-order multipolar field models (Landstreet & Mathys 2000; Bagnulo et al. 2002). Although this technique misses some small-scale surface magnetic field structures (Kochukhov et al. 2004; Kochukhov & Wade 2010), it is straightforward to apply and computationally inexpensive, and it is therefore widely used for obtaining information on the global magnetic field geometries of large stellar samples (e.g. Aurière et al. 2007; Hubrig et al. 2007).

In contrast to the stable and usually topologically simple magnetic fields of early-type stars, the dynamo-generated surface fields of active late-type stars are weak, complex and rapidly evolving. Their circular polarisation profile shapes are correspondingly more complex, often exhibiting many lobes. Such Stokes V profiles cannot be meaningfully characterised with integral magnetic observables because their low-order moments tend to be close to zero even when strong polarisation signatures are evident in the data (e.g. Kochukhov et al. 2013). In other words, the surface magnetic field distribution is so complex that vector averaging over the visible stellar disk leads to a substantial cancellation of the Stokes V signal corresponding to different field polarities. In this situation one usually resorts to an elaborate ZDI modelling of the Stokes V profiles themselves, which imposes stringent requirements on the signal-to-noise ratio and rotational phase coverage of the observational data. Being an intrinsically ill-posed inversion problem, the reconstruction of stellar magnetic field topologies with ZDI suffers from a number of uniqueness and reliability issues (Donati & Brown 1997; Rosén & Kochukhov 2012), not least related to the fact that only circular polarisation rather than full Stokes vector spectra are typically available for active late-type stars. In this respect, ZDI is often perceived to be less reliable in comparison to the usual mapping of temperature spots or chemical inhomogeneities using intensity spectra.

Due to the complex response of circular polarisation spectra to the strength and orientation of the local magnetic field, one cannot establish a straightforward connection between the polarisation profile variability pattern and major surface magnetic features in the same way as, for example, one can recognise individual star spots in dynamic intensity spectra (e.g. Barnes et al. 2000). It is therefore of great interest to devise simple, yet informative, alternative methods of analysis of complex Stokes V profiles that can be used to validate ZDI results or applied when the rotational phase coverage is insufficient for a magnetic inversion. Several recent studies have proposed new methods of extracting information from the circular polarisation signatures.
Carroll & Strassmeier (2014) considered the diagnostic potential of the net absolute Stokes V signal, showing that under certain assumptions it allows one to characterise the apparent absolute longitudinal magnetic field. On the other hand, Gayley & Owocki (2015) suggested using an anti-derivative of the Stokes V profile for a more intuitive interpretation of circular polarisation data. In this paper we further develop the latter idea. The main goal of this work is to present a comprehensive theoretical formulation of the new polarisation observable, demonstrate its connection to the underlying stellar surface magnetic field structure, and apply the new magnetic field diagnostic method both to simulated circular polarisation data and to real spectropolarimetric observations of stars with topologically complex magnetic fields.

2. Cumulative Stokes V profiles

2.1. Theoretical basis

To describe the disk-integrated Stokes parameter profiles of a rotating star we consider a coordinate system with the z-axis directed towards the observer and the y-axis located in the plane formed by the line of sight and the stellar rotational axis. The star is assumed to have a unit radius and to rotate counterclockwise as seen from the visible rotational pole. Then the Doppler shift across the stellar disk is a function of the x-coordinate alone. With these conventions the disk-integrated continuum intensity is

\[ F_c = \int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} I_c(x,y)\, \mathrm{d}y \tag{1} \]

and the disk-integrated Stokes profiles of a spectral line with central wavelength λ_0 are represented by the integrals

\[ F_I = \int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} I[x, y; \lambda - \lambda_0 - \Delta\lambda_R x; B(x,y)]\, \mathrm{d}y \tag{2} \]

and

\[ F_V = \int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} V[x, y; \lambda - \lambda_0 - \Delta\lambda_R x; B(x,y)]\, \mathrm{d}y, \tag{3} \]

where I_c(x,y) represents the local continuum intensity and the local line intensity and polarisation are given by I(x,y,λ,B) and V(x,y,λ,B). The amplitude of the Doppler shift due to the stellar rotation is determined by the projected rotational velocity v_e sin i,

\[ \Delta\lambda_R = \frac{\lambda_0\, v_e \sin i}{c}. \tag{4} \]

Under the weak-field approximation (e.g. Landi Degl'Innocenti & Landolfi 2004) the local intensity profile is unaffected by the magnetic field, while the Stokes V profile is determined by the product of the first derivative of Stokes I and the line-of-sight component B_z(x,y) of the local magnetic field vector,

\[ V = - C_Z\, \bar{g}\, B_z(x,y)\, \frac{\partial I}{\partial\lambda}. \tag{5} \]

In this expression \bar{g} denotes the effective Landé factor of a spectral line and

\[ C_Z = \frac{e \lambda_0^2}{4\pi m_e c^2} = 4.6686\times10^{-13}\, \lambda_0^2 \tag{6} \]

for the field measured in G and the wavelength in Å. Using this approximation of the local Stokes V profile, one can express the disk-integrated circular polarisation spectrum as

\[ F_V = - C_Z\, \bar{g} \int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} B_z(x,y)\, \frac{\partial}{\partial\lambda} I[x, y; \lambda - \lambda_0 - \Delta\lambda_R x]\, \mathrm{d}y. \tag{7} \]

Recalling that the observed Stokes parameter profiles are normalised by the disk-integrated continuum intensity, we define the normalised disk-integrated spectral line intensity

\[ R_I \equiv \frac{F_I}{F_c} = \frac{\int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} I[x, y; \lambda - \lambda_0 - \Delta\lambda_R x]\, \mathrm{d}y}{\int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} I_c(x,y)\, \mathrm{d}y} \tag{8} \]

and circular polarisation

\[ R_V \equiv \frac{F_V}{F_c} = - \frac{C_Z\, \bar{g}}{\int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} I_c(x,y)\, \mathrm{d}y} \int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} B_z(x,y)\, \frac{\partial}{\partial\lambda} I[x, y; \lambda - \lambda_0 - \Delta\lambda_R x]\, \mathrm{d}y. \tag{9} \]

The latter equation can be equivalently written as

\[ R_V = \frac{\partial}{\partial\lambda} \left\{ \frac{C_Z\, \bar{g}}{F_c} \int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} B_z(x,y) \left( I_c(x,y) - I[x, y; \lambda - \lambda_0 - \Delta\lambda_R x] \right) \mathrm{d}y \right\} \tag{10} \]

or

\[ R_V = \frac{\partial}{\partial\lambda} \left\{ C_Z\, \bar{g} \int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} B_z(x,y)\, i_c(x,y)\, r_I[x, y; \lambda - \lambda_0 - \Delta\lambda_R x]\, \mathrm{d}y \right\}, \tag{11} \]
where i_c denotes the local normalised continuum intensity

\[ i_c(x,y) \equiv \frac{I_c(x,y)}{F_c} \tag{12} \]

and r_I corresponds to the local normalised residual Stokes I profile

\[ r_I[x, y; \lambda - \lambda_0 - \Delta\lambda_R x] \equiv \frac{I_c(x,y) - I[x, y; \lambda - \lambda_0 - \Delta\lambda_R x]}{I_c(x,y)}. \tag{13} \]

We define the cumulative Stokes V (CSV) profile as

\[ G_V(\lambda) \equiv \frac{1}{W_I} \int_{\lambda_{\min}}^{\lambda} R_V(\lambda')\, \mathrm{d}\lambda', \tag{14} \]

where

\[ W_I \equiv \int_{\lambda_{\min}}^{\lambda_{\max}} (1 - R_I)\, \mathrm{d}\lambda \tag{15} \]

is the equivalent width of the disk-integrated Stokes I profile and the integration limits λ_min, λ_max cover the full extent of a spectral line. Substituting Eq. (11) into Eq. (14) yields

\[ G_V(\lambda) = \frac{C_Z\, \bar{g}}{W_I} \int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} B_z(x,y)\, i_c(x,y)\, r_I[x, y; \lambda - \lambda_0 - \Delta\lambda_R x]\, \mathrm{d}y. \tag{16} \]

Dividing this quantity by C_Z \bar{g} provides the longitudinal magnetic field density

\[ B_z(\lambda) \equiv \frac{G_V(\lambda)}{C_Z\, \bar{g}} = \frac{1}{W_I} \int_{-1}^{+1} \mathrm{d}x \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} B_z(x,y)\, i_c(x,y)\, r_I[x, y; \lambda - \lambda_0 - \Delta\lambda_R x]\, \mathrm{d}y. \tag{17} \]

This quantity represents a velocity-resolved measure of the line-of-sight magnetic field component, weighted by the projected surface area and by the equivalent-width-normalised local Stokes I profile. B_z(λ) has the units of magnetic field divided by wavelength. Its integration over the full line profile gives the normalised first moment of Stokes V,

\[ \int_{\lambda_{\min}}^{\lambda_{\max}} B_z(\lambda)\, \mathrm{d}\lambda = - \frac{1}{C_Z\, \bar{g}\, W_I} \int_{\lambda_{\min}}^{\lambda_{\max}} (\lambda - \lambda_0)\, R_V\, \mathrm{d}\lambda \equiv \langle B_z \rangle, \tag{18} \]

commonly known as the mean longitudinal magnetic field.

The G_V(λ) observable discussed above is identical to the Stokes V anti-derivative concept introduced by Gayley & Owocki (2015), with the exception that we formulate this quantity with respect to the normalised Stokes V parameter R_V and divide the resulting profiles by W_I, whereas the earlier study proposed to divide by 1 − R_I. The quantity 𝒢_V(λ) discussed by Gayley & Owocki (2015) is easily recovered from our CSV profiles using the transformation

\[ \mathcal{G}_V(\lambda) = \frac{W_I}{1 - R_I}\, G_V(\lambda). \tag{19} \]

The corresponding velocity-resolved mean longitudinal magnetic field ⟨B_z⟩(λ), which can be obtained by dividing 𝒢_V(λ) by C_Z \bar{g}, has the units of magnetic field strength.

The longitudinal magnetic field density B_z(λ) and the velocity-resolved longitudinal field ⟨B_z⟩(λ) derived from the CSV profiles represent morphologically simpler observables compared to the Stokes V profiles themselves. They characterise the line-of-sight magnetic field averaged along the stripes of constant Doppler shift, thus providing a more intuitive representation of the information content of the Stokes V spectra. Although the observables ⟨B_z⟩(λ) and B_z(λ) are closely related, the different normalisation choices lead to a somewhat different behaviour. The velocity-resolved mean longitudinal field closely tracks the actual local line-of-sight magnetic field component. The longitudinal field density includes an additional weighting by the projected surface area and therefore gradually diminishes to zero towards the line profile edges even if the underlying B_z distribution is uniform. The ⟨B_z⟩(λ) quantity compensates the changing projected area by the 1 − R_I normalisation but instead suffers from major noise artefacts at the profile edges due to the division of two small numbers. Compared to B_z(λ), this quantity may also be adversely affected by distortions of the Stokes I profile shape caused by stellar surface inhomogeneities.

Equation (17) can be considered as a convolution integral with a kernel given by the residual Stokes I profile.
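To make these definitions concrete, the following minimal numerical sketch evaluates Eqs. (14), (15), (17) and (18) for profiles sampled on a discrete wavelength grid. The function and variable names, as well as the trapezoidal quadrature, are illustrative assumptions of this sketch, not the implementation used in the paper.

# A minimal numerical sketch of Eqs. (14), (15), (17) and (18), assuming
# uniform-grid arrays for the observed profiles. Not the paper's own code.
import numpy as np

def csv_observables(wave, stokes_i, stokes_v, lambda0, lande):
    """Return G_V(lambda), B_z(lambda) and the mean longitudinal field <B_z>.

    wave     : wavelength grid in Angstrom
    stokes_i : normalised intensity profile R_I
    stokes_v : normalised circular polarisation profile R_V
    """
    c_z = 4.6686e-13 * lambda0 ** 2        # Zeeman constant C_Z, Eq. (6)
    w_i = np.trapz(1.0 - stokes_i, wave)   # equivalent width W_I, Eq. (15)
    # Cumulative (blue-to-red) integral of R_V, Eq. (14)
    g_v = np.array([np.trapz(stokes_v[: k + 1], wave[: k + 1])
                    for k in range(len(wave))]) / w_i
    bz_density = g_v / (c_z * lande)       # B_z(lambda) in G/Angstrom, Eq. (17)
    # Mean longitudinal field from the first moment of Stokes V, Eq. (18)
    bz_mean = -np.trapz((wave - lambda0) * stokes_v, wave) / (c_z * lande * w_i)
    return g_v, bz_density, bz_mean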
The larger the ratio of ∆λ_R to the local profile width ∆λ_l, the less susceptible is B_z(λ) to the cancellation of polarisation signatures corresponding to different field polarities. Therefore, the new magnetic observable will be most effective for rapid rotators, since in this case ∆λ_R ≫ ∆λ_l.

Based on these definitions, we carried out an illustrative calculation of the B_z(λ) profiles for a magnetic field distribution comprising several circular spots with a radial field strength of 0.5-1.0 kG (see Fig. 1). The calculations assumed a Gaussian shape with FWHM = 8 km s⁻¹ for the local Stokes I profile. The adopted central wavelength and effective Landé factor, λ_0 = 5630 Å and \bar{g} = 1.215, corresponded to the mean values of the LSD line mask used for V410 Tau (see Sect. 3). The star was assumed to rotate with v_e sin i = 50 km s⁻¹ and to be inclined by i = 60° with respect to the line of sight.

Figure 1 shows the spherical maps of the radial magnetic field as well as the Stokes V and CSV profiles for three different rotational phases. When the surface magnetic field distribution is dominated by one of the magnetic polarities, the Stokes V profile exhibits the classical S-shape pattern. The corresponding CSV profiles show a single bump, which can be either positive or negative, depending on the dominant magnetic field polarity. On the other hand, when several regions of different polarity are present on the stellar disk (e.g. the middle panel in Fig. 1), the Stokes V profile exhibits multiple components, which usually cannot be interpreted in terms of the surface magnetic field distribution without carrying out a detailed modelling. In contrast, the CSV profiles for the same rotational phase are simpler and readily show spots of different polarity at the longitudes corresponding to their Doppler shift within the spectral line profile.

2.2. Deconvolved longitudinal magnetic field density

A simplified form of Eq. (17) can be obtained by assuming that the local residual intensity profile is represented by a Gaussian function which does not vary across the stellar surface. This approximation, together with the weak-field assumption, is commonly made in the context of modelling stellar circular polarisation spectra (e.g. Petit et al. 2004; Petit & Wade 2012). Then, omitting λ_0, the normalised residual profile becomes

\[ \frac{r_I(\lambda, x)}{W_I} = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( - \frac{(\lambda - \Delta\lambda_R x)^2}{2\sigma^2} \right). \tag{20} \]

Since the kernel described by this equation is known once σ and v_e sin i are specified, it is possible to define the deconvolved longitudinal magnetic field density

\[ \tilde{B}_z(\lambda) = \tilde{B}_z(\Delta\lambda_R x) = \int_{-\sqrt{1-x^2}}^{+\sqrt{1-x^2}} B_z(x,y)\, i_c(x,y)\, \mathrm{d}y, \tag{21} \]

in which the effect of averaging over the line profile width is taken out. In practice, \tilde{B}_z(λ) can be obtained from the observed B_z(λ) with the help of a suitable deconvolution algorithm. For the interpretation of \tilde{B}_z(λ) it can also be assumed that the centre-to-limb variation of the continuum intensity is described by a linear limb-darkening law,

\[ i_c(x,y) = \frac{3(1 - \varepsilon + \varepsilon\mu)}{\pi(3 - \varepsilon)}, \tag{22} \]

where ε is a limb-darkening coefficient and µ ≡ \sqrt{1 - x^2 - y^2}.

2.3. Dynamic CSV profiles

A two-dimensional plot of the Stokes I profile variability pattern as a function of wavelength (or velocity) and rotational phase is commonly used to assess the distribution of stellar surface inhomogeneities. Although similar plots of the Stokes V profile variation are also occasionally published (Donati et al. 1999,
Dynamic CSV profiles

A two-dimensional plot of the Stokes I profile variability pattern as a function of wavelength (or velocity) and rotational phase is commonly used to assess the distribution of stellar surface inhomogeneities. Although similar plots of the Stokes V profile variation are also occasionally published (Donati et al. 1999, 2006), they could not be directly employed for obtaining information about the stellar surface magnetic field distributions. The dynamic cumulative Stokes V profiles, on the other hand, allow one to trace individual magnetic spots and characterise the stellar magnetic field geometry.

An example of the dynamic CSV profiles is shown in Fig. 2. This figure presents polarisation spectra and the corresponding B_z profiles for the model magnetic field distribution discussed in Sect. 2.1. The longitudinal positions and polarities of the four magnetic spots can be directly read out from the dynamic CSV plot. The latitudes of the spots can be deduced from the velocity span of their signatures in the B_z profile.

Deriving CSV profiles from noisy data

The cumulative Stokes V profiles exhibit non-trivial noise properties. On the one hand, since the CSV spectra are computed by integrating the Stokes V signatures, one may expect some cancellation of the random noise. On the other hand, the resulting noise in the CSV profiles themselves is highly correlated, leading to a qualitatively different behaviour compared to the initial Stokes V profiles. These special noise properties have to be considered when applying the CSV diagnostic to real observational data.

We investigated the noise properties of the CSV profiles with several sets of Monte-Carlo simulations. First, we considered the model Stokes V spectra corresponding to the four-spot magnetic field distribution discussed in Sect. 2.1. The mean peak-to-peak amplitude of these circular polarisation profiles is ≈ 2 × 10⁻³. The profiles were sampled with a step of 1 km s⁻¹, yielding about 100 spectral points across the line. Random, normally-distributed noise with σ = 2 × 10⁻⁴ was added to these profiles. Typical CSV signatures resulting from the forward integration according to Eq. (14) are presented in Fig. 3. It is evident that, while the CSV spectra appear to have less random noise compared to the initial Stokes V profiles, they suffer from a systematic, ramping deviation from the zero line, which increases as the integration proceeds from blue to red. The bottom panel of Fig. 3 illustrates the B̃_z CSV observable normalised by the residual profile intensity 1 − R_I according to Eq. (19). The devastating impact of the noise at the profile edges is apparent.

It is possible to define the CSV profiles using an alternative, backward, red-to-blue integration scheme

G_V^−(λ) = −(1/W_I) ∫_{λ}^{λ_max} R_V(λ′) dλ′. (23)

This formula gives results identical to Eq. (14) in the absence of noise. When noise is present, it will lead to a systematic discrepancy similar to the one illustrated in Fig. 3, but increasing towards the blue side of the line. The overall systematic deviation from zero can be minimised by a weighted mean of the forward and backward integrations,

G_V(λ) = (1/W_I) [ w_+ ∫_{λ_min}^{λ} R_V(λ′) dλ′ − w_− ∫_{λ}^{λ_max} R_V(λ′) dλ′ ], (24)

where the weights w_± are given by

w_+ = (λ_max − λ)/(λ_max − λ_min) and w_− = (λ − λ_min)/(λ_max − λ_min). (25)

The correction expressed by these equations is equivalent to subtracting from the original CSV profile G_V(λ) a straight line going through 0 at λ_min and G_V(λ_max) at λ_max. Application of Eqs. (24)-(25) still results in a substantially higher reduced χ² than found for the original Stokes V profiles corrupted by noise.
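A short sketch of the backward integration and the weighted-mean correction of Eqs. (23)-(25) follows; the grid, equivalent width, and noise level are assumed values. The closing assertion verifies numerically that the correction coincides with subtracting the straight line described above.

```python
import numpy as np

def cumtrapz(y, x):
    dl = np.diff(x)
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dl)))

def csv_weighted(wl, RV, WI):
    """Weighted mean of forward and backward integrations, Eqs. (24)-(25)."""
    fwd = cumtrapz(RV, wl)                      # int_{wl_min}^{wl} R_V
    bwd = fwd[-1] - fwd                         # int_{wl}^{wl_max} R_V, cf. Eq. (23)
    w_plus = (wl[-1] - wl) / (wl[-1] - wl[0])
    w_minus = (wl - wl[0]) / (wl[-1] - wl[0])
    return (w_plus * fwd - w_minus * bwd) / WI

rng = np.random.default_rng(0)
wl = np.linspace(0.0, 1.0, 101)
WI, sigma = 0.1, 2e-4                           # assumed W_I; noise level as above
RV = rng.normal(0.0, sigma, wl.size)            # pure-noise realisation

# check: the correction equals the forward CSV minus a straight line
fwd = cumtrapz(RV, wl) / WI
line = fwd[-1] * (wl - wl[0]) / (wl[-1] - wl[0])
assert np.allclose(csv_weighted(wl, RV, WI), fwd - line)
```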
Using 10⁶ realisations of normally-distributed random noise in the absence of any background signal, we built a cumulative distribution function for the χ² values deduced by performing integration according to Eq. (24) with the associated error propagation. As shown in the upper panel of Fig. 4, integration leads to substantially larger χ² values compared to the initial Stokes V profiles. For example, the 10⁻³ probability threshold corresponds to χ²_ν ≈ 6 for the CSV profiles compared to χ²_ν ≈ 1.4 for the initial profiles.

Based on this empirical CDF of the χ²_ν of CSV profiles, we estimated the fraction of such profiles that will be detected above the noise level with a false alarm probability of 10⁻³ or smaller for the four-spot magnetic field geometry sampled at 100 equidistant rotational phases. These MC simulations were carried out for noise amplitudes in the range from 2 × 10⁻⁴ to 2 × 10⁻³, i.e. 10-100% of the mean peak-to-peak Stokes V profile amplitude. Simulations were repeated 10⁴ times for each value of the noise amplitude. As can be seen from the lower panel of Fig. 4, the detection of magnetic field signatures using CSV profiles is less successful than with the original Stokes V spectra. Therefore, in terms of mere field detection, the CSV diagnostic technique does not represent a useful alternative to the widely used χ²-based Stokes V profile detection method.

Observational data

For the purpose of testing the CSV diagnostic method on real data we analysed high-resolution spectropolarimetric observations available from the Polarbase archive (Petit et al. 2014). This database provides reduced, calibrated, one-dimensional Stokes parameter spectra obtained with ESPaDOnS at the CFHT and Narval at the TBL of the Pic du Midi observatory. Both instruments are fibre-fed, thermally stabilised, echelle spectropolarimeters covering the wavelength range from 3694 to 10483 Å in a single exposure at a resolving power of λ/Δλ = 65 000.

As an example of the Stokes V profiles of a cool active star with a complex magnetic field we considered spectropolarimetric observations of the weak-line T Tauri star V410 Tau. This data set, discussed in the papers by Skelly et al. (2010) and Rice et al. (2011), comprises 45 Stokes V observations obtained in the span of about two weeks in January 2009. The signal-to-noise ratio of these spectra, S/N ≈ 150, is insufficient for the detection and analysis of circular polarisation signatures in individual spectral lines. Therefore, we applied the least-squares deconvolution approach to each spectrum, combining about 4400 spectral lines into a single high S/N ratio profile. The data were phased with the ephemeris of Stelzer et al. (2003).

We also analysed the Stokes V observations of the He-strong Bp star HD 37776. This object is known to possess a complex magnetic field, strongly deviating from an oblique dipolar geometry (Kochukhov et al. 2011). There are 27 Stokes V spectra of HD 37776, with a typical S/N ratio of 450, available in Polarbase. These data were acquired over the period 2006-2012. We used the variable-period ephemeris of Mikulášek et al. (2008) to compute rotational phases. Since the magnetic field of HD 37776 is very strong, circular polarisation signatures are readily observable in individual spectral lines. In this paper we study the He i 6678 Å line. Its circular polarisation shows a variability pattern very similar to that of the He i 5876 Å line analysed by Kochukhov et al. (2011) using lower quality data.
Examples of the CSV diagnostic

Weak-line T Tau star V410 Tau

The young rapidly rotating star V410 Tau (HD 283518) exhibits ample signs of surface magnetic activity and was targeted by a number of temperature DI studies (Hatzes 1995; Rice & Strassmeier 1996; Rice et al. 2011). These analyses revealed a persistent large polar spot and evolving temperature inhomogeneities at lower latitudes. A magnetic field has also been detected on V410 Tau. Skelly et al. (2010) reconstructed the surface magnetic field topology of this star with ZDI, finding a significant toroidal magnetic component and a complex radial field distribution. On the other hand, Carroll et al. (2012) reported a relatively simple, predominantly poloidal magnetic field structure consisting of two large polar magnetic spots of opposite polarity. The Stokes V spectropolarimetric observations interpreted by Skelly et al. (2010) and Carroll et al. (2012) were obtained with the twin instruments (ESPaDOnS and Narval) and at practically the same time, meaning that the discrepant magnetic inversion results cannot be ascribed to an intrinsic variation of the surface magnetic field of V410 Tau.

Here we use the combined ESPaDOnS and Narval spectropolarimetric data set to reconstruct independent maps of the magnetic field topology and brightness distribution. The inversion methodology applied to V410 Tau is described in detail by Kochukhov et al. (2014). Briefly, the magnetic field of the star is represented in terms of a superposition of poloidal and toroidal harmonic terms with angular degrees up to ℓ = 15. A penalty function prohibits unnecessary contributions of the high-order harmonic modes. A separate regularisation procedure applied to the brightness map minimises any deviation from the reference (photospheric) brightness value. We employed the analytical Unno-Rachkovsky Stokes parameter profiles (e.g. Landi Degl'Innocenti & Landolfi 2004) to approximate the local intensity and circular polarisation spectra. The brightness and magnetic field maps were recovered self-consistently from the LSD Stokes I and V profiles, adopting i = 60° and v_e sin i = 74 km s⁻¹.

The LSD profile fits and the resulting brightness and magnetic field maps are presented in Fig. 5. The surface distributions which we obtain are generally compatible with the maps published by Skelly et al. (2010). Similar to these authors, we find a dominant dark polar spot and numerous small-scale brightness inhomogeneities at lower latitudes. The poloidal and toroidal field components contribute 63% and 37%, respectively, to the total magnetic field energy. The field strength reaches ∼ 1 kG locally on the stellar surface. The azimuthal field exhibits two large unipolar regions while the radial field shows a more complex structure.

Fig. 5. Results of the ZDI analysis of V410 Tau. The four panels on the left side of the figure compare the observed (histogram) and theoretical (solid line) LSD Stokes I and V profiles. In these plots the spectra corresponding to different rotational phases are shifted vertically. The phase is indicated to the right of each profile. Reconstructed maps of the radial, meridional and azimuthal magnetic field components as well as the brightness distribution are presented in the right column. The star is shown using the flattened polar projection between latitudes −60° and +90°. The thick circle corresponds to the rotational equator.
The colour bars give the field strength in kG and the brightness relative to the photospheric value.

The darkest areas in the brightness map do not coincide with any particular magnetic field features. We have also verified that a very similar magnetic field geometry is obtained if one performs a direct ZDI reconstruction of each magnetic field component without using the spherical harmonic formalism.

Based on the magnetic inversion results, we calculated the CSV profiles using both the observed and the ZDI model Stokes I and V spectra of V410 Tau. Figure 6 compares profiles obtained by applying Eq. (14), Eq. (23), and the weighted mean of the forward and backward integrations given by Eq. (24). It is clear that, in the first and second cases (Figs. 6a,b), a significant systematic deviation between the observed and theoretical CSV profiles appears due to noise accumulation. On the other hand, systematic effects are largely brought under control in the third set of CSV profiles (Fig. 6c). Figure 6d illustrates the impact of employing the 1 − R_I normalisation in place of the W_I normalisation used elsewhere. The B̃_z profiles shown in this panel directly provide the average line-of-sight magnetic field component at different longitudes on the stellar surface. However, these profiles, while morphologically similar to the B_z spectra, are visibly distorted by noise artefacts at the line edges.

We tested the accuracy of the CSV diagnostic with the model LSD profiles corresponding to the magnetic field map shown in Fig. 5. For clarity we disregarded the non-uniform brightness distribution and considered the line profiles for 10 equidistant rotational phases. The CSV profiles were computed from the simulated observations using Eq. (14) and by performing a numerical integration of the line-of-sight magnetic field component according to the right-hand side of Eq. (17). In these calculations we used the same local non-magnetic r_I profile as was adopted for the ZDI inversions. As expected, the two sets of CSV profiles agree perfectly (Fig. 7a).

The CSV method is based on the weak-field approximation, which is usually considered to be valid only for magnetic field strengths below ∼ 1 kG. It is useful to assess the usefulness of the cumulative Stokes V profiles for much stronger magnetic fields. To this end, we scaled all the vector components of the ZDI map of V410 Tau by a factor of 10. This increased the mean field strength from 570 G to 5.7 kG. We then repeated the direct and geometrical evaluation of the CSV profiles. The resulting spectra are compared in Fig. 7b. The two sets of B_z profiles show only marginal discrepancies even though the characteristic magnetic field strength now significantly exceeds 1 kG. Therefore, it appears that the CSV diagnostic is usable well beyond the nominal limit of the weak-field approximation.

Using the ZDI maps of V410 Tau we calculated the variation of the Stokes parameter profiles with a dense rotational phase sampling. The resulting dynamic circular polarisation and CSV spectra are presented in Fig. 8. Unlike Fig. 2, it shows a stationary pattern (phases 0.0-0.3) in addition to features travelling across the stellar disk (e.g. phases 0.6-0.8). The travelling features are associated with the radial field spots, in particular the negative-polarity spot best visible at rotational phase 0.75. The stationary features correspond to the large unipolar azimuthal field regions. Such behaviour of the dynamic CSV profiles can be considered as evidence for the presence of a significant toroidal magnetic field on the stellar surface.
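The "geometrical" evaluation used for the consistency test in Fig. 7, i.e. the right-hand side of Eq. (17) with the Gaussian kernel of Eq. (20), amounts to a direct integration over the visible disk. The sketch below illustrates this; the field map, spot parameters, and grids are placeholders and not the V410 Tau model.

```python
import numpy as np

def bz_density_geometric(wl, Bz_map, xg, yg, ic, dlr, sigma):
    """B_z(lambda): disk integral of B_z * i_c weighted by r_I/W_I, Eq. (17)."""
    visible = (xg ** 2 + yg ** 2) <= 1.0
    dA = (xg[0, 1] - xg[0, 0]) * (yg[1, 0] - yg[0, 0])
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    out = np.zeros_like(wl)
    for i, l in enumerate(wl):
        kern = norm * np.exp(-(l - dlr * xg) ** 2 / (2.0 * sigma ** 2))
        out[i] = np.sum(Bz_map * ic * kern * visible) * dA
    return out

n = 201
xg, yg = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
mu = np.sqrt(np.clip(1.0 - xg ** 2 - yg ** 2, 0.0, None))
ic = 3.0 * (1.0 - 0.5 + 0.5 * mu) / (np.pi * (3.0 - 0.5))       # Eq. (22), eps = 0.5
Bz_map = 1000.0 * np.exp(-((xg - 0.3) ** 2 + yg ** 2) / 0.05)   # one toy spot [G]
wl = np.linspace(-1.5, 1.5, 121)
profile = bz_density_geometric(wl, Bz_map, xg, yg, ic, dlr=0.94, sigma=0.06)
```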
He-strong Bp star HD 37776

The B-type magnetic chemically peculiar star HD 37776 (V901 Ori) is known for its unusually complex, double-wave longitudinal magnetic field variation (Thompson & Landstreet 1985). This behaviour indicates a large deviation of the stellar surface magnetic field topology from an oblique dipolar geometry. In the literature this object is often considered to be an example of a star with a quadrupolar magnetic field structure. Indeed, Bohlender (1994) and Khokhlova et al. (2000) derived quadrupolar magnetic geometry models for HD 37776 based on fitting the longitudinal field curve and the Stokes V profile variation, respectively. However, using a more extensive set of phase-resolved spectropolarimetric observations, Kochukhov et al. (2011) showed that the circular polarisation spectra of HD 37776 cannot be fitted with an axisymmetric quadrupolar field structure. Instead, the ZDI inversion carried out in that study suggested a complex field distribution comprising a series of magnetic spots with alternating positive and negative polarity.

The high-resolution circular polarisation spectra of HD 37776 analysed here enable a straightforward verification of the ZDI results of Kochukhov et al. (2011) using the cumulative Stokes V profile diagnostic method. Figure 9 shows the Stokes V and CSV spectra centred on the He i 6678 Å line. The dynamic CSV profiles are dominated by a series of travelling features. There is no evidence of stationary structures like in the case of V410 Tau, indicating that no strong toroidal magnetic field is present. Furthermore, a careful examination of the CSV profiles allows one to identify three spots with negative polarity and the same number of positive-field features. This corresponds very well to the six well-defined spots seen in the radial magnetic field map recovered with ZDI (see Fig. 5 of Kochukhov et al. 2011). On the other hand, considering the dynamic CSV profiles in Fig. 9, one can immediately rule out the quadrupolar field hypothesis. A quadrupolar magnetic field geometry with a large obliquity, as required to reproduce the ⟨B_z⟩ variation of HD 37776, exhibits four regions of alternating field polarity along the stellar rotational equator. But Fig. 9 shows at least six regions.

Summary and discussion

In this paper we examined the information content of the cumulative circular polarisation profiles and assessed their usefulness for the analysis of stellar surface magnetic fields. This work represents a further development of the idea put forward by Gayley & Owocki (2015). Starting with the weak-field approximation, we gave a definition of the new magnetic observable as a normalised integral of the Stokes V spectrum and showed its relation to the underlying surface magnetic field structure. The transformation from the Stokes V to the CSV profiles greatly simplifies the morphology of polarisation spectra and enables a direct and intuitive identification of the regions of different field polarity on the stellar surface. The CSV observable characterises the line-of-sight magnetic field component weighted by the continuum intensity and by the Doppler-shifted local residual Stokes I profile. The CSV spectrum thus provides a velocity-resolved measure of the stellar longitudinal magnetic field.
Consequently, this observable is most useful for rapidly rotating stars with complex magnetic field topologies.

Using theoretical circular polarisation profile calculations and real observational data, we assessed the visibility of surface magnetic field features in the dynamic CSV profiles. Unlike the dynamic Stokes V spectra, which can hardly be interpreted directly, the dynamic CSV profiles enable a qualitative analysis of the magnetic star spot distributions.

With the help of Monte Carlo simulations we studied the effect of random observational noise on the CSV profiles. The noise signatures in the CSV spectra are highly correlated, leading to a systematic ramping offset from a noise-free version of the line profiles. We showed that this problem can be minimised using a weighted mean of the CSV profiles computed with the forward and backward integration of the original Stokes V data. Nevertheless, the presence of correlated noise in the CSV profiles generally makes them inferior, for the purpose of field detection, to the original Stokes V spectra.

With the goal of testing the CSV magnetic field diagnostic methodology, we studied archival high-resolution circular polarisation observations of the cool active star V410 Tau and the hot magnetic star HD 37776. Both objects are rapid rotators with complex surface magnetic field topologies.

For the weak-line T Tauri star V410 Tau we obtained brightness and magnetic field distributions with the help of ZDI modelling of the least-squares deconvolved Stokes I and V profiles. The resulting magnetic map bears a close resemblance to the magnetic field topology obtained by Skelly et al. (2010) from the same data. However, our inversion results contradict the magnetic field map published by Carroll et al. (2012), whose data set was also included in our analysis. Our ZDI modelling accounted for the effect of an inhomogeneous brightness distribution on the Stokes V profiles. We have also verified that very similar magnetic field maps are obtained using the direct, pixel-based ZDI and the magnetic inversions relying on the spherical harmonic parameterisation of the stellar magnetic field. Therefore, these two aspects are unlikely to be responsible for the discrepancies between the ZDI maps presented here and in Skelly et al. (2010) on the one hand and in Carroll et al. (2012) on the other. It is more likely that this disagreement is rooted in the differences between the LSD (used here and by Skelly et al. 2010) and PCA (used by Carroll et al. 2012) mean polarisation profiles. A comprehensive comparative study of the LSD and PCA line-addition techniques is required to address this problem.

The dynamic CSV profiles of V410 Tau indicate the presence of a strong azimuthal magnetic field component on the stellar surface. This agrees with our ZDI results and suggests a non-negligible contribution of the toroidal magnetic field. We also used the ZDI map of V410 Tau to verify the interpretation of the integral of Stokes V profiles in terms of the disk-averaged, weighted line-of-sight magnetic field component. These calculations were repeated for a scaled-up version of the stellar magnetic field geometry, demonstrating that the usefulness of the CSV diagnostic extends far beyond the nominal limit of the weak-field approximation.

For the magnetic Bp star HD 37776 we analysed the CSV dynamic spectrum of the He i 6678 Å line.
The variation of its CSV profiles is clearly inconsistent with the quadrupolar magnetic field topology frequently suggested for this star. On the other hand, the multiple magnetic spots which can be identified in the dynamic CSV spectra of HD 37776 are consistent with the surface magnetic features in the ZDI map of this star derived by Kochukhov et al. (2011). The CSV analysis thereby confirms the complex, non-quadrupolar nature of the magnetic field topology of HD 37776.

Fig. 1. Model Stokes V (upper curves) and cumulative Stokes V (lower curves) profiles for a star with several large circular magnetic spots of different polarities. The line profiles and the spherical maps of the radial magnetic field component are shown for three different rotational phases. The phases are indicated above each line profile panel.

Fig. 2. Theoretical dynamic Stokes V (left panel) and cumulative Stokes V (right panel) profiles corresponding to the radial magnetic field maps shown in Fig. 1.

Fig. 3. Effect of random noise with σ = 2 × 10⁻⁴ on the Stokes V (upper panel) and CSV profiles with W_I (middle panel) and 1 − R_I (bottom panel) normalisations. The thick double line shows the noise-free calculations corresponding to the rotational phase 0.125 of the model Stokes profiles discussed in Sect. 2.1. The thin lines represent profiles for different noise realisations. The B_z and B̃_z profiles are obtained with the forward integration according to Eq. (14).

Fig. 4. Upper panel: cumulative distribution of χ²_ν values for a set of independent, normally-distributed random variables (solid line) and for the χ²_ν corresponding to G_V(λ) given by Eq. (24) (dashed line). Lower panel: fraction of detections of magnetic field signatures in the Stokes V (solid line) and CSV profiles (dashed line) as a function of the random noise added to the model Stokes profiles.

Fig. 6. Cumulative Stokes V profiles of V410 Tau computed from the observed (histogram) and theoretical (thick solid line) LSD circular polarisation spectra of V410 Tau shown in Fig. 5. The CSV profiles are shifted vertically according to their rotational phases, similar to the Stokes V profiles in Fig. 5. The first three panels show the CSV spectra computed with a) forward integration, b) backward integration, and c) the weighted mean of the backward and forward integration. The last panel d) shows the result of converting the CSV profiles in panel c) to the 1 − R_I normalisation according to Eq. (19).

Fig. 7. Theoretical Stokes I, V and CSV profiles corresponding to a) the magnetic field map of V410 Tau shown in Fig. 5 and b) the same map scaled by a factor of 10. In each case the CSV column compares B_z(λ) obtained from the Stokes I and V line profiles (solid line) with the results of numerical integration of the line-of-sight magnetic field component over the stellar disk (dashed line).

Fig. 8. Theoretical dynamic Stokes V (left panel) and cumulative Stokes V (right panel) profiles corresponding to the ZDI map of V410 Tau shown in Fig. 5.

Fig. 9. Observed dynamic Stokes V (left panel) and cumulative Stokes V (right panel) profiles of the He i 6678 Å line in the Bp star HD 37776.

Acknowledgements. This research is supported by grants from the Knut and Alice Wallenberg Foundation, the Swedish Research Council, and the Swedish National Space Board. The author thanks Dr. K. Gayley for helpful discussions of the CSV diagnostic method and Dr. J. Silvester for critical reading of the manuscript.
References

Aurière, M., Wade, G. A., Silvester, J., et al. 2007, A&A, 475, 1053
Bagnulo, S., Landi Degl'Innocenti, M., Landolfi, M., & Mathys, G. 2002, A&A, 394, 1023
Barnes, J. R., Collier Cameron, A., James, D. J., & Donati, J. 2000, MNRAS, 314, 162
Bohlender, D. A. 1994, in IAU Symposium, Vol. 162, Pulsation, Rotation, and Mass Loss in Early-Type Stars, ed. L. A. Balona, H. F. Henrichs, & J. M. Le Contel, 155-166
Brown, S. F., Donati, J.-F., Rees, D. E., & Semel, M. 1991, A&A, 250, 463
Carroll, T. A. & Strassmeier, K. G. 2014, A&A, 563, A56
Carroll, T. A., Strassmeier, K. G., Rice, J. B., & Künstler, A. 2012, A&A, 548, A95
Donati, J.-F. & Brown, S. F. 1997, A&A, 326, 1135
Donati, J.-F., Collier Cameron, A., Hussain, G. A. J., & Semel, M. 1999, MNRAS, 302, 437
Donati, J.-F., Forveille, T., Cameron, A. C., et al. 2006, Science, 311, 633
Donati, J.-F., Semel, M., Carter, B. D., Rees, D. E., & Collier Cameron, A. 1997, MNRAS, 291, 658
Gayley, K. G. & Owocki, S. P. 2015, in IAU Symposium, Vol. 307, ed. G. Meynet, C. Georgy, J. H. Groh, & P. Stee, 375-376
Hatzes, A. P. 1995, ApJ, 451, 784
Hubrig, S., North, P., & Schöller, M. 2007, Astronomische Nachrichten, 328, 475
Johns-Krull, C. M. & Valenti, J. A. 1996, ApJ, 459, L95
Khokhlova, V. L., Vasilchenko, D. V., Stepanov, V. V., & Romanyuk, I. I. 2000, Astronomy Letters, 26, 177
Kochukhov, O., Bagnulo, S., Wade, G. A., et al. 2004, A&A, 414, 613
Kochukhov, O., Lüftinger, T., Neiner, C., Alecian, E., & MiMeS Collaboration. 2014, A&A, 565, A83
Kochukhov, O., Lundin, A., Romanyuk, I., & Kudryavtsev, D. 2011, ApJ, 726, 24
Kochukhov, O., Makaganiuk, V., & Piskunov, N. 2010, A&A, 524, A5
Kochukhov, O., Mantere, M. J., Hackman, T., & Ilyin, I. 2013, A&A, 550, A84
Kochukhov, O. & Piskunov, N. 2002, A&A, 388, 868
Kochukhov, O., Tsymbal, V., Ryabchikova, T., Makaganyk, V., & Bagnulo, S. 2006, A&A, 460, 831
Kochukhov, O. & Wade, G. A. 2010, A&A, 513, A13
Landi Degl'Innocenti, E. & Landolfi, M. 2004, Astrophysics and Space Science Library, Vol. 307, Polarization in Spectral Lines (Kluwer Academic Publishers)
Landstreet, J. D. & Mathys, G. 2000, A&A, 359, 213
Marsden, S. C., Petit, P., Jeffers, S. V., et al. 2014, MNRAS, 444, 3517
Mathys, G. 1991, A&AS, 89, 121
Mathys, G. 1995, A&A, 293, 733
Mathys, G., Hubrig, S., Landstreet, J. D., Lanz, T., & Manfroid, J. 1997, A&AS, 123, 353
Mikulášek, Z., Krtička, J., Henry, G. W., et al. 2008, A&A, 485, 585
Petit, P., Donati, J., Wade, G. A., et al. 2004, MNRAS, 348, 1175
Petit, P., Lignières, F., Aurière, M., et al. 2011, A&A, 532, L13
Petit, P., Louge, T., Théado, S., et al. 2014, PASP, 126, 469
Petit, V. & Wade, G. A. 2012, MNRAS, 420, 773
Piskunov, N. & Kochukhov, O. 2002, A&A, 381, 736
Reiners, A. & Basri, G. 2007, ApJ, 656, 1121
Rice, J. B. & Strassmeier, K. G. 1996, A&A, 316, 164
Rice, J. B., Strassmeier, K. G., & Kopf, M. 2011, ApJ, 728, 69
Rosén, L. & Kochukhov, O. 2012, A&A, 548, A8
Shulyak, D., Reiners, A., Seemann, U., Kochukhov, O., & Piskunov, N. 2014, A&A, 563, A35
Skelly, M. B., Donati, J.-F., Bouvier, J., et al. 2010, MNRAS, 403, 159
Stelzer, B., Fernández, M., Costa, V. M., et al. 2003, A&A, 411, 517
Thompson, I. B. & Landstreet, J. D. 1985, ApJ, 289, L9
Wade, G. A., Donati, J.-F., Landstreet, J. D., & Shorlin, S. L. S. 2000, MNRAS, 313, 823
[]
[ "Hierarchical memories: Simulating quantum LDPC codes with local gates", "Hierarchical memories: Simulating quantum LDPC codes with local gates", "Hierarchical memories: Simulating quantum LDPC codes with local gates", "Hierarchical memories: Simulating quantum LDPC codes with local gates" ]
[ "Christopher A Pattison \nInstitute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCA\n", "Anirudh Krishna \nDepartment of Computer Science\nStanford University\n94305StanfordCA\n\nStanford Institute for Theoretical Physics\nStanford University\n94305StanfordCA\n", "John Preskill \nInstitute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCA\n\nAWS Center for Quantum Computing\n91125PasadenaCA\n", "Christopher A Pattison \nInstitute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCA\n", "Anirudh Krishna \nDepartment of Computer Science\nStanford University\n94305StanfordCA\n\nStanford Institute for Theoretical Physics\nStanford University\n94305StanfordCA\n", "John Preskill \nInstitute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCA\n\nAWS Center for Quantum Computing\n91125PasadenaCA\n" ]
[ "Institute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCA", "Department of Computer Science\nStanford University\n94305StanfordCA", "Stanford Institute for Theoretical Physics\nStanford University\n94305StanfordCA", "Institute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCA", "AWS Center for Quantum Computing\n91125PasadenaCA", "Institute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCA", "Department of Computer Science\nStanford University\n94305StanfordCA", "Stanford Institute for Theoretical Physics\nStanford University\n94305StanfordCA", "Institute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCA", "AWS Center for Quantum Computing\n91125PasadenaCA" ]
[]
Constant-rate low-density parity-check (LDPC) codes are promising candidates for constructing efficient fault-tolerant quantum memories. However, if physical gates are subject to geometric-locality constraints, it becomes challenging to realize these codes. In this paper, we construct a new family of [[N, K, D]] codes, referred to as hierarchical codes, that encode a number of logical qubits K = Ω(N/log²(N)). The N-th element H_N of this code family is obtained by concatenating a constant-rate quantum LDPC code with a surface code; nearest-neighbor gates in two dimensions are sufficient to implement the syndrome-extraction circuit C_{H_N} and achieve a threshold. Below threshold, the logical failure rate vanishes superpolynomially as a function of the distance D(N). We present a bilayer architecture for implementing C_{H_N}, and estimate the logical failure rate for this architecture. Under conservative assumptions, we find that the hierarchical code outperforms the basic encoding where all logical qubits are encoded in the surface code.
null
[ "https://export.arxiv.org/pdf/2303.04798v1.pdf" ]
257,405,273
2303.04798
5ac4be33a8264ced9d8e87fcddd7fb62ee7a1744
Hierarchical memories: Simulating quantum LDPC codes with local gates

March 9, 2023

Christopher A. Pattison (Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, CA 91125), Anirudh Krishna (Department of Computer Science and Stanford Institute for Theoretical Physics, Stanford University, Stanford, CA 94305), John Preskill (Institute for Quantum Information and Matter, California Institute of Technology, and AWS Center for Quantum Computing, Pasadena, CA 91125)

Constant-rate low-density parity-check (LDPC) codes are promising candidates for constructing efficient fault-tolerant quantum memories. However, if physical gates are subject to geometric-locality constraints, it becomes challenging to realize these codes. In this paper, we construct a new family of [[N, K, D]] codes, referred to as hierarchical codes, that encode a number of logical qubits K = Ω(N/log²(N)). The N-th element H_N of this code family is obtained by concatenating a constant-rate quantum LDPC code with a surface code; nearest-neighbor gates in two dimensions are sufficient to implement the syndrome-extraction circuit C_{H_N} and achieve a threshold. Below threshold, the logical failure rate vanishes superpolynomially as a function of the distance D(N). We present a bilayer architecture for implementing C_{H_N}, and estimate the logical failure rate for this architecture. Under conservative assumptions, we find that the hierarchical code outperforms the basic encoding where all logical qubits are encoded in the surface code.

Introduction

Quantum error-correcting codes encode quantum information in entangled states over many qubits. They are defined by a set of operators called stabilizer generators. Errors can accumulate in the state due to imperfect control and interactions with the environment. Stabilizer generators can be measured using syndrome-extraction circuits; the outcomes of these measurements are called syndromes, classical information used to infer corrections to these errors. To minimize the probability of corrupting information beyond recovery, it is imperative to minimize the points of failure in the syndrome-extraction circuit. This can be realized by restricting the number of gates that each qubit interacts with and minimizing the total space-time volume of this circuit. The extent to which this can be done depends on the choice of error-correcting code and on physical constraints.

Syndrome-extraction circuits are the workhorse of quantum memories, devices that can reliably store qubits for some fixed duration. (We leave fault-tolerant computation for future work.) In this paper, we are concerned with designing memories that can encode a growing number of qubits and simultaneously have a low probability of failure. We focus on their design when qubits are embedded in a two-dimensional lattice and gates are subject to constraints on geometric locality.

Quantum low-density parity-check (LDPC) codes are natural candidates for constructing quantum memories. A quantum LDPC code refers to a family {Q_n}_n of [[n, k(n), d(n), Δ_q, Δ_g]] codes. This notation means that the n-th element in the family uses n data qubits to encode k = k(n) logical qubits and has distance d = d(n), i.e. it is robust to ⌊(d(n) − 1)/2⌋ Pauli errors.
A quantum LDPC code is one where, for all codes in the code family, every stabilizer generator involves at most a constant number Δ_g of qubits, and each qubit is supported within at most a constant number Δ_q of stabilizer generators. Such codes can encode a number of qubits that increases with the code size; simultaneously, the probability of any error on the encoded level is suppressed exponentially in the distance d(n). Furthermore, the syndrome-extraction circuit C_n can be efficient as measured by two figures of merit. The depth of the syndrome-extraction circuit is the number of time steps T(C_n) it takes to implement. The width of the syndrome-extraction circuit is the total number of qubits W(C_n) it uses (including ancilla qubits in addition to data qubits). The size or volume of the circuit is the product of the depth and the width.

Building on a result by Kovalev and Pryadko [KP13], Gottesman [Got14] constructed fault-tolerant syndrome-extraction circuits whose volume is a constant times the volume of the noise-free syndrome-extraction circuit: there exists a threshold q such that, if gates fail with fixed probability p < q, the probability of the circuit failing falls exponentially in the distance d(n). However, realizing this architecture in a 2-dimensional layout is challenging. It requires high-fidelity gates acting on qubits that may be far apart. Some architectures might not support such interactions.

It is known that geometric locality severely constrains quantum error-correcting codes in 2 and 3 (Euclidean) dimensions. The most famous codes that can be implemented using only geometrically-local gates are surface codes [Kit03, BK98] and color codes [BMD06, KB15]. Seminal results by Bravyi and Terhal [BT09], and later by Bravyi, Poulin and Terhal [BPT10], showed that these codes are optimal among quantum LDPC codes defined using geometrically-local stabilizers. Subsequently, it was shown that to implement LDPC codes where the parameters k and d are both strictly better than those of the surface code, we require a growing amount of long-range connectivity [BK21a, BK21b]. When restricted to using only nearest-neighbor gates in 2 dimensions, Delfosse et al. [DBT21] proved the following tradeoff for syndrome-extraction circuits for constant-rate LDPC codes:

T(C_n) = Ω(n / √(W(C_n))), (1)

where T(C_n) is the depth of the syndrome-extraction circuit and W(C_n) is the total number of qubits, data and ancilla, used in the circuit. In words, this shows that given only nearest-neighbor gates to build a syndrome-extraction circuit for constant-rate LDPC codes, we can choose to minimize either the depth or the width of C_n, but cannot do both.

This sets the stage for presenting the main questions we address in this paper: does the family of circuits saturating Equation (1) still have a threshold? If not, how do we modify the code and associated circuit to achieve a threshold as efficiently as possible? How do we construct the most efficient syndrome-extraction circuits given access to gates whose range is more than merely nearest-neighbor? Can we improve on the bound in Equation (1)?

Our contributions

This paper is centered around the theme of implementing efficient quantum memories. Our main result is that our proposal, called a hierarchical code, has a threshold and that it achieves asymptotically better error suppression than the surface code. As it brings together a few different ideas, we present a short summary of each section and how to navigate the paper.
Although these results build on each other, our presentation is modular: readers ought to be able to proceed to their section of choice after reading this overview and Section 2, where we define all the concepts required to formally state our results. (The statements of the main theorems of each section are only presented informally below.)

Section 3: Permutation routing on graphs

Connectivity beyond nearest-neighbor interactions is being explored in many architectures. There is evidence that some architectures can support gates of range R where R can be large [LLC+19, PCK+21]. Motivated by these developments, we ask: given nearest-neighbor Clifford gates and SWAP gates of range R, can we reduce the depth of the syndrome-extraction circuit for constant-rate LDPC codes? To this end, we will permute qubits to bring them within range to apply an entangling gate. This is expressed as a permutation routing, a task on a graph G = (V, E) specified by a permutation α : V → V. In this task, two vertices labeled u and v connected by an edge (u, v) are allowed to exchange labels. The objective is to ensure that all labels reach their destinations α(u) while minimizing the total time required. Permuting vertices in parallel is non-trivial: the paths along which one permutes different pairs can overlap and thereby require more time.

Section 3.1 reviews a permutation routing algorithm due to Annexstein and Baumslag [AB90]. This algorithm yields a permutation routing on a product of two graphs given permutation routings on each of the input graphs. In Section 3.2, we build on this algorithm to permute vertices on an L × L lattice, where two vertices separated by a distance at most R are connected by an edge, using a sparse subgraph. The main technical result of this section is the following existence result.

Theorem 1.1 (Permutation routing). For R even, there is an efficient construction of a degree-12 graph G = (V, E) whose vertex set V is identified with an L × L lattice, with edges of length at most R. Given a permutation α : V → V, a permutation routing implementing α can be performed in depth 3L/R + O(log²(R)).

While it is itself not the main result of our paper, it will be used in service of proving Theorem 1.2, which demonstrates the existence of efficient syndrome-extraction circuits given SWAP gates of range R. This section is entirely technical and only discusses graph properties and permutation routings.

Section 4 and Section 5: Hierarchical codes & the bilayer architecture

Given access to only nearest-neighbor gates, Delfosse et al. present some evidence against the existence of a threshold if one were to permute qubits to bring them within range to perform a CNOT (see Figure 2 of [DBT21]). In particular, in the setting where W(C_n) = Θ(n) and T(C_n) = Θ(√n), it appears too many errors accumulate before we can complete executing the syndrome-extraction circuit.

We circumvent this problem using code concatenation. We concatenate a constant-rate [[n, k, d, Δ_q, Δ_g]] LDPC code {Q_n} with a [[d_ℓ², 1, d_ℓ]] rotated surface code RS_ℓ to obtain the hierarchical code {H_N} with parameters denoted [[N, K, D]]. This means that each qubit of the syndrome-extraction circuit for the LDPC code Q_n, henceforth referred to as the "outer code", is itself the logical qubit of a rotated surface code RS_ℓ, which we refer to as the "inner code" or sometimes as a "tile."
As a rotated surface code can suppress errors exponentially in d_ℓ, we can suppress errors long enough to complete syndrome measurements of the outer quantum LDPC code using relatively small inner codes. The lattice length d_ℓ of the inner code only scales logarithmically in the size of the outer LDPC code, i.e. d_ℓ = Θ(log(n)). Here ℓ indexes the number of qubits in the rotated surface code, ℓ² = 2d_ℓ² − 1.

Section 4 is dedicated to the construction of syndrome-extraction circuits C_{H_N} corresponding to H_N. The hierarchical code family {H_N} is not LDPC: the stabilizer generators for the outer code act on a number of physical qubits that scales with the size d_ℓ of the inner code. However, local operations are sufficient to implement the corresponding syndrome-extraction circuit C_{H_N}. The main result of this section is summarized in the following theorem.

Theorem 1.2. The [[N, K, D]] hierarchical code H_N is constructed by concatenating an outer code, a constant-rate [[n, k, d, Δ_q, Δ_g]] quantum LDPC code Q_n, and an inner code, a rotated surface code RS_ℓ where d_ℓ = Θ(log(n)). Let ρ > 0 and δ ≥ 1/2, such that k = ρ · n and d = Θ(n^δ). The code H_N has parameters

K(N) = Ω(N / log²(N)), D(N) = Ω(N^δ / log^{2δ−1}(N)).

There exists an explicit and efficient construction of an associated family of syndrome-extraction circuits C_{H_N}, constructed using only local Clifford operations and SWAP gates of range R, such that

W(C_{H_N}) = O(N), T(C_{H_N}) = O(√N / R).

Our construction works for all values of δ > 0; we choose δ ≥ 1/2 to make the theorem statements simpler. Before describing how the circuit C_{H_N} is constructed, we motivate why it is interesting: it has a threshold. We work in a model where errors occur in a stochastic manner. We declare a logical failure if any of the K encoded qubits fail. More generally, we declare failure if any logical error occurs on the code space. The main result of Section 5 is the following theorem.

Theorem 1.3 (informal). Suppose the outer code Q_n has constant rate k = ρn and distance d(n) = Θ(n^δ). If we repeat the syndrome-extraction circuit C_{H_N} for d(n) rounds, then there exists a threshold q ∈ (0, 1] corresponding to C_{H_N} such that, if each gate fails with fixed probability 0 < p < q, then the probability of logical failure under minimum-weight decoding, p_H(N), obeys

p_H(N) ≤ exp(−c_H · N^δ / log^{2δ}(N)),

for some positive number c_H independent of N.

The theorem is only stated informally here because we have not yet defined the noise model with respect to which this result holds. We will consider a locally decaying error model to account for correlated errors that may occur in a circuit. This error model is defined in Section 2.2. Section 5 is dedicated to a proof of the existence of a threshold. We build on Gottesman's proof of the existence of a threshold for syndrome-extraction circuits (Theorem 4 of [Got14]). The central idea is the requirement that the probability of failure for a qubit per round of syndrome extraction, denoted p_round, remains a sufficiently small constant. This is reviewed in Section 2.4. Gottesman's result was based on syndrome-extraction circuits for LDPC codes that have constant depth. As Equation (1) highlights, this is not possible when subject to locality constraints. We study the dependence of p_round on the circuit depth in Section 5.1. In Section 5.3, we show that d_ℓ = Θ(log(n)) is sufficient for C_{H_N} to have a threshold.
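To make the parameter bookkeeping behind Theorems 1.2 and 1.3 concrete, the following sketch computes the concatenated [[N, K, D]] parameters for a given outer code. The circuit-width factor standing in for W(C_n) = Θ(n) and the constant hidden in d_ℓ = Θ(log n) are illustrative assumptions, not values fixed by the construction.

```python
import math

def hierarchical_params(n, k, d, width_factor=2.0, c=1.0):
    """Concatenated [[N, K, D]] parameters in the spirit of Theorem 1.2."""
    d_l = max(3, math.ceil(c * math.log2(n)))  # inner distance, Theta(log n)
    tile = 2 * d_l ** 2 - 1                    # qubits per rotated-surface tile
    W_outer = math.ceil(width_factor * n)      # W(C_n) = Theta(n), assumed factor
    N = W_outer * tile                         # physical qubits used by C_{H_N}
    return N, k, d * d_l, d_l                  # K = k and D = d * d_l

# example: an outer code with n = 1024, k = 102 (rate ~ 0.1), d = 32 (delta = 1/2)
N, K, D, d_l = hierarchical_params(1024, 102, 32)
```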
As we ask to minimize the circuit width W(C_{H_N}) and subject the circuit to locality constraints, we pay a price: in addition to the growing depth, the number of encoded qubits K(N) and the distance D(N) are suppressed by factors of approximately log(N) relative to the outer code Q_n, which has constant rate and distance d(n) = Θ(n^δ). Furthermore, for fixed gate error rates p ∈ [0, 1], the sub-threshold scaling of the logical error rate p_H(N) of {H_N} is subexponential, but superpolynomial, in the distance D(N); for any positive constants α, β, the logical failure probability p_H(N) vanishes faster than any polynomial function N^{−β} but slower than any exponential function exp(−α · N):

p_H(N) / N^{−β} → 0 as N → ∞, p_H(N) / exp(−α · N) → ∞ as N → ∞.

Having motivated why we are interested in H_N, we return to the construction of C_{H_N}. In Section 4, we propose a novel bilayer architecture to implement it. We begin the section by presenting the syndrome-extraction circuit C_n for the constant-rate [[n, k, d, Δ_q, Δ_g]] LDPC code. Physical qubits are arranged in two parallel layers, each a lattice of side length approximately L = Θ(√n). To obtain the syndrome-extraction circuit C_{H_N} for the concatenated code, each of the W qubits in C_{Q_n} is replaced by a rotated surface code. In Section 4.2, we describe how to arrange W = W(C_{Q_n}) surface codes RS_ℓ in a bilayer architecture. Each layer now has side length approximately 2ℓL qubits to accommodate the tiles. An instance of a single layer is shown in Figure 1 (a). We assume access to nearest-neighbor physical Clifford operations and SWAP gates of range R within a layer, and Clifford operations between adjacent qubits in different layers. These physical qubits are aggregated into [[d_ℓ², 1, d_ℓ]] codes RS_ℓ. See Figure 1 (b). There are 2L² tiles in total.

Even though we are only implementing a quantum memory, we still need to understand how to perform a limited set of logical operations on tiles to implement the syndrome-extraction circuit for the outer code. The advantage of the bilayer architecture is that it allows for transversal CNOT and CZ gates to implement logical CNOT and CZ respectively. We propose a new technique to perform logical SWAP operations between tiles. This yields all the logical Clifford operations between tiles required to perform syndrome extraction for the outer code.

We note that the existence of a threshold does not depend on using the bilayer architecture. For example, tiles can be arranged in a single layer and Clifford gates can be implemented via lattice surgery [Lit19, HFDVM12]. For an alternative implementation in the context of measurement-based quantum computation, see [BDM+21]. Although we do not prove it here, it is possible to show that a threshold exists also in this setting using similar techniques.

Figure 1: The bilayer architecture used to implement the syndrome-extraction circuit C_{H_N} for the hierarchical code H_N. (a) represents a single layer of the bilayer architecture. Colored dots represent syndrome qubits and gray dots represent data qubits. Transparent dots represent inactive qubits. At any given time step, the qubits that participate in the circuit are depicted as opaque dots and form a lattice of side length ℓL; its location within the larger lattice can shift relative to the second layer. This is used to facilitate logical Clifford operations. (b) represents parallel tiles of distance d_ℓ. Each tile represents an outer qubit of the hierarchical code construction.
Light gray dots will be used to facilitate Clifford operations but are not used in the syndrome-extraction circuit for RS_ℓ.

The circuits C_{H_N} are constructed such that each lattice position remains connected to a fixed and constant-sized set of other lattice positions for any R = ω(1). Furthermore, the connectivity does not change dynamically over the course of the circuit. This way, the wiring can be decided ahead of time.

For the basic encoding B_M, in which each logical qubit is encoded in its own surface code, to achieve a logical failure rate p_B(M) < p_H(N), we require

W(C_{B_M}) = Ω([N/log²(N)]^{1+2δ}), T(C_{B_M}) = Ω([N/log²(N)]^δ).

We can compare this with the parameters for C_{H_N} from Theorem 1.2. For all δ > 0, the width W of C_{H_N} is less than that of C_{B_M}. Furthermore, if the outer code has a single-shot decoder, i.e. if a constant number of applications of C_{H_N} are sufficient to achieve a threshold, then the depth T of C_{H_N} is also less than that of C_{B_M}. Efficient single-shot decoders are known to exist for constant-rate LDPC codes [LTZ15, FGL18a, FGL18b].

Having said this, it is unclear whether this advantage manifests for practically-relevant code sizes and error rates. To make such a comparison, we use numerical estimates. We choose the size M = M(N) such that the syndrome-extraction circuits for the hierarchical scheme H_N and the basic encoding B_M use the same number of physical qubits. Fixing the total number of qubits in this manner, we look for a crossover point, the gate error rate q_0 at which the hierarchical code achieves a lower logical failure rate than the basic encoding.

We estimate the circuit-level failure rate using some assumptions about the sub-threshold scaling of the logical failure rate for LDPC codes. We assume the threshold of the surface code is 10⁻² and the threshold for constant-rate LDPC codes under circuit-level noise is 10⁻³. Our model takes into consideration how the logical failure rate depends on the depth of the circuit C_{H_N}, and how hook errors could reduce the effective distance. Hook errors are harmful errors that spread from the ancilla qubits to the data qubits during syndrome extraction. These are explained in Section 6.2.3. We offer evidence that, against circuit-level depolarizing noise, the crossover happens at a gate error rate as high as 5 × 10⁻³, depending on the choice of outer code family and inner/outer code sizes. See the leftmost plot in Figure 2. These numbers are merely a proof-of-concept and depend on the aforementioned assumptions, which are discussed in Section 6.2.

We arrive at these estimates assuming all gates fail with the same probability. While such an assumption is convenient for proofs, in some architectures it may be possible to perform SWAP operations with higher fidelity than CNOT or CZ: the physical mechanism used to implement a SWAP can be entirely different from that used to perform other two-qubit gates and, in principle, could have much better fidelity. These considerations are especially important to us as the main source of noise in the hierarchical scheme stems from SWAP gates. We present variations of our numerical estimates when the SWAP gates have better fidelity than the CNOT gates. The middle plot and right-most plot in Figure 2 represent estimates for the failure rate when the SWAP gates are 10× and 100× better than entangling gates respectively.
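A toy version of this crossover search is sketched below. The per-round scaling ansatz p_round ≈ A(p/p_th)^⌈d/2⌉, the prefactor A, and the code sizes are our own illustrative assumptions; only the two thresholds, 10⁻² for the surface code and 10⁻³ for the LDPC code, are the values assumed in the text.

```python
import numpy as np

def p_surface(p, d, rounds, copies=1, p_th=1e-2, A=0.1):
    """Toy per-round ansatz A*(p/p_th)**ceil(d/2), compounded over rounds/copies."""
    per_round = np.minimum(A * (p / p_th) ** np.ceil(d / 2), 1.0)
    return 1.0 - (1.0 - per_round) ** (rounds * copies)

def p_ldpc(p_eff, d, rounds, p_th=1e-3, A=0.1):
    per_round = np.minimum(A * (p_eff / p_th) ** np.ceil(d / 2), 1.0)
    return 1.0 - (1.0 - per_round) ** rounds

p = np.logspace(-4, -2, 400)                       # physical gate error rate
basic = p_surface(p, d=21, rounds=21, copies=102)  # basic encoding B_M
tile_fail = p_surface(p, d=9, rounds=9)            # effective error of one inner tile
hier = p_ldpc(tile_fail, d=32, rounds=32)          # outer LDPC acting on tile "qubits"

below = np.flatnonzero(hier < basic)
q0 = p[below[-1]] if below.size else None          # toy crossover gate error rate
```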
A qubit is said to have a noise bias if X and Y errors are suppressed with respect to Z errors. We can introduce a bias on Level-1 qubits using unbiased Level-0 (physical) qubits. As the inner code is a surface code, we can engineer a bias simply by making the surface code longer in one direction of our choosing. See Figure 3 (a). Based on our estimates, we expect this can reduce the size of the code considerably. Figure 3 (b) shows the crossover points for the hierarchical code and the basic encoding with the assumption that SWAP gates are 10× better than entangling gates using a much smaller outer code. Secondly, we believe that decoders for the hierarchical code can take advantage of their concatenated structure. To achieve this, we propose using message-passing decoders between the outer and inner codes. These ideas can be used in soft decoders for the outer code to partially overcome the problems of degeneracy [PC08]. Figure 3: (a) Creating Level-1 qubits such that the probability of logical X failure is less than the probability of Z failure. This is accomplished by changing the aspect ratio of the tiles. (b) Estimating crossover points when SWAP gates are 10× better than entangling gates. [AAB + 19, KARB + 19, MCS + 20, EWL + 21, MBL + 22]. Protection of surface codes from burst errors was initially studied in [XSY + 22] by concatenating a small constant-sized stabilizer code with surface codes. The hierarchical scheme is robust to these errors because each inner surface code represents a qubit of the outer code which we know is resistant to some number of erasure errors. Related work: Gottesman [Got00] demonstrated that it is possible to find a threshold using only local gates and concatenation. Svore, Divincenzo and Terhal [STD05,SDT06] studied this issue further and established a numerical lower bound on the threshold in a scheme with many layers of concatenation. Yamasaki and Koashi [YK22] show that concatenated codes can be used to achieve constant overhead quantum computation that is also time efficient. In contrast to these approaches, we consider a qualitatively different setting. In our hierarchical model, the concatenated code has only two layers. The outer LDPC code grows quickly to improve the error rate, while the inner code grows slowly to achieve a threshold. The number of encoded logical qubits in the code therefore increases (sublinearly) with the size of the code. Consequently, the rate of error suppression is significantly better. Finally, Baspin et al. [BFS23] have recently generalized the result of Delfosse et al. in another direction. In contrast to the constructive approach in this paper, they approach this problem top-down -given access to arbitrary local operations and classical communication (not merely Clifford operations), they study syndrome-extraction circuits for LDPC codes and their ability to suppress stochastic errors. They prove the existence of a tradeoff between the parameters of the syndrome-extraction circuit and the subthreshold error scaling (See Theorem 28 of [BFS23]). For fixed gate error rate p, suppose we use an N, K, D code H N and desire a sub-threshold scaling of the logical failure rate p H (N ) = exp(−f (N )) for some function f (N ). Let C N be the corresponding family of syndrome-extraction circuits. Assuming f (N ) = O(N ), we express Theorem 28 of [BFS23] in our notation W(C N ) K = Ω f (N ) T(C N ) .(2) To compare with our result, suppose we only use SWAP gates of constant range, i.e. R = O(1). 
From Theorem 1.2, the syndrome-extraction circuit $C_{H_N}$ achieves $p_H(N) = \exp(-\Theta(N^\delta/\log^{2\delta}(N)))$ with $\mathsf{W}(C_{H_N}) = \Theta(N)$ and $\mathsf{T}(C_{H_N}) = O(\sqrt{N})$. Therefore,
$$\frac{\mathsf{W}(C_{H_N})}{K} = O(\log(N)), \qquad \frac{f(N)}{\mathsf{T}(C_{H_N})} = O\!\left(N^{(\delta-1)/2}\log^{\delta}(N)\right). \tag{3}$$
Comparing with Equation (2), we can see that the bound is satisfied for any constant $\delta > 0$. Note that such a low logical error rate is only feasible because our syndrome-extraction circuit $C_{H_N}$ has polynomially growing depth.

Background & Notation

In this section, we begin by formally defining concepts needed to state our results. Section 2.1 defines syndrome-extraction circuits. We review gadgets used to construct them and how to use these gadgets to obtain a syndrome-extraction circuit given an error correcting code. Section 2.2 reviews locally decaying distributions that describe errors on states and faults on circuits. These are general error models that can describe the types of correlated errors that we might witness in a circuit. A noise model is parameterized by a failure rate which quantifies the probability of errors. We describe how error correcting codes and their associated syndrome-extraction circuits are robust to some amount of errors occurring below a threshold failure rate. Section 2.3 reviews syndrome-extraction circuits for concatenated codes. The hierarchical code is constructed by concatenating a constant-rate quantum LDPC code and the surface code. These are defined in Section 2.4 and Section 2.5 respectively. We review Gottesman's requirements [Got14] for the existence of a threshold. This will be an important idea in the proof of the existence of a threshold for hierarchical codes.

Basic definitions

Let $\mathcal{P} = \langle X, Z \rangle/\{\pm 1, \pm i\}$ denote the (projective) single-qubit Pauli group (where we ignore phases); for $n \in \mathbb{N}$, let $\mathcal{P}_n$ denote the $n$-fold tensor product $\mathcal{P}^{\otimes n}$. For $P \in \mathcal{P}_n$, $\mathrm{supp}(P) \subseteq [n]$ denotes the support of $P$, i.e. the set of qubits on which $P$ acts non-trivially. The weight of a Pauli operator $P$ is $|\mathrm{supp}(P)|$, the number of qubits in its support. For brevity, we denote this as $|P|$. For $a, b \in \{0,1\}^n$, let $X(a) = \otimes_i X^{a_i}$ and $Z(b) = \otimes_j Z^{b_j}$. Any Pauli operator $P \in \mathcal{P}_n$ can be expressed uniquely as $P = X(a)Z(b)$ for $a, b \in \{0,1\}^n$. We use $P|_X, P|_Z \in \{0,1\}^n$ to denote the X and Z components of $P$ respectively, i.e. $P|_X := a$ and $P|_Z := b$.

Stabilizer codes: An $n$-qubit quantum error correcting code is the simultaneous $+1$-eigenspace of a set of commuting Pauli operators. These Pauli operators form a subgroup $\mathcal{S}$ of the Pauli group called the stabilizer group. The stabilizer group $\mathcal{S}$ is generated by elements $S_1, \ldots, S_m$. The codespace $\mathcal{Q}$ is then defined as
$$\mathcal{Q} = \{\,|\psi\rangle \in (\mathbb{C}^2)^{\otimes n} \;\mid\; S_i|\psi\rangle = |\psi\rangle \;\; \forall i \in [m]\,\}.$$
The number of encoded qubits $k$ is the base-2 logarithm of the dimension of $\mathcal{Q}$. Equivalently, given $\mathcal{S}$, it is simply $k = n - m$. The distance $d$ of the code is the minimum weight of a Pauli operator that maps one element of $\mathcal{Q}$ to a distinct element of $\mathcal{Q}$. Equivalently,
$$d = \min_{\substack{P \in \mathcal{P}_n \setminus \mathcal{S} \\ [P, S_i] = 0 \;\forall i}} |P|.$$
We say such a code is an $[[n, k, d]]$ (stabilizer) code. The code is said to be a CSS code if every generator can be chosen such that it is a tensor product of only X or only Z Pauli operators [CS96, Ste96]. We can define the X- and Z-distances $d_X$ and $d_Z$ of a CSS code as
$$d_X = \min_{\substack{P \in \{I,Z\}^{\otimes n} \setminus \mathcal{S} \\ [P, S_i] = 0 \;\forall i}} |P|, \qquad d_Z = \min_{\substack{P \in \{I,X\}^{\otimes n} \setminus \mathcal{S} \\ [P, S_i] = 0 \;\forall i}} |P|.$$
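To make the binary bookkeeping above concrete, here is a small Python sketch (our own illustration, not code from the paper) that stores a Pauli operator $P = X(a)Z(b)$ as its pair of component vectors and implements the weight and the symplectic commutation test that underlie the definitions above.

```python
import numpy as np

def weight(a, b):
    """|P| = number of qubits where P = X(a)Z(b) acts non-trivially (X, Y or Z)."""
    return int(np.count_nonzero(a | b))

def commute(p, q):
    """P = X(a)Z(b) and Q = X(a')Z(b') commute iff a.b' + a'.b = 0 (mod 2)."""
    (a, b), (a2, b2) = p, q
    return (int(np.dot(a, b2)) + int(np.dot(a2, b))) % 2 == 0

n = 4
a = np.array([1, 1, 0, 0], dtype=np.uint8)  # X on qubits 0, 1
b = np.array([0, 1, 1, 0], dtype=np.uint8)  # Z on qubits 1, 2  ->  P = X0 Y1 Z2
P = (a, b)
S = (np.zeros(n, dtype=np.uint8), np.ones(n, dtype=np.uint8))  # S = Z Z Z Z

print(weight(*P))     # 3
print(commute(P, S))  # True: two anticommuting sites, so the operators commute overall
```

In this representation, an X-type CSS generator has $b = 0$ and a Z-type generator has $a = 0$, so the commutation test reduces to a parity check on overlapping supports.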
Let $1 \le b \le m_X$ and $1 \le c \le m_Z$ index the X-type and Z-type stabilizer generators $\{S^X_b\}$ and $\{S^Z_c\}$. For $1 \le a \le n$, we use $a \leftrightarrow b$ and $a \leftrightarrow c$ to mean $a$ is in the support of $S^X_b$ and $S^Z_c$ respectively.

Syndrome-extraction circuits & measurement gadgets: A syndrome-extraction circuit C for a code $\mathcal{Q}$ can be composed of the following elements, all of which are allowed to be classically controlled.

Definition 2.1 (Clifford operations). Consider a set of qubits arranged in a lattice in 2 dimensions. We define the set K of elementary Clifford operations as follows:
1. Initialization of new qubits in state $|0\rangle$ or $|+\rangle$,
2. Single-qubit Pauli gates,
3. Two-qubit Clifford gates CNOT and CZ between nearest-neighbor qubits,
4. Single-qubit Pauli X and Z measurements,
5. Physical SWAP operations with range R.

At any given time step, a qubit in C can be involved in at most one of these operations. In addition, we assume instantaneous classical communication and access to classical computation for processing measurement data. To obtain the syndrome, we use gadgets to measure Pauli operators, which are described as follows. Consider a CSS code $\mathcal{Q}$ with $m_X$ X-type stabilizer generators $S_X = \{S^X_b\}_{b=1}^{m_X}$ and $m_Z$ Z-type stabilizer generators $S_Z = \{S^Z_c\}_{c=1}^{m_Z}$. The entire set of stabilizer generators is $S = S_X \cup S_Z$. We assume operations in K can be performed in parallel. We shall present one way of using parallel operations to build efficient syndrome-extraction circuits for quantum LDPC codes in Section 3.

1. For $1 \le b \le m_X$, we measure $S^X_b$ by preparing an ancilla qubit in $|+\rangle$, applying a CNOT from the ancilla to each data qubit in the support of $S^X_b$, and measuring the ancilla in the X basis.
2. For $1 \le c \le m_Z$, we measure $S^Z_c$ by preparing an ancilla qubit in $|+\rangle$, applying a CZ from the ancilla to each data qubit in the support of $S^Z_c$, and measuring the ancilla in the X basis.

Figure 4: Performing the syndrome extraction corresponding to the operator $X_{i_1}X_{i_2}X_{i_3}X_{i_4}X_{i_5}$ on the left and $Z_{j_1}Z_{j_2}Z_{j_3}Z_{j_4}$ on the right. The measurements are performed on some qubits $\{i_1, \ldots, i_5, j_1, \ldots, j_4\} \subseteq [n]$.

Noise & imperfect syndrome-extraction circuits

In practice, C is imperfect. In general, errors on multiple locations with complicated correlations can arise at the end of a syndrome-extraction circuit. Under the action of a two-qubit gate, for instance, single-qubit errors which occur with probability $p$ can transform into two-qubit correlated errors which occur with probability $p$. Two-qubit gates themselves can fail and introduce errors on both qubits where there were none before. As yet another example, small clusters of qubits that are near each other can also fail together, for example due to crosstalk, stray magnetic fields, etc. These errors are outside the scope of an i.i.d. error model and hence, we consider a generalization. We say the distribution $\Pr$ is locally decaying with rate $p \in [0, 1]$ if for all $E \subseteq [n]$, $\Pr(E) \le p^{|E|}$. We first consider general errors on an $n$-qubit state. We assume every set of qubits has some probability of being corrupted by an arbitrary Pauli error. Consider a Pauli operator $E' \in \mathcal{P}_n$ such that $E' = X(x')Z(z')$. Let $\mathcal{E}(x', z')$ be the probability of the error $E'$. By definition, $\mathcal{E}$ is itself a map from $\mathrm{Pow}([n]) \times \mathrm{Pow}([n])$ to $[0, 1]$. Let $\mathcal{X}(x) : \mathrm{Pow}([n]) \to \mathbb{R}$ and $\mathcal{Z}(z) : \mathrm{Pow}([n]) \to \mathbb{R}$ denote
$$\mathcal{X}(x) = \sum_{x' \supseteq x} \sum_{z'} \mathcal{E}(x', z'), \qquad \mathcal{Z}(z) = \sum_{x'} \sum_{z' \supseteq z} \mathcal{E}(x', z'). \tag{4}$$
In other words, $\mathcal{X}$ and $\mathcal{Z}$ denote the probability that a random error $E'$ distributed according to $\mathcal{E}$ has X and Z components $x$ and $z$ respectively. For brevity, we have used $x' \supseteq x$ and $z' \supseteq z$ to mean that the supports of $x, z$ are contained in $x', z'$ respectively. Treating $\mathcal{X}$ and $\mathcal{Z}$ separately in this way does not prevent correlations between X and Z errors.
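As a concrete check of Equation (4), the following sketch (our own; the toy two-qubit distribution and its numbers are arbitrary stand-ins) tabulates $\mathcal{E}$ over pairs of supports, computes the marginals $\mathcal{X}$ and $\mathcal{Z}$, and verifies the locally decaying condition $\Pr(E) \le p^{|E|}$ on every support.

```python
from itertools import chain, combinations

n = 2
def subsets(universe):
    return chain.from_iterable(combinations(universe, r) for r in range(len(universe) + 1))

# Toy error distribution E(x', z') over pairs of supports (frozensets of qubit indices).
E = {
    (frozenset(), frozenset()): 0.912,           # no error
    (frozenset({0}), frozenset()): 0.04,         # X on qubit 0
    (frozenset(), frozenset({1})): 0.04,         # Z on qubit 1
    (frozenset({0}), frozenset({0, 1})): 0.008,  # correlated Y0 Z1
}

def X_marginal(x):
    """X(x) = sum over x' containing x, and over all z', of E(x', z'); see Eq. (4)."""
    return sum(p for (xp, zp), p in E.items() if x <= xp)

def Z_marginal(z):
    return sum(p for (xp, zp), p in E.items() if z <= zp)

p = 0.1  # candidate local-decay rate
ok = all(X_marginal(frozenset(s)) <= p ** len(s) and
         Z_marginal(frozenset(s)) <= p ** len(s)
         for s in subsets(range(n)))
print(ok)  # True: this toy distribution is locally decaying with rate 0.1
```

Note that the correlated $Y_0 Z_1$ term contributes to both marginals, which is exactly why treating $\mathcal{X}$ and $\mathcal{Z}$ separately does not preclude X–Z correlations.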
Definition 2.3 (Locally decaying errors on qubits). Given an $n$-qubit state with Pauli errors distributed according to $\mathcal{E}$, we say that the errors are described by a locally decaying error model to mean that $\mathcal{X}$ and $\mathcal{Z}$ are both locally decaying distributions with failure rate $p$.

We want to extend this idea to describe errors caused by faulty circuits. A location in a circuit C refers to a one- or two-qubit gate (including identity), single-qubit preparation or single-qubit measurement operation at some time step $1 \le t \le \mathsf{T}(C)$. A fault location is a location which performs a random Pauli operation following the desired Clifford operation. We assume that a fault location introduces a Pauli operator on the qubits in its support chosen according to some distribution $\mathcal{F}$. Given a set $F$ of fault locations in C, the support of $F$ is the set $\mathrm{supp}(F) \subseteq [\mathsf{W}(C)]$ of qubits that are in some location in $F$. For a set $F$ of locations, let $\mathcal{F}(F)$ denote the probability of precisely the set of locations $F$ being faulty. For a set $F$ of fault locations, the total probability $\overline{\mathcal{F}}(F)$ is
$$\overline{\mathcal{F}}(F) = \sum_{F' \supseteq F} \mathcal{F}(F'). \tag{5}$$

Definition 2.4 (Locally decaying faults on circuits). Let C be a depth-1 circuit with faults distributed according to $\mathcal{F}$. We say that the faults are described by a locally decaying faults model if $\overline{\mathcal{F}}$ is a locally decaying distribution with failure rate $p_{\mathrm{phys}}$: for all sets of locations $F$, $\overline{\mathcal{F}}(F) \le p_{\mathrm{phys}}^{|F|}$.

Note that the probability of failure falls with the number of locations $|F|$ and not the number of qubits $|\mathrm{supp}(F)|$. In practice, different locations may have different failure rates. To prove the existence of a threshold, we assume that $p_{\mathrm{phys}}$ is the maximum failure probability across all gates. We return to this assumption in Section 6, where we discuss how the logical failure rate behaves if gates have different failure rates. Definition 2.4 pertains to circuits of depth 1: we assume faults in successive time steps are independent. In a more general model for faults, we could include arbitrary fault patterns for a circuit of growing depth so long as the probability of a particular fault path falls exponentially with the size of the fault path. As a state undergoes circuit operations, errors can spread and accumulate. Consider a CNOT gate acting on two qubits. Figure 5 illustrates how a generating set of 2-qubit Pauli operators $\{XI, IZ, IX, ZI\}$ on these two qubits evolves under an ideal CNOT. The error doubles in size in the worst-case scenario. As shorthand, we say that Pauli operators 'flow' within circuits to refer to this spreading. X operators flow down a CNOT and Z operators flow up. In a similar way, CZ gates also spread errors. However, because these gates are diagonal in the computational basis, they do not affect products of Z operators: $X \otimes I$ is mapped to $X \otimes Z$ and $I \otimes X$ is mapped to $Z \otimes X$. (These update rules are made explicit in the sketch below.) The structure of syndrome-extraction circuits is special. For $P \in \{X, Z\}$, a controlled-$P$ gate within the syndrome-extraction circuit uses ancilla qubits as control qubits and data qubits as target qubits (see Figure 4). This means that errors only flow in limited ways: for example, X errors always flow from ancilla qubits to data qubits, and Z errors flow from data qubits to ancilla qubits when CNOT gates are applied. Implementing the imperfect circuit C, we obtain an imperfect syndrome. To overcome this problem, we repeat the syndrome-measurement circuit for $r$ rounds. Let $\boldsymbol{\sigma} = (\boldsymbol{\sigma}^{(1)}_X, \boldsymbol{\sigma}^{(1)}_Z, \ldots, \boldsymbol{\sigma}^{(r)}_X, \boldsymbol{\sigma}^{(r)}_Z)$ be the $r$ faulty syndromes.
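The flow rules can be written as update maps on the binary components of a Pauli error. The sketch below (our own illustration) conjugates $X(a)Z(b)$ through an ideal CNOT and CZ, reproducing the statements that X flows down a CNOT, Z flows up, and CZ maps $X \otimes I \mapsto X \otimes Z$.

```python
# A two-qubit Pauli is (a1, b1, a2, b2) for P = X(a)Z(b), entries in {0, 1}.

def conjugate_cnot(a1, b1, a2, b2):
    """CNOT (control = 1, target = 2): X1 -> X1X2 and Z2 -> Z1Z2; X2, Z1 are fixed."""
    return a1, b1 ^ b2, a2 ^ a1, b2

def conjugate_cz(a1, b1, a2, b2):
    """CZ: X1 -> X1Z2 and X2 -> Z1X2; Z1, Z2 are fixed (phases ignored)."""
    return a1, b1 ^ a2, a2, b2 ^ a1

print(conjugate_cnot(1, 0, 0, 0))  # (1, 0, 1, 0): X(x)I -> X(x)X, X flows down
print(conjugate_cnot(0, 0, 0, 1))  # (0, 1, 0, 1): I(x)Z -> Z(x)Z, Z flows up
print(conjugate_cz(1, 0, 0, 0))    # (1, 0, 0, 1): X(x)I -> X(x)Z
```

Applying these maps along the gates of a syndrome-extraction round makes the restricted flow described above visible: with ancillas as controls, an ancilla X component only ever feeds into data qubits.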
Failure rate per round: Consider a corrupted code state $E|\psi\rangle$ where $|\psi\rangle$ is a code state and $E = X(e_x)Z(e_z)$ is some Pauli operator. If the syndrome-extraction circuit C has no faults, the joint state of the data and ancilla qubits after one round of syndrome extraction is described by
$$E|\psi\rangle \otimes Z(\boldsymbol{\sigma})|+\rangle^{\otimes m}, \tag{6}$$
where $\boldsymbol{\sigma}$ represents the ideal syndromes for X- and Z-type stabilizer generators. However, because of faults in the circuit, the state after the circuit is
$$(D \otimes A)\big(E|\psi\rangle \otimes Z(\boldsymbol{\sigma})|+\rangle^{\otimes m}\big), \tag{7}$$
where $D$ and $A$ represent errors on the data and ancilla qubits respectively caused by faults in C that then spread. Let $\mathcal{E}'(D \otimes A)$ denote the probability of errors per round on the qubits. Let $\mathcal{X}', \mathcal{Z}'$ denote the induced distributions for errors on data and ancilla qubits of X and Z type respectively.

Definition 2.5 (Probability of errors per round). We say that the probability of errors per round is locally decaying if $\mathcal{X}'$ and $\mathcal{Z}'$ are locally decaying distributions with failure rate $p_{\mathrm{round}} \in [0, 1]$.

A priori, $\mathcal{E}'$ can depend on $r$ and the input error $E$. However, as entangling gates restrict the direction of error propagation, errors do not propagate from one data qubit to another or from one ancilla qubit to another. In Section 5.1, we use this to show that $p_{\mathrm{round}}$ does not depend on how many prior rounds of the syndrome-extraction circuit have already been applied. We show that $p_{\mathrm{round}}$ is a function of $p_{\mathrm{phys}}$ of the form $a \cdot p_{\mathrm{phys}}^b$, where $a$ is a function of the depth $\mathsf{T}(C)$ and $b$ is a function of the degrees $\Delta_q$ and $\Delta_g$.

Recovering the state: After performing $r$ rounds of syndrome extraction, a decoding algorithm $\mathrm{dec} : (\mathbb{F}_2^m)^{\times r} \to \mathcal{P}_n$ maps the observed syndrome $\boldsymbol{\sigma}$ to a deduced error. The applied correction may not completely correct all errors due to faults in the syndrome-extraction circuit. We declare success if, after applying the correction, the final state is 'not too far' from the desired output of the ideal circuit C. To this end, we consider the ideal recovery map $\mathcal{R}$ [AGP05], a fictitious quantum channel that is not subject to geometric constraints or noise. We gauge the accuracy of the circuit C using the logical failure probability $p_Q$, which is the probability that the residual error is not correctable by the ideal recovery map. To be precise, $p_Q$ is the probability that any logical qubit fails in one round of error correction. The probability $p_Q$ is also referred to as the Word Error Rate (WER).

Ideal recovery map & thresholds: To understand whether a scheme is scalable, we are interested in properties of a family of codes $\{Q_n\}$ designed to process an ever-increasing number $n$ of qubits. Consider a code family $\{Q_n\}$ and suppose errors are described by a locally decaying distribution $\mathcal{E}$ with failure rate $p_{\mathrm{in}}$. Let $\{C_n\}$ be the corresponding set of syndrome-extraction circuits for $\{Q_n\}$, where faults are described by $\mathcal{F}$, a locally decaying distribution with failure rate $p_{\mathrm{phys}} \in [0, 1]$. We can compute $p_{\mathrm{round}}$ as a function of $p_{\mathrm{phys}}$ as shown in Section 5.1. For our purposes, we say that the family has a threshold with respect to the noise model and decoding algorithm if there exists a pair $q_{\mathrm{in}}, q_{\mathrm{round}} \in (0, 1]$ such that if
$$p_{\mathrm{in}} < q_{\mathrm{in}}, \qquad p_{\mathrm{round}} < q_{\mathrm{round}}, \tag{8}$$
the probability of logical failure $p_Q$ decreases with the size $n$ of the code. The logical probability of failure is defined with respect to the family of ideal recovery maps. It depends on $p_{\mathrm{in}}$, $p_{\mathrm{phys}}$ and the thresholds. Whether a threshold exists with respect to a given noise model, the exact value of the threshold, as well as how quickly the logical failure probability decreases as a function of $n$ (e.g.
polynomially or exponentially), depend not only on the choice of quantum error-correcting code $Q_n$, but also on the implementation of the syndrome-extraction circuit $C_n$ and the decoding algorithm. In our construction, the code family is a concatenated code where the syndrome-extraction circuit is subject to constraints on geometric locality. While the state after error correction is 'close enough' to the codespace, undoing the deduced error may not correct all errors. The remaining errors on the state are described by $\mathcal{E}_{\mathrm{res}}$, a locally decaying distribution with failure rate $p_{\mathrm{res}}$. We can perform another round of error correction, and thereby keep the state alive for arbitrary duration, if $p_{\mathrm{res}} < p_{\mathrm{in}}$. For this reason, we will specify the residual failure rate after error correction in addition to the logical failure probability $p_Q$.

Concatenated codes

A concatenated code is a quantum code obtained via the composition of two codes, an inner code $Q_0$ and an outer code $Q$. We consider the simple case of an $[[n_0, 1, d_0]]$ code $Q_0$ that only encodes 1 qubit and a suitable $[[n, k, d]]$ code $Q$.

Figure 6: Schematic of the concatenated code, showing Level-2 qubits, Level-1 qubits, and Level-0 qubits.

Code parameters: The concatenated code, denoted $H$, with parameters $[[N, K, D]]$, is constructed by replacing each qubit of the code $Q$ by a copy of $Q_0$, resulting in $n$ copies of the inner code $Q_0$. The benefit of this construction is that the distance $D$ of the code $H$ is amplified with respect to the constituent codes. To be precise,
$$N = n \cdot n_0, \qquad K = k, \qquad D = d \cdot d_0.$$
The physical qubits are referred to as Level-0 qubits, the logical qubits of $Q_0$ which form the block $Q$ are referred to as Level-1 qubits, and the logical qubits of $H$ are referred to as Level-2 qubits. See the schematic in Figure 6. When errors on qubits are distributed in an i.i.d. manner, the advantage of concatenation becomes apparent when we "coarse grain" details of the concatenated code. Consider a simple setting where qubits are subject to independent X and Z errors. Suppose we use the code $Q$ without concatenation. By assumption, the probability of failure of each of the physical qubits is $p$. However, after concatenation, the probability of failure of the Level-1 qubits is suppressed: a Level-1 qubit fails with probability proportional to $p^{d_0/2}$. This is because at least $d_0/2$ errors are required to cause a logical error for $Q_0$. The inner code thus adds an extra layer of protection and consequently, the logical failure rate for the outer code is that much lower. As we shall see, we have to be more careful when making this sort of argument in the context of circuits.

Syndrome-extraction circuit: Let $C_0$ and $C_Q$ denote the syndrome-extraction circuits for $Q_0$ and $Q$ respectively, such that both can be implemented in 2 dimensions using K, the set of local Clifford operations and R-local SWAP gates. To implement a CNOT or CZ between distant qubits, we may need to permute qubits using SWAP gates to bring them within range of a two-qubit gate. We discuss how to design such a permutation in Section 3. A syndrome-extraction circuit $C_H$ for the concatenated code $H$ can be expressed in terms of the syndrome-extraction circuits $C_0$ and $C_Q$. Each data and ancilla qubit in the syndrome-extraction circuit $C_Q$ is now replaced with a copy of $Q_0$. Each gate in $C_Q$ is replaced by the corresponding logical Clifford gate between Level-1 qubits. Thus, even for constructing a quantum memory, we need to understand how to perform a restricted set of inner-code logical operations in a fault-tolerant manner.
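The parameter arithmetic above is simple enough to script. The following sketch (our own; the example outer code and failure model are illustrative stand-ins, not codes from the paper) computes $[[N, K, D]]$ for a concatenated pair and the leading-order Level-1 failure suppression $p \mapsto p^{d_0/2}$.

```python
from math import ceil, comb

def concatenate(inner, outer):
    """inner = (n0, 1, d0), outer = (n, k, d); returns [[N, K, D]] = [[n*n0, k, d*d0]]."""
    (n0, k0, d0), (n, k, d) = inner, outer
    assert k0 == 1, "each outer qubit is replaced by one copy of the inner code"
    return n * n0, k, d * d0

def level1_failure(p, n0, d0):
    """Leading-order estimate: at least ceil(d0/2) i.i.d. errors must conspire."""
    t = ceil(d0 / 2)
    return comb(n0, t) * p ** t

inner = (25, 1, 5)    # e.g. a distance-5 inner code on 25 qubits
outer = (100, 10, 8)  # hypothetical [[100, 10, 8]] outer code
print(concatenate(inner, outer))           # (2500, 10, 40)
print(level1_failure(1e-3, 25, 5))         # ~2.3e-6: suppressed Level-1 failure rate
```

The caveats that follow explain why this i.i.d.-style estimate cannot be applied naively once the inner code's syndrome-extraction circuit is itself faulty.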
We perform error correction either after the logical gate or in an interleaved manner. We discuss this in the context of our explicit architecture in Section 4. The ideal recovery map $\mathcal{R}_H$ for $H$ is obtained by first decoding the $n$ copies of the inner code $Q_0$ using $\mathcal{R}_0$ and then decoding the outer code using $\mathcal{R}_Q$. Here $\mathcal{R}_0$ and $\mathcal{R}_Q$ refer to the ideal recovery maps for $Q_0$ and $Q$ respectively. Thus $\mathcal{R}_H = \mathcal{R}_Q \circ (\mathcal{R}_0)^{\otimes n}$. We generalize the notion of location in the context of circuits. A Level-1 location refers to a Level-1 gate, including the error correction rounds. The location is faulty if it implements the incorrect logical operation on the Level-1 qubits in its support. In the context of $C_Q$, a single Level-1 location in the circuit could refer to a SWAP gate, an entangling gate, or a preparation or measurement of a logical qubit of $Q_0$. When "coarse graining" circuits for concatenated codes, more care is needed than in the i.i.d. errors setting. We illustrate using the following examples.

Problem # 1: Level-1 failure rates are not additive. Consider an $n_0$-qubit code state of the inner code $\rho_{\mathrm{in}}$ with Level-0 errors $E_{\mathrm{in}}$. The error $E_{\mathrm{in}}$ is not catastrophic: the ideal decoder $\mathcal{R}_0$ can correct it. The state is therefore correctable. Consider the syndrome-extraction circuit $C_0$ with Level-0 faulty locations $F$. Suppose there is some error supported on $\mathrm{supp}(F)$ but that this error is not a logical error. We may then be tempted to extend the notion of correctability to include circuits and declare the circuit $C_0$ correctable. However, this is misleading, as a correctable circuit acting on a correctable input state need not produce a correctable output state. Let $\rho^{(0)}_{\mathrm{out}}$ denote the output state and $E^{(0)}_{\mathrm{out}}$ denote the errors on this state. Suppose the faulty locations $F$ result in an error $E_F$. The product $E_{\mathrm{in}} \cdot E_F$ might not be correctable. In addition, the errors $E_{\mathrm{in}}$ and $E_F$ can spread in unpredictable ways within the circuit. We therefore cannot calculate the Level-1 output failure probability merely by knowing that the input state and the faults individually resulted in correctable errors. We need additional structure.

Problem # 2: The Level-0 failure rate is not always sustainable. Secondly, the thresholds are decoder dependent. By definition, the ideal decoder $\mathcal{R}_0$ has no faults; if the errors on the state $\rho^{(0)}_{\mathrm{out}}$ are correctable, then $\mathcal{R}_0$ is successful. On the other hand, $C_0$ can contain faults and may be unable to deal with as many errors as the ideal decoder $\mathcal{R}_0$. This can result in instances where the output state $\rho^{(0)}_{\mathrm{out}}$ is correctable by $\mathcal{R}_0$; by our criteria for success, the output state is decodable. However, the number of residual errors may be above the threshold for error correction. In other words, as error correction is itself faulty, these faults can combine with existing errors to cause a logical failure. In our construction, we address these problems in Section 5.2. We shall show that for sufficiently low failure rates, we can indeed abstract away the syndrome-extraction circuit for the outer code, assuming a failure rate that depends on the inner code. This statement relies on the structure of LDPC codes and surface codes. We now proceed to review these codes.

Constant-rate LDPC codes

An $[[n, k, d]]$ code family $\{Q_n\}$ is said to be a low-density parity-check (LDPC) code family if
1. each stabilizer generator $S_i$, $i \in [m]$, only acts non-trivially on at most a constant number $\Delta_g$ of qubits for all elements in $\{Q_n\}$, and
2. each qubit only participates in at most a constant number $\Delta_q$ of stabilizer generators for all elements in $\{Q_n\}$.

To include the degree of stabilizer generators and qubits, we shall say that a code family $\{Q_n\}$ is an $[[n, k, d, \Delta_q, \Delta_g]]$ LDPC family. We will choose the outer code to be a code with constant rate, i.e. $k = \Theta(n)$. Constructing a constant-rate LDPC code is a non-trivial task because there is a conflict between the constraints on stabilizer generators. On the one hand, all stabilizer generators need to commute with each other to form a well-defined stabilizer code; on the other hand, the stabilizer generators need to have weight at most $\Delta_g$. Despite these difficulties, there exist constant-rate quantum LDPC codes, i.e. $k(n) = \Theta(n)$, with distance $d(n) = \Theta(n^\delta)$ for $0 < \delta \le 1$. LDPC codes have a threshold [KP13, Got14] if operations in K are not subject to any locality constraints. In this setting, we can construct syndrome-extraction circuits where each qubit is involved in a constant number of two-qubit gates. Consider a family of $[[n, k, d, \Delta_q, \Delta_g]]$ quantum LDPC codes $\{Q_n\}$ where $k = \rho \cdot n$ for some constant $\rho > 0$ and distance $d = \Theta(n^\delta)$ for some $\delta > 0$. Suppose qubits are subject to the following errors:
1. the input state is subject to locally decaying errors with failure rate per qubit $p_{\mathrm{in}}$;
2. the syndrome-extraction circuit is subject to locally decaying faults with failure rate per gate $p_{\mathrm{phys}}$.
We restate a result from Gottesman [Got14] (Theorem 4) which guarantees the existence of a threshold for arbitrary LDPC codes. In this construction, we require $r = d(n)$ rounds of syndrome extraction. After syndrome extraction, the (imperfect) syndromes are processed by a minimum-weight decoder dec. We do not describe the decoder in detail here and merely note that it exists. For generic LDPC codes, the minimum-weight decoder is not necessarily efficient. There exist $q_{\mathrm{in}}, q_{\mathrm{round}}$ in the interval $(0, 1]$ such that when
$$p_{\mathrm{in}} \le q_{\mathrm{in}}, \qquad p_{\mathrm{round}} \le q_{\mathrm{round}}, \tag{9}$$
the following is true. The minimum-weight decoder dec yields a correction such that:
1. the final state is recoverable by an ideal recovery operator $\mathcal{R}_Q$ with probability at least $1 - p_Q(n)$, where $p_Q(n) := \exp[-\Theta(d(n))]$;
2. the physical qubits have residual errors that are described by a locally decaying error model with failure rate at most $p_{\mathrm{round}}$.
The first condition guarantees that the probability of logical failure falls exponentially with the distance of the code. It is worth noting that we declare a logical failure if any logical qubit fails. This is qualitatively different from codes that only encode a constant number of qubits. The second condition on the residual error is not what is in the theorem statement of Theorem 4 of [Got14]; however, the proof implies it. For sufficiently low values of $p_{\mathrm{phys}}$, it guarantees that we can continue to perform error correction for arbitrarily many rounds (conditioned on no logical errors). In other words, we require $p_{\mathrm{round}} < p_{\mathrm{in}}$. We highlight that this result applies to arbitrary LDPC codes, i.e. it is independent of the rate of the code. In particular, it applies to the surface code. We note that the threshold is stated in terms of $p_{\mathrm{round}}$, and not directly in terms of $p_{\mathrm{phys}}$. This is for two reasons: (1) this is how Theorem 4 of [Got14] is itself stated, and (2) in our construction, the dependence of $p_{\mathrm{round}}$ on $p_{\mathrm{phys}}$ can change depending on the depth of the syndrome-extraction circuit.
Stating the thresholds in this manner will allow us to derive the functional dependence between $p_{\mathrm{phys}}$ and the depth of the syndrome-extraction circuit. In Gottesman's construction [Got14], the syndrome-extraction circuit is constant depth and therefore $p_{\mathrm{round}}$ is also a constant. In contrast, our construction is more complicated because of constraints on geometric locality. It is known that codes defined by geometrically-local stabilizer generators in 2 dimensions cannot achieve both constant rate and growing distance [BT09, BPT10]. To achieve a constant rate and distance $d = \Theta(n^\delta)$ with fixed degrees $\Delta_q$ and $\Delta_g$, the amount of non-locality scales with the parameters $k$ and $d$ [BK21a, BK21b]. In other words, there exist $\Theta(n)$ stabilizer generators such that qubits in their support cannot be close to each other in the 2-dimensional lattice. In the context of syndrome-extraction circuits, the result by Delfosse et al. [DBT21] states that the depth of the syndrome-extraction circuit will grow when we only have geometrically-local gates and a limited number of ancilla qubits. (Recall Equation (1).) In Section 5, we show that $p_{\mathrm{round}}$ grows if the syndrome-extraction circuit C is constrained by geometric locality. In other words, it is not constant, and we need an approach different from Gottesman's to prove the existence of a threshold. In our alternative approach using the hierarchical code, the growth of the inner code suppresses Level-1 logical errors sufficiently to ensure that the Level-2 logical failure rate drops rapidly as the outer LDPC code scales up. Finally, we discuss the choice of quantum LDPC code. While the result above applies to generic quantum LDPC codes, more is known about specific constructions. Quantum expander codes are one family of constant-rate quantum LDPC codes for which $d = \Theta(\sqrt{n})$ [TZ14, LTZ15]. It has been rigorously proven that these codes can be equipped with an efficient decoder called small-set-flip [FGL18a, FGL18b]. Furthermore, it was shown that the decoder is single-shot, meaning that it only requires a constant number of rounds of syndrome measurements to function even when the syndrome is noisy. Similar to Gottesman's requirements for the existence of a threshold, all that is needed in Fawzi et al. [FGL18a] is for $p_{\mathrm{round}}$ to remain constant. However, unlike Gottesman's construction, it was shown that these codes have an efficient decoding algorithm that only requires a constant number of rounds of syndrome measurement. Thus, if we wish to implement a quantum expander code, we can use the same machinery presented in this paper to justify an efficient single-shot decoder for the outer code. More recently, constant-rate LDPC codes with $d = \Theta(n)$ have been discovered that have efficient decoding algorithms given perfect syndrome measurements [BE21, PK21a, LZ22a, LZ22b, LZ22c, LH22, DHLV22, GPT22]. However, we are yet to understand whether constant-rate LDPC codes with $d = \Theta(n)$ have efficient decoders that are robust to syndrome errors. In this paper, we do not place any constraints on the outer LDPC code other than that it has constant rate $\rho > 0$ and distance $d = \Theta(n^\delta)$ for $1/2 \le \delta \le 1$. Our construction works for all $\delta$, but we choose $\delta \ge 1/2$ to simplify some theorem statements.

Surface codes

We consider the rotated surface code [BK98, HFDVM12], arguably the simplest code that can be laid out on a 2-dimensional lattice. The surface code is an LDPC code, albeit with vanishing asymptotic rate. An example is shown in Figure 7.
The code is implemented on a rotated lattice, i.e. the points of the lattice correspond to the vertices of squares that run at 45-degree angles relative to the x and y axes. The points of the lattice are labeled $(a, b)$ where $a, b \in \mathbb{Z}/2$. Each black dot represents a data qubit; these are located on integer points, i.e. on points $(a, b)$ where $(a, b) \in \mathbb{Z}^2$. Each colored dot represents a syndrome qubit; these are located on half-integer points, i.e. on points $(a + 1/2, b + 1/2)$ where $(a, b) \in \mathbb{Z}^2$. Corresponding to each blue face, we define an X-type stabilizer generator that jointly measures $X^{\otimes 4}$ on adjacent data qubits. Similarly, corresponding to each red face, we define a Z-type stabilizer generator that jointly measures $Z^{\otimes 4}$ on adjacent qubits. The semi-circles represent stabilizers that only act on two qubits in their support, i.e. they measure $X^{\otimes 2}$ or $Z^{\otimes 2}$ jointly. The rotated surface code $RS_\ell$ encodes exactly one qubit and has distance $d_\ell$. It uses $d_\ell^2$ data qubits and $d_\ell^2 - 1$ syndrome qubits. The total number of qubits is thus $\ell^2 = 2d_\ell^2 - 1$. We use $\ell$ to parameterize the code family. We also refer to each code as a tile.

Figure 7: A surface code of distance $d_\ell = 5$. Each dark gray dot represents a data qubit. Light faces correspond to X checks and dark faces correspond to Z checks. They are measured using the qubit represented as a blue or orange dot respectively in the center of each face. Note that data qubits reside at integer points $\mathbb{Z}^2$ and ancilla qubits reside at the points of this lattice shifted by $(1/2, 1/2)$.

Using operations K, the syndrome-extraction circuit for the surface code has depth 6.

Thresholds for error correction: To motivate our noise model, we consider a simple setting where $n = 1$, i.e. we have a single tile $RS_\ell$. Suppose we are given physical qubits, each qubit in some fixed computational-basis state, and use the syndrome-measurement circuit to project this state onto a (fixed) code state of the surface code. If the physical qubits are subject to locally decaying errors at failure rate $p^{(0)}_{\mathrm{phys}}$, we can derive the Level-1 probability of failure $p^{(1)}_{RS}(\ell)$ for the surface code. The superscripts denote the noise on Level-1 and Level-0 qubits respectively. Contrast this with the scenario where we obtain the surface code from another party. Upon receipt, we are only informed that the tile has already failed with failure rate $p^{(1)}_{\mathrm{in}}$; we do not have additional information, such as syndrome histories from prior rounds of error correction. If the code has not already failed, then we are guaranteed that the physical failure rate is $p^{(0)}_{\mathrm{phys}}$. Failure after error correction can thus result in two ways: either the tile fails prior to us receiving the state, with probability $p^{(1)}_{\mathrm{in}}$, or, conditioned on it being correct, it fails because of error correction with probability $p^{(1)}_{RS}(\ell) = \exp(-c_{EC} \cdot \ell)$ for some positive number $c_{EC}$ that does not depend on $\ell$. The Level-1 failure rate after error correction is thus $p^{(1)}_{\mathrm{in}} + p^{(1)}_{RS}(\ell)$. When performing repeated rounds of error correction, we require the Level-1 failure rate $p^{(1)}_{\mathrm{in}}$ to bound the probability that the code has already failed in prior rounds. Surface codes will form the inner code in our concatenated construction. Consider the syndrome-extraction circuit C for the constant-rate LDPC code $Q_n$. Suppose $\mathsf{W} = \mathsf{W}(C)$ is the width of the circuit. We require an arrangement of $RS_\ell^{\otimes \mathsf{W}}$ in two dimensions. In Section 4, we introduce a bilayer architecture for arranging tiles in two parallel layers.
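The coordinate conventions above translate directly into code. The sketch below (our own illustration) enumerates the data and syndrome qubits of a distance-$d_\ell$ rotated surface code and the data qubits adjacent to each face ancilla; the choice of which weight-2 boundary checks to keep is a simplified stand-in for the exact pattern of Figure 7.

```python
def rotated_surface_code(d):
    """Data qubits on integer points of a d x d patch; ancillas on half-integer points."""
    data = [(x, y) for x in range(d) for y in range(d)]
    checks = {}
    for x in range(-1, d):
        for y in range(-1, d):
            support = [(x + dx, y + dy) for dx in (0, 1) for dy in (0, 1)
                       if 0 <= x + dx < d and 0 <= y + dy < d]
            kind = 'X' if (x + y) % 2 == 0 else 'Z'  # checkerboard colouring of faces
            if len(support) == 4:
                checks[(x + 0.5, y + 0.5)] = (kind, support)
            elif len(support) == 2:
                horizontal = y in (-1, d - 1)  # face on the top or bottom boundary
                if (kind == 'X') == horizontal:  # keep alternating weight-2 checks
                    checks[(x + 0.5, y + 0.5)] = (kind, support)
    return data, checks

data, checks = rotated_surface_code(5)
print(len(data), len(checks))  # 25 data qubits and 24 = d*d - 1 checks for d = 5
```

Counting both outputs recovers the tile size quoted above: $d_\ell^2 + (d_\ell^2 - 1) = 2d_\ell^2 - 1$ qubits per tile.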
We return to the explicit description of this layout in Section 4. Consider an input state of the code $RS_\ell^{\otimes \mathsf{W}}$. The errors are distributed in the following manner:
1. Level-1 errors are described by $\mathcal{E}^{(1)}$, a locally decaying distribution with failure rate $p^{(1)}_{\mathrm{in}}$;
2. Level-0 errors are described by $\mathcal{E}^{(0)}$, a locally decaying distribution with failure rate $p^{(0)}_{\mathrm{in}}$.
The code $RS_\ell^{\otimes \mathsf{W}}$ is itself an LDPC code (with vanishing rate asymptotically), and therefore we can apply Theorem 4 from [Got14]. We note that although the original theorem itself is not stated in this way, the proof implies the following. There exist thresholds $q^{(0)}_{\mathrm{in}}, q^{(0)}_{\mathrm{round}}$ on Level-0 failure rates such that, below threshold, the probability of logical failure after error correction is described by a locally decaying Level-1 error with failure rate $p^{(1)}_{\mathrm{in}} + p^{(1)}_{RS}(\ell)$ where $p^{(1)}_{RS}(\ell) = \exp(-c_{EC} \cdot \ell)$ for some positive number $c_{EC}$ that is independent of $\ell$. In addition, the state after error correction is described by locally decaying errors with failure rate proportional to $p^{(0)}_{\mathrm{round}}$. This guarantees that if we are sufficiently below threshold, then the number of residual errors is low enough that we can apply another round of error correction. Unlike the case of general LDPC codes, surface codes possess a minimum-weight decoder that runs in poly(n) time by mapping the decoding problem to a minimum-weight perfect-matching problem.

Logical Clifford operations: As highlighted in the subsection on concatenated codes, we need to implement logical Clifford operations for the surface code to be able to use it within a concatenated construction. Extending our notation from Definition 2.1, we let $K_0$ denote the physical geometrically-local Clifford gates and R-local SWAP operations on the physical qubits. Let $C_0$ be the syndrome-extraction circuit for $RS_\ell$ constructed using $K_0$. Let $K_1$ denote the corresponding logical operations on the surface code. Single-tile operations in $K_1$ (state preparation in a fixed stabilizer state, (destructive) measurement of logical Pauli operators and applying Pauli corrections) can be performed using operations in $K_0$ regardless of how two-tile gates are implemented. The only Clifford operations we require beyond these are two-tile operations: CNOT, CZ and R-local SWAP. These are discussed in Section 4.

Permutation routings on sparse graphs in two dimensions

In this section, we prove Theorem 1.1, restated here for convenience.

Theorem. For R even, there is an efficient construction of a degree-12 graph G = (V, E) whose vertex set V is identified with an L × L lattice, with edges of length at most R. Any permutation $\alpha : V \to V$ can be performed in depth $3L/R + O(\log^2 R)$.

We use this result in the next section to construct syndrome-extraction circuits for quantum LDPC codes. We shall study permutation routings on graphs and focus on $NN_2(L, R)$, the L × L lattice in 2 dimensions where two vertices share an edge if they are separated by a distance of at most R. Based on the idea of a permutation routing on product graphs, we demonstrate that we can implement an arbitrary permutation in depth $O(L/R)$. For the special case of the 2D lattice, we can make heavy use of sorting networks to find implementations of target permutations. Using sorting networks to implement long-range connectivity is itself not a new idea [BBG+13]. For instance, it was used in Delfosse et al. [DBT21] to construct syndrome-extraction circuits for quantum expander codes to match the bound in Equation (1). The results in this section generalize this idea to arbitrary syndrome-extraction circuits with constant spatial overhead. To the best of our knowledge, this is the first work to construct sparse syndrome-extraction circuits when R can scale as a function of L.
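Throughout this section it is convenient to have $NN_2(L, R)$ available programmatically. The helper below (our own; the names `nn2` and `diameter` are ours) builds the adjacency structure and checks the diameter scaling $\approx \sqrt{2}L/R$ on a small instance.

```python
from itertools import product
from collections import deque

def nn2(L, R):
    """NN_2(L, R): vertices [L] x [L]; edges between points at Euclidean distance <= R."""
    vertices = list(product(range(1, L + 1), repeat=2))
    adj = {v: [] for v in vertices}
    for i, v in enumerate(vertices):
        for u in vertices[i + 1:]:
            if (v[0] - u[0]) ** 2 + (v[1] - u[1]) ** 2 <= R * R:
                adj[v].append(u)
                adj[u].append(v)
    return adj

def diameter(adj):
    def eccentricity(s):  # breadth-first search from s
        dist, queue = {s: 0}, deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
        return max(dist.values())
    return max(eccentricity(v) for v in adj)

print(diameter(nn2(12, 1)))  # 22 = 2(L - 1): the nearest-neighbor grid
print(diameter(nn2(12, 3)))  # 6 here; the diameter scales like sqrt(2) * L / R
```

The diameter is the trivial lower bound on routing depth referenced after Corollary 3.7.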
Permutation Routing on product graphs

A permutation routing is sometimes explained in terms of a pebble-exchange game, where pebbles are placed on the vertices of an (undirected) connected graph G = (V, E). The pebble on vertex $u \in V$ has an address $\alpha(u)$. The addresses of all the pebbles together specify a permutation α on the vertices of G. We are allowed to swap any two pebbles along an edge of G. Formally, every vertex has a label, and for every edge $(u, v) =: e \in E$, we are equipped with an edge permutation $\pi(e)$ that exchanges the labels of $u$ and $v$. Edge permutations can be performed in parallel as long as every pebble is involved in at most one edge permutation in one time step. We say β is a simple permutation on G if it is the product of edge permutations $\{\pi(e)\}_e$ that commute. The objective of the pebble-exchange game is to find a minimum sequence of simple permutations so that the pebble that began at $u$ is located on the vertex $\alpha(u)$ afterwards. In other words, we wish to find the smallest sequence of simple permutations $\beta_1, \ldots, \beta_{\mathsf{T}(\alpha)}$ such that $\alpha = \beta_{\mathsf{T}(\alpha)} \circ \cdots \circ \beta_1$. Here $\mathsf{T}(\alpha)$ denotes the minimum number of simple permutations required to perform α. We represent permutations using the one-line notation [Wik22], where $\alpha = \alpha_1 \alpha_2 \cdots \alpha_n$ means 1 is mapped to $\alpha_1$, 2 is mapped to $\alpha_2$, and so on. Given any permutation α, the permutation $\alpha^{-1}$ can be computed efficiently by applying the permutation to a list of consecutive integers $[n]$.

The R-nearest-neighbor graph: The R-nearest-neighbor graph in 1 dimension of length L is denoted $NN_1(L, R) = (V, E)$ where
$$V = \{1, \ldots, L\}, \qquad E = \{(u, v) : |u - v|_2 \le R\}, \tag{10}$$
where $|\cdot|_2$ represents the standard 2-norm. This is the graph whose vertices are simply the positive integers up to L, with two vertices connected by an edge if their difference is at most R. In particular, consider the graph $NN_1(L, 1)$, which corresponds to the path graph.

Fact: We can perform an arbitrary permutation α of pebbles placed on the vertices of the path graph $NN_1(L, 1)$ in depth $L - 1$ [Knu97]. The explicit permutation routing algorithm Path-Routing is presented in Algorithm 1.

Algorithm 1 Path-Routing(α)
Input: Permutation α.
Output: Simple permutations $\beta_1, \ldots, \beta_{L-1}$ such that $\alpha = \beta_{L-1} \circ \cdots \circ \beta_1$.
1: labels ← $\{\alpha^{-1}(1), \ldots, \alpha^{-1}(L)\}$
2: t ← 1
3: while t ≤ L − 1 do
4:    $\beta_t$ ← identity on $\{1, \ldots, L\}$
5:    for i ∈ $\{1, \ldots, \lfloor L/2 \rfloor\}$ do
6:        a ← 2i − 1 if t is even, else 2i
7:        b ← 2i if t is even, else 2i + 1 (skipping the pair if b > L)
8:        if label(a) < label(b) then    ▷ Swap
9:            $\beta_t(a)$ ← b
10:           $\beta_t(b)$ ← a
11:           Exchange label(a) and label(b).
12:    t ← t + 1
13: return $\beta_1, \ldots, \beta_{L-1}$

To illustrate, we consider a permutation α on the path graph on 8 vertices in Figure 8. Here, α = 6 7 2 5 3 4 8 1. We can generalize this concept and define the R-nearest-neighbor graph in 2 dimensions, which we denote by $NN_2(L, R)$. It has vertices $\{u : u = (u_x, u_y) \in [L] \times [L]\}$; two vertices $u, u'$ share an edge if $|u - u'|_2 \le R$. Our objective is to build up to a routing algorithm on this graph. Before considering this general case, we study the case where R = 1. The idea used there will be used again for general R in the next subsection.
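Algorithm 1 is an odd-even transposition routine, and the following Python sketch (our own 0-indexed variant: each pebble carries its destination as its label, and adjacent out-of-order labels are exchanged) reproduces the Figure 8 example.

```python
def path_routing(alpha):
    """Route pebbles on the path graph NN_1(L, 1) by odd-even transpositions.

    alpha[i] is the destination of the pebble starting at vertex i (0-indexed).
    Returns one list of disjoint adjacent swaps per round; the loop alternates
    even/odd pairings and terminates once every pebble has arrived.
    """
    labels = list(alpha)
    L = len(labels)
    rounds, t = [], 0
    while labels != sorted(labels):
        swaps = []
        for a in range(t % 2, L - 1, 2):   # alternate even/odd pairings
            if labels[a] > labels[a + 1]:  # out of order: exchange pebbles
                labels[a], labels[a + 1] = labels[a + 1], labels[a]
                swaps.append((a, a + 1))
        rounds.append(swaps)
        t += 1
    return rounds

alpha = [5, 6, 1, 4, 2, 3, 7, 0]  # the permutation of Figure 8, 0-indexed
print(len(path_routing(alpha)))   # 7 rounds for this example, matching depth L - 1
```

Each returned round is a simple permutation in the sense defined above: its swaps are disjoint, so they commute and can be applied in parallel.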
Routing on graph products: The main ideas we present in this subsection are techniques due to Annexstein and Baumslag [AB90] to route on Cartesian products of graphs. They showed that we can derive routing algorithms for the Cartesian product $G_1 \times G_2$ using routing algorithms for the graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$. The general routing algorithm Product-Routing presented in Algorithm 2 below applies to any two graphs $G_1$ and $G_2$ for which routing routines are known. Each $v_1 \in V_1$ defines a "row" $R_{v_1} = \{v_1\} \times V_2$, and each $u_2 \in V_2$ defines a "column" $C_{u_2} = V_1 \times \{u_2\}$. We call a permutation α of the vertices of $G_1 \times G_2$ a row permutation if the permutation respects the decomposition into rows, i.e. for all $v_1 \in V_1$, $\alpha : R_{v_1} \to R_{v_1}$. Likewise, for a column permutation, we have that for all $u_2 \in V_2$, $\alpha : C_{u_2} \to C_{u_2}$. A row or column permutation can be implemented using routing routines for $G_2$ or $G_1$ by applying the routine to each copy in the Cartesian product.

Lemma 3.1 (Annexstein & Baumslag [AB90]). For any routing $\alpha_{12}$ on $G_1 \times G_2$, there exist column permutations $\alpha_1, \alpha_1'$ and a row permutation $\alpha_2$ such that
$$\alpha_{12} = \alpha_1' \circ \alpha_2 \circ \alpha_1.$$
These permutations can all be computed in polynomial time. If the permutations $\alpha_1$ and $\alpha_1'$ require depth at most $T_1$ and $\alpha_2$ requires depth at most $T_2$, then $\alpha_{12}$ requires depth at most $2T_1 + T_2$.

We provide some intuition for this lemma. At first glance, one might expect that a row permutation followed by a column permutation ought to suffice. However, this will not always work: if two pebbles in a row share the same destination column, then no row permutation will be able to send both pebbles to the correct column. To avoid collisions, we start the procedure with an additional step. We first send pebbles to rows in which no other pebble shares the same destination column, so that the routing procedure performs a column permutation, a row permutation, and finally a column permutation. This problem will be rephrased as an edge-coloring problem where each color corresponds to the intermediate row that qubits will be routed through. We first construct a bipartite multigraph B over the vertices $(V_2 \sqcup V_2)$ with the left and right vertex sets both copies of $V_2$. To each pebble, we associate an edge between its initial column on the left and its destination column on the right. B is bipartite and has degree at most $|V_1|$, so there exists an efficiently computable edge coloring with $|V_1|$ colors [S+03], i.e. a decomposition into $|V_1|$ disjoint matchings. To each color $\tau \in [|V_1|]$, we will assign an arbitrary row. For each pebble (edge), we will first pre-route it to the assigned row (color) before completing a final routing along the rows then columns. In a valid coloring, no two edges (pebbles) of the same color (intermediate row) are incident to the same vertex (column). In the first step, this means that, for every column, each pebble has a unique intermediate destination row. Further, in the row permutation step, for every row, each pebble has a unique destination column. Finally, in the last column permutation step, each pebble is in its destination column, so, for each column, each pebble has a unique destination row. We assume we are given blackbox access to an efficient edge-coloring algorithm for bipartite graphs [S+03] and call it via a subroutine Edge-Coloring in Algorithm 2.
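The three-step decomposition can be implemented compactly. In the sketch below (our own; `peel_perfect_matching` is a simple augmenting-path matcher standing in for the blackbox Edge-Coloring subroutine, exploiting the fact that the pebble multigraph is $|V_1|$-regular so that perfect matchings can be peeled off one color at a time), a permutation of an $n_{\mathrm{rows}} \times n_{\mathrm{cols}}$ grid is factored into a column permutation, a row permutation, and a final column permutation.

```python
import random

def peel_perfect_matching(adj, n_left):
    """Kuhn's augmenting-path algorithm. adj[l] lists (r, edge_id) pairs.
    A perfect matching exists because the multigraph is regular (Hall's theorem)."""
    match_r = {}  # right vertex -> (left vertex, edge_id)
    def augment(l, seen):
        for r, e in adj[l]:
            if r not in seen:
                seen.add(r)
                if r not in match_r or augment(match_r[r][0], seen):
                    match_r[r] = (l, e)
                    return True
        return False
    for l in range(n_left):
        assert augment(l, set())
    return {l: e for (l, e) in match_r.values()}

def product_routing(alpha, n_rows, n_cols):
    """Factor alpha into column, row, column permutations (Lemma 3.1 / Algorithm 2)."""
    cells = [(r, c) for r in range(n_rows) for c in range(n_cols)]
    edges = [(v, alpha[v]) for v in cells]  # one edge per pebble: source col -> dest col
    colour = [None] * len(edges)
    for k in range(n_rows):  # the multigraph is n_rows-regular: n_rows matchings suffice
        adj = [[] for _ in range(n_cols)]
        for e, ((r0, c0), (r1, c1)) in enumerate(edges):
            if colour[e] is None:
                adj[c0].append((c1, e))
        for _, e in peel_perfect_matching(adj, n_cols).items():
            colour[e] = k  # colour = intermediate row for this pebble
    p1, p2, p3 = {}, {}, {}
    for e, ((r0, c0), (r1, c1)) in enumerate(edges):
        k = colour[e]
        p1[(r0, c0)] = (k, c0)  # column step: move to the intermediate row
        p2[(k, c0)] = (k, c1)   # row step: move to the destination column
        p3[(k, c1)] = (r1, c1)  # column step: move to the destination row
    return p1, p2, p3

random.seed(1)
cells = [(r, c) for r in range(4) for c in range(4)]
alpha = dict(zip(cells, random.sample(cells, len(cells))))
p1, p2, p3 = product_routing(alpha, 4, 4)
print(all(p3[p2[p1[v]]] == alpha[v] for v in cells))  # True
```

Because each color class is a perfect matching, the three maps are bijections within each column, row, and column respectively, which is exactly the structure Lemma 3.1 requires.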
To illustrate Algorithm 2, we describe how to obtain the permutation routing on the nearest-neighbor graph in 2 dimensions, $NN_2(L, 1)$. Noting that $NN_2(L, 1) \cong NN_1(L, 1) \times NN_1(L, 1)$, Lemma 3.1 implies that an arbitrary permutation on $NN_2(L, 1)$ can be implemented using a product of permutations on the components $NN_1(L, 1)$. Recalling that any permutation on the path graph $NN_1(L, 1)$ can be done in depth $L - 1$ implies the following corollary.

Algorithm 2 Product-Routing(α)
Input: Permutation $\alpha : V_1 \times V_2 \to V_1 \times V_2$.
Output: Column permutations $\alpha_1, \alpha_1'$ and a row permutation $\alpha_2$ such that $\alpha = \alpha_1' \circ \alpha_2 \circ \alpha_1$.
1: Initialize the bipartite graph B ← $(V_2 \sqcup V_2, \emptyset)$ with no edges.
2: for every $(v_1, u_2) \in V_1 \times V_2$ do
3:    Draw an edge e between $u_2 \in V_2$ and $u_2' \in V_2$ if $\alpha(v_1, u_2) = (v_1', u_2')$.
4: τ ← Edge-Coloring(B)
5: for each edge $e = (v_1, u_2)$ do
6:    $\alpha_1(v_1, u_2)$ ← $(\tau(e), u_2)$
7:    $\alpha_2(\tau(e), u_2)$ ← $(\tau(e), u_2')$
8:    $\alpha_1'(\tau(e), u_2')$ ← $(v_1', u_2')$

Corollary 3.2. Any permutation α on $NN_2(L, 1)$ can be performed in depth $3L - 3$.

Proof. $NN_2(L, 1) \cong NN_1(L, 1) \times NN_1(L, 1)$, and we can route on $NN_1(L, 1)$ using an even-odd sorting network (Algorithm 1). Using Algorithm 2, we have that the total number of steps is $3(L - 1)$. Figure 9 shows an example of a permutation of such a lattice using Algorithm 2.

Permutation routing given long-range gates

In this subsection, we show how to route on $NN_2(L, R)$. We do this by finding graph approximations: subgraphs of our original graph that we can route on nearly as well. We will approximate $NN_2(L, R)$ using a two-step approach: first we show that the complete graph times a 2-dimensional nearest-neighbor graph approximates $NN_2(L, R)$ well; we then show that a sparse graph approximates the complete graph well. Together, this will result in a circuit with sparse connectivity that exploits long-range connectivity of range R.

The complete graph & sparse approximations: The complete graph $K_m$ is a graph on $m$ vertices with edges between every pair of vertices. Any permutation α on $K_m$ can trivially be accomplished in depth 2. Using a complete graph will simplify some of the analysis in this section. However, $K_m$ is a dense graph; in turn, the corresponding syndrome-extraction circuit we construct from it will require that qubits are involved in a super-constant number of two-qubit gates. To avoid this problem, we replace $K_m$ by a sparse graph in exchange for a modest increase in the depth of permutations. We state some facts about sparse approximations to the complete graph $K_m$. Consider a $d$-regular graph G on $m$ vertices whose adjacency matrix has largest eigenvalue $\lambda_1 = d$; if the remaining eigenvalues $\{\lambda_i\}_{i=2}^m$ satisfy $|\lambda_i| \le \lambda < d$, then G is said to be an $(m, d, \lambda)$-spectral expander.

Fact 3.4 ([Fri08]). For even $d \ge 4$ and any $\varepsilon > 0$, a random $d$-regular graph is an $(m, d, \lambda)$-spectral expander for $\lambda = 2\sqrt{d - 1} + \varepsilon$ with high probability. Further, spectral expansion can be efficiently certified by diagonalizing the adjacency matrix, so there is an efficient randomized algorithm that always returns a graph with the desired properties.

This fact establishes that a random regular graph is a good spectral expander with high probability. For $d = 4$, such a graph can be defined on any even number of vertices, so this family is extremely flexible. For convenience, we will set $d = 4$ and $m$ even. We will call a random 4-regular graph picked in this way on $m$ vertices $E_m$. The next fact concerns routing on spectral expanders in an efficiently computable manner.

Fact 3.5 ([ACG93]). Let G be an $(m, d, \lambda)$-spectral expander. Then, any permutation $\sigma : [m] \to [m]$ can be performed in depth $O\!\left(\frac{d^2}{(d - \lambda)^2} \log^2(m)\right)$.
Taken together, we can replace $K_m$ by a random 4-regular subgraph $E_m$ on which we can route in depth $O(\log^2(m))$. For our purposes, we assume that the routing algorithm for these sparse graphs can be accessed in a black-box manner. Now we are prepared to move on to implementing permutations on $NN_2(L, R)$. The depth we will find is nearly optimal, even when $R > 1$. For convenience, let us assume that $L/R$ is an integer. First, note that at distances shorter than R, $NN_2(L, R)$ locally "looks" like a complete graph on R vertices. We can leverage this to find a spanning subgraph of $NN_2(L, R)$ that is a product of graphs that we know how to route on: $K_R$ and $NN_2(L/R, 1)$.

Lemma 3.6. If R divides L, $NN_1(L/R, 1) \times K_R$ is a spanning subgraph of $NN_1(L, R)$.

Proof. Using the coordinates $[L/R] \times [R]$ for $NN_1(L/R, 1) \times K_R$ and $[L]$ for $NN_1(L, R)$, we can map the vertices of $NN_1(L/R, 1) \times K_R$ to those of $NN_1(L, R)$ using the bijection
$$\eta : [L/R] \times [R] \to [L], \qquad (a, b) \mapsto (a - 1)R + b.$$
Away from the boundary, the neighbors of an arbitrary vertex $(a, b)$ of $NN_1(L/R, 1) \times K_R$ are $(a \pm 1, b)$ and $(a, [R] \setminus \{b\})$. A vertex $(a, b)$ at a boundary is adjacent to the vertices $(a, [R] \setminus \{b\})$ and one of $(a - 1, b)$ or $(a + 1, b)$, whichever is in the graph. All elements of these sets are at most a distance R from $(a, b)$ under η, so each is a valid edge in $NN_1(L, R)$.

Clearly, $NN_1(L, R) \times NN_1(L, R)$ is a spanning subgraph of $NN_2(L, R)$, given by retaining only those edges connecting vertices within a single row or column.

Corollary 3.7. Any permutation on $NN_2(L, R)$ can be performed in depth $3L/R + 9$.

Proof. Denote $NN_1(L/R, 1) \times K_R$ by H. By Lemma 3.6, H is a spanning subgraph of $NN_1(L, R)$, and $NN_1(L, R) \times NN_1(L, R)$ is a spanning subgraph of $NN_2(L, R)$. It follows that $H \times H$ is a spanning subgraph of $NN_2(L, R)$, so any simple permutation for $H \times H$ is a simple permutation for $NN_2(L, R)$. Permutations on $K_R$ and $G_2 = NN_1(L/R, 1)$ can be implemented in depth 2 and $L/R - 1$, respectively. Setting $G_1 = K_R$ and $G_2 = NN_1(L/R, 1)$ in Lemma 3.1, any permutation on H can therefore be implemented in depth $(L/R - 1) + 2 \cdot 2 = L/R + 3$. Invoking Lemma 3.1 again with $G_1 = G_2 = H$, we can implement any permutation on $H \times H$ and hence $NN_2(L, R)$ in depth $3L/R + 9$.

Note that even though there are many edges that we do not use, the lowest-depth routing can be no better than the graph diameter of $NN_2(L, R)$, which is roughly $\sqrt{2}L/R$, so this is nearly optimal. Furthermore, owing to the translation invariance of $NN_2(L/R, 1)$, the embedding of $K_R \times K_R \times NN_2(L/R, 1)$ is also translation invariant: far from the boundaries, the graph remains the same locally under translation by R units in the vertical and horizontal directions. Now, $NN_2(L, R)$ is not sparse: the degree of each vertex grows as $R^2$. For practical purposes, it would be convenient if only a sparse subgraph of $NN_2(L, R)$ were used in the routing routine. In Corollary 3.7, we used only the edges contained in a subgraph $NN_1(L/R, 1) \times K_R \times NN_1(L/R, 1) \times K_R$. $NN_1(L/R, 1)$ is sparse, but $K_R$ is not. However, we can replace $K_R$ by a sparse expander graph so that the subgraph we use is sparse. Fact 3.5 supplies such a family of graphs and a depth-$O(\log^2 R)$ routing subroutine.
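The embedding η of Lemma 3.6 is easy to sanity-check numerically. The sketch below (our own) maps every edge of $NN_1(L/R, 1) \times K_R$ through η and confirms it lands on an edge of $NN_1(L, R)$, i.e. the endpoints differ by at most R.

```python
from itertools import combinations

L, R = 24, 4
assert L % R == 0

def eta(a, b):
    """Lemma 3.6: (a, b) in [L/R] x [R]  ->  (a - 1) * R + b in [L] (1-indexed)."""
    return (a - 1) * R + b

edges = []
for a in range(1, L // R + 1):
    for b, b2 in combinations(range(1, R + 1), 2):
        edges.append(((a, b), (a, b2)))          # K_R edges within a block
    if a < L // R:
        for b in range(1, R + 1):
            edges.append(((a, b), (a + 1, b)))   # path edges between blocks

# Every image edge must connect vertices at distance <= R, hence lie in NN_1(L, R).
print(all(abs(eta(*u) - eta(*v)) <= R for u, v in edges))  # True
```

The path edges map to pairs at distance exactly R, which is why the SWAP range R cannot be shrunk without losing the block structure.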
We now bring these ideas together formally in the following corollary.

Corollary 3.8. For R even, there is an efficiently constructible degree-12 spanning subgraph of the 2-dimensional R-nearest-neighbor lattice $NN_2(L, R)$ on which any permutation can be performed in depth $3L/R + O(\log^2 R)$.

Proof. We replace the fully connected graph used in Corollary 3.7 with a sparse expander. Consider a 4-regular random graph $E_R$ generated according to Fact 3.4. By Fact 3.5, we can route on $E_R$ in depth $O(\log^2 R)$. Now consider the graph $H = E_R \times NN_1(L/R, 1)$. $E_R$ is a spanning subgraph of $K_R$, so it follows by Lemma 3.6 that H is a spanning subgraph of $NN_1(L, R)$. Further, using Lemma 3.1, we can implement any permutation on H in depth $L/R + O(\log^2 R)$, so we can also implement any permutation on $H \times H$ in depth $3L/R + O(\log^2 R)$. By an argument identical to Corollary 3.7, we have that $H \times H$ is a spanning subgraph of $NN_2(L, R)$. Further, $H \times H$ has vertex degree 12, since the maximum vertex degree of a product of graphs is the sum of the maximum vertex degrees of the factors. This subgraph is illustrated in Figure 10 (b).

Later, we will use the contents of the corollary to construct syndrome-extraction circuits with two-qubit gates of range at most R and where each qubit only needs to interact with a constant number of other qubits.

Bilayer implementation of hierarchical codes

In this section, we will prove Theorem 1.2. We restate it here for convenience.

Theorem (Theorem 1.2). The $[[N, K, D]]$ hierarchical code $H_N$ is constructed by concatenating an outer code, a constant-rate $[[n, k, d, \Delta_q, \Delta_g]]$ quantum LDPC code $Q_n$, and an inner code, a rotated surface code $RS_\ell$ where $d_\ell = \Theta(\log(n))$. Let $\rho > 0$ and $\delta \ge 1/2$ be such that $k = \rho \cdot n$ and $d = \Theta(n^\delta)$. The code $H_N$ has parameters
$$K(N) = \Theta\!\left(\frac{N}{\log^2(N)}\right), \qquad D(N) = \Omega\!\left(\frac{N^\delta}{\log^{2\delta}(N)}\right).$$
There exists an explicit and efficient construction of an associated family of syndrome-extraction circuits $C_{H_N}$ using only local Clifford operations and SWAP gates of range R such that
$$\mathsf{W}(C_{H_N}) = \Theta(N), \qquad \mathsf{T}(C_{H_N}) = O\!\left(\frac{\sqrt{N}}{R}\right).$$

We first define the hierarchical code family $\{H_N\}$ and corresponding syndrome-extraction circuits $\{C_{H_N}\}$. The element of this family indexed by $N = N(n) = n \cdot d_\ell^2$ is created by concatenating an outer LDPC code $Q_n$ with an inner surface code $RS_\ell$, where $\ell = \Theta(\log(n))$. Recall that $\ell^2 = 2d_\ell^2 - 1$ is the total number of qubits used to construct the rotated surface code $RS_\ell$. The justification for this choice of $\ell$ will follow in the next section. To express $n$ in terms of $N$, the following bounds will be useful:
$$n = O\!\left(\frac{N}{\log(N)}\right), \qquad n = \Omega\!\left(\frac{N}{\log^2(N)}\right). \tag{11}$$
It follows from the definition of a concatenated code that the number of encoded qubits is $K = k(n)$ and the code distance is $D = d(n) \cdot d_\ell$ (see Section 2.3). As $\delta \ge 1/2$, $1 - 2\delta \le 0$. Using Equation (11), we can write
$$K = \Omega\!\left(\frac{N}{\log^2(N)}\right), \qquad D(N) = \Omega\!\left(\frac{N^\delta \log^{1-2\delta}(N)}{\log(N)}\right) = \Omega\!\left(\frac{N^\delta}{\log^{2\delta}(N)}\right). \tag{12}$$
This proves the first part of Theorem 1.2. The rest of this section is dedicated to constructing the syndrome-extraction circuit $C_{H_N}$ for the concatenated code with the stated parameters. This circuit is constructed in a bilayer architecture and is described in detail below. A bilayer construction of the syndrome-extraction circuit $C_{Q_n}$ is described in Section 4.1. To obtain $C_{H_N}$, each outer qubit in the syndrome-extraction circuit $C_{Q_n}$ is replaced by a copy of the inner code $RS_\ell$ as described in Section 2.3. If $C_{Q_n}$ requires $\mathsf{W} = \mathsf{W}(C_{Q_n})$ qubits, then we need a layout for $\mathsf{W}$ surface codes in 2 dimensions. In Section 4.2, we propose an implementation of $RS_\ell^{\otimes \mathsf{W}}$ using a bilayer 2-dimensional architecture.
The advantage of this architecture is that entangling gates between codes can be performed in a transversal manner, which reduces the number of extra ancilla qubits. In Section 4.3, we describe a novel implementation of SWAP gates for this architecture. This completes the set of logical Clifford operations $K_1$. We will bring these elements together to construct the syndrome-extraction circuit $C_{H_N}$ in Section 4.5. Before doing so, we take a brief detour in Section 4.4 to design Level-1 qubits with noise bias. We will return to this construction in Section 6 to deal with hook errors.

Syndrome-extraction circuit $C_{Q_n}$ for the outer code

In this section, we design a family of syndrome-extraction circuits $\{C_{Q_n}\}$ for a constant-rate $[[n, k, d, \Delta_q, \Delta_g]]$ LDPC code $\{Q_n\}$. We assume $k = \rho \cdot n$ for $\rho > 0$ and that $m = n - k$ is the number of stabilizer generators. In Section 2.1, we described measurement gadgets to measure each stabilizer generator. We now describe how these gadgets can be implemented in parallel subject to constraints on geometric locality. We first state the existence of an ideal circuit $(C_{Q_n})_{\mathrm{ideal}}$ which is not constrained by geometric locality. While we include the proof of this construction for the sake of completeness, we note that the idea and the proof itself have been used before; see, for example, [DBT21]. For this reason, the proof is relegated to Appendix B. Define constants Δ and $m_0$ such that
$$\Delta := \max(\Delta_q, \Delta_g), \qquad m_0 := \max(m_X, m_Z).$$
The circuit $(C_{Q_n})_{\mathrm{ideal}}$ is divided into two phases, where in each phase we measure either X or Z syndromes. Each phase requires at most $(\Delta + 2)$ stages. It satisfies
$$\mathsf{W} := \mathsf{W}[(C_{Q_n})_{\mathrm{ideal}}] = n + m_0, \qquad s := \mathsf{T}[(C_{Q_n})_{\mathrm{ideal}}] = 2(\Delta + 2). \tag{13}$$
It proceeds as follows:
1. In the first step of both phases, all $m_0$ ancilla qubits are prepared in the state $|+\rangle$.
2. In each intermediate step $1 < t \le \Delta + 1$, there is a subset $P_t$ of all $\mathsf{W}$ qubits such that $P_t$ is a disjoint union of $m_0$ pairs of qubits, where each pair contains one ancilla and one data qubit respectively.
(a) In the first phase, these pairs correspond to control and target qubits for CNOT.
(b) In the second phase, these pairs correspond to control and target qubits for CZ.
3. In the last step of both phases, all $m_0$ ancilla qubits are measured in the X basis.
We now use this circuit to construct the syndrome-extraction circuit $C_{Q_n}$ that is constrained by geometric locality. It will have the same space footprint $\mathsf{W}$, but its depth will be different.

Setup: Qubits are arranged in two parallel layers where each layer is a grid of dimensions L × L. We assume we have access to Clifford operations K where, in addition to nearest-neighbor gates between two qubits in the same layer, we can perform nearest-neighbor gates between two qubits that are adjacent but in different layers. We also assume that SWAP gates of range $R > 1$ are restricted to a single layer.

Initialization: To accommodate all $\mathsf{W}$ qubits required for $(C_{Q_n})_{\mathrm{ideal}}$, it is sufficient to choose the smallest integer L that satisfies $2L^2 \ge \mathsf{W}$. Initially, data qubits and syndrome qubits are distributed arbitrarily. While further optimization is likely possible, this will not affect the asymptotics, and certainly results in an upper bound on the circuit volume.

Partition $C_{Q_n}$ into stages: The circuit $C_{Q_n}$ will be partitioned into $s$ stages, where in each stage, we prepare and measure ancilla qubits or simulate long-range entangling gates between pairs of qubits specified by $P_t$ (a toy construction of such pair sets is sketched below).
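The pair sets $P_t$ can be produced by greedily coloring the (ancilla, data qubit) incidences of the Tanner graph: each generator must touch every qubit in its support in some stage, and no ancilla or data qubit may appear twice within a stage. The sketch below is our own illustration; a greedy assignment like this may exceed Δ stages in the worst case, whereas the construction in Appendix B achieves the bound of Equation (13).

```python
def schedule_stages(supports):
    """Greedily assign each (check, qubit) pair to a stage.

    supports[b] = list of data qubits in the support of stabilizer generator b.
    Returns stages[t] = set of (ancilla b, data qubit a) pairs, disjoint per stage.
    """
    stages = []
    for b, qubits in enumerate(supports):
        for a in qubits:
            for pairs in stages:  # first stage where both b and a are still free
                if all(b != b2 and a != a2 for b2, a2 in pairs):
                    pairs.add((b, a))
                    break
            else:
                stages.append({(b, a)})
    return stages

# Toy example: three overlapping weight-3 checks on 6 data qubits.
supports = [[0, 1, 2], [2, 3, 4], [4, 5, 0]]
for t, pairs in enumerate(schedule_stages(supports)):
    print(t, sorted(pairs))  # 3 stages; each stage reuses no ancilla or data qubit
```

In the geometrically-constrained circuit, each such stage is then realized by routing the ancilla of every pair next to its data qubit, as described next.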
To simulate a long-range entangling gate, we use a series of SWAP gates which bring each pair specified by $P_t$ close together, followed by the desired entangling gate once they are sufficiently close. The preparation and measurement stages are straightforward. We now describe how to perform the long-range entangling gates.

Simulating long-range gates: In each simulation stage, qubits are arranged such that the two qubits in each pair of $P_t$ are adjacent but in different layers. In the first step of each stage, we apply a permutation to the ancilla qubit in each pair of $P_t$ to ensure that the two qubits in the pair are not in the same layer. Qubits that are not in $P_t$ remain stationary. For simplicity, we only permute qubits in the top layer and keep the bottom layer stationary. This specifies a permutation $\alpha_t$ on the top layer. As this layer is an L × L lattice, we can use Algorithm 2 to design a circuit so that SWAP operations can be performed in parallel. Using Theorem 1.1, we can construct a sparse spanning subgraph of $NN_2(L, R)$ with vertex degree 12 such that any permutation $\alpha_t$ can be accomplished in depth at most $3L/R + O(\log^2(R))$. This is followed by nearest-neighbor entangling gates as specified by $P_t$. The simulation stages have depth
$$3\frac{L}{R} + O(\log^2(R)) + 1. \tag{14}$$
Accounting for preparation and measurement steps in each phase, the circuit $C_{Q_n}$ has parameters
$$\mathsf{W}(C_{Q_n}) = \mathsf{W} = n + m_0, \qquad \mathsf{T}(C_{Q_n}) \le 2\Delta\left(3\frac{L}{R} + O(\log^2(R)) + 1\right) + 4. \tag{15}$$
We note that if $R = o(L)$, then the depth is $O(\sqrt{n}/R)$ (as $L = O(\sqrt{n})$). We shall assume that this is the case for the rest of the paper. We include the bound in Equation (15) with constants (e.g. the 3 preceding $L/R$) and the dependence on the degree Δ to highlight that, for $R = 1$, this is not merely an asymptotic result and can actually be executed in practice. The bounds in Equation (15) represent an achievability result: any quantum LDPC code can be simulated in depth $O(\sqrt{n}/R)$ as stated in the theorem above. However, it is not asymptotically tight for all code families (for example, consider the surface code). We expect future versions of this bound to depend on $k$ and $d$, and how they scale as functions of $n$.

Circuit connectivity: In addition to providing a bound on the depth of permutations, Theorem 1.1 guarantees that each lattice position interacts with at most 12 other locations, all within a range R. This implies that the connectivity of the circuit $C_{Q_n}$ we have constructed can be 'static': once qubits have been connected by wires of length at most R, we do not change the wiring afterwards.

Implementation of the inner code

We have shown that $C_{Q_n}$ is constructed using two parallel layers of qubits, where each layer is a lattice of dimensions L × L. Here L is the smallest integer such that $2L^2 \ge \mathsf{W}$, where $\mathsf{W}$ is the number of qubits used by $C_{Q_n}$. To construct the syndrome-extraction circuit $C_{H_N}$, we use two parallel rotated lattices. Each qubit in $C_{Q_n}$ is replaced by a rotated surface code $RS_\ell$ where $\ell = \Theta(\log(n))$. As each tile uses $\ell^2 = 2d_\ell^2 - 1$ physical qubits, the circuit $C_{H_N}$ requires at least $2L^2 \cdot \ell^2$ physical qubits. We also use additional physical qubits, which we refer to as buffer qubits, to facilitate logical Clifford operations between tiles. See Figure 11. Buffer qubits are either placed along the periphery of each lattice or between tiles:
1. First, we include a thin band of "buffer" qubits along the perimeter of each layer for reasons that we will explain shortly. The band has thickness $(\ell + 1)/2$ and therefore adds at most $2L(\ell + 1)^2$ ancilla qubits per layer.
These are denoted as transparent dots in Figure 11 (a).

2. Second, for each surface code, we have d_ℓ² data qubits, d_ℓ² − 1 ancilla qubits, and 1 extra buffer qubit (light gray) for later convenience. These are denoted using dark gray, orange/blue and light gray respectively in Figure 11 (b).

In total, accounting for both the qubits used in tiles and the buffer qubits, we use at most 2(ℓ + 1)²(L + 1)² physical qubits. These are arranged in two parallel (rotated) lattices of side length (L + 1) · (ℓ + 1).⁸

⁸ Note that there are √2·L qubits to a side due to the rotated lattice.

Figure 11: A small 2 × 2 × 2 unit cell of the physical layout containing 8 distance-3 rotated surface code tiles. Physical qubits are drawn as dark gray dots. X and Z type stabilizer generators within a tile are indicated by a light or dark gray region with the colored dot used as an ancilla. Thin lines indicate gate connectivity: each qubit has 5 neighbors, 4 in-plane and 1 out-of-plane. The light gray qubit in the center is unused when the tiles are idle. Note that this layout does not contain additional ancilla qubits between tiles for lattice surgery: all operations will be performed transversally.

Physical gates can act either on two neighboring qubits in the same layer, or on adjacent qubits in different layers. Using only K_0 operations, we construct the necessary primitives to implement the syndrome-extraction circuit C_{H_N}. As described in Section 4.1, the circuit C_{Q_n} is divided into s stages. Implementing the syndrome-extraction circuit for the outer code requires logical Clifford operations K_1. At the outset, both data and ancilla tiles are arranged arbitrarily. Syndromes are measured in two phases: first we measure the X-type syndromes and then the Z-type syndromes. The first and last stages of each phase correspond to single-tile logical state preparation and single-tile logical measurements.

Single-tile logical operations of state preparation and measurement can be done using only nearest-neighbor gates. If Level-0 qubits can be prepared in |0⟩ or |+⟩, then we can prepare Level-1 |0⟩ and |+⟩ simply by performing the syndrome-extraction circuit, which projects the state into the code space. Similarly, we can perform destructive measurements of the logical Pauli operators X and Z using single-qubit measurements of X and Z.

For each stage where we simulate long-range entangling gates, there exists a partition of outer qubits P_t; here, P_t is a set of m_0 = max(m_X, m_Z) pairs, where each pair has one outer ancilla qubit and one outer data qubit respectively. Depending on whether we are measuring X or Z syndromes, we perform the appropriate logical entangling gate using the outer ancilla qubit as control and the outer data qubit as target. Data and syndrome tiles that are involved in entangling gates are always arranged such that the tiles are adjacent but in different layers. As the rotated surface code is a CSS code, we can perform entangling gates between data and ancilla qubits using transversal operations. For all these operations (single-tile preparation, single-tile measurement, transversal CNOT and CZ) we perform surface code error correction after the operation.

To complete the description of the Level-1 syndrome-extraction circuit, it only remains to explain how the logical SWAP operation is implemented. We propose a novel way to perform this gate when R < ℓ; this is the focus of Section 4.3. We show that an arbitrary permutation requires O(L · d_ℓ) steps. A short sketch tallying the physical footprint and the depth bound of Equation (15) follows.
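The following sketch collects these counts. It is illustrative only: the function names are ours, the ceiling applied to L/R, and the constant `c_log` standing in for the unspecified O(log²(R)) term of Equation (15) are assumptions.

```python
import math

# Sketch of the resource counts above; `c_log` is a placeholder constant for
# the O(log^2 R) term in Equation (15).

def bilayer_footprint(W, d_l):
    """Physical-qubit budget of the bilayer layout for outer footprint W."""
    L = 1
    while 2 * L * L < W:                        # smallest L with 2*L^2 >= W
        L += 1
    l = math.ceil(math.sqrt(2 * d_l ** 2 - 1))  # tile side, l^2 ~ 2*d_l^2 - 1
    per_tile = d_l ** 2 + (d_l ** 2 - 1) + 1    # data + ancilla + buffer qubits
    total = 2 * (l + 1) ** 2 * (L + 1) ** 2     # upper bound incl. peripheral buffers
    return L, per_tile, total

def depth_bound(L, R, Delta, c_log=1.0):
    """Equation (15)-style depth bound for the outer circuit C_Qn."""
    perm = 3 * math.ceil(L / R) + (c_log * math.log2(R) ** 2 if R > 1 else 0)
    return 2 * Delta * (perm + 1) + 4

L, per_tile, total = bilayer_footprint(W=1000, d_l=9)
print(L, per_tile, total, depth_bound(L, R=4, Delta=8))
```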
Error correction for SWAP gates is performed in an interleaved manner: we perform a single round of error correction after each step, as described below. We conclude Section 4.2 by discussing connectivity requirements. As mentioned in the introduction, we construct syndrome-extraction circuits such that once two lattice positions have been connected, the connection does not need to change dynamically over the course of the circuit. Furthermore, each lattice position only ever interacts with a constant-sized set of other lattice locations.

For both error correction and logical operations, lattice positions (that store a physical qubit) will be involved in CNOT and CZ gates. It would be preferable if the pairs of lattice positions that need to interact did not change dynamically over the course of the circuit, and instead could be chosen ahead of time. If the circuit C_{Q_n} is implemented such that each lattice position requires connectivity only to a constant-sized set of other lattice positions, then the entire syndrome-extraction circuit C_{H_N} for the hierarchical code H_N will only use sparse connectivity of two-qubit physical gates. The proof of this claim is straightforward for single-tile logical state preparation and measurement: these are accomplished using local physical operations. Secondly, by construction, logical entangling gates are implemented transversally and therefore the connectivity does not change. Finally, we will show in Section 4.3 that for a given L and R, the connectivity required to implement an arbitrary permutation of tiles can be chosen ahead of time and will not change dynamically.

SWAP Gate

As discussed above, we can perform logical CNOT and CZ transversally. To complete K_1, the final ingredient we need is the SWAP gate. We first focus on the special case R = 1, and then generalize the construction to arbitrary R. To implement the permutation returned by Algorithm 2, it suffices to perform SWAP gates only along one orientation of the lattice at a time, either vertically or horizontally. This restriction is exploited to create a resource-efficient SWAP operation that requires no additional ancilla qubits. The key insight is that movement of individual tiles may be accomplished by moving all the tiles within a single layer. By performing the transversal SWAP after movement and then moving back, we can accomplish a SWAP operation between two surface codes that are not directly on top of each other.

Nearest-neighbor logical SWAP gates

High-level overview: We first provide a high-level overview of the SWAP operation and refer to Figure 12. Consider two parallel rows of tiles in the bilayer architecture as shown in Figure 12 (a). The tiles in the top row are labeled a_1, ..., a_4 and the tiles in the bottom row are labeled b_1, ..., b_4. The tiles labeled ∅ are buffer qubits along the periphery. For ease of visualization, the picture depicts a single-tile width of buffer qubits along the top layer. In practice, we use two half-tile widths of buffer qubits in both layers. In this example, we swap tiles a_2 and a_3; however, this process can be generalized to swap tiles in parallel. This is accomplished in 5 steps (see also the sketch after Figure 12's caption below):

1. The logical SWAP operation begins by exchanging alternate tiles between the two rows. In the bilayer architecture, this exchange is performed in a checkerboard pattern, i.e. we swap alternate tiles along both rows and columns. This can be accomplished using what we call the staggered SWAP primitive.

2. We then slide the entire top layer one tile width to the left.
In the bilayer architecture, this will be accomplished using what we call the walking primitive that we describe below. The top layer will shift a half-tile width in one direction while the bottom layer will move a half-tile width in the other. This is the reason we use buffer qubits along the periphery: to accommodate tiles after the walk step.

3. Pairs of tiles that we wish to exchange are now adjacent in different layers. For each pair that we wish to swap, we perform a swap operation using nearest-neighbor SWAP gates between adjacent layers.

4. The last two steps are the inverse of the first two steps: we apply the walk primitive and then perform a swap operation between layers on alternate tiles.

Figure 12: Consider two parallel rows of tiles a_1, ..., a_4 and b_1, ..., b_4 in different layers, one on top of another. Tiles are depicted as squares. The tiles with the label ∅ represent a tile width of buffer qubits on the periphery of the top layer. This example demonstrates how to swap two tiles a_2 and a_3. To begin, we swap alternate tiles in each row as shown in (a). We then use the walk operation to move tiles one unit as shown in (b). For every pair of tiles we wish to exchange, we perform an inter-layer SWAP as shown in (c). In this example, we only wish to swap tiles a_2 and a_3, so the other tiles remain stationary. We then undo the transformation by reversing the walk in (d) and undoing the alternate exchange in (e). The final panel (f) is the desired state. (The six panels (a)-(f) show the resulting tile sequence after each step.)

Walking primitive: By placing a half-tile-wide strip of buffer qubits on the periphery of the lattice, we can "walk" the entire memory by swapping the physical-level ancilla qubits and surface code data qubits (Figure 14).⁹ Using this walking primitive, we can move an entire layer a full tile width in depth 2d_ℓ using only SWAP gates. This is a global operation. We will use this primitive in two ways: first, to implement a transversal SWAP between two tiles that are in different layers in the staggered-SWAP primitive (explained below); second, repeatedly, to move an entire layer half a tile width in some direction.

⁹ Not to be confused with the extra buffer qubit per tile.

Staggered SWAP primitive: When possible, we would like to avoid applying gates directly between data qubits of surface code blocks, as this would introduce (small) extra correlations in the logical failure probability of tiles. Instead of a direct transversal swap between data blocks, the vertical SWAP can be performed between data qubits in one layer and syndrome qubits in the other layer. This is accomplished via the staggered SWAP primitive. See Figure 13.

Figure 13: Performing a staggered SWAP operation between layers. Physical qubits are arranged such that data (ancilla) qubits in the top layer are adjacent to ancilla (data) qubits in the bottom layer. Each qubit in the top layer is swapped with the qubit immediately below it. This allows us to exchange tiles between layers without two data qubits directly interacting with each other.

By default, the qubits in the two layers are positioned such that a data qubit in the top layer is above a data qubit in the bottom layer. This facilitates performing logical CNOT gates via transversal physical CNOT gates.
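Before detailing the stagger operation, here is a minimal abstract check that the five steps above compose to a swap, mirroring the a_2 ↔ a_3 example of Figure 12. It is our illustration, not the paper's physical-level implementation: tiles are plain labels, buffers are None, and the column indices chosen for the staggered exchange are assumptions tied to this particular layout.

```python
# Abstract sketch of the 5-step logical SWAP on a strip of tiles.
# Assumption: buffer columns at both ends; the pair to exchange (a2, a3)
# occupies adjacent columns of the top layer.

def exchange_layers(top, bottom, cols):
    """Swap tiles between layers at the given columns (inter-layer SWAP)."""
    for c in cols:
        top[c], bottom[c] = bottom[c], top[c]

top    = [None, 'a1', 'a2', 'a3', 'a4', None]
bottom = [None, 'b1', 'b2', 'b3', 'b4', None]

exchange_layers(top, bottom, cols=[2, 4])   # 1. staggered exchange of alternate tiles
top = top[1:] + [None]                      # 2. walk the top layer one unit left
exchange_layers(top, bottom, cols=[2])      # 3. inter-layer SWAP of the target pair
top = [None] + top[:-1]                     # 4. reverse the walk
exchange_layers(top, bottom, cols=[2, 4])   # 5. undo the staggered exchange

assert top    == [None, 'a1', 'a3', 'a2', 'a4', None]  # a2 and a3 exchanged
assert bottom == [None, 'b1', 'b2', 'b3', 'b4', None]  # bottom layer restored
```

Tiles not selected in step 3 return to their original positions, which is why the same sequence implements identity and swap in parallel across the lattice.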
We can perform a stagger operation using the walking primitive. This positions data qubits in the top layer above ancilla qubits in the bottom layer. We can then apply a transversal SWAP between layers, and undo the stagger operation if need be. Over the course of the Level-1 logical SWAP, however, we only need to undo the stagger operation at the very end. Throughout the logical SWAP, the two layers remain staggered. If syndrome qubits are reset before use in a syndrome-extraction round, the surface code blocks have undergone a somewhat complicated idle operation with no correlated errors generated between the two surface code blocks.

Level-1 logical SWAP: We are ready to describe the logical SWAP operation. We begin by exchanging every other tile. This procedure is illustrated in Figure 15; it can be compared to Figure 12. Steps 1 and 3 (half-step shifts) are performed globally, with step 2A (vertical swap) or 2B (half-step shifts) performed according to whether a logical swap or a logical identity is scheduled for a given tile. Step 2B is necessary for the logical identity gate, because step 2A would otherwise leave data qubits of adjacent tiles directly next to each other instead of separated by an ancilla qubit. In this way, syndrome extraction can optionally be performed after every layer of SWAP gates.

Figure 15: 3-step transversal swap implementation between two stacked syndrome tiles that avoids directly swapping data qubits. The pair of tiles on the right undergoes an identity operation while the pair of tiles on the left is swapped. In step 1, the top layer is shifted by a half-unit using SWAP gates on the top layer. In step 2, the pairs of tiles are either swapped using vertical SWAP gates (2A) or shifted using horizontal SWAP gates to keep alignment (2B). Finally, in step 3, the lower layer is shifted back by a half-unit using SWAP gates in the bottom layer. This operation has the property that syndrome extraction can be performed in all three timesteps. The perspective is inclined slightly to show both layers.

To summarize, our SWAP gate is implemented via the five steps above, where we account for the depth of each operation. At every step, we have the necessary ancilla qubits to perform surface code syndrome extraction. At first, we might think to perform d_ℓ rounds of error correction after each of the 5 steps above. However, because we are working with transversal SWAP operations, we only perform a single round of error correction after each step. Ideal SWAP gates do not spread errors, and therefore the SWAP operation can be seen as a syndrome-extraction circuit on each tile with a higher failure rate. While performing just a single round of error correction may reduce the threshold, we expect this change to be minimal. Furthermore, it reduces the depth of the logical SWAP operation, so our tile SWAP gate takes 2d_ℓ + 9 steps of physical SWAP gates.

Recall from Section 4.1 that the syndrome-extraction circuit C_{Q_n} is split into s = 2∆ + 4 stages. Besides the preparation and measurement stages, we simulate a long-range CNOT between pairs of data and ancilla qubits in each stage. The stage begins by picking an element of each pair and ensuring that they are in different layers. We can then use the logical SWAP described here to permute tiles. Note that the SWAP operations can be performed in parallel between tiles on the same layer; the SWAP operations exchange tiles in the same row or column as required by the routing algorithm presented in Algorithm 2. We can use Lemma 3.1 to show that any permutation of tiles on an L × L lattice can be accomplished in 3L − 3 steps. This allows any desired permutation on the L × L × 2 lattice of tiles to be accomplished in depth (2d_ℓ + 9)(3L − 3).

In fact, we can optimize this further to avoid repeating redundant operations. For all but the first and last swap operations: 1) steps 1 and 5 can be omitted; 2) within step 3, we may omit the walking step that offsets the upper and lower layers by a half lattice site, so the staggered SWAP becomes a simple transversal swap. Using these optimizations, any permutation on the L × L × 2 lattice of tiles can be accomplished in depth t_route where

t_route := (2d_ℓ + 1)(3L − 3) + 8.   (16)

Logical permutation routings

We restrict our attention to range-R SWAP gates in a single layer. Interlayer operations are strictly nearest-neighbor gates, and can be accomplished using the primitives discussed in Section 4.3.1. In the following lemma, we show that an arbitrary permutation routing of tiles can be accomplished in depth O(L·ℓ/R).

Lemma 4.1. Any permutation routing of tiles on the bilayer lattice can be accomplished in depth O(L·ℓ/R).

Proof. We proceed in two cases, R ≥ ℓ and R < ℓ.

Case 1: R ≥ ℓ. Level-0 SWAP gates of range R can be used to implement Level-1 transversal gates of range R_1 = R/ℓ. We can route on the Level-1 lattice NN_2(L, R_1) using Corollary 3.8 with Level-1 tiles swapped transversally. This guarantees that the depth of any permutation routing of tiles is O(L/R_1). Each transversal SWAP is followed by a single round of syndrome extraction of the rotated surface code; this requires constant depth and does not affect the depth of permutation routing.

Biased-noise qubits

In Section 6.4.1, we will be interested in suppressing certain kinds of Pauli errors. When a qubit experiences X or Z errors at asymmetric rates, it is said to be noise-biased. In this section, we explain how to introduce such a noise bias on Level-1 qubits by modifying the bilayer architecture. Let η ≥ 1 be the desired noise bias of the Level-1 qubits, and suppose Z errors occur with a probability p and are η times more likely to occur than X or Y errors. We can introduce a noise bias η > 1 on Level-1 qubits by elongating the surface code into rectangular regions where the minimum-weight X logical operator is longer than the minimum-weight Z logical operator. See Figure 16. Suppose we have a rectangular surface code patch of dimensions d_X by d_Z such that the minimum-weight X logical operator has weight d_X and the minimum-weight Z logical operator has weight d_Z. Considering the minimum-weight logical operators, we expect a failure rate ∝ p^{d_X/2} in the X basis and ∝ p^{d_Z/2} in the Z basis. By taking d_X > d_Z, we can introduce a noise bias. For simplicity, we assume d_X = d_Z + log(η)/log(p) to guarantee a bias of at least η. We also assume that all Level-1 qubits, data and ancilla both, have been biased.

Figure 16: Creating a biased Level-1 qubit by using a rectangular surface code. In this picture d_X = 7 and d_Z = 3, so that d_X = d_Z + 4. The bias is therefore η = O(p^{-2}).

We update the bilayer architecture to use two L_X × L_Z grids of rotated surface code tiles with X and Z distances d_X and d_Z respectively. Consider a constant-rate LDPC code Q_n with rate ρ. To accommodate W outer qubits, we let L be defined by 2L² = W. We then define L_Z and L_X to be the smallest integers satisfying L_Z ≥ L · d_X/d_Z and L_X ≥ L · d_Z/d_X.
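The sketch below evaluates these sizing rules. The rounding, the reading of the increment as log(η)/log(1/p) (so that it is positive for p < 1), and the function name are our assumptions; the formulas themselves are the ones quoted above.

```python
import math

# Sketch of the biased-tile sizing above. Assumption: we read the text's
# d_X = d_Z + log(eta)/log(p) as an increment of log(eta)/log(1/p), and take
# ceilings to stay integral.

def biased_dimensions(d_Z, eta, p, L):
    d_X = d_Z + math.ceil(math.log(eta) / math.log(1 / p))
    L_Z = math.ceil(L * d_X / d_Z)   # smallest integers meeting the bounds above
    L_X = math.ceil(L * d_Z / d_X)
    return d_X, L_X, L_Z

# Example: target bias eta = 10^4 at p = 10^-2 on an L = 23 grid.
print(biased_dimensions(d_Z=3, eta=1e4, p=1e-2, L=23))
```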
It is not sufficient that the qubits themselves are noise-biased. In addition, we require that all gates preserve this bias. We consider each Clifford operation in turn:

1. Preparation & measurement: Within the syndrome-extraction circuit for the outer code, where all measurement ancilla qubits are prepared in the |+⟩ state (Section 2.1), X errors on the ancilla qubits are suppressed by a factor of η.

2. Entangling gates: In the bilayer architecture, entangling gates are performed transversally. Transversal gates are naturally bias-preserving.

3. SWAP gates: The way we perform SWAP operations does not change, as all tiles have the same dimensions. The buffer region on the periphery of the lattice must be slightly increased to accommodate the elongated surface code tiles during the walking operation. SWAP operations themselves do not spread errors and are also bias-preserving.

Together, this completes the requirements for implementing logical Clifford operations K_1.

Syndrome-extraction circuits for hierarchical codes

The syndrome-extraction circuit C_{H_N} for H_N is the circuit C_{Q_n} where we replace each outer qubit by a tile RS_ℓ. In the circuit C_{H_N}, each gate of C_{Q_n} from the set K is replaced by the corresponding element in K_1, followed by surface code error correction on each outer qubit. Recall that in Section 4.2, we discussed how to perform preparation, measurement and logical entangling gates. In Section 4.3, we discussed how to perform SWAP gates.

Theorem 4.2. Each element H_N has an associated 2-dimensional syndrome-extraction circuit C_{H_N} with the following properties:

W(C_{H_N}) = Θ(N),   T(C_{H_N}) = O(√N / R).

Further, each lattice position in C_{H_N} only interacts with a fixed set of other lattice positions whose size is independent of N.

Proof. Consider the family of ⟦n, k, d, ∆_q, ∆_g⟧ quantum LDPC codes {Q_n} of constant rate ρ > 0. From Section 4.1, W(C_{Q_n}) = Θ(n). To construct C_{H_N}, each qubit in C_{Q_n} is replaced by a surface code RS_ℓ. It follows that

W(C_{H_N}) = Θ(ℓ²) · Θ(n) = Θ(N).   (17)

Secondly, each K_1 operation in C_{H_N} requires depth Θ(ℓ) for error correction. This is because entangling gates are implemented transversally followed by d_ℓ rounds of error correction and, per Lemma 4.1, the Level-1 logical SWAP operation requires depth O(L · ℓ/R) = O(√N/R). This implies that

T(C_{H_N}) = O(√N / R).   (18)

By assumption, logical two-tile operations in K_1 can be implemented such that the set of lattice positions that interact with each other remains fixed. Furthermore, the construction in Section 4.3 fixes the connectivity needed to permute tiles ahead of time, so each lattice position interacts with a constant-sized set of positions. The result follows.

Overhead, threshold and asymptotics

In this section, we prove that the hierarchical code has a threshold if we use the syndrome-extraction circuits C_{H_N} presented in Section 4. We present a formal version of Theorem 1.3. We recall the bound by Delfosse et al. in Equation (1):

T(C_{Q_n}) = Ω(n / √W(C_{Q_n})).   (1)

According to this bound, the physical circuit C_{Q_n} cannot have constant depth and space footprint simultaneously. This blowup in the volume of the circuit introduces additional failure modes. Consequently, saturating these bounds by no means ensures the existence of a threshold. This then seems to have defeated the purpose of simulating a non-local circuit using local operations. In this section, we present a geometrically-local construction of a circuit C_{H_N} that encodes a growing number of encoded qubits and guarantees that a threshold exists. We use code concatenation to define a family {H_N} that we call hierarchical codes. The N-th element of this family is obtained by concatenating an LDPC code Q_n and a rotated surface code RS_ℓ of size Θ(ℓ²). Here N = Θ(n · ℓ²).
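As a quick sanity check on these parameters, the sketch below tracks N and the rate as functions of n. It is illustrative: `c_l` is a placeholder constant for the Θ(log(n)) choice of ℓ justified later in this section.

```python
import math

# Sketch: bookkeeping for the hierarchical code H_N. `c_l` stands in for the
# Theta(log n) constant; N = Theta(n * l^2) and the vanishing rate
# k/N = O(1/log(n)^2) follow the counts quoted in the text.

def hierarchical_params(n, rho=0.1, c_l=2.0):
    l = math.ceil(c_l * math.log(n))
    k = math.floor(rho * n)
    N = n * l * l
    return l, N, k / N

for n in (10**3, 10**5, 10**7):
    print(n, hierarchical_params(n))
```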
In Section 5.1, we study the failure rate per round p_round and establish its dependence on p_phys for a syndrome-extraction circuit for any ⟦n, k, d, ∆_q, ∆_g⟧ quantum LDPC code. The circuits are themselves faulty and are described by a locally decaying faults model. We show in Section 5.2 that logical gates in the bilayer architecture guarantee that Level-1 logical errors in surface code blocks are suppressed exponentially following logical Clifford operations. This allows us to deal with the Level-1 syndrome-extraction circuit for the outer code directly, without having to keep track of Level-0 failure probabilities. Permuting tiles can introduce Level-1 correlated errors among the tiles. In Section 5.3, we show that there exists a choice of ℓ such that p^(1)_round is an arbitrarily small constant. We can then invoke Gottesman's threshold theorem, which we discussed in Section 2.4, to prove the existence of a threshold.

We conclude this overview by highlighting some features of this construction.

1. We show that ℓ = Θ(log(n)) is sufficient to achieve a threshold for the outer code. If the code Q_n has constant rate, then the code H_N has rate O(log(n)^{-2}) → 0 as n → ∞.

2. Although the outer and inner codes are both LDPC codes, H_N is itself no longer an LDPC code: it uses stabilizer measurements that have weight log(n), as the side length of the inner surface codes is ℓ = Θ(log(n)). However, the logical operators of the surface code can be measured (destructively) using only single-qubit measurements.

3. There is a cost to locality: the sub-threshold scaling of the logical failure rate is qualitatively different from the typical exponential error suppression as a function of the distance. Instead, we see a strictly sub-exponential, but still superpolynomial, suppression of the logical failure rate with the distance.

Evolution of errors in the syndrome-extraction circuit C_{Q_n}

Consider an ⟦n, k, d, ∆_q, ∆_g⟧ LDPC code family {Q_n} with constant rate, i.e. k = ρ · n for some constant ρ ∈ (0, 1), and distance d = Θ(n^δ) for some constant δ ∈ (0, 1]. In Section 2.4, we discussed how Gottesman's proof of the existence of a threshold for LDPC codes depends on the number of errors per round of syndrome extraction on both data and ancilla qubits. In this section, we study how the probability of errors per round depends on the circuit C_{Q_n}.

Recall the circuit C_{Q_n} described in Section 4.1. Qubits are arranged on two parallel lattices where each lattice has dimensions L × L. Here, L is the smallest natural number that satisfies 2L² ≥ W. In total the circuit has s = 2∆ + 4 stages, which can be broken down as follows. The syndrome-measurement circuit C_{Q_n} is divided into two phases, one for each of X- and Z-type syndrome measurements. Each phase is further divided into ∆ + 2 stages, where ∆ = max(∆_q, ∆_g). In addition to one stage each to prepare and measure ancillas, there are ∆ stages where we simulate long-range entangling gates. In each such stage, we permute qubits in the lattice using SWAP gates. Given access to SWAP operations of range R, the permutation has bounded depth T_perm = O(L/R). This is then followed by nearest-neighbor entangling gates.

Our first result is technical and allows one to compose locally decaying distributions.

Lemma 5.1. Let Pr_1 and Pr_2 be two independent locally decaying distributions on [n] with rates p_1 and p_2.
Consider the distribution Pr : Pow(n) → R defined as

Pr(E) = Σ_{E_1, E_2 ⊆ [n] : E ⊆ E_1 ∪ E_2} Pr_2(E_1) · Pr_1(E_2).   (19)

Then Pr is a locally decaying distribution with rate p = p_1 + p_2, i.e. for all E ⊆ [n],

Pr(E) ≤ (p_1 + p_2)^{|E|}.   (20)

Proof. For E ⊆ [n], we can write

Pr(E) = Σ_{E_1, E_2 ⊆ [n] : E ⊆ E_1 ∪ E_2} Pr_1(E_1) Pr_2(E_2)   (21)
 ≤ Σ_{E_1, E_2 ⊆ E : E = E_1 ⊔ E_2} Pr_1(E_1) Pr_2(E_2)   (22)
 ≤ Σ_{w=0}^{|E|} (|E| choose w) p_1^w p_2^{|E|−w}   (23)
 = (p_1 + p_2)^{|E|}.   (24)

The result follows.

Lemma 5.2. Let e ∈ F_2^n be a random binary vector such that E = supp(e) is distributed according to a locally decaying distribution with rate p. Let M ∈ F_2^{m×n} with row and column weight at most ∆, and let f = M · e be the random variable induced from e. Then F = supp(f) is distributed according to a locally decaying distribution with rate 2^∆ · p^{1/∆}.

Proof. For a set of bits F ⊆ [m],

Pr(F) ≤ Σ_{E ⊆ [n] : F ⊆ M(E)} Pr(E),   (25)

where M(E) denotes the support of the image of E under M; i.e. in order for F to have occurred, there must be a set of errors on the input such that F is in the image. We can rewrite this sum in terms of the largest subset I ⊆ Pow(n) of the power set such that for any single set E ∈ I:

1. We have that F ⊆ M(E).
2. For all non-empty subsets G ⊂ E, F ⊄ M(E \ G).

Each element of I is minimal in the sense that it is a subset of no other element of I (Assumption 2) while still having F in its image (Assumption 1). The second condition allows us to replace Pr(·) with the total probability in the sum without loosening the upper bound. Additionally, the column weight of M is at most ∆, so the size of each element E ∈ I is at least |F|/∆. Using the locally decaying distribution assumption yields

Pr(F) ≤ Σ_{E ∈ I} Pr(E)   (26)
 ≤ |I| · p^{|F|/∆}.   (27)

It remains to count the number of elements of I. Let J ⊆ [n] be the preimage of F in the sense that for every element e in J, the intersection of M({e}) with F is not empty. The row weight of M is at most ∆, so J is no larger than |F|·∆. Every set E in I must satisfy E ⊆ J, or else there would be some element a ∈ E such that F ⊆ M(E \ {a}) (contradicting Assumption 2 on I). Finally, there are 2^{|J|} subsets of J, so |I| ≤ 2^{|J|} ≤ 2^{|F|∆}. Continuing with the bound, we have that

Pr(F) ≤ |I| · p^{|F|/∆}   (28)
 ≤ 2^{|F|∆} · p^{|F|/∆}   (29)
 = (2^∆ · p^{1/∆})^{|F|}.   (30)

The result follows.

The syndrome-extraction circuit C_{Q_n} has a special structure: errors do not spread from one data qubit to another or from one ancilla qubit to another. We show that this implies that D and A are distributed according to a locally decaying distribution. Before doing so, we review the symplectic representation formalism, which we use in the following proofs.

The symplectic representation: For W ∈ N, consider any Pauli operator P ∈ P_W and suppose it is expressed as P = X(p_x)Z(p_z) for p_x, p_z ∈ F_2^{W×1}. Clifford unitary operators U map Pauli operators to Pauli operators under conjugation, i.e. U P U† is a Pauli operator. Equivalently, this can be represented as a linear map on p_x and p_z. Corresponding to U, there exists a matrix M ∈ F_2^{2W×2W}¹⁰ such that the action of U on P can equivalently be expressed as

(p_x, p_z)ᵀ → M · (p_x, p_z)ᵀ (mod 2).   (31)

All arithmetic on symplectic vectors is performed modulo 2; we drop the 'mod 2' suffix in the equations that follow.

¹⁰ The matrix M has additional structure (it is symplectic [Got97]), but this is not relevant for this proof.

Recall that the W = W(C_{Q_n}) qubits in C_{Q_n} are partitioned into data qubits and ancilla qubits respectively. Controlled-P gates for P ∈ P only use the ancilla qubits as control and data qubits as target.
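The following numpy sketch illustrates the map f = M · e from Lemma 5.2 and evaluates the bound of Equation (30) for a sampled pattern. It is illustrative only: the random sparse matrix enforces column weight at most ∆ but not row weight, and the rates are placeholders.

```python
import numpy as np

# Sketch of Lemma 5.2: propagate an input error e through a sparse binary
# matrix M over F_2 and evaluate the (2^Delta * p^(1/Delta))^{|F|} bound.
# Assumptions: the toy M only constrains column weight; p is a placeholder.

rng = np.random.default_rng(7)
m, n, Delta, p = 6, 8, 3, 1e-3

M = np.zeros((m, n), dtype=np.uint8)
for j in range(n):                      # at most Delta ones per column
    rows = rng.choice(m, size=rng.integers(1, Delta + 1), replace=False)
    M[rows, j] = 1

e = (rng.random(n) < 0.2).astype(np.uint8)   # a sampled input error pattern
f = M @ e % 2                                # induced pattern f = M * e (mod 2)
F = np.flatnonzero(f)
bound = (2 ** Delta * p ** (1 / Delta)) ** len(F)
print("supp(f) =", F, " locally decaying bound:", bound)
```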
Let d_x, d_z ∈ F_2^{n×1} and a_x, a_z ∈ F_2^{m_0×1} represent the Pauli operators D and A on data and ancilla qubits respectively. For the purposes of understanding how errors accumulate over one round of syndrome measurements, we are not interested in the physical locations of the qubits. As far as their action on D and A is concerned, we treat SWAP gates as (noisy) idle gates.¹¹ In any given time step of C_{Q_n} where we apply entangling gates, all qubits interact with the same type of gate (CNOT or CZ) or remain idle. The corresponding symplectic matrices have a very special form. We can write the joint evolution of D and A under the Clifford transformation acting on the X- and Z-components separately:

1. If qubits are only involved in CNOT operations that use ancilla qubits as control qubits and data qubits as target qubits, then there exists a matrix M ∈ F_2^{m_X×n} such that

(d_x, a_x) → (d_x + Mᵀ · a_x, a_x),   (d_z, a_z) → (d_z, a_z + M · d_z).   (32)

For every pair of qubits indexed by i ∈ [m_0] and j ∈ [n] that are the control and target of a CNOT, the (i, j) entry of M is 1. The other entries are 0. In this setting, we note that a_x and d_z remain invariant.

2. If qubits are only involved in CZ operations that use ancilla qubits as control qubits and data qubits as target qubits, then there exists a matrix N ∈ F_2^{m_Z×n} such that

(d_x, a_x) → (d_x, a_x),   (d_z, a_z) → (d_z + Nᵀ · a_x, a_z + N · d_x).   (33)

For every pair of qubits indexed by i ∈ [m_Z] and j ∈ [n] that are the control and target of a CZ, the (i, j) entry of N is 1. The other entries are 0. In this setting, we note that d_x and a_x remain invariant.

In the symplectic representation, we can see that the structure of a syndrome-extraction circuit is special because in each phase where we measure either X or Z syndromes, there is always an invariant subspace (for example, d_x, a_z when measuring X-type syndromes).

The induced error model: In the symplectic representation, a faulty Clifford operation can be expressed as an affine map: there exist random variables b_x, b_z ∈ F_2^{W×1} such that the noisy operation can be expressed as

(q_x, q_z)ᵀ = M · (p_x, p_z)ᵀ + (b_x, b_z)ᵀ.   (34)

The errors b_x, b_z are caused by faults. The faults themselves are distributed according to the locally decaying distribution F with failure rate p_phys. Let X, Z be the induced distributions over b_x and b_z. For example, X(b_x) represents the sum of the probabilities over all events where the error is (b̃_x, b̃_z) such that supp(b̃_x) ⊇ supp(b_x). In other words, it represents the total probability that the error has a non-trivial X component on supp(b_x). When the circuit C is composed of elements from K, we can say more about the induced distributions X and Z.

Lemma 5.3. Consider a Clifford circuit of depth 1 composed of elements from K. The induced total probabilities X, Z are locally decaying distributions with failure rate √p_phys.

Proof. We shall prove this statement for the distribution X; the proof for the distribution Z is identical. Fix an arbitrary vector b_x ∈ {0, 1}^W.

¹¹ Colloquially, SWAP gates change the locations of qubits in physical space, not in 'math' space. For instance, suppose each qubit has a label 1, ..., n and we choose to represent the vector p_x as (p_x(1), ..., p_x(n)), where p_x(i) represents the Pauli operator on the i-th qubit. Then moving qubits around in physical space using SWAP gates does not affect the i-th component p_x(i). For this reason, we ignore the action of SWAP on p_x and p_z.
Suppose a fault F̃ results in some error b̃_x such that supp(b̃_x) ⊇ supp(b_x). This implies that F̃ must obey supp(F̃) ⊇ supp(b_x). Let F be the smallest set of fault locations such that supp(F) ⊇ supp(b_x). Because the circuit C has depth 1 and is composed entirely of only 1- and 2-qubit gates, we have |F| ≤ |b_x| ≤ 2|F|. By definition, the total probability of the fault F is F(F), and F is a locally decaying distribution with failure rate p_phys. Therefore

X(b_x) ≤ F(F) ≤ (p_phys)^{|F|}   (35)
 ≤ (√p_phys)^{|b_x|}.   (36)

The result follows.

We are now ready to study p_round and its dependence on C_{Q_n}. To set the stage, we first consider ideal syndrome extraction in the absence of circuit faults. We focus our attention on the extraction of X-type syndromes and note that the analysis for the Z-type syndromes is identical. Consider a corrupted code state E|ψ⟩ where ψ is a code state and E = X(e_x)Z(e_z) is some Pauli operator. If the syndrome-extraction circuit C_{Q_n} has no faults, the joint state of the data and ancilla qubits after the circuit is described by

E|ψ⟩ ⊗ Z(σ_X)|+⟩^{⊗m_X},   (37)

where σ_X represents the ideal syndromes for the X-type stabilizer generators. In this setting, we can use Equation (32) to update the X- and Z-components of Pauli operators under the action of CNOT. Initially, the X and Z components of the state E|ψ⟩ ⊗ |+⟩^{⊗m_X} can be expressed as (e_x | 0), (e_z | 0), where 0 is the all-zeros vector of length m_X. The vector 0 means that we assume that the input to the circuit is the state E|ψ⟩ ⊗ |+⟩^{⊗m_X}; preparation faults on the ancilla occur in the first time step. For 1 < t < ∆ + 2, we apply CNOT gates specified by a matrix M^(t) ∈ F_2^{m_X×n}. If we do not apply an entangling gate (i.e. when we SWAP qubits), then M^(t) is a matrix of zeros. Otherwise, the (i, j) entry of M^(t) is 1 if and only if the i-th syndrome qubit and the j-th data qubit are involved in a CNOT gate in the t-th time step. In the absence of circuit faults, the X components of the error (e_x | 0) are left unaffected during the phase where we measure X-type syndromes. On the other hand, the Z-components transform as

(e_z, 0) → (e_z, Σ_t M^(t) · e_z).   (38)

The vector Σ_t M^(t) · e_z is the X-type syndrome σ_X. In other words, Σ_t M^(t) =: H_X is the symplectic representation of the X-type stabilizer generators. Note that H_X is a sparse matrix with at most ∆_q ones per row and ∆_g ones per column.

Next, we move on to the setting where circuit components are faulty. The final state of the data and ancilla qubits differs from Equation (37) because of circuit faults. We express it as

(D ⊗ A) E|ψ⟩ ⊗ Z(σ_X)|+⟩^{⊗m_X},   (39)

where D and A represent errors due to faults in the circuit C_{Q_n} on the data qubits and ancilla qubits respectively. The symplectic representation of the final Pauli operator on the data and ancilla qubits is

(e_x + d_x, a_x),   (e_z + d_z, σ_X + a_z).   (40)

The probability of errors per round, p_round, is the maximum failure rate over the distributions describing d_x, d_z, a_x, and a_z.

Theorem 5.4. The induced distributions X and Z that govern the errors D ⊗ A are locally decaying distributions with failure rate p_round, where

p_round ≤ 2^{∆+1} · T(C_{Q_n}) · (p_phys)^{1/(2∆+2)},

where ∆ = max(∆_q, ∆_g) determines the number of stages in the circuit C_{Q_n}.

Proof. Recall that the circuit C_{Q_n} proceeds in two phases, with the first phase used to measure X-type syndromes and the second phase used to measure Z-type syndromes.
For brevity, let T_X and T_Z be the depths of the circuit C_{Q_n} corresponding to each phase; this means T(C_{Q_n}) = T_X + T_Z. Here, we focus on the first phase of C_{Q_n}, which is used to measure X-type syndromes, and study the evolution of Z-type errors; the proofs of the remaining three cases are identical and for this reason we omit them. Let b_x^(t), b_z^(t) ∈ {0, 1}^n and c_x^(t), c_z^(t) ∈ {0, 1}^{m_X} be the errors on data and ancilla qubits induced by faults caused at time t. In turn, these errors can spread to other qubits and interact with errors at later times. Using Equation (32) repeatedly, we can write the final error (e_z + d_z, σ_X + a_z) in terms of the errors at each step as follows:

(e_z + d_z, σ_X + a_z) = (e_z + Σ_t b_z^(t),  Σ_t M^(t)·e_z + Σ_t M^(t)·Σ_{t'<t} b_z^(t') + Σ_t c_z^(t)).   (41)

As we are only measuring X-type syndromes, all sums are over time steps t in the first phase of the circuit. For time steps t where we do not apply a CNOT, all entries of M^(t) are 0. We can simplify Equation (41) by eliminating e_z and σ_X = Σ_t M^(t)·e_z:

(d_z, a_z) = (Σ_t b_z^(t),  Σ_t M^(t)·Σ_{t'<t} b_z^(t') + Σ_t c_z^(t)).   (42)

While this is a straightforward consequence of the linear evolution under symplectic transformations, being able to write d_z and a_z without e_x and e_z means that the Z-components of the errors d_z and a_z do not depend on the input error E. Furthermore, the special structure of the syndrome-extraction circuit is reflected here: d_z is simply the sum of the errors b_z^(t) caused by faulty gates at each step. In other words, Z errors on data qubits are not affected by Z errors on ancilla qubits.

We simplify this further using two observations. First, we will find it useful to reorder the sums within Equation (42) so that the errors introduced at each time step are acted on by a single block matrix; this yields Equation (48). H_X has row and column weight at most ∆, so the block matrix that appears in Equation (48) has column weight at most ∆ + 1. By Lemma 5.2, each term in the union is distributed according to a locally decaying distribution with failure rate 2^{∆+1} · (p_phys)^{1/(2(∆+1))}. Finally, Lemma 5.1 allows us to bound the failure rate of the composition of independent locally decaying distributions. This, in turn, is an upper bound on the rate of the locally decaying distribution Z over (d_z, a_z). The union extends over the T_X time steps of the phase used to measure X-type syndromes. Applying Lemma 5.1 repeatedly, we find Z is a locally decaying distribution with failure rate

2^{∆+1} · T_X · (p_phys)^{1/(2(∆+1))}.   (49)

By an identical argument, the X errors are distributed according to a locally decaying distribution with rate 2^{∆+1} · T_X · (p_phys)^{1/(2(∆+1))}. In turn, this means that the induced distributions X and Z are locally decaying distributions with failure rate 2^{∆+1} · T_X · (p_phys)^{1/(2(∆+1))}. Repeating the same analysis for the Z-type syndrome measurements, we find that the X and Z distributions describing induced errors are locally decaying distributions with failure rate

2^{∆+1} · T_Z · (p_phys)^{1/(2(∆+1))}.   (50)

We can use Lemma 5.1 again to bound the failure rate for the entire circuit C_{Q_n}. As T(C_{Q_n}) = T_X + T_Z, we arrive at the result that X and Z are locally decaying distributions with failure rate p_round where

p_round = 2^{∆+1} · T(C_{Q_n}) · (p_phys)^{1/(2(∆+1))}.   (51)
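The bound of Theorem 5.4 is easy to evaluate numerically, as in the sketch below; the function names are ours, and the example values are arbitrary. With ∆ = 4 and T = 12, the prefactor is 2⁵ · 12 = 384 and the exponent is 1/10, which are exactly the constants that reappear in Equation (53) below.

```python
# Sketch: evaluating the Theorem 5.4 bound
#   p_round <= 2^(Delta+1) * T * p_phys^(1/(2*(Delta+1)))
# and inverting it to ask how small p_phys must be for a target p_round.

def p_round_bound(Delta, T, p_phys):
    return 2 ** (Delta + 1) * T * p_phys ** (1 / (2 * (Delta + 1)))

def required_p_phys(Delta, T, p_round_target):
    # invert: p_phys <= (p_round / (2^(Delta+1) * T))^(2*(Delta+1))
    return (p_round_target / (2 ** (Delta + 1) * T)) ** (2 * (Delta + 1))

print(p_round_bound(Delta=4, T=12, p_phys=1e-30))
print(required_p_phys(Delta=4, T=12, p_round_target=1e-2))
```

The extremely small required p_phys illustrates how loose this worst-case bound is; as noted below, the constant can very likely be reduced.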
When qubits are arranged on an L × L lattice, the circuit depth T(C_{Q_n}) is O(√n/R). If gates are constrained by geometric locality, i.e. R = o(L), then the depth of the circuit C_{Q_n} grows with the code size n. However, for the existence of a threshold, we require p_round to be some fixed constant. We therefore only achieve a threshold if the physical failure probability vanishes as the size of the code increases:

p_phys = O(1 / T(C_{Q_n})^{2(∆+1)}).   (52)

However, if we use a concatenated construction, where the outer code is the constant-rate LDPC code Q_n and the inner code is a surface code RS_ℓ, then we can make p_phys decrease exponentially with the size of the inner code. We study this in the next section. Finally, we comment that the factor 2^{∆+1} that appears in Theorem 5.4 can very likely be reduced. However, this particular version of the theorem is sufficient for our purposes, namely to prove the existence of a threshold for the hierarchical scheme. For readers interested in applying the hierarchical scheme to the real world, we estimate the logical failure rate of the hierarchical scheme numerically in Section 6.

Coarse-graining concatenated circuits

In the next two sections, we analyze the concatenated code by applying Gottesman's theorem, described in Section 2.4, to both the inner code and the outer code. In this section, we apply it to the inner code; for W = W(C_{Q_n}), the inner code RS_ℓ^{⊗W} is itself an LDPC code. In Section 5.3, we will apply Gottesman's theorem to the outer code. In Section 2.3, we described how we cannot ignore the details of the Level-0 syndrome-extraction circuit in a concatenated code. In this section, we show that if logical gates on surface codes are performed as described in Section 4, then they are fault tolerant. We show the existence of a threshold q^(0)_phys such that if the failure rate per round is below q^(0)_phys, then we can directly study Level-1 operations and ignore Level-0 operations.

Consider an input state ρ_in ∈ (RS_ℓ)^{⊗W} in the bilayer architecture. Let Level-0 faults on the syndrome-extraction circuit be distributed according to a locally decaying distribution with failure rate p^(0)_phys. The failure rate per round on the data qubits and the syndrome qubits is the same because data and syndrome qubits both interact with 4 other qubits. Let q^(0)_in, q^(0)_round be the thresholds for surface code error correction as defined in Section 2.5. Suppose we are below threshold. Then after error correction, tiles that have not failed are described by a locally decaying Level-0 error model with failure rate p^(0)_round. Theorem 5.4 guarantees that the failure rate per round grows with the depth of the syndrome-extraction circuit; it also depends on the degree of the qubits and stabilizer generators. If we measure X and Z syndromes separately, the depth of the syndrome-extraction circuit is at most 12. The degree of the qubits and stabilizers is 4. Using Theorem 5.4, we can bound the failure rate per round of surface code syndrome extraction:

p^(0)_round < 384 · (p^(0)_phys)^{1/10}.   (53)

This bound can be much better: for example, X and Z syndromes can be measured in parallel which, in turn, can reduce the depth of the circuit; we can also likely reduce the constant 384 in front of p_round. However, we continue to use the bound in Equation (53) for simplicity. We can use Theorem 5.4 to show that the logical operations for the bilayer architecture are fault tolerant. We argue that both the Level-0 and Level-1 failure rates after the operation are constant.

Theorem 5.5. Let C be a circuit on a state ρ_in ∈ RS_ℓ^{⊗W} such that each tile is involved in at most one logical gate in K_1.
Tiles that have not suffered a logical error are described by a locally decaying error model with Level-0 input failure rate p^(0)_round. There exists a threshold q^(0)_phys such that, for p^(0)_phys < q^(0)_phys, each tile fails after the logical operation with a Level-1 error probability of at most exp(−c_EC · ℓ), while tiles that have not failed remain described by a locally decaying distribution with failure rate p^(0)_round.

Proof. Let ρ_in ∈ RS_ℓ^{⊗W} be a noisy code state with Level-0 errors described by a locally decaying distribution with failure rate p^(0)_round < q^(0)_round.

Entangling gates: Entangling gates between data and ancilla blocks are performed in a transversal manner. Errors due to faults in the transversal gate are distributed according to a locally decaying distribution with failure rate p^(0)_phys. Lemma 5.1 shows that the input to error correction is a state with Level-0 errors described by a locally decaying distribution with failure rate p^(0)_round + p^(0)_phys. Error correction is successful if

p^(0)_round + p^(0)_phys < q^(0)_in,   p^(0)_round < q^(0)_round.   (55)

SWAP gates: Assume that the Level-0 failure rate is p^(0)_round. The logical SWAP operation is decomposed entirely in terms of physical SWAP operations. As these are non-entangling operations, the error distribution is a locally decaying distribution with failure rate p^(0)_phys. We can use Lemma 5.1 to find the effective failure rate per round. This is equal to the sum of the failure rate per round of syndrome extraction and the failure rate of the SWAP gate itself. Note that because the SWAP gate has larger depth, we perform more than d_ℓ rounds of syndrome extraction. Therefore, the failure rate per round on both data and ancilla qubits is p^(0)_round + p^(0)_phys. Error correction is successful if

p^(0)_round < q^(0)_in,   p^(0)_round + p^(0)_phys < q^(0)_round.   (56)

Logical measurement of Pauli operators: We wish to measure logical operators on tiles that represent Level-1 ancilla qubits. Consider a state with Level-0 errors distributed according to a locally decaying distribution with failure rate p^(0)_round. We first study the logical measurement of a single tile. To destructively measure the logical X (Z) operator on a single tile, we can measure each of the physical qubits in the X (Z) basis. This is permitted by our available operations in K_0. Faults on measurements are distributed according to a locally decaying distribution with rate p^(0)_phys. We can now study all tiles that undergo measurement. As measurements on each tile are performed separately, this induces a Level-1 measurement error with probability exp(−c_EC · ℓ). In the meantime, tiles that represent data qubits remain idle for 1 time step. As we assume idle errors are distributed according to a locally decaying distribution with failure rate p^(0)_phys, error correction on these tiles is successful if

p^(0)_round + p^(0)_phys < q^(0)_in.   (57)

Combining requirements for all operations: We can use Equation (53) to state p^(1)_phys = exp(−c_EC · ℓ).

The syndrome-extraction circuit C_{H_N} has a threshold

In this section, we prove that the hierarchical code H_N has a threshold if we measure syndromes using the circuit C_{H_N}. We first review the construction and the corresponding assumptions on failure rates. Thus far, we have simply stated the relationship between ℓ and n, i.e. that ℓ = Θ(log(n)), without justification. We show in Lemma 5.6 that letting the inner code have size ℓ = Θ(log(n)) is indeed sufficient to achieve an arbitrarily small, but constant, Level-1 failure rate per round p^(1)_round. We bring these elements together in Theorem 5.7 to show that the hierarchical construction has a threshold. Recall that the hierarchical code H_N is constructed by concatenating an outer ⟦n, k, d, ∆_q, ∆_g⟧ constant-rate LDPC code {Q_n} and an inner ⟦d_ℓ², 1, d_ℓ⟧ code RS_ℓ.
The family Q_n has parameters k = ρ · n for ρ > 0 and distance d = Θ(n^δ) for δ > 0. Qubits are laid out on a bilayer architecture as described in Section 4. Physical qubits are aggregated to form W(C_{Q_n}) rotated surface codes RS_ℓ; these form 2L² tiles, where L is the smallest integer satisfying 2L² ≥ W(C_{Q_n}). The product code RS_ℓ^{⊗W} is itself an LDPC code. The tiles will be used to simulate the long-range entangling gates required to perform the syndrome-extraction circuit C_{Q_n} for the outer code. Single-tile preparation and measurement, and two-tile entangling gates, are described in Section 4.2; Level-1 SWAP gates and permutations of tiles were described in Section 4.3. Recall that q^(0)_phys ∈ (0, 1] was defined in Section 5.2. Per Theorem 5.5, if the input state has Level-0 errors described by a locally decaying distribution with failure rate below threshold, then Level-1 logical failures are exponentially suppressed in ℓ.

For the outer code to have a threshold, we require that the ⟦n, k, d, ∆_q, ∆_g⟧ LDPC code family {Q_n} has a syndrome-extraction circuit such that p^(1)_round remains a sufficiently small constant, as discussed in Section 2.4. In the following lemma, we show that ℓ = Θ(log(n)) is sufficient to achieve this.

Lemma 5.6. Suppose Level-1 faults on the syndrome-extraction circuit C_{H_N} are distributed according to a locally decaying distribution with failure rate p^(1)_phys. Then, for an arbitrarily small constant ε > 0, p^(1)_round < ε can be achieved using ℓ = Θ(log(n)).

Proof. From Theorem 5.4, the failure rate per round scales as

p^(1)_round = 2^{∆+1} · T(C_{Q_n}) · (p^(1)_phys)^{1/(2(∆+1))},   (59)

where ∆ = max(∆_q, ∆_g) is some constant for a fixed family Q_n. We want p^(1)_round to be an arbitrarily small constant ε > 0. Per Theorem 5.5, Level-1 faults are distributed according to a locally decaying distribution with failure rate p^(1)_phys = exp(−c_EC · ℓ), which implies that the requirement on p^(1)_phys can be satisfied by choosing ℓ = Θ(log(n)). In particular, there exists a threshold q^(1)_round for the outer code Q_n for the syndrome-extraction circuit C_{Q_n}. This can be achieved for some ℓ such that ℓ = Θ(log(n)).

Theorem 5.7. There exists a choice of ℓ such that ℓ = Θ(log(n)), and thresholds q^(0)_in, q^(0)_phys and q^(1)_in, such that if

max(p^(0)_in, p^(0)_round) < q^(0)_in,   p^(0)_phys < q^(0)_phys,   p^(1)_in < q^(1)_in,

then the following is true. With probability at least 1 − p_H(N), the state after error correction is correctable by an ideal decoder where, for some positive number c_H that is independent of N,

p_H(N) < exp(−c_H · N^δ / log^{2δ}(N)).

Furthermore, the residual errors are distributed according to a locally decaying distribution with failure rates p^(0)_round and p^(1)_round.

Proof. By definition, the assumptions are sufficient to perform Level-1 logical gates and surface code error correction as per Theorem 5.5. Next, the LDPC code has thresholds q^(1)_in and q^(1)_round (see Section 2.4). The input state has Level-1 errors described by a locally decaying distribution with failure rate p^(1)_in. For the syndrome-extraction circuit on the outer LDPC code to be successful, we require p^(1)_in < q^(1)_in. Finally, we require that the Level-1 failure rate per round is below the corresponding threshold:

p^(1)_round < q^(1)_round.   (60)

From Lemma 5.6, this can be achieved using ℓ = Θ(log(n)). By definition, syndrome extraction is successful if the ideal decoder R_H is able to recover the final state. The code H_N fails if the outer LDPC code Q_n fails, i.e. the probability of failure is p_H(N) = p_Q(n) = exp(−c_Q · d(n)) = exp(−c_Q · Θ(n^δ)).
Using Equation (11), we can express the probability of failure p_H(N) in terms of N:

p_H(N) < exp(−c_H · N^δ / log^{2δ}(N))   (61)

for some positive number c_H that is independent of N. Residual errors are distributed according to locally decaying distributions:

1. on Level-1, with failure rate p^(1)_round; this is guaranteed by Gottesman's result applied to the outer code;
2. on Level-0, with failure rate p^(0)_round; this is guaranteed by Theorem 5.5.

The result follows.

We reiterate that p_H(N) is an upper bound on the failure rate for the Level-2 error probability distribution. This analysis depends crucially on the failure rate of the SWAP gates; p^(1)_round, and therefore the size of the inner code, scales with the depth of the circuit because of noisy SWAP operations. In proving Theorem 5.7, we were agnostic to the failure modes in the circuit and assumed that all Level-1 two-qubit gates fail with probability p^(1)_phys. However, if the fidelity of physical SWAP gates can be improved over the fidelity of entangling gates, this can reduce the overhead for the hierarchical scheme significantly. We provide evidence for this in Section 6.3, where we estimate the logical failure rate for the hierarchical scheme. In certain architectures such as trapped neutral atoms, SWAP gates can be performed by physically moving the trap [BLS+22]. In this case, the failure rate for the SWAP gates may have no direct connection to the failure rate for CNOT and CZ operations.

Comparisons with the basic encoding

We have shown that the hierarchical code {H_N} has a syndrome-extraction circuit that can be constructed using gates restricted by geometric locality such that it has a threshold. Below threshold, the WER is suppressed superpolynomially, but subexponentially, in N. It is natural to ask whether the resources spent in performing SWAP gates could be better spent simply building a more robust surface code. In this section, we consider the basic encoding B_M, which encodes K logical qubits in surface codes RS_{ℓ_M}. We compare the hierarchical scheme and the basic encoding in several ways. We show in Section 6.1 that for a target WER, the syndrome-extraction circuit for the hierarchical memory is more efficient than the syndrome-extraction circuit for the basic encoding, as measured by the depth and width of the corresponding circuits. We state and prove a formal version of Theorem 1.4.

Depending on the value of the threshold for the outer LDPC codes, however, it is not immediately obvious that this scaling manifests at practical block lengths. In the rest of this section, we present numerical estimates for the WER p_H(N) of the hierarchical memory and contrast it with the WER p_B(M) for the basic encoding. We do this by demanding a fixed total number of qubits for both schemes and comparing p_H(N) and p_B(M). We demonstrate that there is a crossover point, i.e. a value of the physical error rate where, for a fixed total number of qubits, the hierarchical memory outperforms the basic encoding, i.e. p_H(N) < p_B(M). In our estimates, this happens at gate error rates roughly between 10^{-3} and 10^{-4}. While these are preliminary estimates, they are promising nonetheless, as they are in the realm of possibility. In Section 6.2, we briefly discuss the codes we use as outer and inner codes. To estimate the crossover point, we make some assumptions about the noise model, gates, and decoder. Owing to these assumptions, our estimates should only be interpreted as a proof-of-principle that the overhead of the hierarchical scheme pays off in a reasonable parameter regime.
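To visualize the two scaling laws being compared, the sketch below evaluates the sub-exponential suppression of Equation (61) against surface-code-style exponential suppression. It is not a simulation: c_H, c_EC and δ are placeholder constants, so only the shapes of the curves are meaningful.

```python
import math

# Sketch: shapes of the two WER scaling laws. Constants are placeholders.

def p_H(N, delta=0.25, c_H=1.0):
    """Hierarchical code: exp(-c_H * N^delta / log(N)^(2*delta))."""
    return math.exp(-c_H * N ** delta / math.log(N) ** (2 * delta))

def p_B(l_M, c_EC=1.0):
    """Basic encoding: one tile fails with probability ~exp(-c_EC * l_M)."""
    return math.exp(-c_EC * l_M)

for N in (1e6, 1e8, 1e10):
    print(f"N = {N:.0e}: p_H ~ {p_H(N):.3e}")
print(f"p_B(l_M=25) ~ {p_B(25):.3e}")
```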
In Section 6.3, we present the results of our simulations. Altogether, we believe these assumptions, especially those related to the decoder, code, and noise model, are conservative. We return to these assumptions in Section 6.4 and, for each assumption, outline how one might expect it to change (1) in the future, and (2) in a more realistic setting. In general, we expect that with careful engineering (e.g. high-rate linear-distance codes, architecture-specific considerations, improved decoding algorithms) and more realistic noise modeling (e.g. including significant error correlations), the crossover to when hierarchical memories outperform surface codes will occur at smaller numbers of logical qubits, higher physical error rates, and higher target WERs than in our estimates.

Asymptotic comparison with the surface code

We have proved the existence of a threshold when we simulate an LDPC code using local gates. However, the existence of a threshold alone might not warrant switching over to a different scheme when there already exists an excellent local scheme: the surface code. We recall that we are only constructing a quantum memory, and not a scheme for universal, fault-tolerant quantum computation. In this section, we ask how the surface code would perform if we used the same total number of qubits used in the concatenated scheme above to plainly encode all logical qubits. We find that there is a space-time tradeoff to implementing a hierarchical scheme. The hierarchical scheme {H_N}, with corresponding fault-tolerant syndrome-extraction circuits {C_{H_N}}, achieves the following costs:

W(C_{H_N}) = Θ(N),   T(C_{H_N}) = O(√N / R).   (62)

This family encodes k(n) = ρ · n qubits. Note that the depth is for a single round of syndrome extraction. We will return to this point shortly. We assume that the Level-0 physical failure rates are sufficiently below threshold to perform surface code error correction, i.e. p^(0)_phys < q^(0)_phys, so that each tile fails with probability

p_RS(ℓ_M) = exp(−c_EC · ℓ_M).   (63)

We declare failure if any of the k tiles of B_M fails, which implies that

p_RS(ℓ_M) ≤ p_B(M) ≤ 1 − (1 − p_RS(ℓ_M))^k,  i.e.  exp(−c_EC · ℓ_M) ≤ p_B(M) ≤ n · exp(−c_EC · ℓ_M).   (64)

To guarantee that the error rate p_B(M) is lower than p_H(N), we at least require that

exp(−c_EC · ℓ_M) ≤ exp(−c_H · N^δ / log(N)^{2δ}).   (65)

This implies that ℓ_M = Ω(N^δ / log(N)^{2δ}). We can now compute the space and depth requirements for C_{B_M}. The space cost W(C_{B_M}) is Θ(k · ℓ_M²). The hierarchical memory H_N uses a constant-rate ⟦n, k, d, ∆_q, ∆_g⟧ quantum LDPC code Q_n where k(n) = ρ · n and d(n) = Θ(n^δ). This implies that

W(C_{B_M}) = Ω((N / log(N))^{1+2δ}).   (66)

Furthermore, each tile requires ℓ_M rounds of error correction; syndrome-extraction circuits on separate tiles can be run in parallel. Therefore

T(C_{B_M}) = Θ(ℓ_M) = Ω(N^δ / log^{2δ}(N)).   (67)

This completes the proof.
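A small sketch of this accounting follows, with all constants set to 1 as placeholders, so only the scaling is meaningful; the function name and example values are ours.

```python
import math

# Sketch: width of the basic encoding sized to match the hierarchical WER
# (Equation (65)), compared against W(C_H_N) = Theta(N). Constants = 1.

def compare_widths(N, delta=0.25, rho=0.1):
    k = rho * N / math.log(N) ** 2                 # k = Theta(N / log^2 N)
    l_M = N ** delta / math.log(N) ** (2 * delta)  # l_M = Omega(N^delta / log^(2d) N)
    W_B = k * l_M ** 2                             # Theta(k * l_M^2)
    return W_B / N                                 # ratio vs. hierarchical width

for N in (1e6, 1e9, 1e12):
    print(f"N = {N:.0e}: W_B / W_H ~ {compare_widths(N):.2f}")
```

With unit constants the ratio only exceeds 1 at very large N, which is consistent with the discussion below: the asymptotic advantage need not manifest at practical block lengths.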
Comparing with Equation (62), the basic encoding requires a larger space overhead for all δ > 0:

W(C_{B_M}) / W(C_{H_N}) = Ω(N^{2δ} / log^{1+2δ}(N)).   (68)

As stated, however, the time overhead is worse. Although the depth of the syndrome-extraction circuit C_{H_N} is O(√N/R), we will need to perform d(n) rounds of syndrome extraction to be fault tolerant. However, this is not a fundamental requirement; it is due to the nature of Gottesman's proposal in [Got14], which uses an inefficient minimum-weight decoder. There exist constant-rate LDPC codes that possess efficient, single-shot decoding algorithms, i.e. syndrome extraction only needs to be performed a constant number of times for the decoding algorithm to work [LTZ15, FGL18a, FGL18b]; furthermore, the algorithm requires O(N) time. For such codes, we can compare the depth of the syndrome-extraction circuit T(C_{H_N}) and that of the basic encoding C_{B_M}. In addition to the width blowup, the basic encoding also requires a larger time overhead when δ > 1/2:

T(C_{B_M}) / T(C_{H_N}) = Ω(R · N^{δ−1/2} / log^{2δ}(N)).   (69)

Using LDPC codes with single-shot decoding algorithms, the hierarchical memory is a more efficient way to achieve a low logical error rate in terms of both circuit depth and width. Having said this, it is not clear whether this advantage manifests at practical block lengths. For small codes and high error rates, it may well be optimal to use the basic encoding. We expect to see a crossover point: a value of the physical error rate where the hierarchical scheme {H_N} has a lower logical failure rate than the basic encoding {B_M} with the same overhead. Where exactly this crossover happens depends on a number of parameters that are specific to the implementation, including the choice of the outer code, its threshold, and our choice of decoders. In the rest of this section, we attempt to estimate where this happens.

Setup for numerical estimates

Outer code: We choose a quantum expander code as our outer code [TZ14, LTZ15]. We do not utilize any of the structure of the code, so any LDPC code with constant rate and polynomially scaling distance would suffice. For these reasons, we only briefly discuss the code construction. We pick a classical code by sampling the check matrix from the ensemble of m × n matrices with 5 ones in each column and 8 ones in each row. In particular, we pick a classical code with length 896 and 336 encoded bits. Work by Litsyn and Shevelev [LS02] computes the asymptotic weight distribution of codewords: with high probability, this code has distance 119. The resulting quantum code has parameters

⟦N = 1 116 416, K = 112 896, D = 119, ∆_q = 16, ∆_g = 13⟧.

We choose this code as it has a high rate, which is necessary to reduce the amount of overhead in the scheme. The large block length we consider here is a consequence of using code families with sub-linear distance scaling. However, the full trade-off between rate, distance, check weight, etc. for linear-distance codes has not yet been explored. We note that even at a relative distance d/n of 10^{-3}, a linear-distance code of such a large block length would achieve a distance of roughly 10^3. While the hierarchical memory construction has good asymptotic performance guarantees, if the overhead is too high then the hierarchical memory wins only at an extremely low WER.

Inner code: As in the earlier sections, we consider square rotated surface codes RS_ℓ for the inner code. In our estimates, we allow d_ℓ = 3, 5, 9, 15, 21, 27. We make some assumptions about errors at both the logical and physical levels. We present these assumptions together below and discuss justifications for some of them in what follows.

Level-0 noise model: We assume circuit-level Pauli noise on each physical qubit. We treat SWAP gates and other Clifford operations separately.

1. Each t-qubit gate (except SWAP gates) at the physical level fails with probability p and leaves behind one of the 4^t − 1 non-trivial t-qubit Pauli operators picked uniformly at random.
Inner code: As in the earlier sections, we consider the square rotated surface codes RS_ℓ for the inner code. In our estimates, we allow for d_ℓ = 3, 5, 9, 15, 21, 27. We make some assumptions about errors on both the logical and physical levels. We present these assumptions together below and discuss justifications for some of them in what follows.

Level-0 noise model: We assume circuit-level Pauli noise on each physical qubit. We treat SWAP gates and other Clifford operations separately.

1. Each t-qubit gate (except SWAP gates) at the physical level fails with probability p and leaves behind one of the 4^t − 1 non-trivial t-qubit Pauli operators picked uniformly at random. We assume that qubit reset completely removes all traces of the original state; however, it may reset to the wrong computational basis state with probability p.

2. The failure probability of the physical SWAP gate is r_SWAP · p, where r_SWAP = 1, 10^−1, 10^−2. In this setting, the surface code syndrome-extraction circuit is performed every 1/r_SWAP SWAP gates, so that at the physical level the circuit-level noise model remains relatively unchanged for different values of r_SWAP. This assumption is discussed in detail in Section 6.2.2.

Level-1 noise model:

1. We assume that the surface code fails with probability

    p^{(1)}_phys(d_ℓ) = (p / 10^−2)^{⌈d_ℓ/2⌉}    (71)

per d_ℓ physical-level timesteps, where one physical-level timestep is one round of syndrome extraction plus one (optional) transversal gate, which totals roughly 6 gates. This assumption is discussed in Section 6.2.1.

2. The effective error rate per long-range CNOT/CZ gate is p^{(1)} = 1 − (1 − p^{(1)}_phys(d_ℓ))^{t_route+1}. It is analogous to the two-qubit gate error rate in the model with long-range gates. Here t_route is the time required for permutation routing presented in Equation (16).

Level-2 noise model: Finally, we assume that the logical failure rate for the LDPC code, p_Q(n), is consistent with a minimum-weight decoder.

1. For our LDPC code, we assume that the WER under circuit-level Pauli noise using long-range gates is

    p_Q = (p^{(1)} / 10^−3)^{10}    (72)

per cycle of syndrome extraction. The threshold of the code is assumed to be about 10^−3 under circuit noise. The exponent is 10, rather than half the distance (∼55), because of hook errors. This assumption is discussed in Section 6.2.4.

If desired, readers can skip ahead to the numerical estimates in Section 6.3 and return to the justification of the noise model later.

Decoder performance for the inner code

Consider Equation (71) for the scaling of the logical failure rate of a surface code of distance d_ℓ; we assumed a surface code threshold of 10^−2. This equation neglects:

1. finite-size effects present at very small code distances;

2. the slight reduction in threshold from inserting a layer of gates failing with rate p between syndrome extraction cycles (in general, the precise value of the circuit-level threshold already requires some assumptions about which gates are native in the device: the optimal syndrome-extraction circuit with our physical-layer layout requires 5 to 8 gates depending on these assumptions, so the insertion of an additional gate is relatively unimportant). Recall that this layer is necessary to implement a logical SWAP operation in the bilayer architecture, as discussed in Section 4.3;

3. the distinction between rotated and standard surface codes.

Owing to this, the expression for the logical error rate is an order-of-magnitude estimate. We expect our conclusions to be somewhat insensitive to the precise form of the logical error rate and also to apply to more general locally decaying error models. For calculational convenience, we assume that the failure rate q after T syndrome extraction rounds is given by

    1 − q = (1 − p^{(1)}_phys(d_ℓ))^{T/d_ℓ}.

We chain these assumptions together in a short sketch below.
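The following sketch traces the pipeline from the physical gate error rate to the Level-1 and Level-2 failure rates. It is our illustration: the thresholds 10^−2 and 10^−3 and the exponent 10 are the assumed values above (with Equation (71) as reconstructed here), and the example inputs for d_ℓ and t_route are hypothetical.

```python
import math

# Sketch of the noise pipeline assumed above (our illustration; thresholds and
# exponents are the assumed values from Equations (71) and (72), and the
# example inputs are hypothetical).
def p1_phys(p: float, d: int) -> float:
    """Level-1 tile failure rate per d timesteps, Equation (71)."""
    return (p / 1e-2) ** math.ceil(d / 2)

def p1_gate(p: float, d: int, t_route: int) -> float:
    """Effective long-range CNOT/CZ error rate at Level 1 (item 2 above)."""
    return 1 - (1 - p1_phys(p, d)) ** (t_route + 1)

def p_Q(p1: float) -> float:
    """Level-2 WER per syndrome-extraction cycle, Equation (72)."""
    return (p1 / 1e-3) ** 10

p, d, t_route = 1e-4, 9, 2000          # hypothetical operating point
p1 = p1_gate(p, d, t_route)
print(f"p1 = {p1:.2e}, p_Q = {p_Q(p1):.2e}")
```

The refinement of the Level-1 gate error rate that accounts for SWAP fidelity appears in Equation (73) in the next subsection.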
Physical SWAP fidelity

Recall that simulating a long-range CNOT via SWAP gates results in a CNOT/CZ failure rate of p^{(1)}. We assume that the effective failure rate witnessed by the outer code is

    p^{(1)} = 1 − (1 − p^{(1)}_phys(d_ℓ))^{r_SWAP · t_route / d_ℓ + 1}.    (73)

The parameter t_route is the time required to perform a permutation routing in the bilayer architecture as specified in Equation (16). For convenience, we restate it here:

    t_route = (2d_ℓ + 1)(3L − 3) + 8.    (16)

The parameter r_SWAP bounds the (in)fidelity of the SWAP operation in terms of the CNOT gate (in)fidelity, as we now explain. In the previous sections, we assumed that all gates failed at the same rate. As noted in Section 5.3, the main source of noise in the hierarchical model stems from the SWAP gates. This worst-case model was convenient for a proof of the existence of a threshold. Furthermore, in many devices the SWAP gate is implemented using the same mechanism as the two-qubit entangling gates, and so the noise rates are comparable. However, this is not the only way to implement SWAP gates. In platforms where the qubits can be physically moved, we can effectively "rewire" the connectivity of the device at runtime. Physically swapping qubits does not require the qubit degree of freedom to be coupled to, so one might expect it to be an easier task to perform with higher fidelity or speed. In this setting, it is possible that the SWAP gate has much higher fidelity than CNOT gates. Accordingly, in our model, we assign a constant r_SWAP which specifies the ratio of SWAP-gate and idle noise to CNOT-gate noise. With a less noisy SWAP gate, we perform 1/r_SWAP Level-0 SWAP operations per round of surface code syndrome extraction, such that the physical error rate in the surface code syndrome-extraction circuit remains constant with respect to r_SWAP. Utilizing this optimization, an entire permutation takes r_SWAP · t_route rounds of syndrome extraction. Equation (73) is in terms of the surface code cycle (d_ℓ rounds of syndrome extraction), and the total number of surface code cycles is r_SWAP · t_route / d_ℓ for a permutation and 1 for an entangling gate. We have omitted floor and ceiling functions in this discussion for simplicity.

For example, in a neutral atom system [BLS+22], an array of qubits with a coherence time of seconds was rearranged with an average rearrangement speed of several microseconds per lattice site moved. If the dominant source of errors in rearrangement is due to idle errors, then we should assign an infidelity to the SWAP gate of roughly 10^−5, whereas the two-qubit gate possessed an infidelity of about 10^−2, i.e. r_SWAP ≈ 10^−3 to 10^−2. Routing does not require generating entanglement, so the qubit can remain encoded in well-isolated degrees of freedom. Owing to this, we consider three scenarios: r_SWAP equal to 10^0, 10^−1, or 10^−2. We note that it is a simplification to treat the rearrangement primitive in each platform (tweezers, ion shuttling, etc.) as simply SWAP gates: frequently there are effects like accumulated motional heating, recooling, acceleration speed limits, etc., but we expect the basic routing ideas and qualitative conclusions to remain the same even in this more complicated setting.
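The back-of-the-envelope arithmetic behind this neutral-atom example is spelled out below; the movement time and coherence time are our own round numbers drawn from the ranges quoted in the text, not new measurements.

```python
# Rough arithmetic behind the neutral-atom example above (assumed round
# numbers from the quoted ranges, not new measurements).
t_move = 1e-5          # ~10 microseconds per lattice site moved (assumed)
t_coherence = 1.0      # ~1 second coherence time (assumed)
p_swap = t_move / t_coherence   # idle-error-limited SWAP infidelity ~ 1e-5
p_cnot = 1e-2                   # quoted two-qubit gate infidelity

print(f"r_SWAP ~ {p_swap / p_cnot:.0e}")   # ~1e-3, the optimistic end of the range
```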
Hook errors

[Figure 17: Hook errors, i.e. errors flowing onto data qubits. In this example, an X error appears in the midst of a syndrome-extraction circuit and then propagates to the data qubits.]

Current decoder technology for LDPC codes is relatively immature, so we assume a WER scaling consistent with a minimum-weight decoder. At physical failure rate p, we assume that the logical failure rate of an [[n, k, d, Δ_q, Δ_g]] LDPC code Q_n below threshold is dominated by a term proportional to p^t, where t is the smallest number of fault locations that is uncorrectable. If our intuition is informed by an i.i.d. error model on qubits, we may expect t ≈ d/2. However, this is not true in the context of syndrome-extraction circuits, as corrupted syndrome qubits can spread errors to many data qubits. These errors, called hook errors [DKLP02], are harmful errors that can dominate the low-error-rate performance of the quantum code. By a rough estimate, they can reduce the distance of an [[n, k, d, Δ_q, Δ_g]] LDPC code by a factor of ⌊Δ_g/2⌋.

To explain how they arise, consider the example measurement circuit for an X-type stabilizer generator shown in Figure 4. An X error on the ancilla can propagate to a much larger data error. In theory, the hook error is O(1)-sized and the syndrome-extraction circuit is fault tolerant. However, addressing these errors can significantly reduce the size of the LDPC code required to achieve a target logical failure rate. Suppose ancilla qubits fail with probability p. A measurement circuit for a weight-w operator can create hook errors with weight ranging from 1 up to w. If the circuit is measuring the checks of a code, the weight of the hook error can be reduced by multiplying by the measured stabilizer generator, giving a maximum reduced weight of ⌊w/2⌋. An error is uncorrectable if it has weight at least ⌈d/2⌉. If we assume that each ancilla failure results in an error of weight ⌊Δ_g/2⌋, then we only need t failures to cause an uncorrectable error, where t satisfies

    t · ⌊Δ_g/2⌋ ≥ ⌈d/2⌉.    (74)

Then the probability of logical failure is p^t, where t ≈ d/Δ_g. This assumption is conservative: hook errors depend on the choice of syndrome-extraction circuit and may be minimized by particular choices of gate scheduling. For example, in the rotated surface code, there is a two-qubit gate schedule for the syndrome-extraction circuit such that the hook error has intersection 1 with a logical operator [TS14, CB18]. Using such a schedule, the below-threshold scaling is ∝ p^{d/2}, as one would expect from a depolarizing noise model. For general LDPC codes, the existence of measurement schedules that reduce the effects of hook errors is not yet clear. While there exist many methods for suppressing hook errors, such as Shor, Steane, or Knill error correction [NC02], nearly all require more ancilla qubits. This presents a trade-off where, for a given number of qubits, either a larger block length code with correspondingly better parameters or a more resource-intensive syndrome-extraction circuit could be used. In the setting of a constant-rate LDPC code, larger distances come with more logical qubits, so the lowest-overhead solution is to use the naive syndrome-extraction circuit with as large a code as possible (for very resource-constrained settings, it may still be worthwhile to use more sophisticated syndrome-extraction circuits for a better effective relative distance). Later, in Section 6.4.1, we will propose a method to mitigate the effects of hook errors outside of the asymptotic regime.

Decoder performance for the outer code

Hook errors discussed in the previous section need to be considered in the context of the concatenated scheme: the probability that an ancilla qubit fails is p^{(1)}. Following the discussion in Section 6.2.3 on hook errors, ⌈d/2⌉ / ⌊Δ_g/2⌋ = 10 for the code selected in Section 6.2. Assuming a threshold of about 10^−3 under circuit noise, the WER under circuit-level Pauli noise using long-range gates goes as

    p_Q = (p^{(1)} / 10^−3)^{10}    (75)

per cycle of syndrome extraction. For context, a slightly better threshold of about 3 × 10^−3 has been observed for (3, 4) hypergraph product codes using efficient decoders [TDB21] under circuit-level noise for syndrome-extraction circuits with long-range gates. In practice, more information is available to the decoder owing to the concatenated structure: a decoder using this extra information about individual qubit reliability is likely to have a better threshold. We return to this subject in Section 6.4.2.
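Concretely, the exponent in Equation (75) follows from Equation (74) applied to the chosen code; the arithmetic is spelled out below (our own check).

```python
import math

# Our check of the hook-error exponent from Equation (74) for the
# [[1116416, 112896, 119]] code with Delta_g = 13 chosen in Section 6.2.
d, delta_g = 119, 13
uncorrectable = math.ceil(d / 2)   # weight >= 60 can defeat minimum-weight decoding
per_fault = delta_g // 2           # each ancilla fault spreads to weight <= 6
t = math.ceil(uncorrectable / per_fault)
print(t)                           # 10, the exponent appearing in Equation (75)
```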
[Figure 18: Comparison of a hierarchical memory (solid lines) using a (5, 8) quantum expander code with parameters [[1 116 416, 112 896, 119]] and inner code distance d_ℓ, and a surface code (dashed lines) with distance d_M. Lines of the same color use roughly the same number of physical qubits, including all necessary ancilla qubits. All memories store 112 896 logical qubits. The three plots correspond to different values of r_SWAP equal to 10^0, 10^−1, and 10^−2 (left to right) under the decoder performance assumptions made in Section 6.2. The surface code distance is rounded up, so it always uses slightly more qubits. The WER is with respect to the hierarchical memory syndrome extraction cycle.]

Results

Using the duration of the hierarchical code syndrome extraction cycle as the unit of time that defines the WER, the results of the estimates are shown in Figure 18 for r_SWAP = 10^0, 10^−1, or 10^−2 and several sizes of inner rotated surface code. We can see the better scaling with gate error rate that the hierarchical memory achieves. While the LDPC code distance is fairly large, the "effective" distance has been reduced immensely by the weight-6 hook errors (potentially arising from the measurement of weight-13 check operators); because the outer code has distance d = 119, under our pessimistic assumptions just 10 fault locations are sufficient to cause an uncorrectable error. We expect that future LDPC codes with better distance, and a better understanding of hook errors in syndrome-extraction circuit gate scheduling, will improve the WER scaling.

Under a standard circuit-level noise model, below a gate error rate of around 10^−3, and for a target WER of 10^−20 to 10^−10, the hierarchical scheme may realize significant resource savings, especially so if SWAP gates have much lower gate error rates than CNOT gates. We plot such a comparison in Figure 19 with a gate error rate of 3 × 10^−3 (99.7% gate fidelity) and r_SWAP = 0.1.

[Figure 19: Estimated resource savings over surface codes for a hierarchical memory for r_SWAP = 10^−1 and a gate error rate of 3 · 10^−3 under the performance assumptions of Section 6.2. The resource here refers to the total space footprint of the circuits; the y-axis represents the ratio W(C_{B_M}) / W(C_{H_N}) as a function of the target WER. We plot the resource savings for the (4, 8) family of quantum expander codes with input code block lengths 512 · 2^m for m ∈ {0, 1, 2, 3, 4, 5}; the number of logical qubits (k = 7·10^4, 3·10^5, 1·10^6, 4·10^6, 2·10^7, 7·10^7) is indicated in the legend. Discontinuities in the plot are due to discretization of the surface code distance. Rare noise sources that create high-weight errors may provide further resource savings over surface codes.]

With further engineering and more careful modeling, we believe the overhead of the hierarchical scheme can be reduced much more, so that the crossover point occurs at a practically relevant gate error rate and target WER.
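To give a feel for the resource counting behind comparisons like Figure 19, the sketch below picks the smallest surface-code distance whose union-bounded WER over all logical qubits meets a target, then tallies the physical footprint. The counting rule and the 10^−2 threshold are our assumptions, not the authors' script.

```python
import math

# Sketch (our assumptions): distance and footprint of the basic encoding at a
# given target WER, using a union bound over k patches and the Level-1
# scaling of Equation (71) with an assumed threshold of 1e-2.
def basic_encoding_distance(p: float, k: int, target_wer: float) -> int:
    d = 3
    while k * (p / 1e-2) ** math.ceil(d / 2) > target_wer:
        d += 2
    return d

k, p = 112_896, 3e-3
for target in (1e-10, 1e-15, 1e-20):
    d = basic_encoding_distance(p, k, target)
    qubits = k * 2 * d * d     # ~2*d^2 physical qubits per rotated surface code patch
    print(f"target {target:.0e}: d_M = {d}, ~{qubits:.1e} physical qubits")
```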
In the next section, we will outline two ideas that would improve the performance of the hierarchical scheme: the decoder for the outer code is given far more information about the Level-1 qubit reliabilities than in a circuit-level noise model, and, in the presence of noise bias, the syndrome-extraction circuit can be tailored to reduce the effects of hook errors. Another reason we expect this estimate to be conservative is that we have assumed the noise model is independent circuit noise, which creates only 2-body correlated errors in the underlying surface codes. In the setting of large, long-lived quantum memories, we expect it will become necessary to address noise sources that affect large patches of the system. Sources of such noise could include cosmic rays (superconducting qubits), large deviations in global control devices such as lasers (AMO systems), lightning strikes, power supply ripples, etc. For large memories, different parts of the memory may rely on systems operating independently (e.g. lasers, fridges, power supplies), which would make such "global" noise large on the scale of any reasonable surface code patch, but small on the scale of the full hierarchical memory. Concatenation of surface codes with constant-length outer codes [XSY+22] has previously been considered in order to address such issues. It may be practical to protect against such noise sources with a hierarchical scheme without additional overhead (physics is local, so a very large surface code is likely sufficient, but it may be impractically large).

Future Performance Improvements

Having concluded a rough estimate of what the performance of the hierarchical memory might look like, we outline some ideas that could further improve its performance relative to surface codes. In this section, we re-examine the WER for LDPC codes using biased-noise qubits and message-passing decoders.

Noise-Bias Tailored Syndrome Extraction

As discussed in Section 6.2.3, hook errors can be very damaging for general LDPC codes. In this section, we estimate the failure rate for hierarchical codes by making further assumptions on the dependence of the logical failure rate on η-biased qubits. In particular, Equation (76) presented below is an ansatz for the logical failure probability p_Q of the outer code. However, we expect that this estimate can be considerably improved in the future by investigating in more detail how p_Q depends on the bias η.

X errors on the ancilla qubit will propagate to an X or Z error on the data, while Z errors on the ancilla qubit will simply flip the measurement outcome without propagating to a higher-weight data error. If X errors can be suppressed on the ancilla qubits, then hook errors become much less likely. In many platforms, such noise bias is common or can be engineered into the experiment [GFP+20, LVP+20, CLK+22]. Noise bias has been exploited in the past by tailoring the quantum error correction scheme to the noise [AP08, WBP15, PSJG+20, BATB+21, RCQ+22]. In Section 4.4, we introduced a technique to modify the bilayer architecture such that Level-1 qubits are noise biased. We can use this noise bias to suppress errors on the ancilla (X) that propagate to higher-weight data errors. We modify the assumptions of Section 6.2.4 and Equation (75) in a way that attempts to capture this behavior. Further study will be needed to make more precise estimates of logical error rates in this modified architecture.

The modified bilayer architecture uses elongated Level-1 qubits. If we choose the X distance to be larger than the Z distance according to

    d_X = d_Z + 2⌈log(η) / log(1/p)⌉,

then the logical X error rate of the inner code is suppressed relative to the logical Z error rate by the bias factor 1/η. If the accuracy threshold of the outer code is still 10^−3, as we assumed for the case without noise bias, then for the modified architecture our estimate for the Level-2 WER becomes

    p_Q = (p^{(1)} / 10^−3)^{⌈d/2⌉} + ((p^{(1)}/η) / 10^−3)^{⌈d/2⌉/⌊Δ_g/2⌋}.    (76)

The first term is the contribution from Level-1 logical Z errors; these do not propagate from ancilla to data, so ⌈d/2⌉ Level-1 errors are needed to cause a logical error at Level 2. The second term arises from Level-1 logical X errors. These can propagate from ancilla to data, but they occur at a rate suppressed by the bias factor 1/η.
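A small sketch of this ansatz, together with the elongation rule above, follows. All inputs are hypothetical example values and the thresholds are the assumed ones.

```python
import math

# Sketch of the biased-noise ansatz (our transcription of Equation (76) and
# the elongation rule above; eta, p, p1, d, delta_g are example values).
def d_x(d_z: int, eta: float, p: float) -> int:
    """Extra X distance so the logical X rate is suppressed by 1/eta."""
    return d_z + 2 * math.ceil(math.log(eta) / math.log(1 / p))

def p_Q_biased(p1: float, eta: float, d: int, delta_g: int, p_th: float = 1e-3) -> float:
    z_term = (p1 / p_th) ** math.ceil(d / 2)                  # no hook spreading for Z
    hook_exp = math.ceil(math.ceil(d / 2) / (delta_g // 2))   # hook-limited exponent
    x_term = ((p1 / eta) / p_th) ** hook_exp                  # X hooks, bias-suppressed
    return z_term + x_term

print(d_x(d_z=9, eta=100, p=1e-3))            # 11
print(p_Q_biased(p1=1e-4, eta=1e4, d=32, delta_g=12))
```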
Since surface codes are CSS codes, the X and Z noise can be corrected independently, so the X and Z logical failure rates can be examined independently, up to small correlations introduced by Y errors. Ignoring these correlations, and assuming that Equation (71) still holds with d_ℓ replaced by d_X or d_Z, we plot a comparison similar to Figure 18 with d_X = 2d_Z + 1, so that ⌈d_X/2⌉ = 2⌈d_Z/2⌉ (Figure 20); this choice is somewhat arbitrary, and for a given WER target and gate error rate the optimal aspect ratio is likely to be such that the target sits at the "kink" of the WER in Figure 20. Using the greater resilience to hook errors, we also pick a smaller code with higher rate, with parameters [[327 680, 65 536, 32]].

[Figure 20: Comparison of a hierarchical memory (solid lines) using a (4, 8) quantum expander code with parameters [[327 680, 65 536, 32]] and inner code distance d_Z = d_ℓ, and a surface code (dashed lines) with distance d_M. Lines of the same color use roughly the same number of physical qubits, including all necessary ancilla qubits. The noise bias permits a smaller block length, so all memories store 65 536 logical qubits. The three plots correspond to different values of r_SWAP equal to 10^0, 10^−1, and 10^−2 (left to right) under the modified decoder performance assumptions made in Section 6.4.1. The surface codes underlying the hierarchical memory are rectangular with d_X = 2d_Z + 1. The WER is for one round of the hierarchical memory's syndrome-extraction cycle.]

Notice that the effect of the bias is to increase the rate at which the WER falls with the Level-0 gate error rate. The increased slope only persists until the two terms in Equation (76) become equal. One can see that, using the bias, the hook errors are greatly suppressed, leading to a better logical-failure-rate scaling in practically relevant regimes and a crossover point at a larger physical gate error rate.

Decoders that use the concatenated structure

Our asymptotic analysis used the underlying surface codes in a black-box manner: when decoding the outer LDPC code {Q_n}, the tiles had either failed or succeeded. In contrast to this "hard information", much more information is available to the decoder for the outer code in the hierarchical setting. We may have access to "soft information", i.e. information about how reliable individual surface code patches are, which can then be passed to the outer code decoder. It is known that maximum-likelihood decoding on each level of a concatenated code, together with message passing between levels, is an optimal decoding algorithm [Pou06].

Choice of outer code decoder: Soft information can be used in the quantum setting via Belief Propagation (BP), a class of iterative algorithms. Broadly, in each iteration, BP makes a series of graph-local decisions: qubits that are in the support of a stabilizer generator exchange information and update their beliefs about whether they have been corrupted. As there are only a constant number of qubits in the support of each stabilizer generator, each decision requires a constant-sized computation. Although it is very successful in the classical setting, BP faces difficulties when applied to quantum codes. In the classical setting, BP converges to a distribution over bits that corresponds to the most likely error. In the quantum setting, it was pointed out early on [PC08] that degeneracy is a major issue for BP: there are many errors that are equivalent because they differ only by a stabilizer generator. BP is unable to tell the difference and gets stuck in a local minimum. One simple way to get around this issue would be if more information were available about the qubits. If each qubit were known to fail with a different probability, even if that difference is small, it can help BP avoid local minima. Since then, many ideas have been developed to use soft information in the quantum setting and to overcome the shortcomings of BP [PK21b, RWBC20, QVRC21, GGKL21, KL22, LP19, DCMS22]. We now discuss ways to obtain soft information from the surface code.

Choice of inner code decoder: The tensor network decoders [BSV14, Chu21, TBF18, BATB+21, TDC+19] are one class of surface code decoders that yield such soft information. The decoder outputs the probability of different (coset) logical failures for each tile. Unfortunately, it is unclear how to implement these algorithms in the fault-tolerant setting where syndrome information is unreliable.
This setting requires a growing bond dimension, which makes implementing the decoder quite challenging. More recently, BP decoders have been implemented for surface codes [RWBC20, OR22, AAA+22]. It is conceivable that such an algorithm could serve as a soft decoder for the surface codes as well.

A natural question is whether standard decoders such as Min-Weight Perfect Matching (MWPM) [DKLP02] or the Union-Find Decoder (UFD) [DN17] could be modified to yield soft information. For simplicity, consider bit-flip noise at a rate p. We define the decoding graph by associating a vertex with each measured stabilizer generator. We add a special boundary vertex to which we associate the total parity of all measured stabilizer generators. Including the boundary vertex, each single-qubit error is detected in exactly two places. For each error, an edge is added between the vertices where it is detected. To each edge, assign the weight −log(p/(1−p)), which is the log-likelihood of an error. The most likely error given the syndrome is then a subset of edges with minimal weight that produces the syndrome, and it can be computed efficiently by mapping onto the minimum-weight perfect matching problem. On average, the expected weight of an error (and correction) will be linear in the block length. This is asymptotically larger than the distance of a surface code, so the most important feature of the correction is its shape.

The Union-Find Decoder operates in two steps: first, it identifies clusters such that a valid correction is contained within the support of the clusters; then, it treats the identified clusters as an erasure and runs an erasure-correction decoder, which produces a valid correction contained within the erasure. One way to obtain soft information from this process is to compute the log-likelihood of the minimum-weight error that would lead to a logical fault when combined with the erasure. This can be computed efficiently by setting the edge weights within the erasure to 0 and computing the minimal-weight path between inequivalent boundaries. Call this quantity φ. We note that when no errors are detected, φ = −d · log(p/(1−p)), and when the cluster spans the system, φ = 0. In the first case, it is extremely unlikely (∝ p^d) for a logical fault to have occurred, while in the latter case there is a 50% probability that a logical fault has occurred. When passed to an outer-level decoder, φ, or a monotonic function of φ, may yield sufficient information to improve the logical failure rate dramatically.
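As a concrete illustration of this quantity, the sketch below computes φ on a toy decoding graph using a shortest-path routine. The graph, the erasure representation, and the function names are our own assumptions, not an interface from the paper.

```python
import math
import networkx as nx

# Sketch (our illustration): compute the soft-information quantity phi.
# Edges inside the erasure get weight 0; every other edge carries the
# log-likelihood weight -log(p / (1 - p)). phi is the weight of the lightest
# path between the two inequivalent boundary vertices.
def phi(graph: nx.Graph, erasure: set, boundary_a, boundary_b, p: float) -> float:
    w = -math.log(p / (1 - p))
    weighted = nx.Graph()
    for u, v in graph.edges:
        in_erasure = frozenset((u, v)) in erasure
        weighted.add_edge(u, v, weight=0.0 if in_erasure else w)
    return nx.shortest_path_length(weighted, boundary_a, boundary_b, weight="weight")

# Toy example: a repetition-code-like decoding graph with d = 5 edges between
# boundaries 0 and 5. With no erasure, phi = -d * log(p / (1 - p)); an erasure
# spanning the system drives phi to 0.
g = nx.path_graph(6)
print(phi(g, erasure=set(), boundary_a=0, boundary_b=5, p=0.01))
print(phi(g, erasure={frozenset((i, i + 1)) for i in range(5)},
          boundary_a=0, boundary_b=5, p=0.01))   # 0.0
```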
Conclusions

We have constructed a quantum memory with a threshold using geometrically local gates to simulate long-range connectivity. We did so by constructing a code family {H_N} that we refer to as the hierarchical code. N indexes the size of the code; the N-th element H_N of this family is obtained by concatenating a constant-rate quantum LDPC code Q_n (the n-qubit outer code) and the surface code RS_ℓ (the inner code). The outer code has a number of encoded qubits k(n) = ρ·n and distance d(n) = Θ(n^δ) for positive constants ρ, δ. Our construction builds on Gottesman's proof of the existence of a threshold using quantum LDPC codes (Theorem 4 from [Got14]). The central idea in Gottesman's construction is that if the failure rate per round of syndrome extraction, denoted p_round, is a sufficiently small constant, then logical errors can be suppressed exponentially in the distance of the code.

We showed that the requisite constant error rate per round can be achieved using geometrically local gates if the inner code has suitable properties. Although H_N is no longer an LDPC code, local operations suffice for extracting the error syndrome. In Section 4, we presented an explicit family of syndrome-extraction circuits {C_{H_N}} for H_N. This circuit has width W(C_{H_N}) = Θ(N) and depth T(C_{H_N}) = O(√N/R), where R denotes the range of physical SWAP gates.

To describe this circuit for the hierarchical code, we first presented a construction of the syndrome-extraction circuit C_{Q_n} for the outer LDPC code Q_n in Section 4.1. This circuit is based on a bilayer architecture: physical qubits are laid out in two layers in 2 dimensions. In our concatenated construction, the outer qubits of Q_n are replaced by rotated surface codes referred to as tiles. In Section 4.3, we demonstrated how to perform Level-1 logical Clifford operations on tiles using physical nearest-neighbor gates, including a novel technique for performing nearest-neighbor logical SWAP gates. We also discussed how to perform logical SWAP operations on tiles with range R_1 using physical SWAP operations with range R_0.

In Section 5, we showed that for fixed values of the physical failure rate p_phys, the error rate per round of syndrome extraction, p_round, is a polynomial function of the depth T(C_{H_N}). Using an inner surface code with linear size ℓ, which can suppress errors exponentially in ℓ, we can guarantee that the Level-1 error rate per round is a constant by choosing ℓ = Θ(log(n)). The resulting concatenated code H_N encodes a number of logical qubits K = Ω(N/log(N)²). Furthermore, if the distance of the LDPC code Q_n is d(n) = Θ(n^δ), then H_N can suppress errors superpolynomially; the Word Error Rate (WER) satisfies p_H(N) < exp(−Θ[N^δ/log^{2δ}(N)]). Given access to physical SWAP operations of range R, the syndrome-extraction circuit C_{H_N} has depth O(√N/R).

Using this architecture, we made numerical estimates of the WER p_H(N) in Section 6. We contrasted this with the WER p_B(M) of the basic encoding B_M, where all logical qubits are encoded using only the surface code. We first made comparisons in the asymptotic regime, showing in Section 6.1 that if the outer constant-rate LDPC code has an efficient single-shot decoder, then a target logical error rate can be achieved more efficiently using the hierarchical encoding rather than the basic encoding. We then proceeded with numerical estimates probing whether this advantage holds for practical code sizes and noise parameters. For this purpose, we compared the WERs of the basic encoding and the hierarchical encoding when both schemes use the same total number of physical qubits. We found that the physical error rate has a crossover point; when the physical error rate is below this value, the hierarchical code outperforms the basic encoding. To perform these estimates, we made assumptions about the noise model and about the WER for surface codes and LDPC codes, and we assessed the impact of these assumptions on our conclusions. We also discussed some ways to reduce the WER of hierarchical codes by modifying the syndrome-extraction circuit, improving the fidelity of SWAP operations, and using more sophisticated decoding algorithms.
1. We made the conservative assumption that the propagation of errors from Level-1 ancilla qubits to Level-1 data qubits reduces the effective distance of the outer code by a factor of Δ_g, the degree of the outer-code stabilizer generators. This error propagation can be mitigated if the noise in Level-1 qubits is highly biased, with X errors occurring much less frequently than Z errors. Even if the noise afflicting the physical qubits is unbiased, this Level-1 noise bias can be enforced by using an asymmetric surface code as the inner code of the hierarchical scheme.

2. The failure rate of the outer code grows in proportion to the depth of the permutation routing, and hence is sensitive to the error rate of Level-1 SWAP operations. By improving the error rate of physical SWAP gates, we can improve the performance of the hierarchical code significantly.

3. We assumed that the decoding algorithm for the outer code makes no use of the syndrome information from the inner code blocks. We expect that a much better decoding scheme for the hierarchical code can be achieved by exploiting such information from the inner code when decoding the outer code.

Finally, we also highlighted that a hierarchical architecture might deal effectively with "burst" errors that damage a large cluster of physical qubits simultaneously. A severe burst error could corrupt several of the inner-code tiles, but the resulting Level-1 erasure errors can be adequately addressed by the decoder for the outer code.

Acknowledgements

AK is supported by the Bloch Postdoctoral Fellowship from Stanford University. AK acknowledges funding from NSF award CCF-1844628. CAP acknowledges funding from the Air Force Office of Scientific Research.
B Constructing the ideal syndrome-extraction circuit (C_{Q_n})_ideal

In this section, we return to the claim in Section 4.1. We prove that the syndrome-extraction circuit (C_{Q_n})_ideal for an [[n, k, d, Δ_q, Δ_g]] code can be constructed such that its depth is at most s := 2Δ + 4, where Δ = max(Δ_q, Δ_g).

Proof. By definition, each qubit participates in at most Δ_q stabilizer generators and each stabilizer generator contains at most Δ_g qubits in its support. We use the Tanner graph T(Q_n) = (V ∪ C_X ∪ C_Z, E), a tripartite graph corresponding to the code Q_n, where:

1. There is a vertex v ∈ V for each qubit in the code; |V| = n.
2. There is a vertex u^X_i ∈ C_X for each X-type generator S^X_i; |C_X| = m_X.
3. There is a vertex w^Z_j ∈ C_Z for each Z-type generator S^Z_j; |C_Z| = m_Z.

Consider the bipartite Tanner graph T_X = (V ∪ C_X, E_X), the subgraph of T(Q_n) that corresponds to the X-type generators of the code. In each step, each qubit can be involved in at most one gate. This can be phrased as a graph coloring problem: we color the edges of T_X such that no two edges incident to a vertex have the same color. Since T_X is bipartite, such an edge coloring can be computed efficiently using max(Δ_q, Δ_g) colors [S+03].
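The coloring argument translates directly into a scheduling routine. Below is a minimal sketch using König-style recoloring on a bipartite Tanner graph; the function names and the toy graph are our own, and each color class corresponds to one parallel layer of CNOTs.

```python
from collections import defaultdict

# Sketch (our illustration): schedule syndrome-extraction CNOTs by edge-coloring
# a bipartite Tanner graph. Koenig's theorem guarantees max(Delta_q, Delta_g)
# colors suffice; each color class is one parallel layer of gates.
color_at = defaultdict(dict)   # vertex -> {color: neighbor}

def free_color(v):
    c = 0
    while c in color_at[v]:
        c += 1
    return c

def color_edge(u, v):
    a, b = free_color(u), free_color(v)
    if a != b:
        # Collect the a/b-alternating path from v, then swap its colors so
        # that color a becomes free at v (Koenig's recoloring argument).
        path, x, c = [], v, a
        while c in color_at[x]:
            y = color_at[x][c]
            path.append((x, y, c))
            x, c = y, (b if c == a else a)
        for x, y, c in path:
            del color_at[x][c], color_at[y][c]
        for x, y, c in path:
            nc = b if c == a else a
            color_at[x][nc] = y
            color_at[y][nc] = x
    color_at[u][a] = v
    color_at[v][a] = u
    return a

# Toy Tanner graph: checks c0, c1 each touching three qubits (max degree 3).
edges = [("c0", "q0"), ("c0", "q1"), ("c0", "q2"),
         ("c1", "q1"), ("c1", "q2"), ("c1", "q3")]
layers = defaultdict(list)
for u, v in edges:
    layers[color_edge(u, v)].append((u, v))
for c in sorted(layers):
    print(f"layer {c}: {layers[c]}")   # 3 layers, matching the max degree
```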
References

[AAA+22] Rajeev Acharya, Igor Aleiner, Richard Allen, Trond I. Andersen, Markus Ansmann, Frank Arute, Kunal Arya, Abraham Asfaw, Juan Atalaya, Ryan Babbush, et al. Suppressing quantum errors by scaling a surface code logical qubit. arXiv preprint arXiv:2207.06431, 2022.

[AAB+19] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, et al. Quantum supremacy using a programmable superconducting processor. Nature, 574(7779):505-510, 2019.

[AB90] F. Annexstein and M. Baumslag. A unified approach to off-line permutation routing on parallel networks. In Proceedings of the Second Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA '90), pages 398-406. Association for Computing Machinery, 1990.

[ACG93] Noga Alon, Fan R. K. Chung, and Ronald L. Graham. Routing permutations on graphs via matchings. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, pages 583-591, 1993.

[AGP05] Panos Aliferis, Daniel Gottesman, and John Preskill. Quantum accuracy threshold for concatenated distance-3 codes. arXiv preprint quant-ph/0504218, 2005.

[AP08] Panos Aliferis and John Preskill. Fault-tolerant quantum computation against biased noise. Physical Review A, 78(5):052331, 2008.

[BATB+21] J. Pablo Bonilla Ataides, David K. Tuckett, Stephen D. Bartlett, Steven T. Flammia, and Benjamin J. Brown. The XZZX surface code. Nature Communications, 12(1):1-12, 2021.

[BBG+13] Robert Beals, Stephen Brierley, Oliver Gray, Aram W. Harrow, Samuel Kutin, Noah Linden, Dan Shepherd, and Mark Stather. Efficient distributed quantum computing. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 469(2153):20120686, 2013.

[BDLL+16] Daniel Barredo, Sylvain de Léséleuc, Vincent Lienhard, Thierry Lahaye, and Antoine Browaeys. An atom-by-atom assembler of defect-free arbitrary two-dimensional atomic arrays. Science, 354(6315):1021-1023, 2016.
[BDM+21] Hector Bombin, Chris Dawson, Ryan V. Mishmash, Naomi Nickerson, Fernando Pastawski, and Sam Roberts. Logical blocks for fault-tolerant topological quantum computation. arXiv preprint arXiv:2112.12160, 2021.

[BE21] Nikolas P. Breuckmann and Jens N. Eberhardt. Balanced product quantum codes. IEEE Transactions on Information Theory, pages 1-1, 2021.

[BFS23] Nouédyn Baspin, Omar Fawzi, and Ala Shayeghi. A lower bound on the overhead of quantum error correction in low dimensions. arXiv preprint arXiv:2302.04317, 2023.

[BK98] Sergey Bravyi and Alexei Yu. Kitaev. Quantum codes on a lattice with boundary. arXiv preprint quant-ph/9811052, 1998.

[BK21a] Nouédyn Baspin and Anirudh Krishna. Connectivity constrains quantum codes. arXiv preprint arXiv:2106.00765, 2021.

[BK21b] Nouédyn Baspin and Anirudh Krishna. Quantifying nonlocality: how outperforming local quantum codes is expensive. arXiv preprint arXiv:2109.10982, 2021.

[BLS+22] Dolev Bluvstein, Harry Levine, Giulia Semeghini, Tout T. Wang, Sepehr Ebadi, Marcin Kalinowski, Alexander Keesling, Nishad Maskara, Hannes Pichler, Markus Greiner, et al. A quantum processor based on coherent transport of entangled atom arrays. Nature, 604(7906):451-456, 2022.

[BMD06] Hector Bombin and Miguel Angel Martin-Delgado. Topological quantum distillation. Physical Review Letters, 97(18):180501, 2006.

[BPT10] Sergey Bravyi, David Poulin, and Barbara Terhal. Tradeoffs for reliable quantum information storage in 2D systems. Physical Review Letters, 104(5):050503, 2010.

[BSV14] Sergey Bravyi, Martin Suchara, and Alexander Vargo. Efficient algorithms for maximum likelihood decoding in the surface code. Physical Review A, 90(3):032326, 2014.

[BT09] Sergey Bravyi and Barbara Terhal. A no-go theorem for a two-dimensional self-correcting quantum memory based on stabilizer codes. New Journal of Physics, 11(4):043029, 2009.

[CB18] Christopher Chamberland and Michael E. Beverland. Flag fault-tolerant error correction with arbitrary distance codes. Quantum, 2:53, 2018.
[CCC+23] L. Cardani, I. Colantoni, A. Cruciani, F. De Dominicis, G. D'Imperio, M. Laubenstein, A. Mariani, L. Pagnanini, S. Pirro, C. Tomei, et al. Disentangling the sources of ionizing radiation in superconducting qubits. The European Physical Journal C, 83(1):94, 2023.

[Chu21] Christopher Thomas Chubb. General tensor network decoding of 2D Pauli codes. arXiv preprint arXiv:2101.04125, 2021.

[CLK+22] Iris Cong, Harry Levine, Alexander Keesling, Dolev Bluvstein, Sheng-Tao Wang, and Mikhail D. Lukin. Hardware-efficient, fault-tolerant quantum computation with Rydberg atoms. Physical Review X, 12(2):021049, 2022.

[CS96] A. Robert Calderbank and Peter W. Shor. Good quantum error-correcting codes exist. Physical Review A, 54(2):1098, 1996.

[DBT21] Nicolas Delfosse, Michael E. Beverland, and Maxime A. Tremblay. Bounds on stabilizer measurement circuits and obstructions to local implementations of quantum LDPC codes. arXiv preprint arXiv:2109.14599, 2021.

[DCMS22] Julien du Crest, Mehdi Mhalla, and Valentin Savin. Stabilizer inactivation for message-passing decoding of quantum LDPC codes. In 2022 IEEE Information Theory Workshop (ITW), pages 488-493. IEEE, 2022.

[DHLV22] Irit Dinur, Min-Hsiu Hsieh, Ting-Chun Lin, and Thomas Vidick. Good quantum LDPC codes with linear time decoders. arXiv preprint arXiv:2206.07750, 2022.

[DKLP02] Eric Dennis, Alexei Kitaev, Andrew Landahl, and John Preskill. Topological quantum memory. Journal of Mathematical Physics, 43(9):4452-4505, 2002.

[DN17] Nicolas Delfosse and Naomi H. Nickerson. Almost-linear time decoding algorithm for topological codes. arXiv preprint arXiv:1709.06218, 2017.

[EBK+16] Manuel Endres, Hannes Bernien, Alexander Keesling, Harry Levine, Eric R. Anschuetz, Alexandre Krajenbrink, Crystal Senko, Vladan Vuletic, Markus Greiner, and Mikhail D. Lukin. Atom-by-atom assembly of defect-free one-dimensional cold atom arrays. Science, 354(6315):1024-1027, 2016.
Atom-by-atom assembly of defect-free one-dimensional cold atom arrays. Science, 354(6315):1024-1027, 2016. Quantum phases of matter on a 256-atom programmable quantum simulator. + 21] Sepehr Ebadi, T Tout, Harry Wang, Alexander Levine, Giulia Keesling, Ahmed Semeghini, Dolev Omran, Rhine Bluvstein, Hannes Samajdar, Wen Wei Pichler, Ho, Nature. 5957866+ 21] Sepehr Ebadi, Tout T Wang, Harry Levine, Alexander Keesling, Giulia Semeghini, Ahmed Omran, Dolev Bluvstein, Rhine Samajdar, Hannes Pichler, Wen Wei Ho, et al. Quantum phases of matter on a 256-atom programmable quantum simulator. Nature, 595(7866):227- 232, 2021. Constant overhead quantum fault-tolerance with quantum expander codes. Omar Fawzi, Antoine Grospellier, Anthony Leverrier, IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS). IEEEOmar Fawzi, Antoine Grospellier, and Anthony Leverrier. Constant overhead quantum fault-tolerance with quantum expander codes. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 743-754. IEEE, 2018. Efficient decoding of random errors for quantum expander codes. Omar Fawzi, Antoine Grospellier, Anthony Leverrier, Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. the 50th Annual ACM SIGACT Symposium on Theory of ComputingACMOmar Fawzi, Antoine Grospellier, and Anthony Leverrier. Efficient decoding of random errors for quantum expander codes. In Proceedings of the 50th Annual ACM SIGACT Sym- posium on Theory of Computing, pages 521-534. ACM, 2018. Blackbody-radiation-induced facilitated excitation of rydberg atoms in optical tweezers. Lorenzo Festa, Nikolaus Lorenz, Lea-Marina Steinert, Zaijun Chen, Philip Osterholz, Robin Eberhard, Christian Gross, Physical Review A. 105113109Lorenzo Festa, Nikolaus Lorenz, Lea-Marina Steinert, Zaijun Chen, Philip Osterholz, Robin Eberhard, and Christian Gross. Blackbody-radiation-induced facilitated excitation of ryd- berg atoms in optical tweezers. Physical Review A, 105(1):013109, 2022. Surface codes: Towards practical large-scale quantum computation. G Austin, Matteo Fowler, Mariantoni, M John, Andrew N Martinis, Cleland, Physical Review A. 86332324Austin G Fowler, Matteo Mariantoni, John M Martinis, and Andrew N Cleland. Sur- face codes: Towards practical large-scale quantum computation. Physical Review A, 86(3):032324, 2012. A proof of Alon's second eigenvalue conjecture and related problems. Joel Friedman, American Mathematical SocJoel Friedman. A proof of Alon's second eigenvalue conjecture and related problems. American Mathematical Soc., 2008. Stabilization and operation of a kerr-cat qubit. Alexander Grimm, E Nicholas, Shruti Frattini, Puri, O Shantanu, Steven Mundhada, Mazyar Touzard, Mirrahimi, Shyam Steven M Girvin, Michel H Shankar, Devoret, Nature. 5847820Alexander Grimm, Nicholas E Frattini, Shruti Puri, Shantanu O Mundhada, Steven Touzard, Mazyar Mirrahimi, Steven M Girvin, Shyam Shankar, and Michel H Devoret. Stabilization and operation of a kerr-cat qubit. Nature, 584(7820):205-209, 2020. Combining hard and soft decoders for hypergraph product codes. Antoine Grospellier, Lucien Grouès, Anirudh Krishna, Anthony Leverrier, 432Antoine Grospellier, Lucien Grouès, Anirudh Krishna, and Anthony Leverrier. Combining hard and soft decoders for hypergraph product codes. Quantum, 5:432, 2021. Stabilizer codes and quantum error correction. Daniel Gottesman, quant- ph/9705052arXiv preprintDaniel Gottesman. Stabilizer codes and quantum error correction. 
arXiv preprint quant- ph/9705052, 1997. Fault-tolerant quantum computation with local gates. Daniel Gottesman, Journal of Modern Optics. 472-3Daniel Gottesman. Fault-tolerant quantum computation with local gates. Journal of Modern Optics, 47(2-3):333-345, 2000. Fault-tolerant quantum computation with constant overhead. Daniel Gottesman, Quantum Information & Computation. 14Daniel Gottesman. Fault-tolerant quantum computation with constant overhead. Quantum Information & Computation, 14(15-16):1338-1372, 2014. An efficient decoder for a linear distance quantum LDPC code. Shouzhen Gu, A Christopher, Eugene Pattison, Tang, arXiv:2206.06557arXiv preprintShouzhen Gu, Christopher A Pattison, and Eugene Tang. An efficient decoder for a linear distance quantum LDPC code. arXiv preprint arXiv:2206.06557, 2022. T-junction ion trap array for two-dimensional ion shuttling, storage, and manipulation. Clare Horsman, G Austin, Simon Fowler, Rodney Devitt, Van Meter ; Wk, S Hensinger, Olmschenk, Stick, M Hucul, M Yeo, L Acton, C Deslauriers, J Monroe, Rabchuk, Applied Physics Letters. 141234101New Journal of PhysicsClare Horsman, Austin G Fowler, Simon Devitt, and Rodney Van Meter. Surface code quantum computing by lattice surgery. New Journal of Physics, 14(12):123011, 2012. [HOS + 06] WK Hensinger, S Olmschenk, D Stick, D Hucul, M Yeo, M Acton, L Deslauriers, C Monroe, and J Rabchuk. T-junction ion trap array for two-dimensional ion shuttling, storage, and manipulation. Applied Physics Letters, 88(3):034101, 2006. Ants Karb + 19 ; Christian Kraglund Andersen, Stefania Remm, Sebastian Balasiu, Johannes Krinner, Jean-Claude Heinsoo, Mihai Besse, Andreas Gabureac, Christopher Wallraff, Eichler, Entanglement stabilization using parity detection and real-time feedback in superconducting circuits. arXiv e-prints. 1902KARB + 19] Christian Kraglund Andersen, Ants Remm, Stefania Balasiu, Sebastian Krinner, Johannes Heinsoo, Jean-Claude Besse, Mihai Gabureac, Andreas Wallraff, and Christopher Eichler. Entanglement stabilization using parity detection and real-time feedback in superconducting circuits. arXiv e-prints, pages arXiv-1902, 2019. Universal transversal gates with color codes: A simplified approach. Aleksander Kubica, E Michael, Beverland, Physical Review A. 91332330Aleksander Kubica and Michael E Beverland. Universal transversal gates with color codes: A simplified approach. Physical Review A, 91(3):032330, 2015. Fault-tolerant quantum computation by anyons. Yu Kitaev, Annals of Physics. 3031A Yu Kitaev. Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1):2- 30, 2003. Exploiting degeneracy in belief propagation decoding of quantum codes. npj Quantum Information. Yueh Kao, Ching-Yi Kuo, Lai, 8Kao-Yueh Kuo and Ching-Yi Lai. Exploiting degeneracy in belief propagation decoding of quantum codes. npj Quantum Information, 8(1):1-9, 2022. The art of computer programming. Donald Ervin Knuth, 3Donald Ervin Knuth. The art of computer programming, volume 3. Pearson Education, 1997. Fault tolerance of quantum low-density parity check codes with sublinear distance scaling. A Alexey, Leonid P Kovalev, Pryadko, Physical Review A. 87220304Alexey A Kovalev and Leonid P Pryadko. Fault tolerance of quantum low-density parity check codes with sublinear distance scaling. Physical Review A, 87(2):020304, 2013. Fast ion swapping for quantum-information processing. 
Henning Kaufmann, Thomas Ruster, T Christian, Marcelo A Schmiegelow, Vidyut Luda, Jonas Kaushal, Schulz, Ferdinand David Von Lindenfels, Ulrich G Schmidt-Kaler, Poschinger, Physical Review A. 95552319Henning Kaufmann, Thomas Ruster, Christian T Schmiegelow, Marcelo A Luda, Vidyut Kaushal, Jonas Schulz, David von Lindenfels, Ferdinand Schmidt-Kaler, and Ulrich G Poschinger. Fast ion swapping for quantum-information processing. Physical Review A, 95(5):052319, 2017. Min-Hsiu Ting-Chun Lin, Hsieh, arXiv:2203.03581Good quantum LDPC codes with linear time decoder from lossless expanders. arXiv preprintTing-Chun Lin and Min-Hsiu Hsieh. Good quantum LDPC codes with linear time decoder from lossless expanders. arXiv preprint arXiv:2203.03581, 2022. Deterministic bidirectional communication and remote entanglement generation between superconducting qubits. npj quantum information. Daniel Litinski, ; N Leung, Y Lu, Chakram, Rk Naik, Earnest, Ma, A N Jacobs, D I Cleland, Schuster, 518A game of surface codes: Large-scale quantum computing with lattice surgeryDaniel Litinski. A game of surface codes: Large-scale quantum computing with lattice surgery. Quantum, 3:128, 2019. [LLC + 19] N Leung, Y Lu, S Chakram, RK Naik, N Earnest, R Ma, K Jacobs, AN Cleland, and DI Schuster. Deterministic bidirectional communication and remote entanglement generation between superconducting qubits. npj quantum information, 5(1):18, 2019. Neural belief-propagation decoders for quantum errorcorrecting codes. Ye-Hua Liu, David Poulin, Phys. Rev. Lett. 122Ye-Hua Liu and David Poulin. Neural belief-propagation decoders for quantum error- correcting codes. Phys. Rev. Lett., 122:200501, May 2019. On ensembles of low-density parity-check codes: asymptotic distance distributions. Simon Litsyn, Vladimir Shevelev, IEEE Transactions on Information Theory. 484Simon Litsyn and Vladimir Shevelev. On ensembles of low-density parity-check codes: asymptotic distance distributions. IEEE Transactions on Information Theory, 48(4):887- 908, 2002. Quantum expander codes. Anthony Leverrier, Jean-Pierre Tillich, Gilles Zémor, Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on. IEEEAnthony Leverrier, Jean-Pierre Tillich, and Gilles Zémor. Quantum expander codes. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pages 810-824. IEEE, 2015. Exponential suppression of bit-flips in a qubit encoded in an oscillator. Raphaël Lescanne, Marius Villiers, Théau Peronnin, Alain Sarlette, Matthieu Delbecq, Benjamin Huard, Takis Kontos, Mazyar Mirrahimi, Zaki Leghtas, Nature Physics. 165Raphaël Lescanne, Marius Villiers, Théau Peronnin, Alain Sarlette, Matthieu Delbecq, Ben- jamin Huard, Takis Kontos, Mazyar Mirrahimi, and Zaki Leghtas. Exponential suppression of bit-flips in a qubit encoded in an oscillator. Nature Physics, 16(5):509-513, 2020. Efficient decoding up to a constant fraction of the code length for asymptotically good quantum codes. Anthony Leverrier, Gilles Zémor, arXiv:2206.07571arXiv preprintAnthony Leverrier and Gilles Zémor. Efficient decoding up to a constant fraction of the code length for asymptotically good quantum codes. arXiv preprint arXiv:2206.07571, 2022. A parallel decoder for good quantum LDPC codes. Anthony Leverrier, Gilles Zémor, arXiv:2208.05537arXiv preprintAnthony Leverrier and Gilles Zémor. A parallel decoder for good quantum LDPC codes. arXiv preprint arXiv:2208.05537, 2022. . Anthony Leverrier, Gilles Zémor, arXiv:2202.13641Quantum tanner codes. 
arXiv preprintAnthony Leverrier and Gilles Zémor. Quantum tanner codes. arXiv preprint arXiv:2202.13641, 2022. Universal gate operations on nuclear spin qubits in an optical tweezer array of yb 171 atoms. Alex P Shuo Ma, Genyue Burgers, Jack Liu, Bichen Wilson, Jeff D Zhang, Thompson, Physical Review X. 12221028+ 22] Shuo Ma, Alex P Burgers, Genyue Liu, Jack Wilson, Bichen Zhang, and Jeff D Thompson. Universal gate operations on nuclear spin qubits in an optical tweezer array of yb 171 atoms. Physical Review X, 12(2):021028, 2022. High-fidelity entanglement and detection of alkaline-earth rydberg atoms. S Ivaylo, Jacob P Madjarov, Adam L Covey, Joonhee Shaw, Anant Choi, Alexandre Kale, Hannes Cooper, Pichler, Nature Physics. 168+ 20+ 20] Ivaylo S Madjarov, Jacob P Covey, Adam L Shaw, Joonhee Choi, Anant Kale, Alexan- dre Cooper, Hannes Pichler, Vladimir Schkolnik, Jason R Williams, and Manuel Endres. High-fidelity entanglement and detection of alkaline-earth rydberg atoms. Nature Physics, 16(8):857-861, 2020. Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits. + 22] Matt Mcewen, Lara Faoro, Kunal Arya, Andrew Dunsworth, Trent Huang, Seon Kim, Brian Burkett, Austin Fowler, Frank Arute, C Joseph, Bardin, Nature Physics. 181+ 22] Matt McEwen, Lara Faoro, Kunal Arya, Andrew Dunsworth, Trent Huang, Seon Kim, Brian Burkett, Austin Fowler, Frank Arute, Joseph C Bardin, et al. Resolving catastrophic error bursts from cosmic rays in large arrays of superconducting qubits. Nature Physics, 18(1):107-111, 2022. Quantum computation and quantum information. A Michael, Isaac Nielsen, Chuang, Michael A Nielsen and Isaac Chuang. Quantum computation and quantum information, 2002. Generalized belief propagation algorithms for decoding of surface codes. Josias Old, Manuel Rispler, arXiv:2212.03214arXiv preprintJosias Old and Manuel Rispler. Generalized belief propagation algorithms for decoding of surface codes. arXiv preprint arXiv:2212.03214, 2022. On the iterative decoding of sparse quantum codes. David Poulin, Yeojin Chung, Quantum Information & Computation. 810David Poulin and Yeojin Chung. On the iterative decoding of sparse quantum codes. Quan- tum Information & Computation, 8(10):987-1000, 2008. Eric S + 21] Avikar Periwal, Philipp Cooper, Kunkel, F Julian, Emily J Wienand, Monika Davis, Schleier-Smith, arXiv:2106.04070Programmable interactions and emergent geometry in an atomic array. arXiv preprint+ 21] Avikar Periwal, Eric S Cooper, Philipp Kunkel, Julian F Wienand, Emily J Davis, and Monika Schleier-Smith. Programmable interactions and emergent geometry in an atomic array. arXiv preprint arXiv:2106.04070, 2021. Pavel Panteleev, Gleb Kalachev, arXiv:2111.03654Asymptotically good quantum and locally testable classical LDPC codes. arXiv preprintPavel Panteleev and Gleb Kalachev. Asymptotically good quantum and locally testable classical LDPC codes. arXiv preprint arXiv:2111.03654, 2021. Degenerate quantum LDPC codes with good finite length performance. Quantum. Pavel Panteleev, Gleb Kalachev, 5585Pavel Panteleev and Gleb Kalachev. Degenerate quantum LDPC codes with good finite length performance. Quantum, 5:585, 2021. Optimal and efficient decoding of concatenated quantum block codes. David Poulin, Physical Review A. 74552333David Poulin. Optimal and efficient decoding of concatenated quantum block codes. Physical Review A, 74(5):052333, 2006. Alexandre Blais, et al. Bias-preserving gates with stabilized cat qubits. 
+ 20] Shruti, Lucas Puri, Jonathan A St-Jean, Alexander Gross, Nicholas E Grimm, Frattini, S Pavithran, Anirudh Iyer, Steven Krishna, Liang Touzard, Jiang, Science advances. 6345901+ 20] Shruti Puri, Lucas St-Jean, Jonathan A Gross, Alexander Grimm, Nicholas E Frattini, Pavithran S Iyer, Anirudh Krishna, Steven Touzard, Liang Jiang, Alexandre Blais, et al. Bias-preserving gates with stabilized cat qubits. Science advances, 6(34):eaay5901, 2020. Singleshot error correction of three-dimensional homological product codes. Michael Armanda O Quintavalle, Joschka Vasmer, Roffe, Campbell, PRX Quantum. 220340Armanda O Quintavalle, Michael Vasmer, Joschka Roffe, and Earl T Campbell. Single- shot error correction of three-dimensional homological product codes. PRX Quantum, 2(2):020340, 2021. Lawrence Z + 22] Joschka Roffe, Armanda O Cohen, Daryus Quintivalle, Chandra, Campbell, arXiv:2202.01702Bias-tailored quantum LDPC codes. arXiv preprint+ 22] Joschka Roffe, Lawrence Z Cohen, Armanda O Quintivalle, Daryus Chandra, and Earl T Campbell. Bias-tailored quantum LDPC codes. arXiv preprint arXiv:2202.01702, 2022. Narayanan + 22] Nithin Raveendran, Filip Rengaswamy, Ankur Rozpędek, Liang Raina, Bane Jiang, Vasić, Finite rate QLDPC-GKP coding scheme that surpasses the css hamming bound. Quantum. 6767+ 22] Nithin Raveendran, Narayanan Rengaswamy, Filip Rozpędek, Ankur Raina, Liang Jiang, and Bane Vasić. Finite rate QLDPC-GKP coding scheme that surpasses the css hamming bound. Quantum, 6:767, 2022. Decoding across the quantum low-density parity-check code landscape. Joschka Roffe, R David, Simon White, Earl Burton, Campbell, Physical Review Research. 2443423Joschka Roffe, David R White, Simon Burton, and Earl Campbell. Decoding across the quantum low-density parity-check code landscape. Physical Review Research, 2(4):043423, 2020. Combinatorial optimization: polyhedra and efficiency. Alexander Schrijver, Springer24Alexander Schrijver et al. Combinatorial optimization: polyhedra and efficiency, volume 24. Springer, 2003. Noise threshold for a faulttolerant two-dimensional lattice architecture. M Krysta, Svore, P David, Barbara M Divincenzo, Terhal, quant-ph/0604090arXiv preprintKrysta M Svore, David P DiVincenzo, and Barbara M Terhal. Noise threshold for a fault- tolerant two-dimensional lattice architecture. arXiv preprint quant-ph/0604090, 2006. Local fault-tolerant quantum computation. Barbara M Krysta M Svore, David P Terhal, Divincenzo, Physical Review A. 72222317Krysta M Svore, Barbara M Terhal, and David P DiVincenzo. Local fault-tolerant quantum computation. Physical Review A, 72(2):022317, 2005. Multiple-particle interference and quantum error correction. Andrew Steane, Proceedings of the Royal Society A. 452Andrew Steane. Multiple-particle interference and quantum error correction. Proceedings of the Royal Society A, 452(1954):2551-2577, 1996. Ultrahigh error threshold for surface codes with biased noise. K David, Tuckett, D Stephen, Steven T Bartlett, Flammia, Physical review letters. 120550505David K Tuckett, Stephen D Bartlett, and Steven T Flammia. Ultrahigh error threshold for surface codes with biased noise. Physical review letters, 120(5):050505, 2018. A Maxime, Nicolas Tremblay, Michael E Delfosse, Beverland, arXiv:2109.14609Constant-overhead quantum error correction with thin planar connectivity. arXiv preprintMaxime A Tremblay, Nicolas Delfosse, and Michael E Beverland. Constant-overhead quan- tum error correction with thin planar connectivity. arXiv preprint arXiv:2109.14609, 2021. 
Tailoring surface codes for highly biased noise. K David, Andrew S Tuckett, Darmawan, T Christopher, Sergey Chubb, Bravyi, D Stephen, Steven T Bartlett, Flammia, Physical Review X. 9441031David K Tuckett, Andrew S Darmawan, Christopher T Chubb, Sergey Bravyi, Stephen D Bartlett, and Steven T Flammia. Tailoring surface codes for highly biased noise. Physical Review X, 9(4):041031, 2019. Ted Thorbeck, Andrew Eddins, Isaac Lauer, T Douglas, Malcolm Mcclure, Carroll, arXiv:2210.04780Tls dynamics in a superconducting qubit due to background ionizing radiation. arXiv preprintTed Thorbeck, Andrew Eddins, Isaac Lauer, Douglas T McClure, and Malcolm Carroll. Tls dynamics in a superconducting qubit due to background ionizing radiation. arXiv preprint arXiv:2210.04780, 2022. Low-distance surface codes under realistic quantum noise. Yu Tomita, Krysta M Svore, Physical Review A. 90662320Yu Tomita and Krysta M Svore. Low-distance surface codes under realistic quantum noise. Physical Review A, 90(6):062320, 2014. Quantum LDPC codes with positive rate and minimum distance proportional to the square root of the blocklength. Jean- , Pierre Tillich, Gilles Zémor, IEEE Transactions on Information Theory. 602Jean-Pierre Tillich and Gilles Zémor. Quantum LDPC codes with positive rate and min- imum distance proportional to the square root of the blocklength. IEEE Transactions on Information Theory, 60(2):1193-1202, 2014. Impact of ionizing radiation on superconducting qubit coherence. P Antti, Vepsäläinen, H Amir, John L Karamlou, Orrell, S Akshunna, Ben Dogra, Francisca Loer, Vasconcelos, K David, Alexander J Kim, Bethany M Melville, Niedzielski, L Jonilyn, Yoder, Nature. 5847822Antti P Vepsäläinen, Amir H Karamlou, John L Orrell, Akshunna S Dogra, Ben Loer, Francisca Vasconcelos, David K Kim, Alexander J Melville, Bethany M Niedzielski, Jonilyn L Yoder, et al. Impact of ionizing radiation on superconducting qubit coherence. Nature, 584(7822):551-556, 2020. Reducing the overhead for quantum computation when noise is biased. Paul Webster, D Stephen, David Bartlett, Poulin, Physical Review A. 92662309Paul Webster, Stephen D Bartlett, and David Poulin. Reducing the overhead for quantum computation when noise is biased. Physical Review A, 92(6):062309, 2015. Surface code quantum computing with error rates over 1%. S David, Austin G Wang, Lloyd Cl Fowler, Hollenberg, Physical Review A. 83220302David S Wang, Austin G Fowler, and Lloyd CL Hollenberg. Surface code quantum computing with error rates over 1%. Physical Review A, 83(2):020302, 2011. Wikipedia contributors. Permutation -Wikipedia, the free encyclopedia. 232022Online; accessedWikipedia contributors. Permutation -Wikipedia, the free encyclopedia. https://en. wikipedia.org/w/index.php?title=Permutation&oldid=1118545340, 2022. [Online; ac- cessed 23-November-2022]. Distributed quantum error correction for chip-level catastrophic errors. Qian Xu, Alireza Seif, Haoxiong Yan, Nam Mannucci, Bernard Ousmane Sane, Rodney Van Meter, N Andrew, Liang Cleland, Jiang, Physical review letters. 12924240502Qian Xu, Alireza Seif, Haoxiong Yan, Nam Mannucci, Bernard Ousmane Sane, Rodney Van Meter, Andrew N Cleland, and Liang Jiang. Distributed quantum error correction for chip-level catastrophic errors. Physical review letters, 129(24):240502, 2022. Time-efficient constant-space-overhead fault-tolerant quantum computation. Hayata Yamasaki, Masato Koashi, arXiv:2207.08826arXiv preprintHayata Yamasaki and Masato Koashi. 
Set notation: for natural numbers n ∈ N, [n] = {1, ..., n}. Sums over sets: for a set S and a subset A ⊆ S, the sum Σ_{B ⊇ A} f(B) is taken over all subsets B ⊆ S such that A ⊆ B. Asymptotics: for functions f, g : N → R, we say (a) f(n) = O(g(n)) if there exist an n_0 ∈ N and a positive number c independent of n such that for all n > n_0, f(n) ≤ c · g(n); (b) f(n) = Ω(g(n)) if g(n) = O(f(n)); (c) f(n) = Θ(g(n)) if there exist an n_0 ∈ N and positive numbers a, b independent of n such that a · g(n) ≤ f(n) ≤ b · g(n). We may use O_p(·), Ω_p(·) and Θ_p(·) to indicate that the numbers a, b and c may depend on some parameter p pertinent to the problem at hand.
4. (Circuit) Step: a single timestep in which each qubit may participate in only one gate.
5. (Circuit) Stage: the time interval in the circuit C_n^Q required to simulate one entangling gate. One stage has at most T_perm steps.
6. (Measurement) Round: a complete measurement of all the stabilizer generators of the code, producing one outcome for each stabilizer generator.
K is the set of Clifford operations we use to construct syndrome-extraction circuits in 2 dimensions. It includes the following elements: (a) initialization of new qubits in state |0⟩ or |+⟩, (b) single-qubit Pauli gates, (c) two-qubit Clifford gates CNOT and CZ between nearest-neighbor qubits, (d) single-qubit Pauli X and Z measurements, and (e) a physical SWAP operation with range R.
To measure the X-type syndromes, the first phase of the circuit (C_n^Q)_ideal is partitioned into max(Δ_q, Δ_g) steps.
In the t-th step, we perform the two-qubit gates corresponding to the edge color t. Once completed, the same process is repeated for the Z-type syndromes. Following a similar line of reasoning, this requires Δ = max(Δ_q, Δ_g) applications of two-qubit gates. The circuit thus has two phases: first the X-type syndromes are measured, followed by the Z-type syndromes, which completes a measurement of all stabilizer generators.
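This edge-coloring schedule is straightforward to realize in software. The following is a minimal illustrative sketch (ours, not from the source) that groups precolored Tanner-graph edges into gate layers and verifies that each layer is a matching, so every qubit and check participates in at most one two-qubit gate per step; the (qubit, check, color) triples are assumed to come from any proper edge coloring of the bipartite graph, and all names are hypothetical.

```python
from collections import defaultdict

def layers_from_edge_coloring(colored_edges):
    """Group Tanner-graph edges (qubit, check, color) into gate layers.

    colored_edges: iterable of (qubit, check, color) triples, where color
    is an integer in {1, ..., max(deg_qubit, deg_check)} from a proper
    edge coloring. Layer t collects the two-qubit gates of step t, and
    each layer is checked to be a matching so its gates can run in parallel.
    """
    layers = defaultdict(list)
    for qubit, check, color in colored_edges:
        layers[color].append((qubit, check))

    schedule = [layers[c] for c in sorted(layers)]
    for layer in schedule:
        touched = [v for edge in layer for v in edge]
        assert len(touched) == len(set(touched)), "layer is not a matching"
    return schedule

# Toy example: a 2x2 bipartite graph properly colored with 2 colors.
edges = [("q0", "c0", 1), ("q1", "c1", 1), ("q0", "c1", 2), ("q1", "c0", 2)]
print(layers_from_edge_coloring(edges))
```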
[]
[ "CTQ 414: A New Gravitational Lens 1", "CTQ 414: A New Gravitational Lens 1" ]
[ "Nicholas D Morgan ", "Alan Dressler ", "José Maza ", "Paul L Schechter ", "Joshua N Winn " ]
[]
[]
We report the discovery and ground based observations of the new gravitational lens CTQ 414. The source quasar lies at a redshift of z = 1.29 with a B magnitude of 17.6. Ground based optical imaging reveals two point sources separated by 1.2″ with a magnitude difference of roughly 1 mag. Subtraction of two stellar point spread functions from images obtained in subarcsecond seeing consistently leaves behind a faint, residual object. Fits for two point sources plus an extended object place the fainter object collinear with the two brighter components. Subsequent HST/NICMOS observations have confirmed the identification of the fainter object as the lensing galaxy. VLA observations at 8.46 GHz reveal that all components of the lensing system are radio quiet down to the 0.2 mJy flux level.
10.1086/301056
[ "https://export.arxiv.org/pdf/astro-ph/9906173v1.pdf" ]
9827270
astro-ph/9906173
b295433eef8e70d085b6d6da3ef72ce69c19ec2c
CTQ 414: A New Gravitational Lens
Nicholas D Morgan, Alan Dressler, José Maza, Paul L Schechter, and Joshua N Winn
arXiv:astro-ph/9906173v1 (9 Jun 1999)
Subject headings: gravitational lensing - quasars: individual (CTQ 414)
We report the discovery and ground based observations of the new gravitational lens CTQ 414.

1. INTRODUCTION

Gravitational lensing is a powerful tool with a wide range of astrophysical applications (Kochanek and Hewitt 1996). In particular, multiply imaged quasars have become a useful probe for a number of cosmological investigations, such as measurements of the Hubble constant (Refsdal 1964) and statistical constraints on the cosmological constant (Kochanek 1996). The usefulness of gravitationally lensed quasars, however, is limited by the relatively few systems discovered to date. In this paper we report our discovery and analysis of a new multiply imaged quasar. CTQ 414 (1h58m41.43s, −43°25′3.4″, J2000.0) was originally identified as a z = 1.29, B = 17.2 quasar from the Calán Tololo Survey (CTS) (Maza et al. 1995), a survey designed to discover quasars and emission-line galaxies in the southern hemisphere. In August of 1997, optical observations of approximately 200 of the CTS quasars were carried out at the Cerro Tololo Interamerican Observatory (CTIO). The primary purpose of this effort was to determine if any of the selected CTS quasars exhibited evidence of arcsecond, multiple images that would arise from gravitational lensing. Prior to this run, another Calán Tololo quasar, CTQ 286, had been found to be lensed by Claeskens et al. (1996). CCD exposures of CTQ 414 immediately revealed it to be double, with a separation of 1.2″ evident in B, V, R, and I filters. We present the details of these observations, along with their subsequent analysis, in §2. Also described in §2 are follow-up observations conducted at the Las Campanas Observatory (LCO) two weeks later, as well as our photometric analysis of the lens components utilizing astrometric positions obtained from HST/NICMOS imaging. In §3 we present our astrometric and photometric results for field stars in the quasar field. In May of 1998, the Very Large Array (VLA) was used to search for radio counterparts of CTQ 414. We discuss these observations in §4. Finally, in §5 we summarize our findings for CTQ 414.

2. OPTICAL OBSERVATIONS AND ANALYSIS

2.1. Initial Optical Imaging: Detection of System Duplicity

Initial optical observations of CTQ 414 were obtained by one of us (P.L.S.) at CTIO between the nights of 1997 August 26-30. The 1.5 m telescope equipped with the Tektronix 2048 No. 6 CCD was used, although only the central 1536x1536 array was read out.
The field of view of the camera was 6.2 arcmin square with a scale of 0.2417 arcsec/pixel, a gain of 2 e−/ADU, and a read noise of 5 e−. A total of 20 exposures of CTQ 414 were obtained with Johnson BV and Kron-Cousins RI filters on the nights of the 26th, 27th, and 29th. Each exposure lasted 300 s and was obtained through airmasses ranging from 1.027 to 1.143 over the course of the three nights. Seeing conditions ranged from 1.16″ to 1.93″ FWHM. Multiple 60 s BVRI exposures of two Landolt photometric standard fields (PG0231+051 and PG2331+055) were also obtained during two of the above nights. Our exposures of PG0231+051 and PG2331+055 contained five and three Landolt standard stars, respectively. A log of the observations for CTQ 414 is presented in Table 1. Figure 1 shows an R band exposure of CTQ 414 and the surrounding field obtained in 1.25″ FWHM seeing. The reader will note the relative absence of comparably bright stars in this high latitude field. Each CCD frame was bias-subtracted, trimmed, and flat-field corrected using the Vista reduction program. The flat-field images were cleaned of cosmic rays using "autoclean," a program written and kindly supplied by J. Tonry. As mentioned in §1, images of CTQ 414 appeared double in all exposures (see Figure 2a, note caption). We therefore fit the double images with two empirical point spread functions (PSFs) using a variant of the program DoPHOT, designed to deal with close, point-like and extended objects (Schechter and Moore 1993). Star #7 as shown in Figure 1 was used as the empirical PSF. These fits yielded average separation distances between the two components in B, V, R, and I of 1.200″ ± 0.006″, 1.222″ ± 0.012″, 1.215″ ± 0.009″, and 1.193″ ± 0.010″, respectively. We note that the smaller separation in I band is consistent with the presence of a relatively red object between the double image. However, no consistent or suggestive pattern of residuals was found upon fitting and subtracting the two PSFs, although the difficulty of fitting for multiple point sources separated by less than a seeing disk should be noted.
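For concreteness, the following is a minimal sketch of this kind of simultaneous two-point-source fit. It is our illustration, not the authors' DoPHOT variant; scipy is assumed, the PSF template is assumed to be re-centered in an array of the same shape as the image stamp, and all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift
from scipy.optimize import least_squares

def two_psf_model(params, psf):
    """Sum of two sub-pixel-shifted, scaled copies of an empirical PSF.
    `psf` is the template star (e.g., star #7) centered in an array of
    the same shape as the image stamp being fit."""
    xa, ya, fa, xb, yb, fb = params
    return (fa * subpixel_shift(psf, (ya, xa))
            + fb * subpixel_shift(psf, (yb, xb)))

def fit_two_psfs(stamp, psf, p0):
    """Least-squares fit of the two-PSF model to a background-subtracted
    stamp; returns the optimized (xa, ya, fa, xb, yb, fb). An extended
    third component can be added to the model in the same way, as was
    done for the LCO data described below."""
    resid = lambda p: (two_psf_model(p, psf) - stamp).ravel()
    return least_squares(resid, np.asarray(p0, dtype=float)).x
```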
2.2. Followup Optical Imaging: Detection of a Third Object

Following the initial imaging of CTQ 414 taken in August at CTIO, an additional 12 exposures were obtained by one of us (A.D.) at LCO on the night of 1997 September 10. The du Pont 2.5 m telescope equipped with a Tektronix No. 2 CCD detector was used. The CCD camera has a field of view of 3.9 arcmin square with a scale of 0.2278 arcsec/pixel. The camera was set in the #3 gain position, providing a gain of 3.9 e−/ADU and a read noise of 9.4 e−. Seeing for the night ranged from 0.89″ to 1.08″ FWHM, significantly better than the initial images from CTIO taken two weeks earlier. All 12 images were taken in R band and each exposure lasted 300 s. The frames were bias-subtracted, trimmed, and flat-field corrected, following the same procedure as mentioned above. Simultaneous fitting of two empirical PSFs (using the identical star as noted in the previous subsection) resulted in an average separation between the two components of 1.198″ ± 0.001″. This separation agrees well with the average R band separation obtained from the CTIO data. Subtraction of the two empirical PSFs produced a consistent pattern of residuals present in all 12 frames (see Figure 2b). This pattern consisted of a bright spot located slightly west of component A and northeast of component B, as well as two crescent-like arcs of positive residuals to the southeast of each component, and two regions of negative "cavities" located at the centers of A and B. The amplitude of the bright spot is quite small, of order 2-3% of the central intensity of component A. The crescent is 1-2% of A's central intensity, while the cavities for A and B were 3-4% and 2-3% of A's central intensity, respectively. One possible explanation for this pattern of residuals is that the fitting program is trying to account for three sources of light with just two point sources. If the unaccounted source of light were located between and displaced slightly to the northwest from the two components, then the PSFs would be dragged off-target from the centers of components A and B in an attempt to cover the extra light. This would produce the observed crescent pattern of residuals along the southeast edge. Also, the observed cavities would arise from attributing excessive flux to components A and B in an attempt to account for the three sources of light in the system. Another possibility is that we are seeing an artifact of a poor PSF template. Such an effect might arise from variations in the PSF across the surface of the CCD, or from mistakenly using an extended object as our model point source. However, we do not believe that either of these explanations is correct, since the identical residual pattern persists even when different stars from across the face of the detector are used as our PSF. Following the suggestive pattern of the residuals, we decided to fit each frame of the LCO data for two point sources plus an extended object (hereafter "C"), which we interpret here as the lensing galaxy. The object was modeled as a circularly symmetric pseudogaussian, convolved with the seeing conditions. The positions and fluxes of all three components, as well as the size σ of the pseudogaussian, were treated as free parameters. The averaged results for these fits placed component C collinear with components A and B and at an angular distance of 0.72″ ± 0.03″ away from A. The separation between components A and B was determined as 1.29″ ± 0.02″. Note that this solution for the lensing galaxy places it roughly midway between components A and B. The rms scatter in the galaxy position was 0.034″ along the N-S direction and 0.088″ along the E-W direction, in both cases small compared to the relative distances between the three components. Flux ratios of components B to A and C to A were 32.6% ± 0.6% and 20.5% ± 0.5%, respectively. The average σ of the pseudogaussian used to model component C was found to be 0.32″ ± 0.010″. This result for the position of the lensing galaxy, that it is very nearly equidistant from the two quasar images, would appear to be inconsistent with the unequal flux ratio of the two quasar components. For an isothermal lens, the flux ratio of the components would be proportional to the ratio of their distances from the lensing galaxy.
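As a rough toy check (ours, not the authors'), plugging the ground based numbers into this singular-isothermal-sphere relation makes the tension explicit:

```python
# Toy check (not from the paper) of the isothermal-lens expectation.
# For a singular isothermal sphere, the flux ratio B/A equals r_B / r_A,
# where r_A and r_B are the images' angular distances from the galaxy.
sep_AB = 1.29            # arcsec, A-B separation from the three-component fits
r_A = 0.72               # arcsec, galaxy-to-A distance from the same fits
r_B = sep_AB - r_A       # components are collinear, so 0.57 arcsec
predicted = r_B / r_A    # about 0.79
measured = 0.326         # fitted B-to-A flux ratio
print(f"SIS prediction B/A = {predicted:.2f} vs. measured {measured:.2f}")
```

The large gap between the two numbers is the inconsistency referred to above.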
A very similar situation was encountered in the case of FBQ 0951+2635 (Schechter et al. 1998), with the position of the lensing galaxy too well centered for the unequal flux ratio. The parameters for the ground based observations of this system were quite similar to those for CTQ 414. Subsequent imaging of FBQ 0951+2635 with HST/NICMOS confirmed the presence of the lensing galaxy, but placed it considerably closer to the fainter image than did the ground based observations. A similar systematic error of this sort with CTQ 414 would not be surprising, given the difficulty of measuring a third object between two objects separated by only one seeing disk.

2.3. Confirmation of the Lensing Galaxy

In an attempt to confirm the lensing hypothesis, CTQ 414 was placed onto the CfA-Arizona Space Telescope Lens Survey (CASTLES) observing program at the suggestion of the authors. The suspected lens was imaged by HST/NICMOS in four 640 s H band exposures on 1998 August 4. These exposures clearly reveal the extended emission of the lensing galaxy, and place it approximately collinear with the two quasar images. Gaussian fits for the positions of components B and C with respect to component A place B 1.22″ from A at a PA of 251.0° E of N, and the nucleus of component C 0.80″ from A at a PA of 253.4° E of N. As suspected, the true position of the lensing galaxy is ∼10% closer to the fainter quasar image than indicated by the ground-based data. The HST images of CTQ 414 are available to download from the CASTLES ftp site, or may be viewed online at the CASTLES homepage at http://cfa-www.harvard.edu/glensdata.

2.4. Photometric Analysis of Lens Components

Photometry of the lens components was performed using the same variant of DoPHOT described above. Photometric solutions to the LCO data set were obtained by fixing the relative separations of components A, B, and C at the corresponding HST positions. The fluxes of the three components, as well as the overall position of the system and the size σ of the lensing galaxy, were treated as free parameters. In the process of fixing the relative separations of the system components, appropriate steps were taken to account for the plate scale and chip orientation of the LCO detector. Table 2 summarizes our photometric results for the LCO data set. Here we present the separate magnitudes of components A, B, and C, as well as magnitude differences between system components and the combined magnitude from the two quasar images. A similar attempt was made to determine the colors of the lens components using the BVRI data from CTIO. Again, relative separations were fixed at the corresponding HST values, while the overall position of the system, the fluxes of components A, B, and C, and the size σ of the lensing galaxy were treated as free parameters. Appropriate steps were again taken to account for the plate scale and chip orientation of the CTIO detector. Unfortunately, treating the flux and size of component C as free parameters introduced too much freedom into these models, and the fits failed to converge to our satisfaction. Fixing the size σ of the lensing galaxy across all wavelengths at the LCO R band value resulted in no noticeable improvement. The quality of seeing on the CTIO data set is simply not good enough to perform stable photometry of component C. Reliable colors of all three system components will require better images. In order to circumvent this problem to some extent, we decided to set the flux of the lensing galaxy to zero and only solve for the fluxes of components A and B. Relative positions were still held constant at the corresponding HST values, while the overall position of the system was free to vary. Table 3 presents our magnitude results for the combined flux of components A and B.
Admittedly, these results contain a systematic error by failing to take into account light from the lensing galaxy. Upon comparison with the corresponding LCO result presented in Table 2, we find that the combined CTIO A+B magnitude in R is brighter than the corresponding LCO result by 0.08 mag. Under the assumption that the lensing galaxy gets brighter in redder wavelengths, we would expect a correspondingly smaller systematic error in B and V bands, and a larger one in I. It is hoped that these results will provide a handle on future variability of the system across BVRI wavelengths.

3. PHOTOMETRY AND ASTROMETRY STANDARDS

3.1. Photometric Standards

Photometric and astrometric results for stars in the CTQ 414 field were obtained for use with future observations. Eight nearby reference stars within a 4 arcmin radius of the target lens were chosen for this purpose. These stars are identified by the labels shown in Figure 1. Observations of the Landolt (1992) standard fields PG0231+051 (PG0231+051, PG0231+051A, PG0231+051B, PG0231+051C, PG0231+051D) and PG2331+055 (PG2331+055, PG2331+055A, PG2331+055B) were used to derive color terms and zero-point offsets for calibration onto the Johnson-Kron-Cousins photometric system. Table 4 lists the color terms used for the transformations, as well as the number of standard stars N used and the corresponding rms scatter of the fit. Color terms were extracted from the PG0231+051 field, making use of the field's greater sampling of B − V indices as compared to the PG2331+055 field. Zero-point offsets were derived from the PG2331+055 field. In deriving the I band color term, the faintest of the five observed PG0231+051 standard stars was discarded (see below). We have solved for the transformation equations in the form

X − x = const. + a_1 (B − V),    (1)
B − V = const. + a_2 (b − v),    (2)

where X represents the apparent magnitude in the standard BVRI system, x the extinction-corrected instrumental magnitude above the earth's atmosphere, and a_1 and a_2 the respective color terms for the X − x and B − V transformations. Corrections for atmospheric extinction were applied using "typical" extinction coefficients as cited in the 1990 CTIO Facilities Manual (k_B = 0.22, k_V = 0.11, k_R = 0.08, k_I = 0.04). In the process of reducing the standard fields, the relatively faint star PG0231+051 (Landolt values of I = 16.64 and B − V = −0.33) was discovered to be brighter in I by 0.30 ± 0.05 mag with respect to Landolt's (1992) listed magnitude. This residual is in the sense I_stnd − I_obs, where I_stnd is the value reported by Landolt (1992), and I_obs has been computed from the transformations given above. Similar I band discrepancies for PG0231+051, as compared to the Landolt (1992) standard value, have been reported by Geisler (1996), who found an I residual of 0.23 mag, and also by Rosvick (1995), who found an I residual of 0.15 mag. The residual reported by Geisler is in the same sense as mentioned above, while Rosvick does not report the sense of his residual. Because of the observed discrepancy, the I band observations of PG0231+051 were not included in the solution of the I band color term. The empirical PSF fitting described in the previous section yields magnitudes of stars in the quasar field with respect to the template PSF star. With the template star placed onto the standard photometric system, magnitudes for the field stars are straightforward to obtain.
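To make the use of equations (1)-(2) concrete, here is a small illustrative sketch (not from the paper) that applies an extinction correction and a color-term transformation to one instrumental magnitude. The color term a1 and extinction coefficient k_R are the paper's R band values; the zero point is a made-up placeholder that would in practice be fit from the standard-star frames.

```python
def calibrate_R(r_inst, airmass, b_minus_v, zero_point, k_R=0.08, a1=-0.0194):
    """Apply eq. (1) in R band: extinction-correct an instrumental magnitude,
    then transform to the standard system using the B-V color term.

    r_inst: instrumental magnitude; airmass: of the observation;
    b_minus_v: the star's standard B-V color (via eq. 2 in practice);
    zero_point: the 'const.' of eq. (1), fit from standard-star frames.
    """
    r_above_atmosphere = r_inst - k_R * airmass   # extinction correction
    return r_above_atmosphere + zero_point + a1 * b_minus_v

# Hypothetical usage: instrumental mag 15.20 observed at airmass 1.10.
print(calibrate_R(15.20, 1.10, b_minus_v=0.60, zero_point=1.50))
```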
Apparent BVRI magnitudes for our eight reference stars are listed in Table 5, along with respective standard errors of the mean (σ/N^{1/2}) as derived from frame-to-frame scatter. These error bars do not include uncertainties in the reference star's calibration, which are listed in the footnote to Table 5. Object numbers in Table 5 correspond to the labels shown in Figure 1. These local standards were used to calibrate the LCO photometry results reported in Table 2. It should be noted that images taken from CTIO are afflicted by coma aberration. The CTIO 1.5 m is a Ritchey-Chrétien telescope designed to operate free from coma aberration at an f ratio of f/7.5, but not at f/13.5. Observations at CTIO were carried out at f/13.5 to make use of the smaller pixel scale at that f ratio. The presence of coma aberration results in a slight, off-axis distortion of the PSF shape across the face of the detector. By comparing magnitudes derived from multiple PSF fittings, we estimate that coma aberration for our images is a small effect, introducing uncertainties in our magnitude determinations of 0.02 mag for stars on the largest contours of constant wavefront aberration.

3.2. Astrometric Standards

Astrometric solutions were also obtained for the eight reference stars in the quasar field using one of the R band exposures taken through a seeing of 1.32″ FWHM. This particular exposure was chosen since it yielded the lowest rms position errors with respect to standard coordinates. Standard coordinates for the objects were taken from the APM Sky Catalogue at Cambridge, England. The astrometric results are included in Table 5. The reported positions are in the form of offsets from star #7, in the sense of star x minus star #7.

4. RADIO OBSERVATIONS

On 1998 May 19 we searched for radio emission from CTQ 414, using the NRAO Very Large Array (VLA). The search was carried out at 8.46 GHz, while the VLA was in the A configuration. This provided an east-west resolution of 0.2″. Our 14-minute observation bracketed the transit time of CTQ 414, at an altitude of 12 degrees, permitting a north-south resolution of approximately 1″. No significant sources of radio flux were detected within a 5″ error circle around the position of CTQ 414. The rms noise level in this field was 0.065 mJy per synthesized beam, so our observation rules out (at the 3σ level) any sources of compact flux above 0.2 mJy. This is unfortunate but not particularly surprising, since the large majority of known quasars are radio quiet.

5. SUMMARY AND CONCLUSIONS

We have reported our discovery and photometric analysis of the new gravitational lens CTQ 414. Ground-based optical images of the quasar appear double, with a separation of 1.2″ and a magnitude difference between the quasar images of roughly 1 mag. Fitting and subtracting two empirical point spread functions from images obtained in subarcsecond seeing consistently leaves behind a faint, residual object. Fits for two PSFs plus an extended object place the extended object collinear with the pair of brighter components. Subsequent HST imaging with NICMOS has indeed confirmed the extended object as the lensing galaxy. We have shown that ground-based photometric analysis of all three lens components is feasible with subarcsecond seeing conditions, and hope that the photometric analysis presented in this paper will provide a handle on any future variability of the quasar images.

Fig. 1. - a) A 300 s R band image of CTQ 414 and surrounding stars taken at CTIO.
Photometric and astrometric results have been obtained for the stars labeled 1 through 8. Labels have been placed directly beneath the objects they identify.

Note to Table 2. - Relative positions for components A, B, and C were obtained from HST/NICMOS imaging and were held fixed during our photometric solution. Error bars are from the observed dispersion between the images and do not include uncertainties in the magnitude of the PSF template star (see footnote to Table 5).

Note to Table 3. - Error bars are from the observed dispersion between the images and do not include uncertainties in the magnitude of the PSF template star (see footnote to Table 5).

Note to Table 5. - Magnitudes have been derived with respect to the PSF template star (#7). Reported error bars are from the observed dispersion between the images and do not include uncertainties in the magnitude of the reference star. BVRI uncertainties in the reference star's magnitude are ±0.007, ±0.011, ±0.014, and ±0.016 mag, respectively, and must be added in quadrature to the error bars quoted above.

Fig. 2. - a) A summed image of the 12 R band exposures of CTQ 414 taken at LCO. The scale of the image is 0.2278″ per pixel. East is up and North is to the left. The fainter component is at PA 251° E of N. b) Summed image after fitting and subtracting two stellar point spread functions from the individual frames. c) Summed image after fitting and subtracting two stellar point spread functions plus a pseudogaussian from the individual frames. d) Summed image after fitting and subtracting two stellar point spread functions from the individual frames, but leaving the pseudogaussian unsubtracted. Panels b) and c) are displayed at a factor of 12 higher contrast than panel a). Panel d) is at a factor of 4 higher contrast than panel a).

Table 1. Log of Observations for CTQ 414 at CTIO
Frame #  Filter  FWHM (″)    Frame #  Filter  FWHM (″)
104      R       1.31        233      R       1.30
105      B       1.48        399      R       1.42
106      V       1.43        401      R       1.25
107      B       1.38        402      B       1.18
108      I       1.30        403      V       1.16
211      R       1.32        405      I       1.30
212      B       1.47        406      I       1.26
213      V       1.93        408      I       1.24
214      V       1.59        409      B       1.39
215      I       1.47        410      V       1.25
C S Kochanek, J N Hewitt, Astrophysical Applications of Gravitational Lensing. KluwerKochanek, C. S., and Hewitt, J. N. 1996 Astrophysical Applications of Gravitational Lensing (Dordrecht: Kluwer) C S Kochanek, E E Falco, C D Impey, J Lehár, B A Mcleod, H. -W Rix, AIP Conf. Proc. After the Dark Ages: When Galaxies Were Young. S. S. Holt and E. P. SmithNew YorkAIP163Kochanek, C. S., Falco, E. E., Impey, C. D., Lehár, J., McLeod, B. A., and Rix, H. -W. 1998, in AIP Conf. Proc. After the Dark Ages: When Galaxies Were Young, eds. S. S. Holt and E. P. Smith (New York: AIP), 470, 163. . A U Landolt, AJ. 104340Landolt, A. U. 1992, AJ, 104, 340 . J Maza, M Wischnjewsky, R Antezana, L E González, R.Mx.A.A. 31119Maza, J., Wischnjewsky, M., Antezana, R., and González, L. E. 1995 R.Mx.A.A., 31, 119 . B A Mcleod, Private Communication Refsdal, S. 128307MNRASMcLeod, B. A., et al. 1998 Private Communication Refsdal, S., 1964, MNRAS, 128, 307 . J M Rosvick, MNRAS. 2771379Rosvick, J. M. 1995 MNRAS, 277, 1379 . P L Schechter, M D Gregg, R H Becker, D J Helfand, R L White, AJ. 1151371Schechter, P. L., Gregg, M. D., Becker, R. H., Helfand, D. J., and White, R. L. 1998 AJ, 115, 1371 . P L Schechter, M Mateo, A Saha, PASP. 1051342Schechter, P. L., Mateo, M., and Saha, A., 1993, PASP, 105, 1342 . P L Schechter, C B Moore, AJ. 1051Schechter, P. L. and Moore, C. B., 1993, AJ, 105, 1
[]
[ "FOCUS: Dealing with Label Quality Disparity in Federated Learning", "FOCUS: Dealing with Label Quality Disparity in Federated Learning" ]
[ "Yiqiang Chen \nInstitute of Computing Technology\nThe Beijing Key Laboratory of Mobile Computing and Pervasive Device\nChinese Academy of Sciences\nBeijingChina\n\nUniversity of Chinese Academy of Sciences\nBeijingChina\n", "Xiaodong Yang \nInstitute of Computing Technology\nThe Beijing Key Laboratory of Mobile Computing and Pervasive Device\nChinese Academy of Sciences\nBeijingChina\n\nUniversity of Chinese Academy of Sciences\nBeijingChina\n", "Xin Qin \nInstitute of Computing Technology\nThe Beijing Key Laboratory of Mobile Computing and Pervasive Device\nChinese Academy of Sciences\nBeijingChina\n\nUniversity of Chinese Academy of Sciences\nBeijingChina\n", "Han Yu [email protected] \nNanyang Technological University\nSingapore\n", "Biao Chen \nXuanwu Hospital\nCapital Medical University\nBeijingChina\n", "Zhiqi Shen [email protected] \nNanyang Technological University\nSingapore\n" ]
[ "Institute of Computing Technology\nThe Beijing Key Laboratory of Mobile Computing and Pervasive Device\nChinese Academy of Sciences\nBeijingChina", "University of Chinese Academy of Sciences\nBeijingChina", "Institute of Computing Technology\nThe Beijing Key Laboratory of Mobile Computing and Pervasive Device\nChinese Academy of Sciences\nBeijingChina", "University of Chinese Academy of Sciences\nBeijingChina", "Institute of Computing Technology\nThe Beijing Key Laboratory of Mobile Computing and Pervasive Device\nChinese Academy of Sciences\nBeijingChina", "University of Chinese Academy of Sciences\nBeijingChina", "Nanyang Technological University\nSingapore", "Xuanwu Hospital\nCapital Medical University\nBeijingChina", "Nanyang Technological University\nSingapore" ]
[]
Ubiquitous systems with End-Edge-Cloud architecture are increasingly being used in healthcare applications. Federated Learning (FL) is highly useful for such applications, due to silo effect and privacy preserving. Existing FL approaches generally do not account for disparities in the quality of local data labels. However, the clients in ubiquitous systems tend to suffer from label noise due to annotators' varying skill-levels, biases or malicious tampering. In this paper, we propose Federated Opportunistic Computing for Ubiquitous Systems (FOCUS) to address this challenge. It maintains a small set of benchmark samples on the FL server and quantifies the credibility of the clients' local data without directly observing them by computing the mutual cross-entropy between performance of the FL model on the local datasets and that of the client's local FL model on the benchmark dataset. Then, a credit-weighted orchestration is performed to adjust the weight assigned to clients in the FL model based on their credibility values. FOCUS has been experimentally evaluated on both synthetic data and real-world data. The results show that it effectively identifies clients with noisy labels and reduces their impact on the model performance, thereby significantly outperforming existing FL approaches.
10.1007/978-3-030-63076-8_8
[ "https://arxiv.org/pdf/2001.11359v1.pdf" ]
210,966,398
2001.11359
6ff259ec6f8ddef591b59b9e44872c34185c1718
FOCUS: Dealing with Label Quality Disparity in Federated Learning Yiqiang Chen Institute of Computing Technology The Beijing Key Laboratory of Mobile Computing and Pervasive Device Chinese Academy of Sciences BeijingChina University of Chinese Academy of Sciences BeijingChina Xiaodong Yang Institute of Computing Technology The Beijing Key Laboratory of Mobile Computing and Pervasive Device Chinese Academy of Sciences BeijingChina University of Chinese Academy of Sciences BeijingChina Xin Qin Institute of Computing Technology The Beijing Key Laboratory of Mobile Computing and Pervasive Device Chinese Academy of Sciences BeijingChina University of Chinese Academy of Sciences BeijingChina Han Yu [email protected] Nanyang Technological University Singapore Biao Chen Xuanwu Hospital Capital Medical University BeijingChina Zhiqi Shen [email protected] Nanyang Technological University Singapore FOCUS: Dealing with Label Quality Disparity in Federated Learning Ubiquitous systems with End-Edge-Cloud architecture are increasingly being used in healthcare applications. Federated Learning (FL) is highly useful for such applications, due to silo effect and privacy preserving. Existing FL approaches generally do not account for disparities in the quality of local data labels. However, the clients in ubiquitous systems tend to suffer from label noise due to annotators' varying skill-levels, biases or malicious tampering. In this paper, we propose Federated Opportunistic Computing for Ubiquitous Systems (FOCUS) to address this challenge. It maintains a small set of benchmark samples on the FL server and quantifies the credibility of the clients' local data without directly observing them by computing the mutual cross-entropy between performance of the FL model on the local datasets and that of the client's local FL model on the benchmark dataset. Then, a credit-weighted orchestration is performed to adjust the weight assigned to clients in the FL model based on their credibility values. FOCUS has been experimentally evaluated on both synthetic data and real-world data. The results show that it effectively identifies clients with noisy labels and reduces their impact on the model performance, thereby significantly outperforming existing FL approaches. Introduction Today, the End-Edge-Cloud ubiquitous systems, which use end sensors/devices to collect data, carry out distributed edge computing tasks, and coordinate decision support in the cloud server, have emerged to benefit many application domains [Ren et al., 2019]. The growth of ubiquitous systems makes the collection and processing of massive amounts of personal data a possibility. This has raised privacy concerns and may hinder the development of such technologies if not addressed. Federated Learning (FL) has emerged to be a useful machine learning paradigm to help ubiquitous systems leverage * Corresponding Author personal data in a privacy preserving manner . Under FL, multiple clients collaborate to train an FL model without exchanging raw data. It has been applied in ubiquitous systems are shown promising results . Nevertheless, one key challenge that remains open and hinders wide spread adoption of FL in ubiquitous systems, especially in the healthcare domain, is label quality disparity. The quality of labels in clients' local datasets influences the performance of the FL model. Existing FL approaches implicitly assume that there is no significant difference among the quality of labels from local datasets [Kairouz et al., 2019]. 
Thus, popular FL approaches such as FedAvg treat model parameters from different clients equally [McMahan et al., 2016]. Due to differences in annotators' skills, biases or malicious tampering, label noise is common in data collected by ubiquitous systems [Zeni et al., 2019]. Taking healthcare as an example, there are generally more cases of misdiagnosis in smaller hospitals than in larger, better-staffed hospitals. In FL, a noisy client can negatively impact the learned model [Kairouz et al., 2019]. Therefore, enabling FL to effectively deal with label quality disparity is of vital importance to its success in ubiquitous systems.

In this paper, we propose the Federated Opportunistic Computing for Ubiquitous Systems (FOCUS) approach to address this challenging problem. It is designed to identify clients with noisy labels and to aggregate their model parameters into the FL model in an opportunistic manner. FOCUS works for cross-silo federated settings. It maintains a small set of benchmark samples on the FL server. During the FL model training process, the local model trained on a client's local data and the FL model aggregated on the FL server form a Twin Network. By defining a contrastive loss over this Twin Network, the credibility of each client's data can be measured. It is then used to determine the extent to which a given client is allowed to participate in FL. In each iteration, FOCUS performs credibility-weighted orchestration on the FL server to avoid update corruption. The term "opportunistic" indicates that a client model is not aggregated into the FL model by simple averaging (as in the case of FedAvg [McMahan et al., 2017]), but weighted by its credibility.

To evaluate FOCUS, we first test it on a synthetic human activity recognition dataset in which labels are tampered with in different ways in a subset of the clients. Then, it is tested on a real-world dataset with diverse label qualities, collected from hospitals for detecting Parkinson's Disease symptoms. The experimental results show that FOCUS can detect clients with noisy labels and reduce their impact on the FL model performance more effectively than existing FL approaches.

Related Work

Label noise is a common problem in machine learning, especially for deep learning on large datasets. There are two categories of methods to deal with this problem: 1) at the data level and 2) at the algorithm level. At the data level, existing methods generally aim to sanitize the noisy labels to mitigate their impact. [Cretu et al., 2008] uses small slices of the training data to generate multiple models and produce provisional labels for each input, which are used to determine if noisy labels are present. [Xie et al., 2019] designed Byzantine-robust aggregators to defend against label-flipping data poisoning attacks on convolutional neural networks. However, [Koh et al., 2018] recently found that a federated approach to data sanitization is still vulnerable to data poisoning attacks. At the algorithm level, existing methods generally aim to train noise-tolerant models. [Natarajan et al., 2013] studied the impact of label noise in binary classification from a theoretical perspective, and proposed a simple weighted surrogate loss to establish strong empirical risk bounds. Since deep learning models can easily overfit to label noise, one line of work used meta-learning to train deep models, where synthetic noisy labels were generated to update the model before the conventional gradient update.
Nevertheless, these existing methods cannot be directly applied in the context of federated learning, as they require access to raw data. In FL, label noise is also related to the non-IID issue. [Zhao et al., 2018] found that non-IID clients produce a poor global model in FL, since the large Earth Mover's Distance (EMD) among the clients' data makes their models diverge. However, the proposed data sharing strategy requires more communication and risks diluting the clients' information. Furthermore, the calculation of EMD requires the FL server to have access to clients' raw data, which is not permissible under FL settings. To the best of our knowledge, there is currently no published work on mitigating the impact of label noise under FL settings.

Traditional FL Model Training

Under FL, the training data are distributed among K clients, each storing a subset of the training data D_k = (X_k, Y_k), k = 1, ..., K. Each client trains its local model M_k by minimizing the loss function on its own dataset only. Many different machine learning algorithms can be trained with FL. For simplicity of exposition, we use the convolutional neural network (CNN) architecture as the basis to train an FL classification model in this paper. In this context, the cross-entropy is used as the objective function to be minimized:

L = -\frac{1}{n_k} \sum_{i=1}^{n_k} y_i \log P(y_i \mid x_i),   (1)

where n_k denotes the amount of training data owned by the k-th client. After that, the FL server collects the model updates from the clients and aggregates them to form the new global FL model M_s. The most widely used FL aggregation method is the Federated Averaging (FedAvg) algorithm [McMahan et al., 2016], which is given by:

M^s_t = \sum_{k=1}^{K} \frac{n_k}{n} M^k_t,   (2)

where M^s_t denotes the global model at round t, and n is the total amount of data used for FL model training by the clients involved, n = \sum_{k=1}^{K} n_k.
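To make the aggregation in Eq. (2) concrete, the following minimal sketch (our illustration, not the authors' code) averages PyTorch state dicts weighted by the clients' data sizes. The client bookkeeping and variable names are assumptions made for the example.

import torch

def fedavg_aggregate(client_states, client_sizes):
    # Illustrative sketch of Eq. (2), not the paper's implementation:
    # M_s = sum_k (n_k / n) * M_k, with n = sum_k n_k.
    n = float(sum(client_sizes))
    global_state = {}
    for name in client_states[0]:
        global_state[name] = sum(
            (n_k / n) * state[name].float()
            for state, n_k in zip(client_states, client_sizes)
        )
    return global_state

# Usage, assuming every client holds a copy of the same architecture:
# states = [model_k.state_dict() for model_k in client_models]
# sizes = [len(D_k) for D_k in client_datasets]
# global_model.load_state_dict(fedavg_aggregate(states, sizes))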
The Proposed FOCUS Approach

The proposed FOCUS approach quantifies label noise in the dataset of each FL client under horizontal federated learning. It measures the quality of each client's data and aggregates their local model updates into the FL model in an opportunistic manner. For clarity, we only present the case where each client sends local model updates to the server in plaintext. Nevertheless, added protection mechanisms, such as homomorphic encryption and secret sharing, can be incorporated into FOCUS following the methods explained in [Yang et al., 2019].

The pipeline of FOCUS is shown in Figure 1. Once the K clients have sent their local models to the FL server, and each client has received the global FL model from the FL server:
1. Each client i evaluates the global FL model on its own local dataset, and sends the evaluation result, LL_i, to the FL server.
2. The FL server evaluates each client i's local model M_i one by one on its benchmark dataset and records the model performance as LS_i.
3. Once the corresponding LL_i value is received by the FL server, it combines LL_i and LS_i into a mutual cross-entropy to produce a credibility measure which reflects the quality of client i's local labels.
4. Finally, the credibility measure for each client i is used as its weight in a weighted FedAvg operation to produce a new global FL model.

In the following parts of this section, we provide more details on the FOCUS pipeline.

Client Label Noise Measurement

Since there is no prior knowledge about the clients' annotation quality, FOCUS maintains a small set of benchmark samples D_s = (X_s, Y_s) on the FL server. A user adopting FOCUS needs to ensure that there is little noise in the benchmark dataset (i.e., the data are labeled accurately). This may require the adopter to work closely with specialists in the target field, which is beyond the scope of this paper. Once this requirement is satisfied, we have the following theorem:

Theorem 1. Given a benchmark dataset D_s on the FL server, the data of a client D_k follows an identical distribution to the benchmark dataset if the trained local model M_k performs well on D_s.

As D_s has accurate annotations, a similar data distribution indicates that the client dataset also has accurate annotations. However, the converse does not always hold, due to potential concept drift. In this paper, we do not address this issue. To measure clients' label noise accurately, FOCUS also considers how the global FL model performs on a given local dataset. For this purpose, we define a mutual cross-entropy between the global FL model and the local model from each client to quantify the latent probability of noise, which is given by:

E_k = LS_k + LL_k,   (3)
LS_k = -\sum_{(x,y) \in D_s} y \log P(y \mid x; M_k),   (4)
LL_k = -\sum_{(x,y) \in D_k} y \log P(y \mid x; M_s).   (5)

E_k combines client k's local model performance on the benchmark dataset (LS_k) and the performance of the global FL model on client k's local dataset (LL_k). There are three possible cases when analyzing E_k:
• Small E_k: A small E_k indicates that the local data follow a similar distribution to the benchmark dataset, meaning that client k's dataset possesses accurate labels.
• Large E_k: If both the global FL model and the local model perform badly when tested on each other's dataset, the result is a large E_k. This means that the client's dataset follows a different data distribution than the benchmark dataset. Thus, client k is likely to possess noisy labels.
• Medium E_k: If only one of the two models performs badly, it leads to a medium E_k value. In this case, it is not sufficient to conclude that client k has noisy labels. If the local model is the one with poor performance, it means that the local dataset is not large enough to train a good model. If the global FL model is the one with poor performance, it means that one or more other clients that contributed to training the FL model may have noisy labels. Even when a client k artificially inflates the LL_k value it sends to the FL server, the resulting E_k value will fall into this case, and its impact on the FL model performance will be limited.

Based on the mutual cross-entropy, we define client k's credibility C_k, which reflects the quality of its local data labels, as:

C_k = 1 - \frac{e^{\alpha E_k}}{\sum_i e^{\alpha E_i}},   (6)

where α is a hyper-parameter for normalization. With this measure, we propose a new algorithm to aggregate model updates from clients based on their credibility values to improve FedAvg.
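The following minimal sketch illustrates Eqs. (3) and (6): given per-client loss values LS_k and LL_k, it computes the mutual cross-entropy and the resulting credibility scores. The α value and the loss numbers are made up for illustration, and the max-shift is a standard numerical-stability trick that does not change the result.

import numpy as np

def credibility(ls, ll, alpha=1.0):
    # Illustrative sketch (not the authors' code) of:
    #   E_k = LS_k + LL_k                          (Eq. 3)
    #   C_k = 1 - exp(a*E_k) / sum_i exp(a*E_i)    (Eq. 6)
    e = np.asarray(ls, dtype=float) + np.asarray(ll, dtype=float)
    w = np.exp(alpha * (e - e.max()))  # shift by max for numerical stability
    return 1.0 - w / w.sum()

# Example: client 2 has a much larger mutual cross-entropy (noisier
# labels) and therefore receives a much lower credibility score.
ls = [0.4, 0.5, 2.9]   # local models evaluated on the benchmark set
ll = [0.3, 0.4, 2.6]   # global model evaluated on each local set
print(credibility(ls, ll))   # roughly [0.99, 0.99, 0.02]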
Opportunistic FL Model Update

To leverage the measured client credibility, we rewrite the FedAvg model update rule from Eq. (2) as:

M^s_t = \sum_{k=1}^{K} W^k_{t-1} M^k_t.   (7)

Given that credibility C^k_t is assigned to client k in round t, W^k_t is defined as:

W^k_t = \frac{n_k C^k_t}{\sum_{i=1}^{K} n_i C^i_t}.   (8)

In this weight, the term related to the amount of data involved (as in FedAvg) is combined with the clients' credibility values, which may vary in different rounds. As the mutual cross-entropy is based on both the local models and the global one at round t, the opportunistic update is weighted by the latest credibility values, which are calculated at round t − 1. Note that since \sum_{k=1}^{K} W^k_t = 1, the convergence of the proposed FOCUS approach is guaranteed as long as the FedAvg algorithm in Eq. (2) converges. The proposed FOCUS approach is shown as Algorithm 1; a small numerical sketch of the resulting weighting follows this section. The function ModelTest, which is used to calculate the cross-entropy loss of a model M on a dataset D, is shown as Algorithm 2.

Algorithm 1: FOCUS (executed by the FL server)
1: Initialize M^s_0 and W^k_0 = n_k / \sum_k n_k
2: for each round t = 1, ..., T do
3:   for each client k = 1, ..., K in parallel do
4:     M^k_t ← ClientUpdate(k, M^s_t)
5:     LS^k_t ← ModelTest(M^k_t, D_s)
6:   end for
7:   M^s_t ← \sum_{k=1}^{K} W^k_{t-1} M^k_t
8:   for each client k = 1, ..., K in parallel do
9:     LL^k_t ← ModelTest(M^s_t, D_k)
10:    E^k_t ← LS^k_t + LL^k_t
11:  end for
12:  C^k_t ← 1 - e^{\alpha E^k_t} / \sum_i e^{\alpha E^i_t}
13:  W^k_t ← n_k C^k_t / \sum_{i=1}^{K} n_i C^i_t
14: end for

Communication Cost

FOCUS requires two communications per round: 1) broadcasting the global model, and 2) clients submitting local model parameter updates to the FL server for aggregation. During broadcast, the central server sends M_s to all the clients. During aggregation, all or some of the K clients send their local model parameters, (LL_k, M_k), k = 1, ..., K, to the FL server. Compared with FedAvg, the only item that needs to be transmitted in addition to the model parameters is the performance value of the global FL model on each local dataset.
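As a rough numerical illustration of Eqs. (7) and (8), the sketch below (again ours, not the paper's implementation) combines data sizes and credibility scores into opportunistic weights and applies them to client state dicts; function names and the toy values are assumptions.

import torch

def focus_weights(sizes, credibility):
    # Opportunistic weights, Eq. (8): W_k = n_k C_k / sum_i n_i C_i.
    raw = [n_k * c_k for n_k, c_k in zip(sizes, credibility)]
    total = sum(raw)
    return [r / total for r in raw]

def focus_aggregate(client_states, sizes, credibility):
    # Credibility-weighted aggregation, Eq. (7).
    weights = focus_weights(sizes, credibility)
    agg = {}
    for name in client_states[0]:
        agg[name] = sum(
            w * state[name].float()
            for w, state in zip(weights, client_states)
        )
    return agg

# With equal data amounts, a client with credibility 0.1 contributes
# roughly a ninth of what a credibility-0.9 client does:
print(focus_weights([100, 100, 100], [0.9, 0.9, 0.1]))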
One of which is referred to as "Normal", where all the four clients are annotated with correct labels. The other is referred to as "Noisy", where one of clients has noisy labels. The testing accuracy comparison between FedAvg and FOCUS under these two scenarios is shown in Figure 2. It can be observed that under the Normal scenario, FOCUS and FedAvg achieved the same performance in terms of accuracy. In this sense, FedAvg can be regarded as a special case of FOCUS, which does not take the credibility of clients into account. Under the Noisy scenario, due to the noisy client, some valuable information is lost and performance degradation is significant for both FedAvg and FOCUS. Since all the local models including that from the noisy client are aggregated indiscriminately in FedAvg, its performance is significantly poorer than FOCUS. Through noisy client detection and opportunistic model aggregation, FOCUS outperforms FedAvg by 5.82% in terms of accuracy. The opportunistic aggregation weights produced by FO-CUS for the 4 clients in the last learning iteration are shown in Figure 3. As the data on normal clients follows an identical data distribution, they are assigned almost equal weight during FL model aggregation. However, the weight for the noisy client which has been significantly reduced by FOCUS, which shows that the proposed method can correctly detect noisy client and take appropriate actions. l t f l = 1 K K k=1 L(M k t , D k )(10) L is the cross-entropy loss in our experiments. Both FedAvg and FOCUS take longer to converge under the Noisy scenario compared to under the Normal scenario. Nevertheless, the convergence rate of FOCUS under both scenarios is faster than that of FedAvg. Because of the incorrect labels, the data distribution of the noisy clients are different from the others, resulting in larger Earth Mover's Distance values and diverse model parameters . Thus, during the aggregation in the server under FO-CUS, the global model is less impacted by the noisy client due to its reduced weight. Another evidence for the reduced impact of the noisy client on the FL model is that the final loss achieved by FOCUS is larger than that of FedAvg. This is because the the global FL model under FOCUS does not fit the noisy data as well as the normal data. This results in a larger training loss on the noisy data. In other words, FOCUS is capable of avoiding over-fitting the noisy data. Evaluation on the Real-world Dataset In this section, we evaluate FOCUS onto a real-world practical dataset for Parkinson's Disease symptom recognition -PD-Tremor -by comparing it with the popular FedAvg approach. Among the 3 hospitals from which this dataset was collected, 2 of them are top-tier hospitals, and the third one is considered a lower-tier hospital. We regard each hospital as a client in FL. All the data are annotated by doctors from the 3 hospitals. As doctors from the lower-tier hospital tend to be less experienced and are more likely to make wrong annotations, we test FOCUS on this dataset to evaluate its effectiveness. To collect a set of benchmark samples, two experts were invited to make consistent annotations on a sample dataset. The benchmark samples are divided into two parts. One of them is used as the benchmark dataset on the FL server; and the other is used as the test set. The results in terms of prediction accuracy and client weights in FOCUS are shown in Figures 5 and 6, respectively. "Base" denotes the base model trained with the benchmark dataset. 
It can be observed that both FL-based approaches, FOCUS and FedAvg, are able to learn more information from the clients and train stronger models. FOCUS outperformed FedAvg in terms of accuracy by 7.24%, which also confirms our suspicion that there are noisy labels in the clients' data. By observing the opportunistic weight of each client, we find that the lower-tier hospital is assigned a smaller weight, which indicates that its data are of low quality and contain noisy labels. In summary, FOCUS significantly outperformed the state-of-the-art FedAvg algorithm on both the synthetic dataset and the real-world dataset.

Conclusions and Future Work

Label quality disparity is an important challenge facing today's federated learning field, and so far it remains open. Noisy labels in FL clients can corrupt the learned FL model. Under FL, sensitive local data cannot be transmitted out of the owner client's data store in order to protect user privacy, which makes the problem of noisy local labels even more challenging to resolve. In this paper, we propose Federated Opportunistic Computing for Ubiquitous Systems (FOCUS) to address this challenging problem. FOCUS maintains a small set of benchmark samples on the server. A novel mutual cross-entropy based credibility score is designed to assess the label quality of a client's dataset without requiring access to its raw data. Based on the measured credibility, we further propose a modification of the popular FedAvg algorithm to opportunistically aggregate client model updates into a global FL model. In this way, only one extra parameter, which carries the local loss, needs to be communicated. Extensive experiments on both synthetic and real-world data demonstrate the significant advantage of FOCUS over FedAvg. With FOCUS, we empower FL systems to effectively identify clients with noisy labels and adjust the model training strategy to mitigate the negative effects. To the best of our knowledge, it is the first FL approach capable of handling label noise in a privacy-preserving manner. Although FOCUS has proven effective for federated learning with label quality disparity, there are still interesting problems that require further investigation. For example, how to distinguish clients who maliciously attack the FL system by faking their labels from those facing genuine difficulties in providing correct labels is an important issue which affects how these clients should be dealt with. In subsequent research, we will focus on tackling this problem.

Figure 1: The pipeline of FOCUS.

Algorithm: ClientUpdate(k, M)
1: for local step j = 1, ..., N do
2:   M ← M − η∇f(M; x, y) for (x, y) ∈ D_k
3: end for
4: return M to server

Algorithm 2: ModelTest(M, D)
1: for (x, y) ∈ D do
2:   l ← l − y log P(y | x; M)
3: end for
4: return l

Figure 2: Testing accuracy comparison between FedAvg and FOCUS under the Normal and Noisy scenarios.
Figure 3: The weights assigned to the clients by FOCUS.
Figure 4: The training loss comparison results on USC-HAD.
Figure 5: Prediction accuracy comparison on PD-Tremor.
Figure 6: The weights assigned to the hospitals by FOCUS.

References
[Chen et al., 2017] Yiqiang Chen, Xiaodong Yang, Biao Chen, Chunyan Miao, and Hanchao Yu. PDassist: Objective and quantified symptom assessment of Parkinson's disease via smartphone. In 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 939-945. IEEE, 2017.
[Cretu et al., 2008] Gabriela F. Cretu, Angelos Stavrou, Michael E. Locasto, Salvatore J. Stolfo, and Angelos D. Keromytis. Casting out demons: Sanitizing training data for anomaly sensors. In the 2008 IEEE Symposium on Security and Privacy (SP'08), pages 81-95. IEEE, 2008.
[Kairouz et al., 2019] Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. CoRR, arXiv:1912.04977, 2019.
[Koh et al., 2018] Pang Wei Koh, Jacob Steinhardt, and Percy Liang. Stronger data poisoning attacks break data sanitization defenses. CoRR, arXiv:1811.00741, 2018.
[McMahan et al., 2016] H. Brendan McMahan, Eider Moore, Daniel Ramage, and Blaise Aguera y Arcas. Federated learning of deep networks using model averaging. arXiv:1602.05629, 2016.
[McMahan et al., 2017] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pages 1273-1282, 2017.
[Natarajan et al., 2013] Nagarajan Natarajan, Inderjit S. Dhillon, Pradeep K. Ravikumar, and Ambuj Tewari. Learning with noisy labels. In Proceedings of the 27th Conference on Neural Information Processing Systems (NeurIPS'13), pages 1196-1204, 2013.
[Ren et al., 2019] Ju Ren, Deyu Zhang, Shiwen He, Yaoxue Zhang, and Tao Li. A survey on end-edge-cloud orchestrated network computing paradigms: Transparent computing, mobile edge computing, fog computing, and cloudlet. ACM Computing Surveys (CSUR), 52(6):1-36, 2019.
[Wang et al., 2019] Xiaofei Wang, Yiwen Han, Chenyang Wang, Qiyang Zhao, Xu Chen, and Min Chen. In-edge AI: Intelligentizing mobile edge computing, caching and communication by federated learning. IEEE Network, 33(5):156-165, 2019.
[Xie et al., 2019] Cong Xie, Sanmi Koyejo, and Indranil Gupta. Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance. In Proceedings of the 36th International Conference on Machine Learning (ICML'19), pages 6893-6901, 2019.
[Yang et al., 2019] Qiang Yang, Yang Liu, Yong Cheng, Yan Kang, Tianjian Chen, and Han Yu. Federated Learning. Morgan & Claypool Publishers, 2019.
[Zeni et al., 2019] Mattia Zeni, Wanyi Zhang, Enrico Bignotti, Andrea Passerini, and Fausto Giunchiglia. Fixing mislabeling by human annotators leveraging conflict resolution and prior knowledge. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 3(1):32, 2019.
[Zhang and Sawchuk, 2012] Mi Zhang and Alexander A. Sawchuk. USC-HAD: A daily activity dataset for ubiquitous activity recognition using wearable sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (UbiComp'12), pages 1036-1043. ACM, 2012.
[Zhang et al., 2019] Weihe Zhang, Yali Wang, and Yu Qiao. MetaCleaner: Learning to hallucinate clean representations for noisy-labeled visual recognition. In Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'19), pages 7373-7382, 2019.
[Zhang, 2019] Wanyi Zhang. Personal context recognition via skeptical learning. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI'19), pages 6482-6483, 2019.
[Zhao et al., 2018] Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-IID data. CoRR, arXiv:1806.00582, 2018.
[]
[ "Primer AI's Systems for Acronym Identification and Disambiguation", "Primer AI's Systems for Acronym Identification and Disambiguation" ]
[ "Nicholas Egan [email protected] \nPrimer AISan FranciscoCA\n", "John Bohannon [email protected] \nPrimer AISan FranciscoCA\n" ]
[ "Primer AISan FranciscoCA", "Primer AISan FranciscoCA" ]
[]
The prevalence of ambiguous acronyms makes scientific documents harder to understand for humans and machines alike, presenting a need for models that can automatically identify acronyms in text and disambiguate their meaning. We introduce new methods for acronym identification and disambiguation: our acronym identification model projects learned token embeddings onto tag predictions, and our acronym disambiguation model finds training examples with similar sentence embeddings as test examples. Both of our systems achieve significant performance gains over previously suggested methods, and perform competitively on the SDU@AAAI-21 shared task leaderboard. Our models were trained in part on new distantly-supervised datasets for these tasks which we call AuxAI and AuxAD. We also identified a duplication conflict issue in the SciAD dataset, and formed a deduplicated version of SciAD that we call SciAD-dedupe. We publicly released all three of these datasets, and hope that they help the community make further strides in scientific document understanding.
null
[ "https://arxiv.org/pdf/2012.08013v2.pdf" ]
229,181,292
2012.08013
7f1bf9913a2985c586866254bb7d77b5263b2060
Primer AI's Systems for Acronym Identification and Disambiguation

Nicholas Egan ([email protected]) and John Bohannon ([email protected]), Primer AI, San Francisco, CA

Abstract: The prevalence of ambiguous acronyms makes scientific documents harder to understand for humans and machines alike, presenting a need for models that can automatically identify acronyms in text and disambiguate their meaning. We introduce new methods for acronym identification and disambiguation: our acronym identification model projects learned token embeddings onto tag predictions, and our acronym disambiguation model finds training examples with similar sentence embeddings as test examples. Both of our systems achieve significant performance gains over previously suggested methods, and perform competitively on the SDU@AAAI-21 shared task leaderboard. Our models were trained in part on new distantly-supervised datasets for these tasks which we call AuxAI and AuxAD. We also identified a duplication conflict issue in the SciAD dataset, and formed a deduplicated version of SciAD that we call SciAD-dedupe. We publicly released all three of these datasets, and hope that they help the community make further strides in scientific document understanding.

Introduction

Writers of scientific documents frequently utilize abbreviations as tools to make unwieldy technical terms less verbose. These abbreviations often take the form of acronyms or initialisms, which are abbreviations formed from the first letters of words in the term. We refer to the abbreviated form as the "short form" or "acronym," and we refer to the full term as the "long form" or "expansion." The widespread usage of these abbreviations makes writing more convenient for scientists, but poses a challenge to machines and non-expert humans attempting to read scientific documents. This has led to an accumulation of scientific jargon, and a need for AI tools to manage acronyms and their expansions.

Veyseh et al. (2020b) recently released two large datasets for acronym understanding in scientific documents: the first is for the acronym identification task (AI), and the second is for the acronym disambiguation task (AD). The goal of acronym identification is to extract short and long form acronyms within a sentence, and the goal of acronym disambiguation is to determine the expansion of a particular acronym given sentence context.

Contributions

In this paper, we describe our systems for the AI and AD tasks, which improve upon the models proposed by Veyseh et al. and perform competitively on the task leaderboard (Veyseh et al. 2020a). Our AI method projects learned token embeddings from a transformer-based language model onto tag predictions, and our AD method finds similar training examples for testing examples. We improved the performance of our systems through the development of distantly-supervised auxiliary datasets, which we are releasing to the public. Finally, we identified some issues with the SciAD dataset, and propose a remedy for these issues that we hope will make SciAD more useful as a tool for the NLP community. Our three datasets are publicly available in our GitHub data repository (https://github.com/PrimerAI/sdu-data).
Datasets

SciAI

The SciAI dataset (Veyseh et al. 2020b) consists of 17,560 sentences annotated for acronym identification, where each sentence token is tagged for short form and long form acronym boundaries in BIO format. To construct this dataset, the authors assembled a corpus of 6,786 papers from arXiv, identified candidate sentences in these papers that likely contained acronyms, and hired Amazon Mechanical Turk workers to gold label the sentences. The candidate sentences were sentences containing consecutive (or near consecutive) word sequences for which the concatenation of the first few characters from these words could spell out another word in the document that consists of at least 50% capital letters. When labeling, humans were instructed to find all short form acronyms in the sentence, even if the acronym's long form did not appear in the sentence.

Auxiliary AI Data

To build on the training data provided by SciAI, we took a distantly supervised approach to build a noisier dataset we call AuxAI. We started by scraping abbreviations with their expansions from Abbreviations.com within the Academic & Science, Computing, and Internet categories. We then searched for these terms on arXiv, finding paper abstracts in which the short and long form both appear. We were able to identify 6,497 terms across 274,149 abstracts this way. After finding these abstracts, we searched them for any other abbreviations that appear with their expanded form, and labeled them as such. Additionally, since some acronyms like USA and DNA are common enough to stand on their own in short form without their expanded forms, we compiled a list of "universal acronyms," which are acronyms in the SciAI training dataset that have no long form in their sentence for at least 2 sentences. We found 1,807 such acronyms, which were all marked in our token labels. Overall, we were able to produce a dataset consisting of 313,914 sentences. Table 1 shows that the tag distribution within this dataset skews slightly more towards the O tag, suggesting that our distantly supervised approach has imperfect recall. Another issue with this dataset is that it contains around 3,800 different short form acronyms, while the SciAI training set contains around 6,500 despite being smaller, suggesting that AuxAI captures less data diversity. During training, we experimented with subsampling AuxAI data in such a way that the ratio between unique terms and training examples matched that of SciAI.
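As a loose illustration of this kind of distant supervision, the sketch below pairs a short form with a candidate expansion by matching word-initial letters in a sentence. The actual matching rules behind SciAI and AuxAI (near-consecutive sequences, capitalization thresholds, and so on) are more involved, so treat this as a simplified approximation.

import re

def find_expansion(sentence, short_form):
    # Approximate heuristic, not the released pipeline: look for a
    # window of consecutive words whose initial letters spell out the
    # acronym's letters in order.
    words = re.findall(r"[A-Za-z]+", sentence)
    k = len(short_form)
    for i in range(len(words) - k + 1):
        window = words[i:i + k]
        if all(w[0].lower() == c.lower() for w, c in zip(window, short_form)):
            return " ".join(window)
    return None

print(find_expansion(
    "We train a convolutional neural network (CNN) on this data.", "CNN"))
# -> "convolutional neural network"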
SciAD

The SciAD dataset (Veyseh et al. 2020b) consists of 62,441 examples annotated for acronym disambiguation, where each example consists of a sentence, a short form within that sentence, and the correct long form that the short form refers to, which may or may not appear in the sentence. The dataset is split into 50,034 training examples, 6,189 development examples, and 6,218 test examples. To construct it, the authors used SciAI to compile a dictionary of acronyms consisting of short forms with their possible long forms. The long forms were normalized through a combination of edit distance and human verification. The authors then used the one-sense-per-discourse assumption to infer that if a short form is mapped to a long form in the SciAI dataset, other occurrences of that short form within the same document can be used as ambiguous acronyms with the same expansion.

Duplicates in SciAD

When exploring the SciAD dataset, we noticed that it contains many duplicates: while there are 62,441 total examples in the dataset, there are only 42,945 examples with a unique (sentence, acronym) pair. For 12,672 of the examples, there exists at least one other example with the exact same sentence and acronym. 45.4% of the development examples contain a duplicate in the training data, and 45.1% of the test examples contain a duplicate in the training data. This overlap of data between train time and test time suggests that SciAD is a biased measure of performance on the AD task. To make matters more interesting, duplicate examples do not always have duplicate labels: 10.5% of the duplicated examples in the train and development datasets contain more than one long form label. 93.1% of the development examples that have duplicates within the training dataset share a label with at least one of the duplicates, and 10.8% of them have a conflicting label with at least one of the duplicates. Since the AD task asks us to find suitable long forms using features extracted from the sentence and short form alone based on the training data, we claim that the accuracy of any model should be upper bounded by 93.1% on the 45.4% of the development data that contains a duplicate in the training data.

It is plausible that some of these label conflicts among duplicates are genuine: two different papers could write the exact same sentence yet refer to different acronym expansions. But we suspect that human error from the annotators is the more likely explanation for most of these cases. In order to remedy this problem, we propose that when measuring development and test performance, one ignore examples that also exist in the training data. In the experiments section, we report our model performance on both this subset of the data and the full dataset. Additionally, we propose removing training examples that are duplicates of other training examples before model training. In order to resolve conflicting labels, one can use the more common label among the duplicates. For convenience to other researchers, we released our deduplicated version of the SciAD dataset, which we call SciAD-dedupe, in the same repository (https://github.com/PrimerAI/sdu-data).

Auxiliary AD Data

To build on the training data provided by SciAD, we took a distantly supervised approach to build a noisier dataset we call AuxAD. We queried arXiv abstracts for acronyms found in the SciAD dictionary, and collected 56,874 such abstracts. We then assumed that if a short and long form from the dictionary both appeared within a sentence, then the short form can be resolved to the long form. Using the one-sense-per-discourse assumption, we found other sentences within the document that contained the short form and assumed that these short forms also corresponded to the same long form. This resulted in a dataset of 112,788 examples, which we release in the same data repository. While this AuxAD dataset contains more examples than SciAD, it is less diverse: both datasets started with the same dictionary of 2,308 terms, but the SciAD training dataset contains 2,152 unique terms and the AuxAD dataset contains 1,268 unique terms. We suspect that this is because SciAD used full arXiv documents while AuxAD relied on arXiv abstracts, and thus had a harder time finding certain terms from the dictionary.

Methods

Acronym Identification

Our model architecture consists of a transformer-based language model (Vaswani et al. 2017) that embeds the input sentence tokens, followed by a linear projection onto logits for each BIO tag, an approach mirroring our model for Named Entity Recognition (Primer AI 2019). We start by joining together the word-tokenized input sentence, and retokenizing the sentence with SentencePiece byte-pair encoding (Kudo and Richardson 2018) to get L tokens. These L tokens are embedded with the XLNet language model (Yang et al. 2019) to get an R^H embedding for each token, where H is 768 for XLNet-base and 1024 for XLNet-large. We run these embeddings through a linear layer to get T tag logits per token, where T is the number of BIO tags (in this case 5). We use the tag with the highest logit per token as the predicted tag, with the label of the first byte-pair encoded token within a word being used for the word; a minimal sketch of this projection head follows.
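The sketch below shows only the tagging head, with a random tensor standing in for the XLNet token embeddings: the hidden size of 768 and the 5 tags follow the text above, while everything else (names, the stubbed encoder output) is our illustrative assumption rather than the released code.

import torch
import torch.nn as nn

class TagProjector(nn.Module):
    # Linear projection from token embeddings (L x H) to BIO-tag
    # logits (L x T); a simplified stand-in for the described model.
    def __init__(self, hidden_size=768, num_tags=5):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_tags)

    def forward(self, token_embeddings):
        return self.proj(token_embeddings)

# Stand-in for XLNet output: a sentence of 12 byte-pair tokens.
embeddings = torch.randn(12, 768)
logits = TagProjector()(embeddings)      # shape (12, 5)
predicted_tags = logits.argmax(dim=-1)   # highest logit per token
print(predicted_tags.shape)              # torch.Size([12])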
Model Training. Training was performed on both the XLNet encoder and the linear projection weight matrix with cross-entropy loss on the output logits and an AdamW optimizer. Training hyperparameters included:
• pretrain on AuxAI then finetune on SciAI, or just train on SciAI;
• the subset of AuxAI to use when finetuning;
• XLNet model size;
• whether or not to down-weigh the O tag;
• learning rate.
We formed an ensemble of these XLNet models trained with different hyperparameter configurations, and averaged together their predicted logits at inference time. After picking the highest scoring tag per token, we cleaned up the predictions such that I tags could not follow O tags to get our final predictions.

Acronym Disambiguation

Veyseh et al. modeled AD as a classification problem: given a sentence and a short-form acronym within that sentence, they used a classifier to predict the acronym's expansion. We instead view it as an information retrieval problem: given a test sentence containing an acronym, we want to find the most similar training sentence and use its label. The intuition behind this approach is that contextual clues within a sentence can determine the subfield of research that the paper falls into. By computing the similarity between two sentences, we can identify whether they are within the same research field based on how much their semantics align. For instance, a sentence talking about "CNN" would likely include either several machine learning terms or several news terms. We can compare our sentence to several others in the training dataset, and if the dataset is sufficiently comprehensive, we should be able to find a sentence semantically similar to the sentence in question.

More specifically, we start by computing a sentence embedding for every example in the datasets. To infer a label for a given test example, we compute the cosine similarity between its embedding and the embedding of every sentence in the training dataset, pick the training sentence with the highest cosine similarity, and use its label. We were able to squeeze out a small performance boost by additionally checking whether any possible expansion for the acronym appears within the sentence itself, and using that expansion if we find it. Measuring the utility of this approach is complicated by the fact that the dataset contains many duplicate sentences across the testing and training datasets, and sometimes their labels conflict. In cases where multiple duplicate sentences were found, we used the label that was more common overall in the training dataset. A small sketch of this retrieval step follows.
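The retrieval step can be sketched in a few lines (our illustration; the real system embeds sentences with the encoder models listed later, whereas here tiny made-up vectors stand in for embeddings).

import numpy as np

def predict_expansion(test_vec, train_vecs, train_labels):
    # Label a test example with the long form of its most similar
    # training sentence under cosine similarity (illustrative sketch).
    a = test_vec / np.linalg.norm(test_vec)
    b = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
    sims = b @ a                 # cosine similarity to every train example
    best = int(np.argmax(sims))
    return train_labels[best], sims[best]

# Toy 3-dimensional "embeddings" standing in for encoder output.
train_vecs = np.array([[1.0, 0.1, 0.0], [0.0, 1.0, 0.2]])
train_labels = ["convolutional neural network", "cable news network"]
print(predict_expansion(np.array([0.9, 0.2, 0.0]), train_vecs, train_labels))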
Model Training. To train an embedding model for this task, we constructed datasets of sentence pairs from SciAD and AuxAD, where the sentence pairs share a short form acronym and are labeled as having the same long form or a different long form, with a balanced number of positive and negative pairs. We used various transformer-based language models as encoders, and trained these language models as Twin Networks (also known as Siamese Networks) (Chicco 2021): sentence embeddings (e_1, e_2) were computed for a sentence pair (s_1, s_2) by running each sentence individually through the same encoder. The cosine similarity between the sentences was computed as

cos(e_1, e_2) = \frac{e_1^T e_2}{\lVert e_1 \rVert \cdot \lVert e_2 \rVert}.

The encoder weights were optimized through a mean squared error loss over the sentence pairs in the training data:

L_{MSE}(D) = \frac{1}{n} \sum_{(s_1, s_2, y) \in D} \left( y - cos(E(s_1), E(s_2)) \right)^2,

where D is our dataset of n training examples, E is our transformer embedding model, and y is the desired similarity score, which was 1 if the sentences shared a long form and 0 otherwise. Training our encoder in this way teaches it to learn an embedding space in which sentences containing acronyms with the same meaning have higher cosine similarity than sentences containing acronyms with different meanings; a toy training step is sketched below.
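A minimal sketch of one Twin-Network training step under this objective follows. The mean-pooled toy encoder is a stand-in we introduce purely for illustration, not the transformer encoders actually used, and the random token ids are placeholder data.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    # Mean-pooled word embeddings as a stand-in for the transformer;
    # both sentences of a pair pass through these same shared weights.
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):
        return self.emb(token_ids).mean(dim=1)   # (batch, dim)

encoder = ToyEncoder()
opt = torch.optim.AdamW(encoder.parameters(), lr=1e-3)

# A batch of (s1, s2, y) pairs: y = 1 if the pair shares a long form.
s1 = torch.randint(0, 1000, (8, 16))
s2 = torch.randint(0, 1000, (8, 16))
y = torch.randint(0, 2, (8,)).float()

e1, e2 = encoder(s1), encoder(s2)
sim = F.cosine_similarity(e1, e2, dim=-1)   # cos(E(s1), E(s2))
loss = F.mse_loss(sim, y)                   # the (y - cos)^2 objective
loss.backward()
opt.step()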
Pretrained Models. In addition to our trained models, we also tested the embedding method SIF (Arora, Liang, and Ma 2017) and several pretrained models from sentence-transformers (Reimers and Gurevych 2019):
• XLM (Lample and Conneau 2019) trained for paraphrase detection
• DistilRoBERTa (Sanh et al. 2020) trained for paraphrase detection
• DistilRoBERTa trained for information retrieval on the MS MARCO dataset (Bajaj et al. 2016)
• DistilRoBERTa trained for Quora question similarity
Our final system was an ensemble of these models plus some trained models, where cosine similarity scores were averaged across the models in the ensemble.

Acronym Identification Experiments

Model Building. Our model was implemented using our existing codebase for Named Entity Recognition, which was based on PyTorch Transformers (Wolf et al. 2020). Each XLNet model took between 10 and 60 minutes to train on a single NVIDIA V100 GPU, depending on hyperparameters like the number of epochs and the training dataset. Inference took 3 ms per example when using a batch size of 16.

Performance. Performance results are shown in Table 2, where we compare our methods to a rule-based baseline (Schwartz and Hearst 2003) and the LSTM-CRF model proposed by Veyseh et al. All scores are computed on the development dataset, due to the fact that the test dataset labels are not yet publicly available, except for the LSTM-CRF model, whose scores are taken from their paper. It is clear that while a model trained on just our AuxAI dataset performs poorly, pretraining on AuxAI and then finetuning on SciAI results in a measurable boost in performance. Our final ensemble method consisted of 15 different XLNet models trained with different hyperparameters, and achieved an F1 score of 92.60 on the test set.

Table 2: Acronym identification performance of various models on the SciAI dataset. Results for the LSTM-CRF model were taken from Veyseh et al., which used the test dataset, while the other scores are on the development dataset. Our models are below the line, with "→" denoting finetuning. All performance metrics are macro-averaged between short and long forms.

Error Analysis. We performed a small-scale error analysis by looking at a random sample of 50 mistakes made by the ensemble on the SciAI development dataset. Of those mistakes, 18 were genuine mistakes made by the model, 24 were errors made by the human annotators, and 8 were too ambiguous for us to tell. Both model and human mistakes were most commonly the result of failing to extract an acronym that should have been extracted, representing 39 of the errors: the fact that humans frequently missed acronyms within the data likely led to trained models being overly conservative. 7 of the errors came from a misalignment between the true and predicted boundaries of acronyms, and only 1 error came from incorrectly extracting a non-acronym.

Acronym Disambiguation Experiments

Model Building. Our embedding models were all based on the sentence-transformers library (Reimers and Gurevych 2019), with the exception of SIF, for which we used fastText (Joulin et al. 2016). Our final ensemble consisted of the following embedding models:
• SIF
• XLM paraphrase
• DistilRoBERTa paraphrase
• DistilRoBERTa MS MARCO
• DistilBERT Quora
• RoBERTa SciAD
• XLM paraphrase finetuned on AuxAD
The last two of these were models that we trained as Twin Networks. Training our Twin Network transformer models took around 20 minutes on an NVIDIA V100 GPU, depending on what data was used and the number of training epochs. Evaluation consisted of embedding all of the sentences in the training and testing data, which took around 2 minutes per model on a V100, and computing distances between training and testing data, which took around a minute on 16 CPUs for the whole ensemble. When predicting labels for the development dataset, we used the SciAD training dataset for finding matches, and when predicting labels for the test dataset, we merged together the training and development datasets from SciAD. We experimented with using AuxAD as well as SciAD at query time, but found that this led to a slight decrease in performance. Only 12% of SciAD-dedupe development examples had a closer match in AuxAD than in the SciAD training data, despite AuxAD being the larger dataset, which can largely be explained by the fact that AuxAD contains fewer terms. Within the small proportion of examples where an AuxAD match is used, we tend to have less accurate predictions, with an accuracy of 87% on these SciAD-dedupe examples, versus the accuracy of 96% that we get on the chosen examples when matching against SciAD.

What is also interesting is that the systems tend to perform better on SciAD-dedupe than on SciAD, which is counter-intuitive considering that leaking training data into the testing data should theoretically drive up performance scores. To investigate this, we extracted the subset of the SciAD development dataset that was duplicated from the training dataset, and measured the performance of a system that uses the most frequent long form of the training examples with the same duplicated tokens. This method achieves an F1 score of 89.41, which is surprisingly lower than the F1 score of our models on the deduplicated data. This subset of the development dataset, which repeats sentences from the training dataset, thus seems to be quite noisy.
Performance. Table 3 shows the macro-averaged F1 scores for each of the individual embedding methods, the embedding ensemble, the GAD classifier proposed in Veyseh et al., and the baseline of using the most frequent expansion for an acronym. Performance is shown for the SciAD development set, as well as the development set of SciAD-dedupe. The exception is GAD, for which we include the performance on the test dataset reported in Veyseh et al. We can see that the ensemble clearly outperforms the rest of the models, including every individual embedding model it is comprised of.

System | SciAD | SciAD-dedupe
Baseline | 59.73 | 59.97
GAD* | 81.90 | -
SIF | 88.11 | 89.13
XLM paraphrase | 89.42 | 90.89
DistilRoBERTa paraphrase | 89.20 | 90.56
DistilRoBERTa MS Marco | 88.48 | 89.78
DistilBERT Quora | 86.09 | 86.34
RoBERTa SciAD | 88.18 | 89.46
XLM paraphrase → AuxAD | 83.75 | 83.04
Ensemble | 91.22 | 93.15

Table 3: Acronym disambiguation performance of various systems on the SciAD dataset. Results for the GAD model were taken from Veyseh et al., which used the test dataset, while the other scores were computed for the development dataset. Models below the line are the methods we tested, which represent a combination of pretrained sentence transformers and models we trained ourselves, with "→" denoting finetuning.
The individual embedding models perform similarly, except for the finetuned XLM paraphrase model. Despite this model's poor performance, we found that it was valuable to include as a member of the ensemble. Our ensemble achieved an F1 score of 91.58 on the test dataset.

Figure 1 shows the distribution of cosine similarity scores between each development example and the training example inferred to be most similar, for our ensemble on SciAD. The distributions of scores for correct examples and incorrect examples are shown and normalized separately. We can see visually that p(M | ¬C) > p(M | C), where M indicates that our development example found a perfect match in the training dataset and C indicates that our predicted expansion is correct. We can also see that if we ignore the perfect matches, correct predictions tend to have higher similarity scores than incorrect predictions, suggesting that our model can trade off recall for boosts in precision by using the similarity scores as a threshold.

Figure 1: Similarity score distributions for predictions on SciAD by our ensemble. Each score represents the cosine similarity between an example in the development dataset and its closest example in the training dataset. The blue distribution is for examples that were judged to be correct, and the orange distribution is for examples that were judged to be incorrect. Both distributions were normalized.

Error Analysis. We performed a small-scale error analysis by looking at a random sample of 50 mistakes made by the ensemble on the SciAD development dataset. Of those mistakes, 18 were genuine mistakes made by the model, and 32 were mistakes made by human annotators. 10 of the model mistakes came from semantically similar sentences across development and training having different labels, which highlights a limitation of this approach. 8 of the model mistakes came from the lack of similar training examples for a given development example, which could potentially be fixed given a larger training corpus. 30 of the human mistakes came from conflicting labels in duplicate examples, and the remaining 2 of the human errors came from mistakes during canonicalization of long forms, such as "sum capacity" and "sum capacities" both existing in the acronym dictionary.

Conclusion

In conclusion, we have developed new neural models for acronym identification and disambiguation. Our acronym identification model uses a transformer followed by a linear projection, and our acronym disambiguation model finds similar examples with embeddings learned from Twin Networks. Both models benefited from ensembling, and both models achieve significant performance gains over the models originally proposed by Veyseh et al. We introduced new datasets for acronym identification and disambiguation, AuxAI and AuxAD, which were labeled through distant supervision. We also identified a duplication issue in the SciAD dataset, and formed a deduplicated version of this dataset that we call SciAD-dedupe. We released all three of these datasets, and we hope that they serve as useful tools for the NLP community.

References

Arora, S.; Liang, Y.; and Ma, T. 2017. A simple but tough-to-beat baseline for sentence embeddings.
Bajaj, P.; Campos, D.; Craswell, N.; Deng, L.; Gao, J.; Liu, X.; Majumder, R.; McNamara, A.; Mitra, B.; Nguyen, T.; et al. 2016. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268.
Chicco, D. 2021. Siamese Neural Networks: An Overview. New York, NY: Springer US. 73-94.
Joulin, A.; Grave, E.; Bojanowski, P.; Douze, M.; Jégou, H.; and Mikolov, T. 2016. FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.
Kudo, T., and Richardson, J. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 66-71. Brussels, Belgium: Association for Computational Linguistics.
Lample, G., and Conneau, A. 2019. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS).
Primer AI. 2019. A new state of the art for named entity recognition. https://primer.ai/blog/a-new-state-of-the-art-for-named-entity-recognition/.
Reimers, N., and Gurevych, I. 2019. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing.
Association for Computational Linguistics. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. V Sanh, L Debut, J Chaumond, T Wolf, Sanh, V.; Debut, L.; Chaumond, J.; and Wolf, T. 2020. Dis- tilbert, a distilled version of bert: smaller, faster, cheaper and lighter. A simple algorithm for identifying abbreviation definitions in biomedical text. A Schwartz, M Hearst, Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing. 4Schwartz, A., and Hearst, M. 2003. A simple algorithm for identifying abbreviation definitions in biomedical text. Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing 4:451-62. Acronym identification and disambiguation shared tasks for scientific document understanding. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L U Kaiser, I Polosukhin, I Guyon, U V Luxburg, S Bengio, H Wallach, R Fergus, S Vishwanathan, R Garnett, F Dernoncourt, T H Nguyen, W Chang, L A Celi, Proceedings of the AAAI-21 Workshop on Scientific Document Understanding. the AAAI-21 Workshop on Scientific Document UnderstandingCurran Associates, Inc. Veyseh, A. P. B30Advances in Neural Information Processing SystemsVaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L. u.; and Polosukhin, I. 2017. Attention is all you need. In Guyon, I.; Luxburg, U. V.; Ben- gio, S.; Wallach, H.; Fergus, R.; Vishwanathan, S.; and Gar- nett, R., eds., Advances in Neural Information Processing Systems, volume 30, 5998-6008. Curran Associates, Inc. Veyseh, A. P. B.; Dernoncourt, F.; Nguyen, T. H.; Chang, W.; and Celi, L. A. 2020a. Acronym identification and disam- biguation shared tasks for scientific document understand- ing. In Proceedings of the AAAI-21 Workshop on Scientific Document Understanding. What Does This Acronym Mean? Introducing a New Dataset for Acronym Identification and Disambiguation. A P B Veyseh, F Dernoncourt, Q H Tran, T H Nguyen, Proceedings of COLING. COLINGVeyseh, A. P. B.; Dernoncourt, F.; Tran, Q. H.; and Nguyen, T. H. 2020b. What Does This Acronym Mean? Introducing a New Dataset for Acronym Identification and Disambigua- tion. In Proceedings of COLING. Transformers: State-of-the-art natural language processing. T Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, T Rault, R Louf, M Funtowicz, J Davison, S Shleifer, P Von Platen, C Ma, Y Jernite, J Plu, C Xu, T L Scao, S Gugger, M Drame, Q Lhoest, A M Rush, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsOnlineAssociation for Computational LinguisticsWolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; Davi- son, J.; Shleifer, S.; von Platen, P.; Ma, C.; Jernite, Y.; Plu, J.; Xu, C.; Scao, T. L.; Gugger, S.; Drame, M.; Lhoest, Q.; and Rush, A. M. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Process- ing: System Demonstrations, 38-45. Online: Association for Computational Linguistics. Xlnet: Generalized autoregressive pretraining for language understanding. Z Yang, Z Dai, Y Yang, J Carbonell, R Salakhutdinov, Q V Le, NeurIPS. Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R.; and Le, Q. V. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS.
[ "https://github.com/PrimerAI/sdu-data", "https://github.com/PrimerAI/sdu-data" ]
[ "A GEOMETRIC PROOF OF LÜCK'S VANISHING THEOREM FOR THE FIRST L 2 -BETTI NUMBER OF THE TOTAL SPACE OF A FIBRATION", "A GEOMETRIC PROOF OF LÜCK'S VANISHING THEOREM FOR THE FIRST L 2 -BETTI NUMBER OF THE TOTAL SPACE OF A FIBRATION" ]
[ "Christopher Wulff " ]
[]
[]
A significant theorem of Lück says that the first L^2-Betti number of the total space of a fibration vanishes under some conditions on the fundamental groups. The proof is based on constructions on chain complexes. In the present paper, we translate the proof into the world of CW-complexes to make it more accessible.
null
[ "https://arxiv.org/pdf/1611.08253v1.pdf" ]
119,620,570
1611.08253
7e0629cd0dd524ac8ecd91650e2af10c4472d14b
A GEOMETRIC PROOF OF LÜCK'S VANISHING THEOREM FOR THE FIRST L^2-BETTI NUMBER OF THE TOTAL SPACE OF A FIBRATION

Christopher Wulff

24 Nov 2016

A significant theorem of Lück says that the first L^2-Betti number of the total space of a fibration vanishes under some conditions on the fundamental groups. The proof is based on constructions on chain complexes. In the present paper, we translate the proof into the world of CW-complexes to make it more accessible.

1. Introduction

In [2], Lück proved the following significant theorem:

Theorem. Let F → E → B be a fibration of connected CW-complexes such that F and B have finite 2-skeletons. Then E has finite 2-skeleton up to homotopy. If the image of π_1(F) → π_1(E) is infinite and π_1(B) contains Z as a subgroup, then the first L^2-Betti number of E vanishes: b_1(E) = 0.

Lück's proof is based on somewhat abstract constructions in the world of chain complexes, which make it quite hard to understand what is really going on geometrically. On closer examination, however, it turns out that most of these constructions do have a counterpart already at the level of CW-complexes. The purpose of the present paper is to elaborate these geometric counterparts and thereby translate Lück's proof into the world of CW-complexes. The hope is that the geometric version of the proof is more accessible to the generic reader.

It should be said that the present paper is not meant to be considered independently of the original paper [2]. In particular, we use the same notation without any recapitulation and assume that the reader is familiar with the basic results in [2, Sections 1 & 2]. Furthermore, we only re-prove the original theorem shown above, but not any generalization such as [3, Theorem 6.67] (although the proof of the latter theorem contains a geometric construction which exhibits a slight similarity to what we do here). After all, the purpose of this paper is to simplify matters, not to complicate them. The fact that our proof takes more space than the original proof in [2] is mainly due to the fact that we include a few more details.

Supported by the Program of Post-Doctoral Scholarships at the Universidad Nacional Autónoma de México.

2. Outline of proof

The idea of the proof is as follows. We will construct a more accessible CW-complex T and a 1-connected map h: T → E for which we can directly prove

    b_1(E) ≤ b_1(T, (h_*)^* ℓ^2(π_1(E))) = 0.

In preparation for the proof we shall make the set-up precise. First of all, we assume that E has finite 2-skeleton, and we can and will assume that all maps appearing (including loops defined on the unit interval I := [0,1] with the obvious cell structure) are cellular. Secondly, we choose 0-cells e ∈ E and b := p(e) ∈ B as basepoints and let F := p^{-1}(b) with basepoint e ∈ F.

Now denote π := π_1(B, b), Γ := π_1(E, e) and Δ := im(i_*: π_1(F, e) → Γ). Thus we obtain a group extension

    1 → Δ → Γ → π → 1,

the surjection being p_*. For each w ∈ π we choose some arbitrary pre-image w̄ ∈ Γ under p_*. We shall also use the same letters w, w̄ for representing loops I → B, I → E, respectively, and assume w = p ∘ w̄. Choose a solution h(w) to the lifting problem

    F × {0} ∪ {e} × I --(i ∪ w̄)--> E
          |                        |
        incl.                      p
          v                        v
        F × I -----(w ∘ pr_I)----> B

(the diagonal map being h(w): F × I → E) and denote σ(w) := h(w)(·, 1): (F, e) → (F, e). The pointed homotopy class of σ(w) is independent of the choices made and is called the pointed fibre transport along w. Denote by

    T_{σ(w)} := F × I / (x, 1) ∼ (σ(w)x, 0)

the mapping torus of σ(w).

We are now ready to define the CW-complex T. Choose a generating set S = {s_1, ..., s_g} of π such that s_1 has infinite order and apply the constructions above to each w ∈ S.
Then T is obtained by gluing together T_{σ(s_1)}, ..., T_{σ(s_g)} along the common subcomplex F × {0}. It is obviously connected, because F is connected. All of h(s_1), ..., h(s_g) together assemble to a map h: T → E which fits into the commutative diagram

    F × {0} ----⊂----> T ---proj.---> ∨_{n=1}^{g} S^1
        |              |                    |
        |              h              ∨_{n=1}^{g} s_n
        v              v                    v
        F -----i-----> E -------p--------> B

On fundamental groups, this induces

    π_1(F, e) ---> π_1(T, (e,0)) --->> Z^{*g}
        |                |                 |
        v                v h_*             v
        Δ -------------> Γ ----p_*--->>    π

and exactness of the lower row together with the indicated surjectivity of some of the maps immediately implies that h_* is surjective too, and so h is 1-connected.

Denote by Ẽ → E and T̃ → T the universal coverings and by t: T̄ → T the connected covering of T associated to the subgroup ker(h_*). Thus the latter has deck transformation group Γ and there is a Γ-equivariant lift h̄: T̄ → Ẽ of h. We obtain a 1-connected ZΓ-chain map of free ZΓ-chain complexes

    ZΓ ⊗_{Zπ_1(T)} C_*(T̃) ≅ C_*(T̄) → C_*(Ẽ),

the second map being h̄_*, and the proof of [2, Lemma 1.2.1] implies

    b_1(E) ≤ b_1(T, (h_*)^* ℓ^2 Γ).

In the following section we shall provide a more concrete construction of T̄, which allows us to calculate the right-hand side of this inequality directly in the final section.

3. Explicit construction of T̄

Denote by f: F̃ → F the connected covering corresponding to the subgroup ker(i_*), which has Δ as deck transformation group. Choose any 0-cell ẽ ∈ F̃ with f(ẽ) = e as basepoint. For arbitrary w, the map h(w): F × I → E is a homotopy between i ∘ σ(w) and i, which implies

    im((σ(w) ∘ f)_*) = σ(w)_*(ker(i_*)) = ker(i_*) = im(f_*),

and thus σ(w) lifts to a map σ̃(w): F̃ → F̃ which fixes ẽ. This map is not Δ-equivariant, but:

Lemma 1. For arbitrary δ ∈ Δ we have σ̃(w) ∘ δ = (w̄^{-1} δ w̄) ∘ σ̃(w), with w̄^{-1} δ w̄ ∈ Δ.

Proof. Note that both sides are lifts F̃ → F̃ of the map σ(w): F → F. It therefore suffices to prove the equality at the point ẽ, i.e. that σ̃(w)(δ · ẽ) = (w̄^{-1} δ w̄) · ẽ. Denote a representative loop I → F ⊂ E of δ by the same letter and let δ̃ be a lift of δ to F̃ with δ̃(0) = ẽ. Then δ · ẽ is defined as δ̃(1). With this data at hand, the point σ̃(w)(δ · ẽ) is defined as α(1), where α: I → F̃ is the lift of the loop σ(w) ∘ δ with starting point α(0) = ẽ. In other words, the action of σ(w) ∘ δ ∈ Δ takes ẽ to σ̃(w)(δ · ẽ). But σ(w) ∘ δ = w̄^{-1} δ w̄ in Γ and therefore also in Δ, because h(w) gives rise to a homotopy in E between those loops. This proves the claim.

Denote by F̄ := Γ × F̃ / Δ the Γ-CW-complex obtained from Γ × F̃ by dividing out the equivalence relation (γ, x) ∼ (γδ^{-1}, δx). The Γ-action is the obvious left action on the first component. Lemma 1 now implies that the Γ-equivariant map

    σ̄(w): F̄ → F̄,  [(γ, x)] ↦ [(γw̄, σ̃(w)x)]

is well-defined. Denote by T̄_{σ̄(w)} = F̄ × I / ∼ the mapping torus of σ̄(w). In this section, we define T̄ by gluing together the T̄_{σ̄(s_1)}, ..., T̄_{σ̄(s_g)} along the common F̄ × {0}, and claim that it is exactly the covering described in the previous section. First of all, note that T̄ is indeed a covering of T, with each of the subcomplexes T̄_{σ̄(s_n)} covering the corresponding subcomplex T_{σ(s_n)}, and clearly the canonical Γ-action on T̄ coming from the action on F̄ is by deck transformations.

Lemma 2. The space T̄ is connected.

Proof. Note that it clearly suffices to show that the points of t^{-1}{(e, 0)} = {[(γ, ẽ, 0)] | γ ∈ Γ} can be connected by paths in T̄. For each n = 1, ..., g and γ ∈ Γ, the path

    s_{n,γ}: I → T̄_{σ̄(s_n)} ⊂ T̄,  r ↦ [(γ, ẽ, r)]

connects [(γ, ẽ, 0)] with [(γ, ẽ, 1)] = [(γ s̄_n, σ̃(s_n) ẽ, 0)] = [(γ s̄_n, ẽ, 0)] and is mapped to s̄_n and s_n under h ∘ t and p ∘ h ∘ t, respectively. By applying this repeatedly, we see that each [(γ, ẽ, 0)] is connected to [(δ, ẽ, 0)] = [(1, δẽ, 0)] for some δ ∈ ker(p_*) = Δ, and this in turn is connected to [(1, ẽ, 0)], because F̃ is path connected.

Lemma 3. If τ: I → T̄ is a path connecting [(γ′, ẽ, 0)] to [(γ′γ, ẽ, 0)], then h ∘ t maps τ to a representative loop I → E of γ.

Proof. We have already seen this for τ being one of the paths s_{n,γ′} defined in the proof of the previous lemma. It is also clear for τ a path within F̄ × {0}, because any such path is of the form r ↦ [(γ′, τ′(r), 0)] with τ′ a path in F̃ satisfying τ′(0) = ẽ and (γ′γ, ẽ) ∼ (γ′, τ′(1)), which implies γ ∈ Δ and τ′(1) = γẽ. The set of all paths which satisfy the claim is clearly closed under concatenation and taking reversed paths. It is thus sufficient to show that any path τ satisfying the prerequisites of the lemma can be homotoped into a concatenation of the s_{n,γ′} and their inverses and paths within F̄ × {0}. By cellular approximation and a subsequent homotopy within the parameter space I, any such τ can be written as a concatenation of finitely many paths τ_1, ..., τ_k, each of which is a constant-speed path along a 1-cell of T̄. These are either contained in F̄ × {0} or run along a 1-cell of the form c × I ⊂ T̄_{σ̄(s_n)} ⊂ T̄ with c a 0-cell of F̄. Denote the paths along the latter in the positive direction by ρ_{n,c}. Note that for c = [(γ, ẽ)] we recover the path s_{n,γ}. Any c which is not of this form can be connected to some [(γ, ẽ)] by a path α: I → F̄, and an obvious homotopy in T̄ shows ρ_{n,c} · (σ̄(s_n) ∘ α) ≃ α · s_{n,γ}. This allows us to trade any of the τ_m which is equal to some ρ_{n,c} (or its inverse) for a concatenation of two paths in F̄ with one of the s_{n,γ} (or its inverse) in between. This shows the claim.

The last two lemmas imply that T̄ is exactly the covering associated to ker(h_*): it is connected, and if τ: I → T̄ maps to a loop in T based at e, then τ itself is a loop if and only if h_*[t ∘ τ] = 0. Furthermore, the last lemma shows that the two canonical actions of Γ on T̄ as deck transformations (the action coming from general covering theory and the action induced by the Γ-action on F̄) are in fact the same.

4. Calculating b_1(T, (h_*)^* ℓ^2 Γ) = 0

The proof of the theorem is completed by calculating b_1(T, (h_*)^* ℓ^2 Γ) = 0. Note that

    T̄ \ T̄_{σ̄(s_1)} ≅ ⨆_{n=2}^{g} F̄ × (0, 1)

and we therefore obtain a short exact sequence of Γ-chain complexes

    0 → C_*(T̄_{σ̄(s_1)}) → C_*(T̄) → ⊕_{n=2}^{g} C_{*-1}(F̄) → 0.

This induces, by [1, Thm. 2.1 on p. 10], a weakly exact L^2-homology sequence

    H_1(ℓ^2 Γ ⊗_{ZΓ} C_*(T̄_{σ̄(s_1)})) → H_1(ℓ^2 Γ ⊗_{ZΓ} C_*(T̄)) → ⊕_{n=2}^{g} H_0(ℓ^2 Γ ⊗_{ZΓ} C_*(F̄)).

On the right-hand side, the von Neumann dimension of each summand is b_0(F, (i_*)^* ℓ^2 Γ), which vanishes by [2, Lemma 1.2.5] as im(i_*) = Δ is infinite. The von Neumann dimension of the left-hand side is b_1(T_{σ(s_1)}, (φ_*)^* ℓ^2 Γ), where φ: T_{σ(s_1)} = F × I / ∼ → E is a quotient of h(s_1). Let Γ′ ⊂ Γ be the image of φ_*, which is exactly the subgroup of Γ generated by Δ and s̄_1. As s_1 ∈ π has infinite order, the canonical map π_1(T_{σ(s_1)}, e) → Z factors as π_1(T_{σ(s_1)}, e) → Γ′ → Z, the first map being φ′. Using [2, Lemma 1.2.3 and Theorem 2.1] we conclude

    b_1(T_{σ(s_1)}, (φ_*)^* ℓ^2 Γ) = b_1(T_{σ(s_1)}, (φ′)^* ℓ^2 Γ′) = 0.

Thus the weakly exact sequence implies that the von Neumann dimension of the middle term H_1(ℓ^2 Γ ⊗_{ZΓ} C_*(T̄)), which is exactly b_1(T, (h_*)^* ℓ^2 Γ), vanishes as well, and the proof of the theorem is complete.

References
[1] Jeff Cheeger and Mikhael Gromov. Bounds on the von Neumann dimension of L^2-cohomology and the Gauss-Bonnet theorem for open manifolds. J. Differential Geom., 21(1):1-34, 1985.
[2] Wolfgang Lück. L^2-Betti numbers of mapping tori and groups. Topology, 33(2):203-214, 1994.
[3] Wolfgang Lück. L^2-invariants: theory and applications to geometry and K-theory, volume 44 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series]. Springer-Verlag, Berlin, 2002.
[]
[ "GapTV: Accurate and Interpretable Low-Dimensional Regression and Classification", "GapTV: Accurate and Interpretable Low-Dimensional Regression and Classification" ]
[ "Wesley Tansey [email protected] \nDepartment of Computer Science\nDepartment of Information, Risk, and Operations Management\nDepartment of Statistics and Data Sciences\nUniversity of Texas at Austin\nUniversity of Texas at Austin\n\n", "James G Scott [email protected] \nDepartment of Computer Science\nDepartment of Information, Risk, and Operations Management\nDepartment of Statistics and Data Sciences\nUniversity of Texas at Austin\nUniversity of Texas at Austin\n\n" ]
[ "Department of Computer Science\nDepartment of Information, Risk, and Operations Management\nDepartment of Statistics and Data Sciences\nUniversity of Texas at Austin\nUniversity of Texas at Austin\n", "Department of Computer Science\nDepartment of Information, Risk, and Operations Management\nDepartment of Statistics and Data Sciences\nUniversity of Texas at Austin\nUniversity of Texas at Austin\n" ]
[]
We consider the problem of estimating a regression function in the common situation where the number of features is small, where interpretability of the model is a high priority, and where simple linear or additive models fail to provide adequate performance. To address this problem, we present GapTV, an approach that is conceptually related both to CART and to the more recent CRISP algorithm (Petersen et al., 2016), a state-of-the-art alternative method for interpretable nonlinear regression. GapTV divides the feature space into blocks of constant value and fits the value of all blocks jointly via a convex optimization routine. Our method is fully data-adaptive, in that it incorporates highly robust routines for tuning all hyperparameters automatically. We compare our approach against CART and CRISP and demonstrate that GapTV finds a much better trade-off between accuracy and interpretability.
null
[ "https://arxiv.org/pdf/1702.07405v1.pdf" ]
88,515,677
1702.07405
baffc1ac3210c749cd862adfe600a75f095f19f5
GapTV: Accurate and Interpretable Low-Dimensional Regression and Classification

Wesley Tansey ([email protected]), Department of Computer Science, University of Texas at Austin
James G. Scott ([email protected]), Department of Information, Risk, and Operations Management; Department of Statistics and Data Sciences, University of Texas at Austin

We consider the problem of estimating a regression function in the common situation where the number of features is small, where interpretability of the model is a high priority, and where simple linear or additive models fail to provide adequate performance. To address this problem, we present GapTV, an approach that is conceptually related both to CART and to the more recent CRISP algorithm (Petersen et al., 2016), a state-of-the-art alternative method for interpretable nonlinear regression. GapTV divides the feature space into blocks of constant value and fits the value of all blocks jointly via a convex optimization routine. Our method is fully data-adaptive, in that it incorporates highly robust routines for tuning all hyperparameters automatically. We compare our approach against CART and CRISP and demonstrate that GapTV finds a much better trade-off between accuracy and interpretability.

Introduction

Many modern machine learning techniques, such as deep learning and kernel machines, tend to focus on the "big data, big features" regime. In such a scenario, there are often so many features and so many highly non-linear interactions between features that model interpretability is generally a secondary consideration. Instead, effort is focused solely on a measure of model performance such as root mean squared error (RMSE). Under this research paradigm, only a model that out-performs the previous champion method warrants an investigation into understanding its decisions.

But there is also a robust and recent line of machine-learning research in the equally important scenario of low-dimensional regression, with relatively few features and where interpretability is a primary concern. For example, lattice regression with monotonicity constraints has been shown to perform well in video-ranking tasks where interpretability was a prerequisite (Gupta et al., 2016). The interpretability of the system enables users to investigate the model, gain confidence in its recommendations, and guide future recommendations. In the two- and three-dimensional regression scenario, the Convex Regression via Interpretable Sharp Partitions (CRISP) method (Petersen et al., 2016) has recently been introduced as a way to achieve a good trade-off between accuracy and interpretability by inferring sharply-defined 2d rectangular regions of constant value. Such a method is readily useful, for example, when making business decisions or executive actions that must be explained to a non-technical audience. CRISP is similar to classification and regression trees (CART), in that it partitions the feature space into contiguous blocks of constant value ("interpretable sharp partitions"), but was shown to lead to better performance.
Another area where data-adaptive, interpretable sharp partitions are useful is in the creation of areal data from a set of spatial point-referenced data, essentially turning a continuous spatial problem into a discrete one. A common application of the framework arises when dividing a city, state, or other region into a set of contiguous cells, where values in each cell are aggregated to help anonymize individual demographic data. Ensuring that the number and size of grid cells remains tractable, handling low-data regions, and preserving spatial structure are all important considerations for this problem. Ideally, one cell should contain data points which all map to a similar underlying value, and cell boundaries should represent significant change points in the value of the signal being estimated. If a cell is empty or contains only a small number of data points, the statistical strength of its neighbors should be leveraged to both improve the accuracy of the reported areal data and further aid in anonymizing the cell, which may otherwise be particularly vulnerable to deanonymization. Viewed through this lens, we can interpret the areal-data creation task as a machine learning problem, one focused on finding sharp partitions that still achieve acceptable predictive loss.¹

To this end, and motivated by the success of CRISP, we present GapTV, a method for interpretable, low-dimensional convex regression with sharp partitions. GapTV involves two main steps: (1) a non-standard application of the gap statistic (Tibshirani et al., 2001) to create a data-adaptive grid over the feature space; and (2) smoothing over this grid using a fast total variation denoising algorithm (Barbero & Sra, 2014). The resulting model displays a good balance between four key measurements: (1) interpretability, (2) average accuracy, (3) worst-region accuracy, and (4) degrees of freedom. Through a series of benchmarks against both a baseline CART model and the state-of-the-art CRISP model, we show both qualitatively and quantitatively that GapTV achieves superior performance. The end result is a fast, fully auto-tuned approach to interpretable low-dimensional regression and classification.

The remainder of this paper is organized as follows. Section 2 presents technical background on both CRISP and graph-based total variation denoising. In Section 3, we detail our algorithm and derive the gap statistic for both regression and classification scenarios. We then present a suite of benchmark experiments in Section 4 and conclude in Section 5.

Background

Convex Regression with Interpretable Sharp Partitions

Petersen et al. (2016) propose the CRISP algorithm for handling the prediction scenario described previously. As in our approach, they focus on the 2d scenario and divide the (x1, x2) space into a grid via a data-adaptive procedure (a minimal sketch of this gridding step is given below). For each dimension, they divide the space into q regions, where each region break is chosen such that a region contains 1/q of the data. This creates a q × q grid of differently-sized cells, some of which may not contain any observations. A prediction matrix M ∈ R^{q×q} is then learned, with each element M_ij representing the prediction for all observations in the region specified by cell (i, j).

¹ We note that such a task will likely only represent a single step in a larger anonymization pipeline that may include other techniques such as additive noise and spatial blurring. While we provide no proofs of how strong the anonymization is for our method, we believe it is compatible with other methods that focus on adherence to a specified k-anonymity threshold (e.g., Cassa et al., 2006).
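For concreteness, the following is a minimal sketch of this quantile-based gridding; the helper name and synthetic data are our own illustration rather than code from either paper.

```python
import numpy as np

def quantile_cells(x1, x2, q):
    """Assign each (x1, x2) observation to a cell of a q-by-q grid whose
    breakpoints are empirical quantiles, so that each row band and each
    column band holds roughly 1/q of the data."""
    b1 = np.quantile(x1, np.linspace(0, 1, q + 1)[1:-1])  # q-1 interior breaks
    b2 = np.quantile(x2, np.linspace(0, 1, q + 1)[1:-1])
    rows = np.searchsorted(b1, x1, side="right")          # indices 0 .. q-1
    cols = np.searchsorted(b2, x2, side="right")
    return rows * q + cols                                # row-major cell id

rng = np.random.default_rng(0)
cells = quantile_cells(rng.random(500), rng.random(500), q=10)
```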
CRISP applies a Euclidean penalty on the differences between adjacent rows and columns of M. The final estimator is then learned by solving the convex optimization problem

    minimize_{M ∈ R^{q×q}}  (1/2) Σ_{i=1}^{n} ( y_i − Ω(M, x_{1i}, x_{2i}) )² + λ P(M),    (1)

where Ω is a lookup function mapping (x_{1i}, x_{2i}) to the corresponding element of M, and P(M) is the group-fused lasso penalty on the rows and columns of M,

    P(M) = Σ_{i=1}^{q−1} ( ‖M_{i·} − M_{(i+1)·}‖₂ + ‖M_{·i} − M_{·(i+1)}‖₂ ),    (2)

where M_{i·} and M_{·i} are the i-th row and column of M, respectively. By rewriting Ω(·) as a sparse binary selector matrix and introducing slack variables for each row and column in the P(M) term, CRISP solves (1) via ADMM. The resulting algorithm requires an initial step of O(n + q⁴) operations for n samples on a q × q grid, and has a per-iteration complexity of O(q³). The authors recommend using q = n when the size of the data is sufficiently small so as to be computationally tractable, and setting q = 100 otherwise. In comparison to other interpretable methods, such as CART and thin-plate splines (TPS), CRISP is shown to yield a good trade-off between accuracy and interpretability. Consequently, we use CRISP as our main method to compare against in Section 4.

Graph-based Total Variation Denoising

Total variation (TV) denoising solves a convex regularized optimization problem defined generally over a graph G = (V, E) with node set V and edge set E:

    minimize_{β ∈ R^{|V|}}  Σ_{s ∈ V} ℓ(β_s) + λ Σ_{(r,s) ∈ E} |β_r − β_s|,    (3)

where ℓ is some smooth convex loss function of the value β_s at a given node. The solution to (3) yields connected subgraphs (i.e., plateaus in the 2d case) of constant value. TV denoising has been shown to have attractive minimax rates theoretically (Wang et al., 2014) and is robust against model misspecification empirically, particularly in terms of worst-cell error (Tansey et al., 2016).

Many efficient, specialized algorithms have been developed for the case when ℓ is a Gaussian loss and the graph has a specific constrained form. For example, when G is a one-dimensional chain graph, (3) is the ordinary (1d) fused lasso (Tibshirani et al., 2005), solvable in linear time via dynamic programming (Johnson, 2013). When G is a D-dimensional grid graph, (3) is typically referred to as total variation denoising (Rudin et al., 1992) or the graph-fused lasso, for which several efficient solutions have been proposed (Chambolle & Darbon, 2009; Barbero & Sra, 2011). For scenarios with a general smooth convex loss and an arbitrary graph, the GFL method (Tansey & Scott, 2015) is efficient and easily extended to non-Gaussian losses such as the binomial loss required in Section 3.3.

The TV denoising penalty was investigated as an alternative to CRISP by Petersen et al. (2016), who note anecdotally that TV denoising over-smooths when the same q is used for both CRISP and TV denoising. In the next section, we present a principled approach to choosing q in a data-adaptive way that prevents over-smoothing and leads to a superior fit in terms of the accuracy-interpretability trade-off.
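To contrast the two regularizers just described, here is a direct transcription of the CRISP penalty in (2) next to the grid-graph TV penalty in (3), assuming the nodes of the grid graph are the entries of M; this is only for intuition, not the ADMM or proximal solvers actually used.

```python
import numpy as np

def crisp_penalty(M):
    """P(M) of Eq. (2): Euclidean norms of the differences between adjacent
    rows, plus the same for adjacent columns (a group-fused penalty)."""
    rows = np.linalg.norm(M[:-1, :] - M[1:, :], axis=1).sum()
    cols = np.linalg.norm(M[:, :-1] - M[:, 1:], axis=0).sum()
    return rows + cols

def grid_tv_penalty(M):
    """Grid-graph TV penalty of Eq. (3): absolute differences across every
    edge of the grid, i.e. cell by cell rather than whole rows/columns."""
    return (np.abs(M[:-1, :] - M[1:, :]).sum()
            + np.abs(M[:, :-1] - M[:, 1:]).sum())
```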
The GapTV Algorithm

Prior to presenting our approach, we first note that we can rewrite (1) as a weighted least-squares problem

    minimize_{β ∈ R^{q²}}  (1/2) Σ_{i=1}^{q²} η_i ( ỹ_i − β_i )² + λ g(β),    (4)

where β = vec(M) is the vectorized form of M, η_i is the number of observations in the i-th cell, and ỹ_i is the empirical average of the observations in the i-th cell. Here g(·) is a penalty term that operates over a vector β rather than a matrix M. Given the reformulation of the problem in (4), we now choose g(·) to be a graph-based total variation penalty

    g(β) = Σ_{(r,s) ∈ E} |β_r − β_s|,    (5)

where E is the set of edges defining adjacent cells on the q × q grid graph.²

Having formulated the problem as a graph TV denoising problem, we can now use the convex minimization algorithm of Barbero & Sra (2014) (or any other suitable algorithm) to efficiently solve (4). The remainder of this section is dedicated to our approach to auto-tuning the two hyperparameters: q, the granularity of the grid, and λ, the regularization parameter. We take a pipelined approach by first choosing q and then selecting λ under the chosen q value.
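A small sketch of the resulting objective may help fix notation; evaluating (4) is straightforward even though minimizing it requires a proper TV solver such as that of Barbero & Sra (2014). The row-major layout and the zero-fill convention for empty cells are our assumptions.

```python
import numpy as np

def gap_tv_objective(beta, y_bar, eta, lam, q):
    """Eq. (4) with the grid TV penalty of Eq. (5). beta, y_bar, and eta are
    length q*q vectors in row-major cell order; empty cells carry eta = 0
    (set y_bar = 0 there so they contribute nothing to the loss)."""
    loss = 0.5 * np.sum(eta * (y_bar - beta) ** 2)
    B = beta.reshape(q, q)
    g = np.abs(B[:-1, :] - B[1:, :]).sum() + np.abs(B[:, :-1] - B[:, 1:]).sum()
    return loss + lam * g
```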
Choosing bins via the gap statistic

The recommendation for CRISP is to choose q = n, assuming the required computation is feasible. Doing so creates a very sparse grid, with (q − 1) × q empty cells. However, by tying together the rows and columns of the grid, each CRISP cell actually draws statistical strength from a large number of bins. This compensates for the data-sparsity problem and results in reasonably good fits despite the sparse grid.

Unfortunately, choosing q = n does not work for our TV denoising approach. Since the graph-based TV penalty only ties together adjacent cells, long patches of sparsity overwhelm the model and result in over-smoothing. If one instead chooses a smaller value of q, however, the TV penalty performs quite well. The challenge is therefore to adaptively choose q to fit the appropriate level of overall data sparsity. We propose to do this via a novel use of the gap statistic (Tibshirani et al., 2001).

In a typical clustering algorithm, such as K-means, one has unlabeled data X = {x₁, x₂, ..., x_n}, some distance metric δ(x_i, x_j), and a specified number K of clusters to find. In K-means, cluster assignment is based on the nearest centroid,

    a_i = argmin_k δ(x_i, c_k),    (6)

where c_k = (1/|A_k|) Σ_{i ∈ A_k} x_i is the cluster centroid and A_k = {i : a_i = k}. The gap statistic is an approach to choosing the value of K for a generic clustering algorithm by comparing it against a suitable null distribution. The best clustering is the one which minimizes the gap term

    E_n[ log(W₁*) ] − log(W_K),    (7)

where W_K is the sum of average pairwise distances within each cluster for a clustering with K clusters. To use the gap statistic, one must define a suitable null distribution over W₁.

In our case, the "clusters" are defined by a quantile grid over (x₁, x₂). The number of cells is specified by the choice of q, which means choosing the value of q corresponds directly to choosing K. However, unlike typical clustering, a cluster centroid is defined by the y_i values corresponding to the x_i points in the cell. Therefore, our distance metric for computing the gap statistic actually operates on pairs (y_i, y_j). In the regression case, we assume each y_i ∼ N(μ, σ²), where μ and σ² are unknown. For a distance metric, we use the squared Euclidean distance,

    δ(y_i, y_j) = (y_i − y_j)².    (8)

Since each y_i is assumed to be IID normal, the null distribution over pairwise distances is W₁ ∼ 2σ² χ²_ν, where ν = n²/2 − n is the degrees of freedom. The expectation of the log of a χ² distribution can be calculated exactly (Walck, 2007) as

    E[ log(χ²_ν) ] = log 2 + ψ(ν/2),    (9)

where ψ is the digamma function. Thus, up to an additive constant, we can calculate the reference distribution exactly without knowing the mean or variance.

The procedure for choosing q is now straightforward. We first partition the points on a grid for a series of candidate q values in the range 1 < q ≤ q_max ≤ n. For each candidate partitioning, we calculate the gap statistic

    gap(q) = ψ(ν/2) − Σ_{k=1}^{q²} (1/η_k) Σ_{i ∈ A_k} Σ_{j ∈ A_k, j>i} δ(y_i, y_j).    (10)

We then choose the q which minimizes gap(q) and smooth using the TV denoising algorithm.

Choosing the TV penalty parameter

Once a value of q has been chosen, λ can be chosen by following a solution-path approach. For the regression scenario with a Gaussian loss, as in (4), determining the degrees of freedom is well studied (Tibshirani & Taylor, 2011). Thus, we could select λ via an information criterion such as AIC or BIC. However, we chose to select λ via cross-validation, as we found empirically that it produces better results.

Classification extension

The optimization problem in (4) focuses purely on the Gaussian-loss case. When the observations are binary labels, as in classification, a binomial loss function is a more appropriate choice. The binomial-loss case specifically has been derived in previous work (Tansey et al., 2016) and shown to be robust to numerous types of underlying spatial functions. Therefore, unlike CRISP, the inner loop of our method immediately generalizes to the non-Gaussian scenario, with only minor modifications.

In order to adapt the gap statistic to the binomial case, we must find a suitable reference distribution. We assume every y_i is Bernoulli distributed, from which it follows that

    y_i, y_j ∼ Bern(p),    (11a)
    (y_i − y_j)² ∼ Bern(2p(1 − p)),    (11b)
    W₁ ∼ Bin( (n² − n)/2, 2p(1 − p) ).    (11c)

Calculating the expectation of the log of a binomial in closed form is not tractable; however, we can make a close approximation via a Taylor expansion,

    E[ log W₁ ] ≈ log(rm) − (1 − r)/(2rm),    (12)

where m = (n² − n)/2 and r = 2p(1 − p). Extensions to any other smooth, convex loss are straightforward: one must simply define a loss and a probabilistic model for each data point. Depending on the choice of model, the expectation of the log of the null may not always have a closed-form solution. In such cases, we suggest following the simulation strategy specified in (Tibshirani et al., 2001).
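Putting the regression-case pieces together, the sketch below evaluates gap(q) from Eq. (10) for one candidate grid; the cell-assignment helper is assumed to exist (e.g., the quantile gridding sketched earlier), and the loop is a naive transcription rather than an optimized implementation.

```python
import numpy as np
from scipy.special import digamma

def gap_statistic(y, cell_ids, q):
    """gap(q) of Eq. (10): the digamma reference term minus the sum over
    cells of average pairwise squared distances between responses.
    cell_ids[i] is the row-major grid cell of observation i."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    nu = n ** 2 / 2.0 - n                     # degrees of freedom from the text
    total = 0.0
    for k in range(q * q):
        yk = y[cell_ids == k]
        eta_k = len(yk)
        if eta_k < 2:
            continue
        diffs = yk[:, None] - yk[None, :]
        pairwise = np.sum(diffs ** 2) / 2.0   # counts each unordered pair once
        total += pairwise / eta_k
    return digamma(nu / 2.0) - total

# q is then chosen by minimizing gap over the candidate range, e.g.
# q_star = min(range(2, q_max + 1), key=lambda q: gap_statistic(y, assign(q), q))
# where assign(q) is a hypothetical helper returning the cell assignments.
```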
Experiments

To evaluate the efficacy of our approach, we compare against a suite of both synthetic and real-world datasets. We first compare GapTV against two benchmark methods with sharp partitions, CART and CRISP, on a synthetic dataset with varying sample sizes. We also compare against CRISP with q fixed at the gap-statistic solution, in a method we call GapCRISP. We show that the GapTV method has much better interpretability qualitatively and leads to better AIC scores. We then demonstrate the advantage of the gap statistic by showing that it chooses grid sizes that offer a good trade-off between average and worst-cell accuracy. Finally, we test all four methods on two real-world datasets of crime reports for Austin and Chicago.

Synthetic Benchmark

We generated 100 independent 100 × 100 grids, each with six 1000-point plateaus. Each plateau was generated via a random walk from a randomly chosen start point, and the means of the plateaus were −5, −3, −2, 2, 3, and 5; all points not in a plateau had mean zero. For each grid, we sampled points uniformly at random with replacement and added Gaussian noise with unit variance. Figure 1 shows an example ground truth for the means. Sample sizes explored for each grid were 50, 100, 200, 500, 1000, 2000, 5000, and 10000. For each trial, we evaluate the CART method from the R package rpart, CRISP, and the Gap* methods. For CRISP, we use q = max(n, 100) per the suggestions in (Petersen et al., 2016); for the Gap* methods, we use the gap statistic to choose from q ∈ [2, 50]. For both CRISP and the Gap* methods, we chose λ via 5-fold cross-validation across a log-space grid of 50 values.

In order to quantify interpretability, we calculate the number of constant-valued plateaus in each model (a sketch of such a computation follows this subsection). Intuitively, this captures the notion of "sharpness" of the partitions by penalizing smooth partitions for their visual blurriness. Statistically, this corresponds directly to the degrees of freedom of a TV denoising model in the unweighted Gaussian-loss scenario (Tibshirani & Taylor, 2011); thus, for all of our models it is only an approximation to the degrees of freedom. Nonetheless, we find the plateau-counting heuristic to be a useful measurement of the visual degrees of freedom, which corresponds more closely to human interpretability. Finally, to quantify the trade-off between accuracy and interpretability, we use the Akaike information criterion (AIC) with the plateau count as the degrees-of-freedom surrogate.

Figure 2 shows the quantitative results of the experiments, averaged over the 100 trials. The CRISP and Gap* methods perform similarly in terms of RMSE (Figure 2a), but both CRISP methods create drastically more plateaus. The original CRISP method quickly approaches one plateau per cell (i.e., completely smooth), as denoted by the dotted red horizontal line in Figure 2b. GapTV also presents a better trade-off point as measured by AIC (Figure 2c). Using the data-adaptive q value chosen by our gap-statistic method helps improve the AIC scores in the low-sample regime, but as samples grow the GapCRISP method begins to under-smooth by creating too many plateaus. This demonstrates that it is not merely the size of the grid, but also our choice of TV-based smoothing, that leads to strong results.

Finally, Figure 4 shows qualitative results for the four smoothing methods as the sample size grows from 100 to 2000. CART (Panels A-C) tends to over-smooth, leading to very sharp partitions that are too coarse-grained to produce accurate results even as the sample size grows large. On the other hand, CRISP (Panels D-F) under-smooths by creating very blurry images. The gap-based version of CRISP (Panels G-I) alleviates this in the low-sample cases, but tying across entire rows and columns causes the image to blur as the data increases. The GapTV method (Panels J-L) achieves a reasonable balance by producing large blocks in the low-sample setting and progressively refining the blocks as the sample size increases, without substantially compromising the sharpness of the overall image.
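Since the plateau count does the work of a degrees-of-freedom surrogate in these comparisons, a sketch of one way to compute it is given below; the rounding tolerance, 4-connectivity, and the particular Gaussian AIC form are our assumptions, as the paper does not pin these details down.

```python
import numpy as np
from scipy import ndimage

def count_plateaus(B, tol=1e-8):
    """Number of connected constant-valued regions (4-connectivity) in a
    fitted grid B, used as the degrees-of-freedom surrogate."""
    keys = np.round(B / tol).astype(np.int64)   # bucket nearly-equal values
    plateaus = 0
    for v in np.unique(keys):
        _, n_components = ndimage.label(keys == v)
        plateaus += n_components
    return plateaus

def gaussian_aic(rss, n, dof):
    # One common Gaussian AIC form: n * log(RSS / n) + 2 * dof.
    return n * np.log(rss / n) + 2 * dof
```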
Gap Statistic Evaluation

In order to understand the effect of the gap statistic, we conducted a series of synthetic benchmark experiments. For each GapTV trial and sample size in the experiment from Section 4.1, we exhaustively solved the graph TV problem for all possible values of q in the range [2, 50]. Figure 3 shows how the choice of q impacts the average RMSE and maximum point error for three different sample sizes; the dotted vertical red line denotes the value selected by the gap statistic.

As expected, when the sample size is small, the gap statistic selects much smaller values; as the sample size grows, the gap statistic selects progressively larger q values. This enables the model to smooth over increasingly fine-grained resolutions. Perhaps counter-intuitively, the gap statistic is not choosing the q value which simply minimizes RMSE. As the middle panel shows, the gap statistic may actually choose one of the worst possible q values from this perspective. Instead, the resulting model identifies a good trade-off between average accuracy (RMSE) and worst-case accuracy (max error). In small-sample scenarios like Figure 3a, RMSE is not substantially impacted by having a very coarse-grained q; thus this trade-off helps prevent over-smoothing in the small-sample regime, a problem observed by Petersen et al. (2016) when using TV with a large q. As the data grows (Figure 3b), both overly fine and overly coarse grids may have problems, with the latter now creating the potential for the TV method to under-smooth, similarly to how CRISP performed in the synthetic benchmarks. Once sample sizes become relatively large (Figure 3c), making the grid very fine-grained poses less risk of under-smoothing. The gap statistic here prevents q from being chosen too low, which would create a much higher-variance estimate.

Austin and Chicago Crime Data

As a final case study, we applied all four methods to a dataset of publicly-available crime report counts³ for Austin, Texas in 2014 and Chicago, Illinois in 2015. To preprocess the data, we binned all observations into a fine-grained 100 × 100 grid based on latitude and longitude, then took the log of the total counts in each cell (see the short sketch following the conclusion). Points with zero observed crimes were omitted from the dataset, as it is unclear whether they represent the absence of crime or a location outside the boundary of the local police department. Figure 5 (Panel A) shows the raw data for Austin; the matching figure for Chicago is available in the appendix. Each of the four methods considered in the previous sections was tested. The gap methods used q values in the range [2, 100] and the CRISP method had q = 100. To evaluate the methods, we ran a 20-fold cross-validation to measure RMSE and calculated plateaus with a fully-connected grid (i.e., as if all pixels were connected), which we then projected back to the real data for every non-missing point.

Figure 5 shows the qualitative results for CART (Panel B), CRISP (Panel C), and GapTV (Panel D); due to space considerations, GapCRISP is omitted as it adds little insight. The CART model clearly over-smooths by dividing the entire city into huge blocks of constant plateaus; conversely, CRISP under-smooths and creates too many regions. The GapTV method finds an appealing visual balance, creating flexible plateaus that partition the city well. These results are confirmed quantitatively in Table 1, where GapTV outperforms the three other methods in terms of AIC.

Conclusion

This paper presented GapTV, a new method for interpretable low-dimensional regression. Through a novel use of the gap statistic, our model divides the covariate space into a finite-sized grid in a data-adaptive manner. We then use a fast TV denoising algorithm to smooth over the cells, creating plateaus of constant value. On a series of synthetic benchmarks, we demonstrated that our method produces superior results compared to a baseline CART model and the current state of the art (CRISP). Finally, we provided additional evaluation through a real-world case study on crime rates in Austin and Chicago, showing that GapTV discovers much more interpretable and meaningful spatial plateaus. Overall, we believe the speed, accuracy, interpretability, and fully auto-tuned nature of GapTV make it a strong candidate for low-dimensional regression.
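Before the appendix, here is a short sketch of the preprocessing described in the crime case study: binning point-referenced reports onto a 100-by-100 grid and taking logs of the per-cell totals. The arrays are hypothetical, and zero-count cells are marked missing, mirroring their removal in the text.

```python
import numpy as np

def log_count_grid(lat, lon, q=100):
    """Bin point-referenced reports onto a q-by-q grid and take the log of
    each cell's total count; empty cells become NaN (treated as missing)."""
    counts, _, _ = np.histogram2d(lat, lon, bins=q)
    grid = np.full_like(counts, np.nan)
    nonzero = counts > 0
    grid[nonzero] = np.log(counts[nonzero])
    return grid
```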
Appendix A. Chicago Results

Below are the results for the three main methods applied to the Chicago data (Figure 6).

Figure captions:

Figure 1. An example 100 × 100 grid of ground-truth means ranging from −5 to 5. Each grid has six randomly-generated plateaus of raised or lowered means from the background mean (zero); darker colors correspond to regions of higher value.

Figure 2. Performance of the four methods as the sample size increases for the example grid in Figure 1. While CRISP, GapCRISP, and GapTV achieve similar sample efficiency in terms of RMSE scores (Panel A), CRISP and GapCRISP do so with drastically more change points (Panel B); the dashed red horizontal line marks the maximum number of plateaus possible. Using AIC as a trade-off measurement (Panel C), both Gap* methods initially perform similarly, but as the sample size (and thus the size of q) grows, the GapTV method continues to improve while the GapCRISP method begins to over-smooth.

Figure 3. RMSE (blue) and maximum error (orange) for the GapTV method for different sizes of the grid (q²) for three different sample sizes; the dashed vertical red line indicates the value of q chosen by the gap statistic. The results demonstrate that the gap statistic chooses models which provide a balance between average and worst-case error.

Figure 4. Qualitative examples of the four benchmark methods as the sample size increases.

Figure 5. Areal data results for the Austin crime data. The maps show the raw fine-grained results (Panel A) and the results of the three main methods. Qualitatively, CART (Panel B) over-smooths and creates too few regions in the city; CRISP (Panel C) under-smooths, creating too many regions; and GapTV (Panel D) provides a good balance that yields interpretable sections.

Figure 6. Areal data results for the Chicago crime data. The maps show the raw fine-grained results (Panel A) and the results of the three main methods. Qualitatively, CART (Panel B) over-smooths and creates too few regions in the city; CRISP (Panel C) under-smooths, creating too many regions; and GapTV (Panel D) provides a good balance that yields interpretable sections.
Table 1. Quantitative results for the four methods on crime data for Austin and Chicago. The GapTV method achieves the best trade-off between accuracy and the number of constant regions, as measured by AIC.

    Austin Crime Data     RMSE      Plateaus     AIC
    CART                  1.0522      10.4000    11139.2911
    CRISP                 0.9420    4699.1500    18326.3333
    GapCRISP              0.9633    1361.7500    12064.2507
    GapTV                 0.9743     384.3500    10327.5860

    Chicago Crime Data    RMSE      Plateaus     AIC
    CART                  1.0460       9.2500    43804.6942
    CRISP                 0.8450    9330.6000    47245.5734
    GapCRISP              0.8476    8278.9000    45314.7106
    GapTV                 0.8581    2270.1500    34016.5952

² Though our goal in this work is not to increase the computational efficiency of existing methods, we do note that CRISP can be solved substantially faster via the reformulation in (4). The weighted least-squares loss enables a much more efficient solution to (1) via a simpler ADMM approach similar to the network lasso (Hallac et al., 2015).
³ https://www.data.gov/open-gov/

References
Barbero, Á. and Sra, S. Fast Newton-type methods for total variation regularization. In Getoor, L. and Scheffer, T. (eds.), ICML, pp. 313-320. Omnipress, 2011.
Barbero, Á. and Sra, S. Modular proximal optimization for multidimensional total-variation regularization. 2014. URL http://arxiv.org/abs/1411.0589.
Cassa, C. A., Grannis, S. J., Overhage, J. M., and Mandl, K. D. A context-sensitive approach to anonymizing spatial surveillance data. Journal of the American Medical Informatics Association, 13(2):160-165, 2006.
Chambolle, A. and Darbon, J. On total variation minimization and surface evolution using parametric maximum flows. International Journal of Computer Vision, 84(3):288-307, 2009.
Gupta, M., Cotter, A., Pfeifer, J., Voevodski, K., Canini, K., Mangylov, A., Moczydlowski, W., and van Esbroeck, A. Monotonic calibrated interpolated look-up tables. Journal of Machine Learning Research, 17(109):1-47, 2016. URL http://jmlr.org/papers/v17/15-243.html.
Hallac, D., Leskovec, J., and Boyd, S. Network lasso: Clustering and optimization in large-scale graphs. 21st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD'15), 2015.
Johnson, N. A. A dynamic programming algorithm for the fused lasso and L0-segmentation. Journal of Computational and Graphical Statistics, 22(2):246-260, 2013.
Petersen, A., Simon, N., and Witten, D. Convex regression with interpretable sharp partitions. Journal of Machine Learning Research, 17(94):1-31, 2016. URL http://jmlr.org/papers/v17/15-344.html.
Rudin, L., Osher, S., and Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D, 60:259-268, 1992.
Tansey, W. and Scott, J. G. A fast and flexible algorithm for the graph-fused lasso. arXiv:1505.06475, 2015.
Tansey, W., Athey, A., Reinhart, A., and Scott, J. G. Multiscale spatial density smoothing: an application to large-scale radiological survey and anomaly detection. Journal of the American Statistical Association, 2016.
Tibshirani, R., Saunders, M., Rosset, S., Zhu, J., and Knight, K. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society (Series B), 67:91-108, 2005.
Tibshirani, R. J. and Taylor, J. The solution path of the generalized lasso. Annals of Statistics, 39:1335-71, 2011.
Tibshirani, R., Walther, G., and Hastie, T. Estimating the number of clusters in a data set via the gap statistic. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(2):411-423, 2001.
Walck, C. Handbook on statistical distributions for experimentalists, 2007.
Wang, Y.-X., Sharpnack, J., Smola, A., and Tibshirani, R. J. Trend filtering on graphs. arXiv preprint arXiv:1410.7690, 2014.
[]
[ "Multiple faults diagnosis using causal graph", "Multiple faults diagnosis using causal graph" ]
[ "Imtiez Fliss [email protected] \nUNIVERSITY OF MANOUBA\nNATIONAL SCHOOL OF COMPUTER SCIENCES\nSOIE LABORATORY\n2010MANOUBATUNISIA\n", "Moncef Tagina [email protected] \nUNIVERSITY OF MANOUBA\nNATIONAL SCHOOL OF COMPUTER SCIENCES\nSOIE LABORATORY\n2010MANOUBATUNISIA\n" ]
[ "UNIVERSITY OF MANOUBA\nNATIONAL SCHOOL OF COMPUTER SCIENCES\nSOIE LABORATORY\n2010MANOUBATUNISIA", "UNIVERSITY OF MANOUBA\nNATIONAL SCHOOL OF COMPUTER SCIENCES\nSOIE LABORATORY\n2010MANOUBATUNISIA" ]
[]
This work proposes a model-based tool for diagnosing multiple faults, using detection and localization techniques inspired by the artificial intelligence and control communities. The diagnostic procedure to be integrated into the supervisory system must therefore be provided with explanatory features. Techniques based on causal reasoning are a pertinent approach for this purpose. Bond graph modeling is used to describe the cause-effect relationships between process variables. Experimental results are presented and discussed in order to compare the performance of the causal graph technique with classic methods inspired by artificial intelligence (DX) and control theory (FDI).
null
[ "https://arxiv.org/pdf/1203.5451v1.pdf" ]
14,055,401
1203.5451
fbd428235e4ac8c31321bde959bb9ecfac2e5a35
Multiple faults diagnosis using causal graph

Imtiez Fliss ([email protected]) and Moncef Tagina ([email protected])
University of Manouba, National School of Computer Sciences, SOIE Laboratory, 2010 Manouba, Tunisia

Keywords: multiple faults, diagnosis, causal graph, FDI, Bond Graph

This work proposes a model-based tool for diagnosing multiple faults, using detection and localization techniques inspired by the artificial intelligence and control communities. The diagnostic procedure to be integrated into the supervisory system must therefore be provided with explanatory features. Techniques based on causal reasoning are a pertinent approach for this purpose. Bond graph modeling is used to describe the cause-effect relationships between process variables. Experimental results are presented and discussed in order to compare the performance of the causal graph technique with classic methods inspired by artificial intelligence (DX) and control theory (FDI).

INTRODUCTION
With the continuous expansion of industrial applications, the diagnosis of complex systems has become a widely discussed topic. Many methods have been proposed by different communities (artificial intelligence, control theory, statistics, ...). Several issues are increasingly raised, particularly regarding fault diagnosis. The most common setting concerns diagnosing a single fault. Other questions tackle more complex situations and pose the problem of detecting and localizing multiple faults (multiple faults means that multiple breakdowns have effects that overlap in time). This problem occurs in many industrial systems; however, research in this field remains limited due to the complexity of the task and the problem of combinatorial explosion.

In our work, we propose to build a multiple-fault diagnosing tool based on a model approach. We use detection and localization techniques inspired by the artificial intelligence and control communities. The diagnostic procedure to be integrated into the supervisory system must therefore be provided with explanatory features. Techniques based on causal reasoning are a pertinent approach for this purpose.

In this paper, we present in the second section causal reasoning, which is the basis of our multiple-fault diagnosing tool. Then, we show and discuss the experimental results provided by our diagnosing tool in order to compare the performance of the causal graph technique and classic methods inspired by artificial intelligence and control theory.

CAUSAL REASONING: THE STATE OF THE ART
The diagnosis of complex systems has been an area of dynamic research for many years. Two research communities have been particularly involved in studying fault diagnosis: the artificial intelligence community, known as the diagnostic (DX) community, and the control theory community, known as the fault detection and isolation (FDI) community.

The DX community has been concerned with the modeling of the diagnostic reasoning itself: the foundations of logical reasoning have always been considered major research points. In the consistency-based approach [18], the description of the behavior of the system is component-oriented and rests on first-order logic. The {SD (system description), COMP (components)} pair constitutes the model. The system description takes the form of logical operations [4]. Diagnosis in this framework is logically sound, but a major drawback is the issue of combinatorial explosion for systems involving many components [3], as in the case of the industrial processes with whose diagnosis this paper is concerned. Reference [3] also states that one of the main limitations of logical model-based diagnosis is its computational complexity, and proposes a specific knowledge compilation approach to focus reasoning on abductive diagnosis.

The FDI community is especially concerned with industrial process modeling and control. Reasoning is quantitative [10]. The model is numerical and as precise as required by the diagnostic objective. Generally, the model represents the normal behavior of the system, in the absence of any fault, and characterizes deterministic phenomena, taken into consideration using basic "laws" of physics, biology, etc. But it can also include detailed knowledge about how faults or unknown disturbances affect the variables of the system. With the FDI approach, computations result in numerical quantities, the residuals, whose properties enable diagnosis with very accurate quantitative information. Nevertheless, several logical assumptions are implicit in the FDI formulation, whereas they are clearly formulated within the DX consistency-based diagnostic framework. Reasoning and computing may thus be considered in opposition. A combined method presented in [9] brings them together. It relies on both a qualitative causal representation of the process and quantitative local models. It has been inspired by artificial intelligence for the causal modeling of physical systems and for studying logical soundness. It is causal reasoning that we focus on in this paper.

A causal structure is a qualitative description of the effect or influence that system entities (variables, faults, etc.) have on other entities. It may be represented by a directed graph (digraph). A causal graph, which represents a process at a high level of abstraction, is appropriate for supervising the process. When the graph nodes represent the system variables, the directed arcs symbolize the normal relations among them, and these relations are deterministic, the graph is frequently referred to as an influence graph [9].

Diagnosis based on an influence graph consists in seeking the source variable whose deviation is sufficient to explain all the deviations detected on the other variables [20]. The algorithm is a backward/forward procedure starting from an inconsistent variable. The backward search bounds the fault space by eliminating the normal measurements causally upstream. Then each possible primary deviation generates a hypothesis, which is forward-tested using the states of the variables and the functions of the arcs [9]. The localization phase consists in looking for the system component that does not work correctly, using knowledge of the system structure, its potential weaknesses, and the available observations. The result of diagnosis may be an arc pointing to a source variable (component fault) or a non-measurable disturbance that directly affects that variable. A major advantage of the causal approach is that, in general, knowledge of faulty behavior is not necessary for localization.

Two principal causal structures have been proposed:
a) a digraph that represents calculability issued from mathematical relations (differential equations, ...); it may be found through the causal-ordering mechanism [17] or through bipartite graph theory;
b) a digraph that represents functional knowledge of the process; nodes are linked to the significant variables considered and arcs to physical phenomena [7].

Thus, the first type makes a link between causality and the equations describing the system (global analysis), while the second makes a link between causality and the system structure (local analysis). Bond graphs, the basis of many diagnosis approaches such as temporal causal graphs [16], are also available.

The first step in causal graph diagnosis is the construction of the causal graph. This is a complex process that requires structural and functional knowledge. Expert knowledge is also considered to define supervision needs. Reference [12] lists some steps to build a causal graph in the context of a complex system:
i. Identify the physical system.
ii. Divide the physical system into subsystems.
iii. Define and assign a configuration to every subsystem.
iv. Identify the relevant physical relations.
v. Connect relations to physical components.
vi. Determine causality.
vii. Reduce (eliminate non-measurable variables).
viii. Approximate (eliminate negligible relations).
ix. Quantify (identify transfer function parameters).

Much research is based on causal graphs; the principal approaches are enumerated in [20]. In this paper, we focus on some of them.

Causal Engine (Ca-En)'s Approach: Ca-En is a qualitative simulator developed within the European project ESPRIT TIGER. It is a model-based diagnosis system for complex dynamic processes, integrated in the supervision system of the TIGER gas turbine. The Ca-En formalism is based on a two-level representation scheme for describing the relationships between the process variables: a local constraint level and a global constraint level [21]. The local constraint level is represented by a directed graph in which the paths presume the perturbation flow causality. The influences supported by the graph's edges allow for representing causal-dependency-type knowledge. The global constraint level is composed of functional numeric constraints associated with interval domains, such as constraints arising from physical laws. So a global constraint is any mathematical equation, possibly nonlinear, in which each unknown is assumed to take on interval values [21]. The "Causalito" program automatically performs the difficult part of translating analytic knowledge into causal relations. Causalito uses causal-ordering concepts [17] for automatically generating the Ca-En causal graph as well as some of the influence attributes from a set of equations. Imprecise knowledge is considered through the definition of intervals on relation parameters (associated with influences). This allows prediction envelopes to be generated and updated every sampling period.

Evsukoff's Approach: The supervision of complex processes is also addressed by Alexandre Evsukoff [8]. He suggests a causal approach similar to the one proposed by Ca-En. The difference lies in the detection mechanism and the localization process. Detection is based on fuzzy inference on the residual attributes, where the residual r is the difference between the process measurements y(t) and the reference values ŷ(t) issued from the model. A study of the robustness and sensitivity of detection is also performed. The uncertain reasoning based on intervals and envelopes in Ca-En is then replaced by fuzzy reasoning on residuals, which allows a more refined reasoning about the deviations that occur. Localization is based on causal reasoning: each variable is physically linked to other variables that cause and explain its behavior. It is based on a multi-model that defines, for each variable, global, causal (local), and propagated residuals. The localization procedure is applied in general to every variable of the process. The inference mechanism operates on each residual. A localization decision process verifies, for each variable in alarm, whether the disturbance is detected locally or not.

BOND GRAPH MODELING
To ensure the best precision and reliability in detection and isolation, we have to choose the model of the physical system carefully. In fact, the quality of the diagnostic system depends on the quality of the model. A model is a simple or abstract representation (diagram, graphic representation, mathematical equations, etc.) of a physical system. Dynamic models of physical systems may be represented in different ways: logical statements [11], mathematical equations [6], [13], bond graphs [15], block diagrams and bond graphs [22], digraphs, ... The choice of an adequate representation of the physical system depends on the purpose of the study. In our case, we focus on bond graph modeling [5]. The bond graph language makes it possible to deal with the enormous number of equations describing the process behavior and to display explicitly the power exchange between the process components, starting from the instrumentation architecture. It is a unified language for all engineering science domains that considers energy and information channels. This is very useful since multidisciplinary systems constitute the majority of industrial products that exist nowadays. Causality, which establishes the cause-and-effect relationship between the power variables, is an important characteristic used in bond graph models to derive the constitutive equations of the process behavior in a systematic and algorithmic way. The verification of the causality assignment avoids design and numerical simulation problems.

SYSTEM AND MODELS PRESENTATION
As presented at the beginning of this paper, our work consists in establishing a diagnosis system for multiple faults based on causal reasoning. For this purpose we use the influence-graph method for isolating faults described in [9]. The causal structure of influence graphs provides a tool to know and understand how normal or abnormal variations propagate in the physical process from one variable to another [10]. This allows us to know the state of components even in the case of multiple faults (our study case). The results are then compared with those given by the FDI method and the logical method with fault models (DX) in the case of multiple faults.

To test the performance of the proposed methods, we have chosen a benchmark in the diagnosis domain: the three-tank hydraulic system [14], [19], [1], ... Fig. 1 illustrates the notation used in this section. The process consists of three cylindrical tanks, which communicate through feeding valves. The process has two inputs: Msf1 and Msf2. We place five sensors: effort sensors De1, De2 and De3 measuring the pressures of C1, C2 and C3, and flow sensors Df1 and Df2 measuring the flow through valves 1 and 2. Its global purpose is to keep a steady fluid level in the tanks. We then used the procedure described in [2] and [5] to obtain the bond graph model of the process, shown in Fig. 2.

Fig. 2. Bond graph model of the three-tank system.
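To make the backward/forward localization described in the causal-reasoning section concrete, here is a toy sketch on a hand-written influence graph over the three-tank sensors. The edges, alarm pattern, and search details are illustrative assumptions, not the implementation evaluated in this paper.

```python
from collections import deque

# Toy influence graph over the three-tank measurements (illustrative only).
children = {"De1": ["Df1"], "Df1": ["De2"], "De2": ["Df2"],
            "Df2": ["De3"], "De3": []}
parents = {v: [u for u, cs in children.items() if v in cs] for v in children}
alarm = {"De1": False, "Df1": True, "De2": True, "Df2": True, "De3": True}

def backward(start):
    """Backward sweep: walk upstream from an inconsistent variable, pruning
    branches at normal measurements; survivors are hypotheses for the
    primary deviation."""
    seen, hypotheses, todo = set(), [], deque([start])
    while todo:
        v = todo.popleft()
        if v in seen or not alarm[v]:
            continue
        seen.add(v)
        hypotheses.append(v)
        todo.extend(parents[v])
    return hypotheses

def forward_ok(source):
    """Forward test: every variable downstream of the hypothesised source
    must itself be in alarm for the hypothesis to stand."""
    seen, todo = set(), deque([source])
    while todo:
        v = todo.popleft()
        if v in seen:
            continue
        seen.add(v)
        if not alarm[v]:
            return False
        todo.extend(children[v])
    return True

hypotheses = [v for v in backward("De3") if forward_ok(v)]
# Keep the most upstream explanations: alarmed variables with no alarmed parent.
sources = [v for v in hypotheses if not any(alarm[p] for p in parents[v])]
print(sources)   # -> ['Df1'] for this alarm pattern
```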
Diagnosis in this framework is logically sound but a major drawback is the issue of combinatorial explosion for systems involving many components [3], as in the case of industrial processes, with whose diagnosis this paper is concerned. Reference [3] also, states that one of the main limitations in logical model based diagnosis is its computational complexity, and proposes a specific knowledge compilation approach to focus reasoning on abductive diagnosis. The FDI community is especially concerned with industrial process modeling and control. Reasoning is quantitative. [10] The model is numerical and as precise as required by the diagnostic objective. Generally, the model represents the normal behavior of the system, in the absence of any fault and characterizes deterministic phenomena, taken into consideration using basic "laws" of physics, biology, etc. But it can also include detailed knowledge about how faults or unknown disturbances affect the variables of the system. With the FDI approach, computations result in numerical quantities, the residuals, whose properties enable diagnosis with very accurate quantitative information. Nevertheless, several logical assumptions are implicit in the FDI formulation, whereas they are clearly formulated within the DX consistency-based diagnostic framework. Reasoning and computing may thus be considered in opposition. A combined method presented in [9] brings them together. It relies on both a qualitative causal representation of the process and quantitative local models. It has been inspired by artificial intelligence for the causal modeling of physical systems and for studying logical soundness. It is causal reasoning in which we are interested in this paper. A causal structure is a qualitative description of the effect or influence that system entities (variables, faults, etc.) have on other entities. It may be represented by a directed graph (digraph). A causal graph, which represents a process at a high level of abstraction, is appropriate for supervising the process. When the graph nodes represent the system variables, the directed arcs symbolize the normal relations among them and these relations are deterministic, the graph is frequently referred to as an influence graph. [9] Diagnosis based on influence graph consists in seeking for the variable source which deviation is sufficient to explain all deviations detected on other variables. [20] The algorithm is a backward/forward procedure starting from an inconsistent variable. The backward search bounds the fault space by eliminating the normal measurements causally upstream. Then each possible primary deviation generates a hypothesis, which is forward tested using the states of the variables and the functions of the arcs. [9] Localization phase consists in looking for the system component that doesn't correctly work using the system structure knowledge, its potential weaknesses and available observations. The result of diagnosis may be an arc pointing on a source variable (component fault) or a non measurable disturbance that directly affects that variable. A major advantage of causal approach is that, in general, faulty behavior knowledge is not necessary for localization. Two principle causal structures are proposed: a) Digraph that represents calculability issued from mathematic relations (differential equation …). It may be found through the causal-ordering mechanism [17] or through the graph bipartite theory. 
Two principal causal structures are proposed: a) a digraph that represents calculability issued from mathematical relations (differential equations, ...); it may be found through the causal-ordering mechanism [17] or through bipartite graph theory; b) a digraph that represents functional knowledge of the process, where nodes are linked to the significant variables under consideration and arcs are linked to physical phenomena [7]. Thus, the first type makes a link between causality and the equations describing the system (global analysis), and the second one makes a link between causality and the system structure (local analysis). Bond graphs, the basis of many diagnosis approaches, also provide causal structures such as the temporal causal graph [16].

The first step in causal graph diagnosis is the construction of the causal graph. This is a complex process that requires structural and functional knowledge; expert knowledge is also considered to define the supervision needs. Reference [12] lists some guidelines for building a causal graph in a complex system context: i. Physical system identification. ii. Dividing the physical system into subsystems. iii. Defining and assigning a configuration to every subsystem. iv. Identification of the relevant physical relations. v. Connecting relations to physical components. vi. Causality determination. vii. Reduction (eliminating non-measurable variables). viii. Approximation (eliminating negligible relations). ix. Quantification (identification of transfer function parameters).

Much research is based on causal graphs; the principal approaches are enumerated in [20]. In this paper, we focus on some of them.

Causal Engine (Ca-En)'s Approach: Ca-En is a qualitative simulator developed within the European project ESPRIT TIGER. It is a model-based diagnosis system for complex dynamic processes, integrated in the supervision system of the gas turbine TIGER. The Ca-En formalism is based on a two-level representation scheme for describing the relationships between the process variables: a local constraint level and a global constraint level [21]. The local constraint level is represented by a directed graph in which the paths presume the perturbation flow causality. The influences supported by the graph's edges allow for representing causal dependency type knowledge. The global constraint level is composed of functional numeric constraints associated with interval domains, such as constraints arising from physical laws. So, a global constraint is any mathematical equation, which might be nonlinear as well, in which each unknown is assumed to take on interval values [21]. The "Causalito" program automatically performs the difficult part of translating analytic knowledge into causal relations. "Causalito" uses causal-ordering concepts [17] for automatically generating the Ca-En causal graph, as well as some of the influence attributes, from a set of equations. Imprecise knowledge is considered through the definition of intervals on relation parameters (associated to influences). This allows prediction envelope generation and updates at every sampling period.

Evsukoff's Approach: The supervision of complex processes is also dealt with by Alexandre Evsukoff [8]. He suggests a causal approach similar to the one proposed in Ca-En. The difference lies in the detection mechanism and the localization process. Detection is based on fuzzy inference on the residual attributes, where the residual r is the difference between the process measurements y(t) and the reference values ŷ(t) issued from the model. A study on the robustness and sensitivity of detection is also performed. The uncertain reasoning based on intervals and envelopes in Ca-En is thus replaced by fuzzy reasoning on the residuals, which allows a more refined reasoning on the differences that occur.
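This residual-based detection step can likewise be sketched in a few lines. The following is a toy version assuming triangular membership functions and a single alarm threshold; the membership shapes, the tolerance, and the sensor names are illustrative assumptions, not the tuning used in [8].

```python
def fuzzify_residual(r, tol):
    """Map a residual r = y - y_hat to degrees of membership in three fuzzy
    classes 'negative', 'zero', 'positive'. Triangular memberships with
    half-width `tol` -- an illustrative choice, not the paper's tuning."""
    zero = max(0.0, 1.0 - abs(r) / tol)
    neg = min(1.0, max(0.0, -r / tol))
    pos = min(1.0, max(0.0, r / tol))
    return {'negative': neg, 'zero': zero, 'positive': pos}

def detect(measurements, references, tol=0.5, alarm_level=0.6):
    """Flag variables whose residual is 'not zero' with sufficient degree."""
    alarms = {}
    for var, y in measurements.items():
        mu = fuzzify_residual(y - references[var], tol)
        alarms[var] = 'deviant' if 1.0 - mu['zero'] >= alarm_level else 'normal'
    return alarms

# Example with made-up values for the three-tank sensors:
refs = {'De1': 2.0, 'De2': 1.5, 'De3': 1.0, 'Df1': 0.3, 'Df2': 0.2}
meas = {'De1': 2.9, 'De2': 1.5, 'De3': 1.0, 'Df1': 0.3, 'Df2': 0.2}
print(detect(meas, refs))   # De1 is reported 'deviant', the rest 'normal'
```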
Localization is based on causal reasoning: each variable is physically linked to other variables that cause and explain its behavior. It relies on a multi-model that defines, for each variable, a global, a causal (local), and a propagated residual. The localization procedure is applied to every variable of the process, and the inference mechanism is run on each residual. A decision process then verifies, for each variable in alarm, whether the disturbance is detected locally or not.

BOND GRAPH MODELING To ensure the best precision and reliability in detection and isolation, we have to choose the model of the physical system carefully. In fact, the quality of the diagnostic system depends on the quality of the model. A model is a simple or abstract representation (diagram, graphic representation, mathematical equations, etc.) of a physical system. Dynamic models of physical systems may be represented in different ways: logical statements [11], mathematical equations [6], [13], bond graphs [15], block diagrams combined with bond graphs [22], digraphs, and so on. The choice of an adequate representation of the physical system depends on the purpose of the study. In our case, we focus on bond graph modeling [5]. In fact, the bond graph language makes it possible to handle the enormous number of equations describing the process behavior and to display explicitly the power exchange between the process components, starting from the instrumentation architecture. It is a unified language for all engineering science domains that considers both energy and information channels. Indeed, this is very useful, since multidisciplinary systems constitute the majority of industrial products nowadays. Causality, which establishes the cause and effect relationship between the power variables, is an important characteristic used in bond graph models to derive the constitutive equations of the process behavior in a systematic and algorithmic way. The verification of the causality assignment avoids design and numerical simulation problems.

SYSTEM AND MODELS PRESENTATION As we stated at the beginning of this paper, our work consists in establishing a diagnosis system for multiple faults based on causal reasoning. We use for this purpose the influence graph method for isolating faults described in [9]. In fact, the causal structure of influence graphs provides a tool to know and understand how normal or abnormal variations propagate in the physical process from one variable to another [10]. This allows us to know the state of components even in the case of multiple faults (our study case). The results are then compared with those given by FDI and by the logical method with fault models (DX) in the case of multiple faults. To test the performance of the proposed methods, we have chosen a standard benchmark in the diagnosis domain: the three-tank hydraulic system [14], [19], [1]. Fig. 1 illustrates the notation used in this section. The process consists of three cylindrical tanks that communicate through feeding valves. The process has two inputs, Msf1 and Msf2, and five sensors: effort sensors De1, De2 and De3 measuring the pressures of C1, C2 and C3, and flow sensors Df1 and Df2 measuring the flow through valves 1 and 2. Its global purpose is to keep a steady fluid level in the tanks. Following the procedure described in [2] and [5], we obtain the bond graph model of the process shown in Fig. 2 (Bond graph model of the three-tank system).
Thanks to the structural, behavioral and causal properties of bond graphs, the causal graph of the three-tank process can be generated as given in Fig. 3. In this graph, the nodes represent the system variables and the directed arcs symbolize the normal relations among them (for instance, Msf1 -> De1 means that modifications of Msf1 will necessarily cause changes of De1).

IMPLEMENTATION AND DISCUSSION The multiple-fault diagnosis tool we have built is original in that it allows us to compare the results of three techniques inspired by the artificial intelligence and control theory communities: FDI, the logical method with fault models, and the influence graph. The experimental results were obtained during simulation of the three-tank system. After generating the influence graph, the propagation paths in the graph are analyzed to determine whether a fault hypothesis is sufficient to account for the secondary faults resulting from its propagation in the process over time. The algorithm is a backward/forward procedure starting from an inconsistent (source) variable. The backward search bounds the fault space by eliminating the normal measurements causally upstream. Then each possible primary deviation generates a hypothesis, which is forward tested using the states of the variables and the functions of the arcs. The results of this approach and those of FDI and the logical method with fault models (DX), tested on the three-tank process, are presented in Table 1, which lists, for each injected fault set (the single faults {Msf1}, {Msf2}, {De1}, {De2}, {De3}, {Df1}, {Df2} as well as double and larger combinations), the diagnoses returned by the influence graph, by the logical method with fault models (DX), and by FDI.

Based on the experimental results, we notice that the three techniques give good results in the majority of single-fault cases (the three techniques localized more than 71% of the injected single faults). However, in multiple-fault instances (double and more), the three methods give different results, or no result at all. FDI localized only 16.6% of the double and larger faults. This can be explained by the fact that the generation and use of theoretical fault signatures reduces the diagnostic reasoning to a simple pattern-matching activity (this matches the considerations of [9]). The logical method with fault models gives better results than FDI, localizing 50% of the injected multiple faults (because the generated diagnosis is revised using the fault model technique), but worse than the influence graph (as the initially generated diagnosis is minimal, being the result of an HS tree). On the other hand, the influence graph method gives very interesting results, localizing the faults perfectly. The recording of all information concerning the variables may explain these results: only variables that are really faulty are announced as defective. To conclude, adopting causal approaches would be an interesting solution for dynamic complex systems, since they limit the verification space of the diagnosis system to the relations sufficient to isolate the faults and give remarkable end results in the complex case of multiple faults, without any need for supplementary processing.

CONCLUSION This paper has presented a multiple-fault isolation method based on causal reasoning. Bond graph modeling was used to describe the cause-effect relationships existing between process variables. A comparison between the results given by the different approaches (FDI, the logical method with fault models, and the influence graph) was carried out for explanation and implementation purposes.
Experiments have shown that causal reasoning, through the example of the influence graph, can successfully localize multiple faults in the three-tank process. It is expected that the achieved results can also be extended to localizing multiple faults in real systems; in future work we intend to highlight the potential of using such a method in a real application.

Fig. 1. Three-tank system. Fig. 3. Influence graph of the three-tank system. Tab. 1. Experimental results (diagnoses returned by the influence graph, DX with fault models, and FDI for each injected single and multiple fault set). Fig. 4. Experimental results presented as a chart.

REFERENCES
[1] A. Akhenak, M. Chadli, D. Maquin and J. Ragot, "State estimation via multiple observer: the three tank system", 5th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS), Washington DC, USA, 2003, pp. 1227-1232.
[2] P. Borne, G. Dauphin-Tanguy, J. P. Richard, F. Rotella and I. Zambettakis, Modélisation et identification des processus, Editions Technip, 1992.
[3] L. Console, L. Portinale and D. Theseider Dupré, "Using compiled knowledge to guide and focus abductive diagnosis", IEEE Trans. Knowl. Data Eng., vol. 8, pp. 690-706, May 1996.
[4] P. Dague and B. Dubuisson, Diagnostic par Intelligence Artificielle et Reconnaissance des Formes, Hermès, Paris, France, 2001.
[5] G. Dauphin-Tanguy, Les bond graphs, Hermès Sciences Publications, 2000.
[6] J. De Kleer and J. S. Brown, "A qualitative physics based on confluences", Artificial Intelligence, vol. 24, no. 1, pp. 7-83, 1984.
[7] J. De Kleer, "Theories of causal ordering", Artificial Intelligence, vol. 29, no. 1, pp. 33-62, 1986.
[8] A. Evsukoff, S. Gentil and J. Montmain, "Fuzzy reasoning in cooperative supervision systems", Control Engineering Practice, vol. 8, pp. 389-407, 2000.
[9] S. Gentil, J. Montmain and C. Combastel, "Combining FDI and AI approaches within causal-model-based diagnosis", IEEE Transactions on Systems, Man, and Cybernetics - Part B, vol. 34, no. 5, pp. 2207-2221, 2004.
[10] S. Gentil (ed.), Supervision des procédés complexes, Hermès Science Publication, Paris, 2007.
[11] P. J. Hayes, "Naive physics I: ontology for liquids", in J. Hobbs and B. Moore (eds.), Formal Theories of the Commonsense World, Abex Publishing, pp. 71-89, 1985.
[12] B. Heim, S. Gentil, B. Celse, S. Cauvin and L. Travé-Massuyès, "FCC diagnosis using several causal and knowledge based models", 5th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS), Washington, USA, 2003.
[13] B. Kuipers, "Qualitative simulation", J. of Artificial Intelligence, vol. 29, pp. 289-388, 1986.
[14] D. Koenig, S. Nowakowski and T. Cecchin, "An original approach for actuator and component fault detection and isolation", 3rd IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS), Kingston Upon Hull, England, 1997, pp. 95-105.
[15] K. Medjaher, Contribution de l'outil bond graph pour la conception de systèmes de supervision des processus industriels, Thèse de doctorat, Université des Sciences et Technologies de Lille, Laboratoire d'Automatique, Génie Informatique et Signal, 2005.
[16] J. Mosterman and G. Biswas, "Diagnosis of continuous valued systems in transient operation regions", IEEE Transactions on Systems, Man, and Cybernetics, vol. 29, no. 6, pp. 554-565, 1999.
[17] R. Pons and L. Travé-Massuyès, "Causal ordering for multiple mode systems", 11th International Workshop on Qualitative Reasoning, Cortona, Italy, 1997, 11 pp.
[18] R. Reiter, "A theory of diagnosis from first principles", Artificial Intelligence, vol. 32, pp. 57-95, 1987.
[19] D. N. Shields and S. Du, "An assessment of fault detection methods for a benchmark system", 4th IFAC Symposium on Fault Detection, Supervision and Safety for Technical Processes (SAFEPROCESS), Budapest, Hungary, 2000, pp. 937-942.
[20] L. Travé-Massuyès, P. Dague and F. Guerrin, Le Raisonnement qualitatif pour les sciences de l'ingénieur, Editions Hermès, Paris, 1997.
[21] L. Travé-Massuyès and R. Milne, "TIGER: gas turbine condition monitoring focus on the Ca-En qualitative model based system", IEEE Expert Intelligent Systems & Applications, 1997.
[22] S. Xia, D. A. Linkens and S. Bennett, "Automatic modelling and analysis of dynamic physical systems using qualitative reasoning and bond graphs", Intelligent Systems Engineering, vol. 3, no. 2, pp. 201-212, 1993.
Partitions of Z_n into Arithmetic Progressions

William Y. C. Chen [email protected], David G. L. Wang and Iris F. Zhang
Center for Combinatorics, LPMC-TJKLC, Nankai University, 300071 Tianjin, P.R. China

12 May 2008

Keywords: Kaplansky number, cycle dissection, m-AP-partition, separation algorithm. AMS Classification: 05A05, 05A15.

We introduce the notion of arithmetic progression blocks, or AP-blocks, of Z_n, which can be represented as sequences of the form (x, x + m, x + 2m, ..., x + (i-1)m) (mod n). Then we consider the problem of partitioning Z_n into AP-blocks for a given difference m. We show that, subject to a technical condition, the number of partitions of Z_n into m-AP-blocks of a given type is independent of m. When we restrict our attention to blocks of sizes one or two, we are led to a combinatorial interpretation of a formula recently derived by Mansour and Sun as a generalization of the Kaplansky numbers. These numbers have also occurred as the coefficients in Waring's formula for symmetric functions.

1. Introduction

Let Z_n be the cyclic group of order n whose elements are written as 1, 2, ..., n. Intuitively, we assume that the elements 1, 2, ..., n are placed clockwise on a cycle. Thus Z_n can be viewed as an n-cycle, more specifically, a directed cycle. In his study of the ménages problem, Kaplansky [7] has shown that the number of ways of choosing k elements from Z_n such that no two elements differ by one modulo n (see also Brualdi [1], Comtet [3], Riordan [14], Ryser [15] and Stanley [16, Lemma 2.3.4]) equals

    \frac{n}{n-k} \binom{n-k}{k}.    (1.1)

Moreover, Kaplansky [8] considered the following generalization. Assume that n ≥ pk + 1. Then the number of k-subsets {x_1, x_2, ..., x_k} of Z_n such that

    x_i - x_j \notin \{1, 2, ..., p\}    (1.2)

for any pair (x_i, x_j) of distinct elements, is given by

    \frac{n}{n-pk} \binom{n-pk}{k}.    (1.3)

Here we clarify the meaning of the notation (1.2). Given two elements x and y of Z_n, x - y may be considered as the distance from y to x on the directed cycle Z_n. Therefore, (1.2) says that the distance from any element x_i to any other element x_j on the directed cycle Z_n is at least p + 1. From a different perspective, Konvalina [10] studied the number of k-subsets {x_1, x_2, ..., x_k} such that no two elements x_i and x_j are "uni-separated", namely x_i - x_j ≠ 2 for all x_i and x_j. Remarkably, Konvalina discovered that the answer is also given by the Kaplansky number (1.1) for n ≥ 2k + 1. Other generalizations and related questions have been investigated by Hwang [5], Hwang, Korner and Wei [6], Munarini and Salvi [12], Prodinger [13] and Kirschenhofer and Prodinger [9]. Recently, Mansour and Sun [11] obtained the following unification of the formulas of Kaplansky and Konvalina.

Theorem 1.1. Assume that m, p, k ≥ 1 and n ≥ mpk + 1. The number of k-subsets {x_1, x_2, ..., x_k} of Z_n such that

    x_i - x_j \notin \{m, 2m, ..., pm\}    (1.4)

for any pair (x_i, x_j), is given by the formula (1.3), and is independent of m.

In the spirit of the original approach of Kaplansky, Mansour and Sun first solved the enumeration problem of choosing a k-subset from an n-set with elements lying on a line. They established a recurrence relation, and solved the equation by computing the residues of some Laurent series. The case for an n-cycle can be reduced to the case for a line.
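Formulas (1.1), (1.3) and Theorem 1.1 are easy to confirm by exhaustive search for small parameters. The following sketch (our own check, not part of the original proofs) enumerates the k-subsets of Z_n satisfying (1.2)/(1.4) and compares the count with formula (1.3).

```python
from itertools import combinations
from math import comb

def circular_gap_ok(subset, n, p, m=1):
    """Check that no two chosen elements x, y satisfy
    (x - y) mod n in {m, 2m, ..., pm}, as in conditions (1.2)/(1.4)."""
    forbidden = {j * m % n for j in range(1, p + 1)}
    return all((x - y) % n not in forbidden
               for x in subset for y in subset if x != y)

def count_subsets(n, k, p, m=1):
    return sum(1 for s in combinations(range(n), k)
               if circular_gap_ok(s, n, p, m))

def kaplansky(n, k, p):
    # Formula (1.3); with p = 1 this is the Kaplansky number (1.1).
    return n * comb(n - p * k, k) // (n - p * k)

# Spot check of Theorem 1.1 for a few small parameters with n >= mpk + 1:
for (n, k, p, m) in [(10, 3, 1, 1), (13, 2, 2, 1), (13, 2, 2, 3), (9, 2, 2, 2)]:
    assert count_subsets(n, k, p, m) == kaplansky(n, k, p)
```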
They raised the question of finding a combinatorial proof of their formula. Guo [4] found a proof by using number-theoretic properties and Rothe's identity:

    \sum_{k=0}^{n} \frac{xy}{(x+kz)(y+(n-k)z)} \binom{x+kz}{k} \binom{y+(n-k)z}{n-k} = \frac{x+y}{x+y+nz} \binom{x+y+nz}{n}.

This paper is motivated by the question of Mansour and Sun. We introduce the notion of arithmetic progression blocks, or AP-blocks, of Z_n. A sequence of the form

    (x, x + m, x + 2m, ..., x + (i-1)m)  (mod n)

is called an AP-block, or an m-AP-block, of length i and of difference m. Then we consider partitions of Z_n into m-AP-blocks B_1, B_2, ..., B_k of the same difference m. The type of such a partition is defined as the type of the multiset of the sizes of the blocks. Our main result shows that, subject to a technical condition, the number of partitions of Z_n into m-AP-blocks of a given type is independent of m and is equal to a cyclic multinomial coefficient.

This paper is organized as follows. In Section 2, we give a review of cycle dissections and make a connection between the Kaplansky numbers and the cyclic multinomial coefficients. We present the main result in Section 3, that is, subject to a technical condition, the number of partitions of Z_n into m-AP-blocks of a given type equals the cyclic multinomial coefficient and does not depend on m. We present a separation algorithm which leads to a bijection between m-AP-partitions and m'-AP-partitions of Z_n. The correspondence between m-AP-partitions and cycle dissections (m' = 1) implies the main result, Theorem 3.2. For the type 1^{n-(p+1)k} (p+1)^k we are led to a combinatorial proof which answers the question of Mansour and Sun.

2. Cycle Dissections

In their combinatorial study of Waring's formula on symmetric functions, Chen, Lih and Yeh [2] introduced the notion of cycle dissections. Recall that a dissection of an n-cycle is a partition of the cycle into blocks, which can be visualized by putting cutting bars on some edges of the cycle. Note that at least one bar is needed to cut a cycle into straight segments. A dissection of an n-cycle is said to be of type 1^{k_1} 2^{k_2} ... n^{k_n} if it has k_i blocks of i elements. For instance, Figure 1 gives a 20-cycle dissection of type 1^8 2^3 3^2. The following lemma is due to Chen-Lih-Yeh [2, Lemma 3.1].

Lemma 2.1. For an n-cycle, the number of dissections of type 1^{k_1} 2^{k_2} ... n^{k_n} is given by the cyclic multinomial coefficient

    \frac{n}{k_1 + \cdots + k_n} \binom{k_1 + \cdots + k_n}{k_1, \ldots, k_n}.    (2.1)

This lemma is easy to prove. Given a dissection, one may pick any segment as a distinguished segment; this can be done in k_1 + k_2 + ... + k_n ways. On the other hand, any of the n elements can serve as the first element of the distinguished segment. Consider a cycle dissection of type 1^{n-(p+1)k} (p+1)^k. The set of first elements of the segments of length p+1 corresponds to a k-subset of Z_n satisfying (1.2). Thus the cyclic multinomial coefficient of type 1^{n-(p+1)k} (p+1)^k reduces to (1.3), and in particular the cyclic multinomial coefficient of type 1^{n-2k} 2^k reduces to the Kaplansky number (1.1).
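Lemma 2.1 can also be checked by brute force: enumerate the subsets of cut edges of the n-cycle, read off the multiset of segment lengths, and compare with (2.1). The sketch below is our own verification, using the 12-cycle and the type 1^4 2^1 3^2 that serves as a running example in Section 3.

```python
from itertools import combinations
from math import factorial
from collections import Counter

def dissection_count(n, type_counts):
    """Count dissections of an n-cycle whose multiset of block sizes is
    `type_counts` (a dict size -> multiplicity), by brute force over the
    subsets of cut edges of the cycle."""
    want = Counter({s: k for s, k in type_counts.items() if k})
    total = 0
    for r in range(1, n + 1):          # r cuts produce r segments
        for cuts in combinations(range(n), r):
            # Segment lengths between consecutive cuts around the cycle;
            # with a single cut the whole cycle is one segment of length n.
            lens = [(cuts[(i + 1) % r] - cuts[i]) % n or n for i in range(r)]
            if Counter(lens) == want:
                total += 1
    return total

def cyclic_multinomial(n, ks):
    """n/(k_1+...+k_r) times the multinomial coefficient, as in (2.1)."""
    t = sum(ks)
    multi = factorial(t)
    for k in ks:
        multi //= factorial(k)
    return n * multi // t

# Type 1^4 2^1 3^2 on a 12-cycle: both counts equal 180.
assert dissection_count(12, {1: 4, 2: 1, 3: 2}) == cyclic_multinomial(12, [4, 1, 2])
```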
3. Partitions of Z_n into Arithmetic Progressions

In this section, we present the main result of this paper, namely, a formula for the number of partitions of Z_n into m-AP-blocks of a given type. The proof is based on a separation algorithm transforming an m-AP-partition into an m'-AP-partition. We begin with some concepts. First, Z_n is considered as a directed cycle. An arithmetic progression block, or an AP-block of Z_n, is defined to be a sequence of elements of Z_n of the form

    B = (x, x + m, x + 2m, ..., x + (i-1)m)  (mod n),

where m is called the difference and i is called the length of B. An AP-block of difference m is called an m-AP-block. If B contains only one element, then it is called a singleton. The first element x is called the head of B. An m-AP-partition, or a partition of Z_n into m-AP-blocks, is a set of m-AP-blocks of Z_n whose underlying sets form a partition of Z_n. For example,

    (7, 9, 11), (8), (10, 12), (1), (2, 4, 6), (3), (5)    (3.1)

is a 2-AP-partition of Z_12 with four singletons and three non-singleton heads 7, 10 and 2. It should be noted that different AP-blocks may correspond to the same underlying set. For example, (1, 3) and (3, 1) are regarded as different AP-blocks of Z_4, but they have the same underlying set {1, 3}. On the other hand, as will be seen in Proposition 3.1, it often happens that an AP-block is uniquely determined by its underlying set. For example, given the difference m = 3, the AP-block (12, 15, 2, 5, 8) of Z_16 is uniquely determined by the underlying set {2, 5, 8, 12, 15}, since there is only one way to order these five elements to form an arithmetic progression of difference 3 modulo 16.

For an m-AP-partition π, the type of π is defined as the type of the multiset of the sizes of its blocks. Usually, we use the notation 1^{k_1} 2^{k_2} ... n^{k_n} to denote a type for which there are k_1 blocks of size one, k_2 blocks of size two, etc. However, for the sake of presentation, we find it more convenient to ignore the zero exponents and express a type in the form i_1^{k_1} i_2^{k_2} ... i_r^{k_r}, where 1 ≤ i_1 < i_2 < ... < i_r and all k_j ≥ 1. For example, the AP-partition (3.1) is of type 1^4 2^1 3^2. Throughout this paper, we restrict our attention to m-AP-partitions with at least one singleton block and also at least one non-singleton block, namely, i_1 = 1 and r ≥ 2 in the above notation of types. Here is the aforementioned technical condition:

    \lceil k_1 / (k_2 + \cdots + k_r) \rceil \ge (m-1)(i_r - 1),    (3.2)

where the notation \lceil x \rceil for a real number x stands for the smallest integer that is larger than or equal to x. Obviously, the condition (3.2) holds for m = 1. For m ≥ 2, (3.2) is equivalent to the relation

    k_1 \ge (k_2 + \cdots + k_r) [ (m-1)(i_r - 1) - 1 ] + 1.    (3.3)

We prefer the form (3.2) for a reason that will become clear in the combinatorial argument in the proof of Theorem 3.2. In fact, on an n-cycle dissection, the \sum_{j=2}^{r} k_j non-singleton heads divide the k_1 singletons into \sum_{j=2}^{r} k_j segments. By virtue of the pigeonhole principle, there exists a segment containing at least (m-1)(i_r - 1) singletons. For example, in the AP-partition (3.1), the three non-singleton heads divide the four singletons into three segments, and therefore there exists a segment containing at least 2 singletons; in this particular partition it is the path from 2 to 7 that contains the two singletons 3 and 5, see the right cycle in Figure 2.
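The uniqueness phenomenon of Proposition 3.1 below is easy to observe computationally. The following small sketch (illustrative, with our own helper names) recovers the ordering of an m-AP-block from its underlying set, confirming the Z_16 example above and exhibiting the ambiguity when n = i_r m.

```python
def ap_block(head, m, length, n):
    """The m-AP-block of given head and length in Z_n = {1, ..., n}."""
    return tuple((head - 1 + j * m) % n + 1 for j in range(length))

def orderings_as_ap(block_set, m, n):
    """All ways to order `block_set` into an m-AP-block of Z_n."""
    size = len(block_set)
    return [ap_block(h, m, size, n) for h in sorted(block_set)
            if set(ap_block(h, m, size, n)) == set(block_set)]

# The 3-AP-block (12, 15, 2, 5, 8) of Z_16 is the unique AP ordering of
# its underlying set {2, 5, 8, 12, 15}:
print(orderings_as_ap({2, 5, 8, 12, 15}, m=3, n=16))   # [(12, 15, 2, 5, 8)]

# In contrast, with n = i_r * m the ordering is ambiguous: every element
# of {1, 3} heads a 2-AP-block of Z_4 with the same underlying set.
print(orderings_as_ap({1, 3}, m=2, n=4))                # [(1, 3), (3, 1)]
```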
Proposition 3.1. Under the condition (3.2), an m-AP-block is not uniquely determined by its underlying set if and only if n = i_r m and it is of length i_r.

Proof. Let n = i_r m. Consider the AP-blocks

    B_j = (x + jm, x + (j+1)m, ..., x + (j + i_r - 1)m)  (mod n),    0 ≤ j ≤ i_r - 1.

It is easy to see that these AP-blocks B_j (j = 0, 1, ..., i_r - 1) have the same underlying set {x, x + m, ..., x + (i_r - 1)m}. Conversely, suppose that there is an m-AP-block B of length i_s which is not uniquely determined by its underlying set. We may assume that there exists another AP-block B' having the same underlying set as B. Thus the difference between B and B' lies only in the order of their elements as sequences. It follows that n = i_s m for some 2 ≤ s ≤ r. If m = 1, then n = i_s, which yields s = r = 1, a contradiction. So we may assume that m ≥ 2 and 2 ≤ s ≤ r - 1. Hence i_s ≤ i_{r-1} ≤ i_r - 1, and so

    k_1 + \sum_{j=2}^{r} k_j i_j = n = i_s m \le (i_r - 1)m.

In view of the condition (3.3), we deduce that

    (i_r - 1)m - \sum_{j=2}^{r} k_j i_j \ge k_1 \ge [ (m-1)(i_r - 1) - 1 ] \sum_{j=2}^{r} k_j + 1,

which can be rewritten as

    1 + \sum_{j=2}^{r-1} k_j i_j + (i_r - 1)m \Big( \sum_{j=2}^{r} k_j - 1 \Big) \le i_r \sum_{j=2}^{r-1} k_j.

Clearly, \sum_{j=2}^{r} k_j - 1 \ge \sum_{j=2}^{r-1} k_j, so (i_r - 1)m < i_r and thus i_r < m/(m-1) ≤ 2, which implies i_r = 1, a contradiction. Thus we conclude that s = r. This completes the proof.

For example, the AP-partition (3.1) is uniquely determined by its underlying partition: {7, 9, 11}, {8}, {10, 12}, {1}, {2, 4, 6}, {3}, {5}. We are now ready to present the main result of this paper.

Theorem 3.2. Given a type 1^{k_1} i_2^{k_2} ... i_r^{k_r} satisfying the condition (3.2), the number of m-AP-partitions of Z_n does not depend on m, and is equal to the cyclic multinomial coefficient

    \frac{n}{k_1 + \cdots + k_r} \binom{k_1 + \cdots + k_r}{k_1, \ldots, k_r}.    (3.4)

In fact, Theorem 3.2 reduces to Theorem 1.1 when we specialize the type to 1^{n-(p+1)k} (p+1)^k. In this case the condition (3.2) becomes n ≥ kmp + 1. The heads of the k AP-blocks of length p+1 satisfy the condition (1.4). Conversely, any k-subset of Z_n satisfying (1.4) determines an m-AP-partition of the given type. The cyclic multinomial coefficient (3.4) agrees with the formula (1.3) of Theorem 1.1. For example, given the type 1^4 2^1 3^2 and difference 2, the AP-partition (3.1) is determined by the selection of {7, 10, 2} as heads from Z_12. Note that the cyclic multinomial coefficient (3.4) has occurred in Lemma 2.1. Indeed, Lemma 2.1 is the special case of Theorem 3.2 for m = 1.
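Theorem 3.2 can be spot-checked by exhaustive enumeration for small n. The sketch below (our own verification code, working with Z_n represented as {0, ..., n-1}) counts the m-AP-partitions of Z_n of a given type by backtracking over the blocks containing the smallest uncovered element, and compares the counts for m = 1 and m = 2 with the cyclic multinomial coefficient (3.4).

```python
from math import factorial
from collections import Counter

def ap_set(head, m, length, n):
    return frozenset((head + j * m) % n for j in range(length))

def count_ap_partitions(n, m, type_counts):
    """Count partitions of Z_n = {0, ..., n-1} into m-AP-blocks whose
    multiset of block sizes is `type_counts` (size -> multiplicity).
    Under (3.2), blocks are determined by their underlying sets
    (Proposition 3.1), so counting set partitions suffices."""
    sizes = Counter(type_counts)

    def rec(uncovered, sizes):
        if not uncovered:
            return 1
        e = min(uncovered)
        total = 0
        for l in list(sizes):
            if sizes[l] == 0:
                continue
            # Candidate blocks through e: e sits at position j of the block.
            cands = {ap_set((e - j * m) % n, m, l, n) for j in range(l)}
            for block in cands:
                if len(block) == l and block <= uncovered:
                    sizes[l] -= 1
                    total += rec(uncovered - block, sizes)
                    sizes[l] += 1
        return total

    return rec(frozenset(range(n)), sizes)

def cyclic_multinomial(n, ks):
    t = sum(ks)
    multi = factorial(t)
    for k in ks:
        multi //= factorial(k)
    return n * multi // t

# Type 1^4 2^1 3^2 in Z_12 satisfies (3.2) for m = 1 and m = 2;
# both counts agree with (3.4), which equals 180.
for m in (1, 2):
    assert count_ap_partitions(12, m, {1: 4, 2: 1, 3: 2}) \
        == cyclic_multinomial(12, [4, 1, 2])
```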
We proceed to describe an algorithm, called the separation algorithm, to transform m-AP-partitions into m'-AP-partitions of the same type T = i_1^{k_1} i_2^{k_2} ... i_r^{k_r}, assuming that the following condition holds:

    \lceil k_1 / (k_2 + \cdots + k_r) \rceil \ge (\max\{m, m'\} - 1)(i_r - 1).    (3.5)

The separation algorithm enables us to verify Theorem 3.2. We state our algorithm for m-AP-partitions and m'-AP-partitions, instead of restricting m' to one, because it is more convenient to present the proof by exchanging the roles of m and m'. Given a type T = 1^{k_1} i_2^{k_2} ... i_r^{k_r}, let P_m be the set of m-AP-partitions of type T. To prove Theorem 3.2, it suffices to show that there is a bijection between P_m and P_{m'} under the condition (3.5).

Let π ∈ P_m. Denote by H(π) the set of heads in π. For each head h of π, we consider the nearest non-singleton head in the counterclockwise direction, denoted h*. Then we denote by g(h) the number of singletons lying on the path from h* to h, under the convention that h itself is not counted by g(h). For example, for the AP-partition π' on the right of Figure 2, we have H(π') = {1, 2, 3, 5, 7, 8, 10}, g(1) = g(3) = g(8) = 0, g(2) = g(5) = g(10) = 1 and g(7) = 2. The values g(h) will be needed in the separation algorithm.

The Separation Algorithm. Let π be an m-AP-partition of type T. As the first step, we choose a head h_1 of π, called the starting point, such that g(h_1) is maximum. Then we impose a linear order on the elements of Z_n with respect to the choice of h_1:

    h_1 < h_1 + 1 < h_1 + 2 < ... < h_1 - 1  (mod n).    (3.6)

In accordance with the above order, we denote the heads of π by h_1 < h_2 < ... < h_t, where t = \sum_{i=1}^{r} k_i. The m-AP-block of π with head h_i is denoted by B_i. Let l_i be the length of B_i, so that \sum_{i=1}^{t} l_i = n. We now aim to construct m'-AP-blocks B'_1, B'_2, ..., B'_t such that B'_i has the same number of elements as B_i. We begin with B'_1 by setting h'_1 = h_1 and letting B'_1 be the m'-AP-block of length l_1, namely,

    B'_1 = (h'_1, h'_1 + m', ..., h'_1 + (l_1 - 1)m').

Among the remaining elements, namely those that are not in B'_1, we choose the smallest element with respect to (3.6), denoted h'_2, and let B'_2 be the m'-AP-block of length l_2 with head h'_2. Repeating the above procedure, as will be justified later, after t steps we obtain an m'-AP-partition, denoted ψ(π), of type T with blocks B'_1, B'_2, ..., B'_t. Figure 2 illustrates the separation algorithm applied to a 1-AP-partition π and a 2-AP-partition π' of the same type T = 1^4 2^1 3^2, and vice versa; the solid dots stand for singletons, whereas the other symbols represent different AP-blocks. We remark that, as indicated by the example, the starting point can never be a singleton. In fact, if s is a singleton and h is a non-singleton head such that all the heads lying on the path from s to h are singletons, then we have the relation g(h) > g(s). Since g(h_1) is maximum, we see that the starting point is always a non-singleton head.

Clearly, it is necessary to demonstrate that the above algorithm ψ is valid, namely, we need to justify that the underlying sets of the blocks B'_1, B'_2, ..., B'_t are disjoint.

Proposition 3.3. The mapping ψ is well-defined, and for any π ∈ P_m, we have ψ(π) ∈ P_{m'}.

Proof. Let π ∈ P_m with AP-blocks B_1, B_2, ..., B_t. Without loss of generality, we may assume that h_1, h_2, ..., h_t are the heads of B_1, B_2, ..., B_t, where h_1 is the starting point for the mapping ψ, and that h'_1, h'_2, ..., h'_t are the corresponding heads generated by ψ. Let l_i be the length of B_i. Suppose to the contrary that there exist two heads h_i and h_j (i < j) such that h'_i + am' ≡ h'_j + bm' (mod n), where 0 ≤ a ≤ l_i - 1 and 0 ≤ b ≤ l_j - 1. If a ≥ b, then 0 ≤ a - b ≤ l_i - 1 and h'_j ≡ h'_i + (a - b)m' (mod n). But the point h'_i + (a - b)m' is in B'_i, contradicting the choice of h'_j. This yields a < b and thus 0 ≤ b - a ≤ l_j - 1.

We claim that the starting point h_1 lies on the path from h'_j to h'_i. In fact, when the algorithm ψ is at the j-th step, dealing with the head h_j, all the points smaller than h'_i lie in one of the blocks B'_1, B'_2, ..., B'_i. Then we see that h'_j > h'_i. Meanwhile, there are n - l_1 - l_2 - ... - l_{j-1} > 0 points which are not contained in B'_1, B'_2, ..., B'_{j-1}. Since the head h'_j is chosen to be the smallest point not in B'_1, B'_2, ..., B'_{j-1}, we find that h'_j lies on the path from h'_i to h_1.

In addition to h'_i and h'_j, we assume that there are N points on the path from h'_j to h'_i. Since h'_i ≡ h'_j + (b - a)m' (mod n) and 1 ≤ b - a ≤ l_j - 1, we obtain N = (b - a)m' - 1. On the other hand, at the j-th step, in addition to the point h'_j, there are at least l_j - 1 points not contained in B'_1, B'_2, ..., B'_{j-1}. Similarly, the choice of h_1 and the condition (3.5) yield that the largest (\max\{m, m'\} - 1)(i_r - 1) heads with respect to the order (3.6) are all singletons, by the pigeonhole principle. Therefore, there are at least (\max\{m, m'\} - 1)(i_r - 1) points not contained in B'_1, B'_2, ..., B'_{j-1}. It follows that

    N \ge (\max\{m, m'\} - 1)(i_r - 1) + (l_j - 1).    (3.7)

Since N = (b - a)m' - 1 and 1 ≤ b - a ≤ l_j - 1, we deduce that

    (m' - 1)(i_r - 1) + (l_j - 1) \le (b - a)m' - 1 \le (l_j - 1)m' - 1,

leading to the contradiction l_j > i_r. This completes the proof.
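The separation algorithm ψ translates almost directly into code. The sketch below is a straightforward transcription under condition (3.5), working with Z_n represented as {0, ..., n-1} rather than {1, ..., n}; the data layout (each block given as a tuple with its head first) is our own choice.

```python
def separation(blocks, n, m2):
    """A sketch of the separation algorithm psi: given an m-AP-partition of
    Z_n = {0, ..., n-1} (a list of AP-blocks, each a tuple listing its
    elements head-first), build an m'-AP-partition of the same type.
    Condition (3.5) is assumed to hold."""
    heads = {b[0]: len(b) for b in blocks}          # head -> block length
    singles = {b[0] for b in blocks if len(b) == 1}

    def g(h):
        # Singletons between h and the nearest non-singleton head
        # counterclockwise from h (h itself not counted), as in the text.
        count, x = 0, (h - 1) % n
        while x not in heads or x in singles:
            if x in singles:
                count += 1
            x = (x - 1) % n
        return count

    h1 = max(heads, key=g)                 # starting point: g(h1) maximum
    key = lambda x: (x - h1) % n           # the linear order (3.6)

    used, result = set(), []
    elements = sorted(range(n), key=key)
    for h in sorted(heads, key=key):       # process heads in order (3.6)
        length = heads[h]
        head2 = next(x for x in elements if x not in used)
        block = tuple((head2 + j * m2) % n for j in range(length))
        used.update(block)
        result.append(block)
    return result

# The 2-AP-partition (3.1), shifted to 0-indexed elements, mapped to a
# 1-AP-partition (a cycle dissection) of the same type 1^4 2^1 3^2:
pi = [(6, 8, 10), (7,), (9, 11), (0,), (1, 3, 5), (2,), (4,)]
print(separation(pi, 12, 1))
```

On this input the starting point is the non-singleton head 6 (the 0-indexed version of the head 7 in the example following (3.2)), and the output is a valid 1-AP-partition of Z_12 of the same type.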
Proposition 3.4. Given an m-AP-partition of Z_n, the separation algorithm ψ generates the same m'-AP-partition regardless of the choice of the starting point, subject to the maximum property.

Proof. Let π be an m-AP-partition of Z_n. Suppose that u_1, u_2, ..., u_s (s ≥ 2) are all the heads such that g(u_1) = g(u_2) = ... = g(u_s) is the maximum on π. Let u_1 be the starting point, and u_1 < u_2 < ... < u_s with respect to (3.6). It suffices to show that when the algorithm ψ processes u_i (1 ≤ i ≤ s), the m'-AP-blocks which have been generated consist of all the elements smaller than u_i. By induction, we assume that this statement holds up to u_{j-1}. Let v_q, v_{q-1}, ..., v_1, u_j be all the heads lying on the path Q from u_{j-1} to u_j, such that u_{j-1} = v_q < v_{q-1} < ... < v_1 < u_j. Let B_i be the m-AP-block containing v_i, let l_i be the length of B_i, and let B'_i = (v'_i, v'_i + m', ..., v'_i + (l_i - 1)m') be the corresponding m'-AP-block generated by the algorithm ψ. It suffices to show that the path Q consists of the elements of B'_q, B'_{q-1}, ..., B'_1.

Suppose that v_1, v_2, ..., v_p are all singletons, but v_{p+1} is not a singleton. Then p ≤ q - 1, since u_{j-1} is always a non-singleton head. The condition (3.5) yields that p ≥ (\max\{m, m'\} - 1)(i_r - 1). We now wish to show that for any 1 ≤ i ≤ q, the block B_i lies entirely on the path Q. If i ≤ p, then B_i = (v_i) is a singleton block lying on Q. Otherwise, we have i ≥ p + 1 and B_i = (v_i, v_i + m, ..., v_i + (l_i - 1)m). But the total number of points between any two consecutive elements of B_i is

    (l_i - 1)(m - 1) \le (\max\{m, m'\} - 1)(i_r - 1) \le p.

Intuitively, all these points can be filled by the singletons v_p, v_{p-1}, ..., v_1. Since u_j > v_1, the largest element v_i + (l_i - 1)m of the block B_i is smaller than u_j. Hence each block B_i (i = 1, 2, ..., q) lies entirely on Q. Therefore, the total number of elements in B_q, B_{q-1}, ..., B_1 equals the length u_j - u_{j-1} of the path Q. Since B'_i has the same number of elements as B_i, the total number of elements in B'_q, B'_{q-1}, ..., B'_1 also equals u_j - u_{j-1}.

Moreover, it can be shown that each block B'_i also lies entirely on the path Q for any 1 ≤ i ≤ q. If i ≤ p, the block B'_i = (v'_i) is a singleton given by the separation algorithm. Since the total number of elements in B'_q, B'_{q-1}, ..., B'_{i+1} is smaller than u_j - u_{j-1}, and v'_i is chosen to be the smallest element which is not in B'_q, B'_{q-1}, ..., B'_{i+1}, we see that v'_i < u_j. Otherwise, we have i ≥ p + 1, and the total number of points between any two consecutive elements of B'_i equals

    (l_i - 1)(m' - 1) \le (\max\{m, m'\} - 1)(i_r - 1) \le p.

Intuitively, all these points can be filled by the singletons v'_p, v'_{p-1}, ..., v'_1. Since u_j > v'_1, the largest element v'_i + (l_i - 1)m' of the block B'_i is smaller than u_j. Consequently, the block B'_i lies entirely on Q. In summary, the total number of elements of B'_q, B'_{q-1}, ..., B'_1 which lie on the path Q coincides with the length of Q. Hence the path Q consists of the elements of B'_q, B'_{q-1}, ..., B'_1. This completes the proof.

Theorem 3.5. Let T be a type as given before. The separation algorithm induces a bijection between P_m and P_{m'} under the condition (3.5).

Proof. We may employ the separation algorithm with the roles of m and m' interchanged to construct an m-AP-partition from an m'-AP-partition; we denote this map by ϕ. We aim to show that ϕ is indeed the inverse map of ψ, namely, ϕ(ψ(π)) = π for any π ∈ P_m. Let h_1, h_2, ..., h_t be the heads of π for the map ψ, where h_1 is the starting point. Assume that π has AP-blocks B_1, B_2, ..., B_t, with h_i being the head of B_i.
Let l_i be the length of B_i. By the construction of ψ, the generated heads h'_1 = h_1, h'_2, ..., h'_t have the order h'_1 < h'_2 < ... < h'_t, in accordance with h_1 < h_2 < ... < h_t. It follows that g(h'_1) is maximum among all heads of the AP-partition ψ(π). We now apply the map ϕ to the m'-AP-partition ψ(π) and choose h'_1 as the starting point. Let h''_1, h''_2, ..., h''_t be the heads generated by ϕ. In light of the construction of ϕ, we have h''_1 = h'_1 = h_1 and h''_1 < h''_2 < ... < h''_t. For any i, the separation algorithm has the property that the length of the m-AP-block of ϕ(ψ(π)) containing h''_i is l_i, which is the length of the m-AP-block of π containing h_i. Note that both ϕ(ψ(π)) and π are m-AP-partitions. They have the same starting point h''_1 = h_1 and the same length sequence (l_1, l_2, ..., l_t). Thus for any i = 2, 3, ..., t, the head h''_i is the smallest point which is not contained in the m-AP-blocks B_1, B_2, ..., B_{i-1}, and so is h_i. Hence we conclude that h''_i = h_i and ϕ(ψ(π)) = π. This completes the proof.

Figure 1: A 20-cycle dissection of type 1^8 2^3 3^2. Figure 2: The algorithms ψ and ϕ for T = 1^4 2^1 3^2, m = 1 and m' = 2.

Acknowledgments. This work was supported by the 973 Project, the PCSIRT Project of the Ministry of Education, the Ministry of Science and Technology, and the National Science Foundation of China.
References

[1] R. A. Brualdi, Introductory Combinatorics, North-Holland, New York, 1977.
[2] W. Y. C. Chen, K. W. Lih and Y. N. Yeh, "Cyclic tableaux and symmetric functions", Studies in Applied Math. 94 (1995) 327-339.
[3] L. Comtet, Advanced Combinatorics, D. Reidel Pub. Co., Dordrecht, Holland, 1974.
[4] V. J. W. Guo, "A new proof of a theorem of Mansour and Sun", European J. Combin. (2007), to appear.
[5] F. K. Hwang, "Cycle polynomials", Proc. Amer. Math. Soc. 83 (1) (1981) 215-219.
[6] F. K. Hwang, J. Korner and V. K.-W. Wei, "Selecting non-consecutive balls arranged in many lines", J. Combin. Theory Ser. A 37 (1984) 327-336.
[7] I. Kaplansky, "Solution of the 'Problème des ménages'", Bull. Amer. Math. Soc. 49 (1943) 784-785.
[8] I. Kaplansky, Selected Papers and Other Writings, Springer (1995) 25-26.
[9] P. Kirschenhofer and H. Prodinger, "Two selection problems revisited", J. Combin. Theory Ser. A 42 (1986) 310-316.
[10] J. Konvalina, "On the number of combinations without unit separation", J. Combin. Theory Ser. A 31 (1981) 101-107.
[11] T. Mansour and Y. Sun, "On the number of combinations without certain separations", European J. Combin. 29 (5) (2008) 1200-1206.
[12] E. Munarini and N. Z. Salvi, "Scattered subsets", Discrete Math. 267 (2003) 213-228.
[13] H. Prodinger, "On the number of combinations without a fixed distance", J. Combin. Theory Ser. A 35 (1983) 362-365.
[14] J. Riordan, An Introduction to Combinatorial Analysis, Wiley, New York, 1958.
[15] H. J. Ryser, Combinatorial Mathematics, Carus Monograph 14, Mathematical Association of America, Wiley, New York, 1963.
[16] R. P. Stanley, Enumerative Combinatorics, Vol. 1, 2nd ed., Cambridge University Press, New York, 1997.
A Stronger Lower Bound on Parametric Minimum Spanning Trees

David Eppstein
Computer Science Department, University of California, Irvine

We prove that, for an undirected graph with n vertices and m edges, each labeled with a linear function of a parameter λ, the number of different minimum spanning trees obtained as the parameter varies can be Ω(m log n).

Introduction

In the parametric minimum spanning tree problem [16], the input is a graph G whose edges are labeled with linear functions of a parameter λ. For any value of λ, one can obtain a spanning tree T_λ as the minimum spanning tree of the weight functions, evaluated at λ. Varying λ continuously from -∞ to ∞ produces in this way a discrete sequence of trees, each of which is minimum within some range of values of λ. How many different spanning trees can belong to this sequence, for a worst-case graph, and how can we construct them all efficiently? Known bounds are that the number of trees in a graph with n vertices and m edges can be Ω(mα(n)) (where α is the inverse Ackermann function) [9] and is always O(mn^{1/3}) [7]; both bounds date from the 1990s and, although far apart, have not been improved since. The sequence of trees can be constructed in time O(mn log n) [13] or in time O(n^{2/3} log^{O(1)} n) per tree [1]; faster algorithms are also known for planar graphs [12] or for related optimization problems that construct only a single tree in the parametric sequence [6, 19]. In this paper we improve the 25-year-old lower bound on the number of parametric minimum spanning trees from Ω(mα(n)) to Ω(m log n).

A broad class of applications of this problem involves bicriterion optimization, where each edge of a graph has two real weights of different types (say, investment cost and eventual profit) and one wishes to find a tree optimizing a nonlinear combination of the sums of these two weights (such as the ratio of total profit to total investment cost, the return on the investment). Each spanning tree of G may be represented by a planar point whose Cartesian coordinates are the sums of its two kinds of weights, giving an exponentially large cloud of points, one per tree. The convex hull of this point cloud has as its vertices the parametric minimum spanning trees (and maximum spanning trees) for the linear weight functions obtained from the pair of weight values on each edge by using these values as coefficients. (Essentially, this construction of weight functions from pairs of weights is a form of projective duality transforming points into lines, and the equivalence between the convex hull of the points representing trees and the lower envelope of the lines representing their total weight is a standard reflection of that projective duality.) Any bicriterion optimization problem that can be expressed as maximizing a quasiconvex function (or minimizing a quasiconcave function) of the two kinds of total weight automatically has its optimum at a convex hull vertex, and can be solved by constructing the sequence of parametric minimum spanning trees and evaluating the combination of weights for each one [18]. Other combinatorial optimization problems that have been considered from the same parametric and bicriterion point of view include shortest paths [3-5, 11], optimal subtrees of rooted trees [2], minimum-weight bases of matroids [9], minimum-weight closures of directed graphs [10], and the knapsack problem [8, 15, 17].
The main idea behind our new lower bound is a recursive construction of a family of graphs (more specifically, 2-trees), formed by repeated replacement of edges by triangles (Figure 1). We also determine the parametric weight functions of these graphs by a separate recursive construction (Figure 3). However, this only produces an Ω(n log n) lower bound, because for a graph constructed in this way with n vertices, the number of edges is 2n - 3, only a constant factor larger than the number of vertices. To obtain our claimed Ω(m log n) lower bound we use an additional packing argument, in which we find a dense graph containing many copies of our sparse lower bound construction, each contributing its own subsequence of parametric minimum spanning trees to the total.

Background and preliminaries

The minimum spanning tree of a connected undirected graph with real-valued edge weights is a tree formed as a subgraph of the given graph, having the minimum possible total edge weight. As outlined by Tarjan [22], standard methods for constructing minimum spanning trees are based on two rules, stated most simply for the case when all edge weights are distinct. The cut rule concerns cuts in the graph, partitions of the vertices into two subsets; an edge spans a cut when its two endpoints are in different subsets. The cut rule states that (for distinct edge weights) the minimum-weight edge spanning any given cut in a graph belongs to its unique minimum spanning tree. The cycle rule, on the other hand, states that (again for distinct edge weights) the maximum-weight edge in any cycle of the graph does not belong to its unique minimum spanning tree. One consequence of these rules is that the minimum spanning tree depends only on the sorted ordering of the edge weights, rather than on more detailed properties of their numeric values.

An input to the parametric minimum spanning tree problem consists of an undirected connected graph whose edges are labeled with linear functions of a parameter λ rather than with real numbers. For any value of λ, plugging λ into these functions produces a system of real weights for the edges, and therefore a minimum spanning tree T_λ. Different values of λ may produce different trees, and the task is either to obtain a complete description of which tree is minimum for each possible value of λ or, in some versions of the problem, to find a value λ and its tree optimizing another objective function. If we plot the graphs of the linear functions of a parametric minimum spanning tree instance, as lines in the (λ, weight) plane, then the geometric properties of this arrangement of lines are closely related to the combinatorial properties of the parametric minimum spanning tree problem. If no two edges have the same weight function, then all edge weights will be distinct except at a finite set of values of λ, the λ-coordinates of points where two lines in the arrangement cross. As λ varies continuously, the sorted ordering of the weights will remain unchanged except when λ passes through one of these crossing points, where the set of lines involved in any crossing will reverse their weight order. It follows from these considerations that the sequence of parametric minimum spanning trees is finite, and that these trees change only at certain breakpoints, which are necessarily the λ-coordinates of crossings of lines. In particular, m lines have O(m^2) crossings and there can be only O(m^2) distinct trees in the sequence of parametric minimum spanning trees.
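This breakpoint structure suggests a simple (if inefficient) way to compute the whole sequence: collect the λ-coordinates of all pairwise line crossings and run any static MST algorithm once between consecutive crossings. The sketch below is our own illustration of this observation, not one of the cited efficient algorithms; the edge encoding and the small triangle example (with weight functions λ - 1, 4 - λ and 3) are arbitrary choices.

```python
from itertools import combinations

def mst(n, edges, lam):
    """Kruskal's algorithm. Each edge is (u, v, a, b) with weight a*lam + b.
    Returns the frozenset of indices of the minimum spanning tree edges."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    order = sorted(range(len(edges)),
                   key=lambda i: edges[i][2] * lam + edges[i][3])
    tree = set()
    for i in order:
        ru, rv = find(edges[i][0]), find(edges[i][1])
        if ru != rv:
            parent[ru] = rv
            tree.add(i)
    return frozenset(tree)

def parametric_mst_sequence(n, edges):
    """All distinct minimum spanning trees as lambda sweeps the reals: the
    tree can change only where two weight lines cross, so it suffices to
    sample one lambda value between consecutive crossing coordinates."""
    xs = set()
    for (_, _, a1, b1), (_, _, a2, b2) in combinations(edges, 2):
        if a1 != a2:
            xs.add((b2 - b1) / (a1 - a2))
    xs = sorted(xs) or [0.0]
    samples = ([xs[0] - 1.0] + [(x + y) / 2 for x, y in zip(xs, xs[1:])]
               + [xs[-1] + 1.0])
    seen, seq = set(), []
    for lam in samples:
        t = mst(n, edges, lam)
        if t not in seen:
            seen.add(t)
            seq.append(t)
    return seq

# A triangle whose three weight functions are lam - 1, 4 - lam and 3:
tri = [(0, 1, 1.0, -1.0), (1, 2, -1.0, 4.0), (0, 2, 0.0, 3.0)]
print(len(parametric_mst_sequence(3, tri)))   # 3 distinct trees
```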
However, a stronger bound, O(mn^{1/3}), is known [7]. The worst-case instances of the parametric minimum spanning tree problem, the ones with the most trees for their numbers of edges and vertices, have distinct edge weight functions whose arrangement of lines has only simple crossings, crossings of exactly two lines. For, in any other instance, perturbing the edge weight functions by a small amount will preserve the ordering of weights away from the crossings of its lines, and therefore will preserve its sequence of trees away from these crossings, while only possibly increasing the number of breakpoints near perturbed crossings of multiple lines, which become multiple simple crossings. For an instance in which the lines have only simple crossings, the only possible change to the minimum spanning tree at a breakpoint is a swap, a change to the tree in which one edge (corresponding to one of the two crossing lines at a simple crossing) is removed, and the other edge (corresponding to the other of the two crossing lines) is added in its place. For details on this correspondence between the geometry of line arrangements and the combinatorial properties of the sequence of parametric minimum spanning trees, and generalizations of this correspondence to other matroids than the matroid of spanning trees, see our previous paper on this topic [9].

Replacing edges by triangles

A 2-tree is a graph obtained from the two-vertex one-edge graph K_2 by repeatedly adding new degree-two vertices, adjacent to pairs of adjacent earlier vertices. Equivalently, 2-trees are obtained by repeatedly replacing edges by triangles. These graphs are planar and include the maximal outerplanar graphs [20]; their subgraphs are the partial 2-trees, the graphs of treewidth ≤ 2 [23]. The graphs we use in our lower bound are a special case of this construction where we apply the edge replacement process simultaneously to all edges of a smaller graph of the same type. We define the first graph T_0 in our sequence of graphs to be the graph K_2, and then for all i > 0 we define T_i to be the graph obtained by replacing all edges of T_{i-1} by triangles. It seems natural to call these complete 2-trees, by analogy to complete trees (whose leaves are repeatedly replaced by stars for a given number of levels), but we have been unable to find this usage in the literature. The graphs T_i for i ≤ 3 are depicted in Figure 1.

Lemma 1. For all i ≥ 0, the graph T_i has 3^i edges and (3^i + 3)/2 vertices.

Proof. The bound on the number of edges follows from the fact that each replacement of edges by triangles triples the number of edges. The bound on the number of vertices follows easily by induction on i, using the observations that each edge of T_{i-1} leads to a newly added vertex in T_i and that (3^{i-1} + 3)/2 + 3^{i-1} = (3^i + 3)/2.
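Lemma 1's counts are easy to confirm by generating T_i directly. The following sketch (our own code, with an arbitrary vertex numbering) performs the simultaneous edge-to-triangle replacement and checks the edge and vertex counts for small i.

```python
def complete_2_tree(i):
    """Edge list of T_i: start from K_2 and replace every edge by a
    triangle, i times. Vertices are numbered consecutively as created."""
    edges, n = [(0, 1)], 2
    for _ in range(i):
        new_edges = []
        for (u, v) in edges:
            r = n                       # the new apex vertex of the triangle
            n += 1
            new_edges += [(u, v), (u, r), (v, r)]
        edges = new_edges
    return n, edges

# Counts match Lemma 1: 3^i edges and (3^i + 3)/2 vertices.
for i in range(6):
    n, edges = complete_2_tree(i)
    assert len(edges) == 3 ** i and n == (3 ** i + 3) // 2
```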
[Figure 2: A parametric spanning tree problem on a single triangle pqr, with edge weights f_1 = λ - 1 (edge pr), f_2 = 4 - λ (edge qr), and f_3 = 3 (edge pq), and the graph of the bottleneck edge weight min(max(f_1, f_2), f_3) on the path from p to q in the parametric spanning tree, as a function of the parameter λ. Depending on λ, the path prq has bottleneck f_1 or f_2, and the path pq has bottleneck f_3.]

It follows that in the parametric case, replacing an edge pq by a triangle pqr, with linear parametric weights on each triangle edge, causes that edge to behave as if it had a nonlinear, piecewise linear weight function attached to it: the function mapping the parameter λ to the bottleneck weight from p to q in triangle pqr. Figure 2 shows an example of three parametric weights on a triangle pqr and this bottleneck weight function, with the weights chosen so that the function has three breakpoints; a small numerical sketch of this bottleneck function appears after the list below. Clearly, we can perturb these three weight functions within small neighborhoods of their coefficients and obtain a qualitatively similar bottleneck weight function.

Weighted 2-trees

We now describe how to assign parametric weights to the edges of T_i to obtain our Ω(n log n) lower bound. As a base case, we may use any linear function as the weight of the single edge of T_0; it can have only one spanning tree, regardless of this choice. For T_i, with i > 0, we perform the following steps to assign its weights:

- Construct the weight functions for the edges of T_{i-1}, recursively.
- Apply a linear transformation to the parameter of these weight functions (the same transformation for each edge) so that, in the arrangement of lines representing the graphs of these weight functions, all crossings occur in the interval [0, 1] of λ-coordinates. Additionally, scale these weight functions by a sufficiently small factor so that, within this interval, they are close enough to the λ-axis, for a meaning of "close enough" to be specified below.
- Construct T_i by replacing each edge pq in T_{i-1} by a triangle pqr, with a new vertex for each triangle. Color the three edges of each triangle red, blue, and green, as in Figure 2(left), with pq colored green and the other two edges colored red and blue (choosing arbitrarily which one to color red and which one to color blue).
- Give each edge of T_i a transformed copy of the weight function of the corresponding edge of T_{i-1}, transformed as follows:
  • For a green edge pq, corresponding to an edge of T_{i-1} with weight function f(λ), give pq the weight function f(λ - 4.5) + 3. This transformation shifts the part of the weight function where the crossings with other green edges occur to be close to the right green segment of Figure 2(right).
  • For a red edge pr, corresponding to an edge pq of T_{i-1} with weight function f(λ), give pr the weight function f(3.75 - λ) + λ - 1. This transformation shifts the part of the weight function where the crossings with other red edges occur to be close to the red segment of Figure 2(right), and (by negating λ in the argument to f) reverses the ordering of the crossings within that region.
  • For a blue edge qr, corresponding to an edge pq of T_{i-1} with weight function f(λ), give qr the weight function f(λ - 1.25) + 4 - λ. This transformation shifts the part of the weight function where the crossings with other blue edges occur to be close to the blue segment of Figure 2(right).
- Perturb all of the weight functions, if necessary, so that all crossings of two weight functions have different λ-coordinates, without changing the left-to-right ordering of the crossings between any one weight function and the rest of them.

This construction is depicted schematically, in the (λ, weight) plane, in Figure 3. We are now ready to define what it means for the weight scaling factor to be small enough, so that the scaled weight functions are "close enough" to the λ-axis: as shown in the figure, the left-to-right ordering of the crossings of the lines graphing the weight functions should be:
1. All crossings of blue with green lines
2. All crossings of two blue lines, in one copy of the recursive construction
3. All crossings of blue with red lines
4. All crossings of two red lines, in a second (reversed) copy of the recursive construction
5. All crossings of red with green lines
6. All crossings of two green lines, in the third copy of the recursive construction
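As promised above, here is a small numerical sketch (ours) of the bottleneck weight function of Figure 2, min(max(f_1, f_2), f_3); its three breakpoints at λ = 1, 2.5, and 4 separate its four linear pieces.

```python
f1 = lambda lam: lam - 1        # edge pr
f2 = lambda lam: 4 - lam        # edge qr
f3 = lambda lam: 3.0            # edge pq

def bottleneck_pq(lam):
    # either go p-r-q (bottleneck max(f1, f2)) or directly p-q (bottleneck f3)
    return min(max(f1(lam), f2(lam)), f3(lam))

for lam in [0.0, 1.5, 2.5, 3.5, 5.0]:
    print(lam, bottleneck_pq(lam))
# pieces: 3 for lam < 1, then 4 - lam, then lam - 1, then 3 again for lam > 4
```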
Our construction automatically places all monochromatic crossings into disjoint unit-length intervals with these orderings. The bichromatic crossings of Figure 2 are separated from these unit-length intervals by a horizontal distance of at least 0.25, and sufficiently small values of the scaling factor will cause the bichromatic crossings of T_i to be close to the positions of the crossings with the same colors in Figure 2. Therefore, by choosing the scaling factor small enough, we can ensure that the crossing ordering described above is obtained. Figure 4 depicts this construction for T_2.

We observe that, within each of the unit-length intervals containing a copy of the recursive construction, the bottleneck edges for each triangle pqr in the construction of T_i are exactly the ones of the color for that copy of the recursive construction, and that within these intervals, the minimum non-bottleneck edge in each triangle does not change. Therefore, by Lemma 2, the changes in the sequence of parametric minimum spanning trees within these intervals exactly correspond to the changes in the trees of T_{i-1} from the recursive construction.

Lemma 3. For weights constructed as above, the number of distinct parametric minimum spanning trees for T_i is at least as large as N(i) = (i · 3^i)/2 + (3^i + 3)/4.

Proof. We prove by induction on i that the number of trees is at least as large as the solution to the recurrence N(i) = 3N(i - 1) + (3^i - 3)/2. To prove this, it is easier to count the number of breakpoints, values of λ at which the tree structure changes; the number of trees is the number of breakpoints plus one. In each copy of the recursive construction, the number of breakpoints is exactly N(i - 1) - 1, so the total number of breakpoints appearing in these three copies is 3N(i - 1) - 3. Additional breakpoints happen within the ranges of values of λ at which (in the (λ, weight) plane) pairs of lines of two different colors cross. Because of the reversal of the red copy of the recursive construction, the minimum spanning trees immediately to the left and right of these regions of bichromatic crossings correspond to the same trees in T_{i-1}: the bottleneck edges that are included in these minimum spanning trees come from the same triangles, but with different colors. In the regions where the green lines cross lines of other colors, the minimum non-bottleneck edge in each triangle does not change, so each green bottleneck edge in the minimum spanning tree must be exchanged for a red or blue one. Each change to a tree within such a crossing region removes a single edge from the minimum spanning tree and replaces it with another single edge, the two edges whose two lines cross at the λ-coordinate of that change. Therefore, no matter what sequence of changes is performed, to exchange all green bottleneck edges for all red or blue ones requires a number of crossings equal to the number of edges in the minimum spanning tree of T_{i-1}, which is (3^{i-1} + 1)/2 by Lemma 1. We get this number of breakpoints at the region where the green and blue lines cross, and the same number at the region where the red and green lines cross.

The analysis of the number of breakpoints at the region where the blue and red lines cross is similar, but slightly different. Immediately to the left and right of this region, the bottleneck edge in each triangle and the minimum non-bottleneck edge in the triangle are red and blue, but in a different order to the left and to the right. Therefore, in triangles where the bottleneck edge is part of the minimum spanning tree (as is always the minimum non-bottleneck edge), nothing changes. However, in triangles where the bottleneck edge is not part of the minimum spanning tree, there is a change, to the minimum non-bottleneck edge, from before this crossing region to after it. These triangles correspond to edges of T_{i-1} which do not belong to the minimum spanning tree (for the parameter values in this range), of which there are (3^{i-1} - 1)/2 by Lemma 1. By the same argument as before, the crossing region must contain at least this many breakpoints.

Adding together the 3N(i - 1) - 3 breakpoints from the recursive copies, the (3^{i-1} + 1)/2 breakpoints for each of the green-red and green-blue crossing regions, the (3^{i-1} - 1)/2 breakpoints for the red-blue crossing region, and +1 to convert numbers of breakpoints to numbers of distinct trees, and simplifying, gives the right hand side of the recurrence. A straightforward induction shows that the solution to the recurrence is the formula given in the statement of the lemma.

For i = 0, 1, 2, . . . , the number of trees given by this formula is 1, 3, 12, 48, 183, 669, 2370, 8202, 27885, 93495, . . . For instance, T_1 has three trees with the weighting given in Figure 2: the bottleneck function shown in the figure has four linear pieces, but the red and blue pieces both correspond to the same tree, with a different edge of the path prq as the bottleneck edge. Figure 4 shows the 12 trees for T_2.
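The recurrence and its closed form can be checked mechanically; the following sketch (ours) verifies that N(i) = (i · 3^i)/2 + (3^i + 3)/4 solves N(i) = 3N(i - 1) + (3^i - 3)/2 with N(0) = 1 and reproduces the sequence above.

```python
def closed_form(i):
    # exact integer arithmetic: (2*i*3^i + 3^i + 3) is always divisible by 4
    return (2 * i * 3**i + 3**i + 3) // 4

n = 1                                    # N(0) = 1
for i in range(1, 12):
    n = 3 * n + (3**i - 3) // 2          # the recurrence from Lemma 3
    assert n == closed_form(i)
print([closed_form(i) for i in range(6)])   # [1, 3, 12, 48, 183, 669]
```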
Packing into dense graphs

The lower bound obtained from Lemma 3 applies only to sparse graphs, where the numbers of vertices and edges are within constant factors of each other. However, we want a bound that applies more generally, for graphs with significantly more edges than vertices. The other direction, for graphs with significantly fewer edges than vertices, is less interesting: to achieve many fewer edges than vertices, it is necessary to allow disconnected graphs and to consider minimum spanning forests instead of minimum spanning trees, but with these modifications one can obtain a lower bound simply by adding isolated vertices to the construction of Lemma 3. To achieve many more edges than vertices, we use the following construction for packing many instances of a sparse lower bound graph into a single denser graph. It does not require any detailed knowledge of the structure of the sparse graph.

Lemma 4. Let G be a parametrically weighted graph with N vertices and M edges, whose sequence of parametric minimum spanning trees has length T, and let k be a positive integer satisfying k ≤ M. Then there is a parametrically weighted graph H with N + 3M vertices and (2k + 2)M edges whose sequence of parametric minimum spanning trees has length at least 2kT.

Proof. We construct H from G in the following steps, illustrated in Figure 5.
- Number the edges of G as e_0, e_1, . . . , e_{M-1} arbitrarily.
- Subdivide each edge e_i of G, connecting two vertices u and v, into a four-edge path u-a_i-b_i-c_i-v. (It is arbitrary which vertex of this path we call a_i and which we call c_i.)
- Add additional edges from b_i to a_j and c_j, for each i and each j = i + 1, i + 2, . . . , i + k - 1 mod M.

Given this construction, we define subgraphs H_j as follows:
- H_0 consists of all edges connecting vertices of G to new vertices a_i or c_i.
- H_j consists of all edges from b_i to a_{i+j-1} or c_{i+j-1}, for all i, with indexes taken modulo M.

Then, for j = 1, 2, . . . , k, the graph H_0 ∪ H_j is isomorphic to a subdivision of G, with H_0 ∪ H_1 being the subdivision we used to construct H and the others obtained in the same way but with permuted connections.
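A sketch of this combinatorial part of the construction (ours; the function pack and the tuple labels are hypothetical conventions) builds H from G and checks the vertex and edge counts of Lemma 4.

```python
def pack(G_vertices, G_edges, k):
    """Subdivide each edge e_i of G into u - a_i - b_i - c_i - v, then connect
    b_i to a_j and c_j for the next k-1 indices j (mod M)."""
    M = len(G_edges)
    H_edges = []
    for i, (u, v) in enumerate(G_edges):
        a, b, c = ('a', i), ('b', i), ('c', i)
        H_edges += [(u, a), (a, b), (b, c), (c, v)]   # edges of H_0 and H_1
        for d in range(1, k):                         # edges of H_2, ..., H_k
            j = (i + d) % M
            H_edges += [(b, ('a', j)), (b, ('c', j))]
    return H_edges

G_vertices = [0, 1, 2, 3]
G_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]            # a 4-cycle, as in Figure 5
H = pack(G_vertices, G_edges, k=3)
vertices = {x for e in H for x in e}
assert len(vertices) == len(G_vertices) + 3 * len(G_edges)   # N + 3M
assert len(H) == (2 * 3 + 2) * len(G_edges)                  # (2k + 2)M
```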
As in Lemma 3, we flatten the arrangement of lines for the weighting of G so that its crossings all lie within a small neighborhood of the unit interval of the λ-axis, without changing its sequence of parametric minimum spanning trees. We then apply linear transformations to the system of weights for the edges in each copy H_j with j > 0, as detailed below, while using small-enough weights for all edges in H_0 so that these edges belong to all minimum spanning trees for parameters in the range covered by the transformed unit intervals, as shown in Figure 6.

More specifically, for each j > 0 we use one transformed copy of the weights in G for the a-b edges in H_j, and a second transformed copy for the b-c edges, arranged so that the transformed unit intervals containing the crossings within each copy project to disjoint intervals of the λ-axis, and so that all crossings of the a-b edges appear above all lines for the b-c edges and vice versa. Therefore, in the graph H_0 ∪ H_j, the parametric trees in the parameter range where the a-b edges cross each other consist of all b-c edges (because those have smaller weight than the a-b edges in each path) together with a subset of the a-b edges corresponding to a spanning tree of G. Because we copied and transformed the weights of G for the a-b edges in this parameter range, we obtain T distinct trees of this type. To arrange the a-b and b-c parametric weights for H_j in this fashion, we transform them so that the a-b weights lie near the line w = 3 - λ, with crossings in the range λ ∈ [1, 2], and so that the b-c weights lie near the line w = λ - 3, with crossings in the range λ ∈ [4, 5]. Then, we transform and flatten these combined weights of H_j, so that they again lie near the λ-axis with all crossings of edges of either type in the range [0, 1].

We arrange the sets of lines associated with H_1, H_2, etc., so that the lines from each H_j pass above the crossings for each other H_{j'}, j' ≠ j, and so that the range of parameters within which H_j has the lowest lines contains the two subranges where its a-b lines cross and where its b-c lines cross, again as shown in the figure. We may do this by finding a convex-downward polygonal chain with k sides (for instance the upper part of a regular 2k-gon), in which all sides project to a range of λ-coordinates of more than unit length, and by transforming the weights of each H_j so that the unit interval of the λ-axis, near which all crossings of these weights occur, is transformed to the interior of one of the sides of this polygonal chain. Figure 6 shows the weights for three subgraphs H_1, H_2, and H_3, transformed in this way so that they are near the upper three sides of a hexagon. The weights for H_0 can be chosen to be near a horizontal line, below all crossings of the other weight functions, as also shown in the figure. Therefore, within these subranges, the parametric minimum spanning trees for all of H will be the same as the trees for H_0 ∪ H_j, because H_0 ∪ H_j spans H and has lower edge weights than any of the remaining edges. With this arrangement, we get 2kT distinct parametric minimum spanning trees, 2T for each H_j with j > 0, as well as additional trees that are not counted in the lemma.
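The proof of Theorem 1 below chooses parameters i and k from n and m; the following sketch (ours, assuming n and m are large enough that at least one value of i is feasible) mirrors that choice.

```python
def choose_parameters(n, m):
    """Largest i with N + 3M <= n and 4M <= m, where N = (3^i + 3)/2 and
    M = 3^i as in Lemma 1; then the largest k with (2k + 2)*M <= m."""
    i = 0
    while True:
        N_next, M_next = (3**(i + 1) + 3) // 2, 3**(i + 1)
        if N_next + 3 * M_next <= n and 4 * M_next <= m:
            i += 1
        else:
            break
    M = 3**i
    k = (m // M - 2) // 2
    return i, k

print(choose_parameters(1000, 10**5))   # e.g. (5, 204)
```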
With this, we are ready to prove our main result:

Theorem 1. There exists a constant C such that the following is true. Let n and m be integers with n > 0 and 2n - 3 ≤ m ≤ n(n - 1)/2. Then there exists a parametrically weighted graph with n vertices and m edges, with at least Cm log n parametric minimum spanning trees.

Proof. Let G = T_i, N = (3^i + 3)/2, and M = 3^i, with i chosen as large as possible so that N + 3M ≤ n and 4M ≤ m, and choose k as large as possible so that (2k + 2)M ≤ m; then M = Θ(n) and k = Θ(m/n). Apply Lemma 3 to give weights to G so that it has Ω(n log n) parametric minimum spanning trees, and apply Lemma 4 to construct a parametrically weighted graph H with N + 3M vertices and (2k + 2)M edges that has Ω(m log n) parametric minimum spanning trees. If necessary, add leaf vertices to H to increase its number of vertices to n, and then add high-weight edges to increase its number of edges to m without affecting this sequence of parametric spanning trees.

Conclusions

We have shown that the number of parametric minimum spanning trees can be Ω(m log n) in the worst case, improving a 25-year-old Ω(mα(n)) lower bound. Because of the structure of the graphs used in our lower bound construction, the new lower bound applies as well to the special cases of planar graphs and of bounded-treewidth graphs, both of which can have Ω(n log n) parametric minimum spanning trees. However, our new lower bound is still far from the O(mn^{1/3}) upper bound, so there is plenty of room for additional improvement.

Another related question concerns the parametric bottleneck shortest path problem, a parametric version of the problem of finding a path between two specified vertices that minimizes the maximum edge weight on the path. In the non-parametric version of the problem, a minimum spanning tree path is an optimal path, although faster algorithms are possible, and the problem is also of interest in the case of directed graphs [14]. The same problem is also known, in the equivalent maximin form, as the widest path problem, where an optimal solution can be found as a maximum spanning tree path [21]. The parametric versions of these problems differ somewhat: a breakpoint in the piecewise linear parametric minimum spanning tree function (the function mapping the parameter value λ to the weight of its minimum spanning tree) might not be a breakpoint in the parametric bottleneck shortest path function (the maximum weight of an edge on the bottleneck shortest path), or vice versa. However, the bottleneck breakpoints that look locally like the minimum of two linear functions do correspond to breakpoints of the minimum spanning tree problem. For this reason, any asymptotic lower bound on the parametric bottleneck shortest path problem would also be a lower bound for parametric minimum spanning trees, and any asymptotic upper bound on the parametric minimum spanning tree problem (including the known O(mn^{1/3}) bound) is also an upper bound on parametric bottleneck shortest paths. In fact, our previous Ω(mα(n)) lower bound also applies to parametric bottleneck shortest paths, but our new Ω(m log n) bound does not. Can we strengthen the Ω(mα(n)) bound for this problem?
Fig. 1. Recursively constructing a family of 2-trees T_i (here, i = 0, 1, 2, 3 in left-to-right order) by repeatedly replacing every edge of T_{i-1} by a triangle.

Fig. 3. Recursive construction for the parametric weight functions of the graphs T_i, shown here as an arrangement of lines in a plane whose horizontal coordinate is the parameter λ and whose vertical coordinate is the edge weight at that parameter value. The reversed text in the central recursive construction indicates that the construction is reversed left-to-right relative to the other two copies.

Fig. 4. T_2 (upper right) as parametrically weighted in our construction, with the graphs of each weight function shown as lines in the (λ, w) plane (upper left), and the resulting sequence of 12 parametric minimum spanning trees (bottom). The marked yellow crossings of pairs of lines correspond to breakpoints in the sequence of trees.

Fig. 5. The construction of Lemma 4, applied to a graph G with four vertices and four edges (left), with the parameter k = 3. The central graph is a subdivision of each edge of this graph into a four-edge path, with vertices labeled as shown, and the graph on the right is the final construction H, with the colors and textures of edges indicating the partition of its edges into four subgraphs H_0 (thin black edges), H_1 (thick yellow edges), H_2 (dotted blue edges), and H_3 (dashed red edges).

Fig. 6. An arrangement of lines for the weight functions of Lemma 4 with k = 3. The small rectangles indicate transformed neighborhoods of the unit λ-interval, containing all crossings of the bundle of lines associated with each subgraph.
References

[1] Pankaj K. Agarwal, David Eppstein, Leonidas J. Guibas, and Monika R. Henzinger. Parametric and kinetic minimum spanning trees. In Proc. 39th IEEE Symp. Foundations of Computer Science (FOCS '98), pages 596-605, 1998. doi:10.1109/SFCS.1998.743510.
[2] Josiah Carlson and David Eppstein. The weighted maximum-mean subtree and other bicriterion subtree problems. In Proc. 10th Scand. Worksh. Algorithm Theory (SWAT 2006), volume 4059 of Lect. Notes Comput. Sci., pages 397-408. Springer, 2006. doi:10.1007/11785293_37.
[3] Patricia J. Carstensen. Parametric cost shortest path problems. Unpublished memo, Bellcore, 1984.
[4] Lorenzo Castelli, Martine Labbé, and Alessia Violin. Network pricing problem with unit toll. Networks, 69(1):83-93, 2017. doi:10.1002/net.21701.
[5] Sourav Chakraborty, Eldar Fischer, Oded Lachish, and Raphael Yuster. Two-phase algorithms for the parametric shortest path problem. In Jean-Yves Marion and Thomas Schwentick, editors, Proc. 27th International Symposium on Theoretical Aspects of Computer Science (STACS 2010), volume 5 of LIPIcs, pages 167-178. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2010. doi:10.4230/LIPIcs.STACS.2010.2452.
[6] Timothy M. Chan. Finding the shortest bottleneck edge in a parametric minimum spanning tree. In Proc. 16th ACM-SIAM Symposium on Discrete Algorithms (SODA 2005), pages 917-918. SIAM, 2005. URL: https://dl.acm.org/citation.cfm?id=1070432.1070561.
[7] Tamal K. Dey. Improved bounds for planar k-sets and related problems. Discrete Comput. Geom., 19(3):373-382, 1998. doi:10.1007/PL00009354.
[8] Moshe Eben-Chaime. Parametric solution for linear bicriteria knapsack models. Manag. Sci., 42(11):1565-1575, 1996. doi:10.1287/mnsc.42.11.1565.
[9] David Eppstein. Geometric lower bounds for parametric matroid optimization. Discrete Comput. Geom., 20(4):463-476, 1998. doi:10.1007/PL00009396.
[10] David Eppstein. The parametric closure problem. ACM Trans. Algorithms, 14(1):A2:1-A2:22, 2018. doi:10.1145/3147212.
[11] Jeff Erickson. Maximum flows and parametric shortest paths in planar graphs. In Moses Charikar, editor, Proc. 21st ACM-SIAM Symposium on Discrete Algorithms (SODA 2010), pages 794-804. SIAM, 2010. doi:10.1137/1.9781611973075.65.
[12] David Fernández-Baca and Giora Slutzki. Linear-time algorithms for parametric minimum spanning tree problems on planar graphs. Theoret. Comput. Sci., 181(1):57-74, 1997. doi:10.1016/S0304-3975(96)00262-9.
[13] David Fernández-Baca, Giora Slutzki, and David Eppstein. Using sparsification for parametric minimum spanning tree problems. Nordic J. Comput., 3(4):352-366, 1996.
[14] Harold N. Gabow and Robert E. Tarjan. Algorithms for two bottleneck optimization problems. J. Algorithms, 9(3):411-417, 1988. doi:10.1016/0196-6774(88)90031-4.
[15] Alberto Giudici, Pascal Halffmann, Stefan Ruzika, and Clemens Thielen. Approximation schemes for the parametric knapsack problem. Inform. Process. Lett., 120:11-15, 2017. doi:10.1016/j.ipl.2016.12.003.
[16] Dan Gusfield. Bounds for the parametric minimum spanning tree problem. In Proceedings of the West Coast Conference on Combinatorics, Graph Theory and Computing (Humboldt State Univ., Arcata, Calif., 1979), volume 26 of Congress. Numer., pages 173-181. Utilitas Math., Winnipeg, Manitoba, 1980.
[17] Michael Holzhauser and Sven O. Krumke. An FPTAS for the parametric knapsack problem. Inform. Process. Lett., 126:43-47, 2017. doi:10.1016/j.ipl.2017.06.006.
[18] Naoki Katoh. Bicriteria network optimization problems. IEICE Trans. Fundamentals of Electronics, Communications and Computer Sciences, E75-A:321-329, 1992.
[19] Naoki Katoh and Takeshi Tokuyama. Notes on computing peaks in k-levels and parametric spanning trees. In Diane L. Souvaine, editor, Proc. 17th Symposium on Computational Geometry (SoCG 2001), pages 241-248. ACM, 2001. doi:10.1145/378583.378675.
[20] Sandra L. Mitchell. Linear algorithms to recognize outerplanar and maximal outerplanar graphs. Inform. Process. Lett., 9(5):229-232, 1979. doi:10.1016/0020-0190(79)90075-9.
[21] Maurice Pollack. The maximum capacity route through a network. Operations Res., 8:733-736, 1960. doi:10.1287/opre.8.5.733.
[22] Robert E. Tarjan. Data Structures and Network Algorithms, volume 44 of CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics, 1983. doi:10.1137/1.9781611970265.
[23] Joseph A. Wald and Charles J. Colbourn. Steiner trees, partial 2-trees, and minimum IFI networks. Networks, 13(2):159-167, 1983. doi:10.1002/net.3230130202.
A Vergleichsstellensatz of Strassen's Type for a Noncommutative Preordered Semialgebra through the Semialgebra of its Fractions

Tao Zheng (Key Laboratory of Mathematics Mechanization, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China) and Lihong Zhi (Key Laboratory of Mathematics Mechanization, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China)

19 May 2023 (arXiv:2204.02577)

Keywords: semiring, Strassen's theorem, Positivstellensatz

Abstract. Preordered semialgebras and semirings are two algebraic structures frequently occurring in real algebraic geometry. They have many interesting and promising applications in the fields of probability theory, theoretical computer science, quantum information theory, etc. Strassen's Vergleichsstellensatz and its generalized versions, analogs of the well-known Positivstellensätze, play important roles in these applications. While these Vergleichsstellensätze accept only a commutative setting (for the semirings in question), we prove in this paper a noncommutative version of one of the generalized Vergleichsstellensätze proposed by Fritz [Comm. Algebra, 49 (2) (2021), 482-499]. The most crucial step in our proof is to define the semialgebra of the fractions of a noncommutative semialgebra, which generalizes (at least some of) the definitions in the literature. Our new theorem characterizes the relaxed preorder on a noncommutative semialgebra induced by all monotone homomorphisms to R_+ via three other equivalent conditions on the semialgebra of its fractions.

1. Introduction

A (noncommutative) semiring S is a set together with binary operations +, * : S × S -> S, respectively called addition and multiplication.
It also contains distinguished elements 0_S, 1_S ∈ S such that (S, +, 0_S) is a commutative monoid, (S, *, 1_S) is a (not necessarily commutative) monoid, and the multiplication distributes over the addition [1]. A (noncommutative) semialgebra S is a (noncommutative) semiring together with a (nonnegative) scalar multiplication R_+ × S -> S, (r, x) -> r · x, that is a commutative monoid homomorphism from (R_+, +) or (S, +) to (S, +) in the first or the second argument respectively, and satisfies the following additional laws: i) 1 · x = x; ii) (r · x) * (s · y) = (rs) · (x * y) [2, Definition 3.5].

A semialgebra S is zero-sum-free if for any a, b ∈ S, a + b = 0_S implies a = b = 0_S. It is zero-divisor-free if for any a, b ∈ S, a * b = 0_S implies a = 0_S or b = 0_S. In a zero-divisor-free semialgebra S, r · a = 0_S implies r = 0 or a = 0_S for any r ∈ R_+ and a ∈ S: indeed, r · a = 0_S implies (r · 1_S) * a = 0_S, hence r · 1_S = 0_S or a = 0_S. If a ≠ 0_S and r ≠ 0, then 1_S = r^{-1} · (r · 1_S) = r^{-1} · 0_S = 0_S, hence a = a * 1_S = a * 0_S = 0_S, which is a contradiction.

A preorder relation (or preorder) ≤ on a set X is a binary relation that is both reflexive and transitive ([1], page 119). For x, y ∈ X, we sometimes write "x ≥ y" instead of "y ≤ x", and write "x < y" or "y > x" for the condition "x ≤ y and x ≠ y", throughout the paper. A preorder relation ≤ on X is considered trivial if x ≤ y holds for any x, y ∈ X. A preordered semiring [3, Definition 2.2] is a semiring with a preorder relation ≤ such that for all a, x, y in the semiring, x ≤ y implies

a + x ≤ a + y,  a * x ≤ a * y,  x * a ≤ y * a.   (1)

Similarly, a preordered semialgebra is a semialgebra with a preorder relation ≤ such that x ≤ y implies the inequalities in (1). Note that this also implies the inequality r · x ≤ r · y for any r ∈ R_+, since r · x = (r · 1_S) * x ≤ (r · 1_S) * y = r · y.

Let E and K be two semialgebras. A map f from E to K is a semialgebra homomorphism (or simply homomorphism) if f(0_E) = 0_K, f(1_E) = 1_K, and for any x, y ∈ E and any r ∈ R_+,

f(x + y) = f(x) + f(y),  f(x * y) = f(x) * f(y),  f(r · x) = r · f(x).   (2)

If ≤_E and ≤_K are two preorder relations on E and K respectively, then a map f from E to K is monotone (w.r.t. ≤_E and ≤_K) if for any x, y ∈ E, x ≤_E y implies f(x) ≤_K f(y).

The following definition of a power universal element in a noncommutative preordered semialgebra is a natural generalization of [3, Definition 2.8].

Definition 1.1. Let S be a noncommutative preordered semialgebra with 1_S ≥ 0_S. An element u ∈ S with u ≥ 1_S is power universal (w.r.t. the preorder ≤) if for every nonzero x ∈ S there is a number k ∈ N such that

u^k * x ≥ 1_S,  x * u^k ≥ 1_S  and  u^k ≥ x.   (3)

Recently, Strassen's separation theorem [4, 5] for preordered semirings (called "Strassen's Vergleichsstellensatz" [6]) has been generalized tremendously by Fritz and Vrana [2, 3, 6, 7] due to its various applications to real algebraic geometry, probability theory, theoretical computer science, and quantum information theory [3, 5, 8, 9, 10, 11, 12]. Strassen's Vergleichsstellensatz and its generalizations are analogs of the Positivstellensätze in real algebraic geometry, which focus mainly on ordered rings and fields. Among them, Fritz's results [3, 6] recover the classical Positivstellensatz of Krivine-Kadison-Dubois, which suggests that the theory of semirings and semialgebras can provide us with new insights into real algebraic geometry.

While Strassen's Vergleichsstellensatz and its generalizations accept only the commutative setting for the semirings under consideration, we prove in this paper a noncommutative version of one of Fritz's generalized Vergleichsstellensätze in [3]. As in the commutative case, the main assumption on the preordered semialgebra S considered in our theorem is:

Assumption 1.2. The inequality "1_S ≥ 0_S" holds in the preordered semialgebra S, and there is a power universal element u ∈ S.

The assumption of the existence of a power universal element is similar to the Archimedean condition in traditional real algebraic geometry. Moreover, the semialgebra S considered in this paper also satisfies the following assumption:

Assumption 1.3. The semialgebra S is both zero-sum-free and zero-divisor-free, and "1_S ≠ 0_S" holds.

Then our main result can be stated as follows:

Theorem 1.4. Let (S, ≤) be a preordered semialgebra satisfying Assumptions 1.2 and 1.3, with a power universal element u ∈ S. Then, for every nonzero x, y ∈ S, the following are equivalent:
(a) f(x) ≥ f(y) for every monotone semialgebra homomorphism f : S -> R_+.
(b) For every real number ε > 0, there is a finite number m ∈ N such that [x] + Σ_{j=0}^{m} ε^{j+1} · [u]^j ≽ [y].
(c) For every real number r ∈ R_+ and every real number ε > 0, there is a polynomial p ∈ Q_+[X] such that p(r) ≤ ε and [x] + p([u]) ≽ [y].
(d) For every real number r ∈ R_+ and every real number ε > 0, there is a polynomial p ∈ Q_+[X] such that p(r) ≤ 1 + ε and p([u]) * [x] ≽ [y].
The elements [x], [y] and [u] above are the images of x, y and u under the canonical map (Definition 2.9) from S to the semialgebra of its fractions, and the preorder "≽" is a preorder on the semialgebra of the fractions which is directly derived from the original preorder "≤" on S (Definition 3.1).

Example 1.5. Let R_+⟨x_1, . . . , x_n⟩' be the set of all noncommutative polynomials in n variables with coefficients in R_+ whose constant terms are nonzero. Then the semialgebra S = {0} ∪ R_+⟨x_1, . . . , x_n⟩' equipped with the coefficientwise preorder is one of the simplest examples of preordered semialgebras. It satisfies Assumptions 1.2-1.3, with the element u = 2 + Σ_{i=1}^{n} 2 · x_i being power universal. Theorem 1.4 can therefore be applied.

The main technical difficulty in proving Theorem 1.4 is to define properly the semialgebra of the fractions of a noncommutative semialgebra: In the commutative case, every fraction of elements in a commutative semialgebra/semiring can be written in the simple form "a/b" for two elements a, b in the semialgebra/semiring. However, "fractions" in the noncommutative case can be of more complicated forms, e.g., "d^{-1} * g * h^{-1}" and "(d + g^{-1})^{-1}", which can no longer be written in the form "a/b". Our definition for the semialgebra of the fractions of a noncommutative semialgebra in Subsection 2.3 generalizes the definition in the literature, and we also show that it coincides with the usual definition in the commutative case (Proposition B.4).

Now that theories on commutative semirings and semialgebras help us better understand commutative real algebraic geometry, it is reasonable to expect the same thing in the noncommutative case. Moreover, we hope the theory of noncommutative semirings and semialgebras (including our new theorem) can be applied to quantum information theory and other related areas.

The main difference between Theorem 1.4 and Fritz's Vergleichsstellensatz ([3, Theorem 2.12]) is that, in the commutative case, the corresponding inequalities in conditions (b)-(d) can be rewritten as inequalities in the original semialgebra S (Proposition C.2), while in Theorem 1.4, one may find it non-trivial to do the same thing.

In the next section, we define the semialgebra of the fractions of a noncommutative semialgebra that satisfies Assumption 1.3. This definition is the cornerstone of the main theorem of our paper. Section 3 indicates how a preorder relation on a semialgebra derives another preorder relation on the semialgebra of its fractions. Section 4 is devoted to interpreting how an R_+-valued semialgebra homomorphism can be extended from a semialgebra to the semialgebra of its fractions. In Section 5, we explain in detail how an R-linear space can be constructed from a (noncommutative) semialgebra; this is mentioned in [3], but we include it for rigorousness and completeness. Finally, in Section 6, we present the proof of Theorem 1.4.

2. The Semialgebra of the Fractions of a Noncommutative Semialgebra

Throughout the paper, the letter "S" stands for a noncommutative semialgebra that satisfies Assumption 1.3. The following example introduces a simple method to define the semialgebra of nonnegative rational numbers Q_+ from the semialgebra of natural numbers N; we then define the semialgebra of the fractions of S similarly.

Example 2.1. i) Consider the set U = {n ⊘ m | n, m ∈ N} of all formal fractions of natural numbers, where ⊘ is a formal division symbol. ii) There are obviously some "illegal" expressions like 3 ⊘ 0 and 0 ⊘ 0, but we only care about the set of "legal" expressions W = {n ⊘ m | n, m ∈ N, m > 0}.
iii) Since some expressions, e.g., 2 ⊘ 6 and 3 ⊘ 9, stand for the same rational number, we need an equivalence relation R on W telling whether two expressions are "equal". Here is a simple way to define R: n ⊘ m ∼_R i ⊘ j if and only if nj = mi. We then define Q_+ = W/R, which can be regarded as the semialgebra of the fractions of N. We can define the semialgebra of the fractions of S similarly.

2.1. Defining the set of formal rational expressions of the elements in S

The first step is to construct the set of all formal expressions of fractions (as the set U in Example 2.1). Since S is noncommutative, the "fractions" of S may have more complicated forms than just a/b. For instance, we should allow expressions like (2 · a * b + c^{-1})^{-1}, which contains multiplication, scalar multiplication, addition, and inversion. Thus, we define U to be the set of all finite formal expressions in the elements of S built with the formal addition "⊕", the formal multiplication "⊛", the formal scalar multiplication "⊙" and the formal inversion "^{-1}" (⊕, ⊛, and ⊙ are binary operations while ^{-1} is unary). To be precise, we have

Definition 2.2. The set U(S) (or simply U, if it causes no ambiguity) of formal rational expressions consisting of finitely many formal operations and elements in a semialgebra S refers to the set determined exactly by the following rules: i) S ⊂ U; ii) a ⊛ b ∈ U if and only if both a, b ∈ U; iii) a ⊕ b ∈ U if and only if both a, b ∈ U; iv) r ⊙ a ∈ U if and only if r ∈ R_+ and a ∈ U; v) a^{-1} ∈ U if and only if a ∈ U.

There is no such element as a ⊕ b ⊕ c in U since ⊕ is a binary operation, but there are elements of U of the form (a ⊕ b) ⊕ c or a ⊕ (b ⊕ c). By Definition 2.2, every element of U either contains no formal operations (i.e., it is in S) or is of exactly one of the following forms: r ⊙ a, a ⊛ b, a ⊕ b, a^{-1}. To illustrate, for s_i ∈ S, these are respectively some elements of U of those forms: 2 ⊙ (s_1 ⊕ s_2)^{-1}, 0_S ⊛ s_1, (0_S ⊛ s_1) ⊕ (0_R ⊙ s_2^{-1}), (0_S ⊛ s_1)^{-1}.

It is clear that the last expression is "illegal", since it is the inverse of an expression that "equals" zero, which makes no sense. Therefore, the "legal" expressions in U are those such that whenever they contain a sub-expression of the form a^{-1}, a is not an expression "equaling" zero. On the other hand, there are many expressions in U "equaling" zero, such as the second and the third ones shown above. The set of all "legal" expressions (denoted by W) is not as clear as in the commutative case of Example 2.1, since the set (denoted by O) of the expressions "equaling" zero is more complicated. These two sets have to be defined together because they "tangle" with each other: for any a ∈ W, a^{-1} should be in W if and only if a ∉ O, and for any a ∈ W, 0_S ⊛ a should be in O. The precise definition is as follows:

Definition 2.3. The sets of legal and null formal rational expressions of the elements in a semialgebra S, denoted by W(S) (or W) and O(S) (or O) respectively, refer to the subsets of U which are determined exactly by the following rules:
i) S ⊂ W and O ∩ S = {0_S};
for all a, b ∈ U,
ii) a ⊕ b ∈ W if and only if a, b ∈ W, and a ⊕ b ∈ O if and only if a, b ∈ O;
iii) a ⊛ b ∈ W if and only if a, b ∈ W, and a ⊛ b ∈ O if and only if one of a, b is in O and the other is in W;
for all a ∈ U and r ∈ R_+,
iv) r ⊙ a ∈ W if and only if a ∈ W, and r ⊙ a ∈ O if and only if a ∈ O or (r = 0_R and a ∈ W);
for all a ∈ U,
v) a^{-1} ∉ O, and a^{-1} ∈ W if and only if a ∈ W\O.

All these rules are natural and easy to understand: Rule i) means that elements of S are "legal" expressions and that 0_S is the only element of S that "equals" zero. The first conditions of Rules ii)-iv) mean that the "legal" expressions are closed with respect to the formal operations ⊕, ⊛ and ⊙, and that only "legal" expressions can generate other "legal" expressions. The second condition in Rule v) means that only the inverses of "legal" expressions not "equaling" zero are "legal" inverses. The second condition in Rule ii) says that the set of "zeros" is closed under the formal addition and that the zero-sum-free property holds. The second conditions in Rules iii)-iv) mean that zero times anything "legal" is again zero and that the zero-divisor-free property holds. A recursive membership test implementing these rules is sketched below.
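The rules of Definition 2.3 translate directly into a recursive membership test. The following sketch is our illustration, not part of the paper's formal development; expressions are modeled as nested Python tuples, with strings standing for elements of S and "0" playing the role of 0_S.

```python
def in_W(x):
    """Is x a legal expression?  Tuples encode ('+', a, b), ('*', a, b),
    ('.', r, a) for scalar multiplication, and ('inv', a)."""
    if not isinstance(x, tuple):
        return True                               # rule i): S is contained in W
    op = x[0]
    if op in ('+', '*'):
        return in_W(x[1]) and in_W(x[2])          # rules ii), iii)
    if op == '.':
        return in_W(x[2])                         # rule iv)
    if op == 'inv':
        return in_W(x[1]) and not in_O(x[1])      # rule v)

def in_O(x):
    """Does x 'equal' zero?"""
    if not isinstance(x, tuple):
        return x == "0"                           # rule i): O ∩ S = {0_S}
    op = x[0]
    if op == '+':
        return in_O(x[1]) and in_O(x[2])          # zero-sum-freeness
    if op == '*':
        return (in_O(x[1]) and in_W(x[2])) or (in_W(x[1]) and in_O(x[2]))
    if op == '.':
        return in_O(x[2]) or (x[1] == 0 and in_W(x[2]))
    return False                                  # rule v): inverses are never in O

s1, s2 = "s1", "s2"
print(in_W(('inv', ('+', s1, s2))))               # True, as in the text below
print(in_W(('inv', ('*', "0", s1))))              # False: inverse of a null expression
print(in_O(('*', "0", s1)))                       # True
```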
For example, taking s_1, s_2 to be nonzero elements of S, membership can be decided by applying the rules recursively:

(s_1 ⊕ s_2)^{-1} ∈ W ⇔ s_1 ⊕ s_2 ∈ W\O ⇔ s_1 ⊕ s_2 ∈ W ∧ s_1 ⊕ s_2 ∉ O ⇔ s_1, s_2 ∈ W ∧ (s_1 ∉ O ∨ s_2 ∉ O) ⇔ True.

The reasoning above uses only the rules in Definition 2.3, time after time. Whenever a rule is used, an operation in the expression (s_1 ⊕ s_2)^{-1} is reduced; when no operations remain, Rule i) in Definition 2.3 decides true or false. Here is another example:

(0_S ⊛ s_1)^{-1} ∈ W ⇔ 0_S ⊛ s_1 ∈ W\O ⇔ 0_S ⊛ s_1 ∈ W ∧ 0_S ⊛ s_1 ∉ O ⇔ (0_S ⊛ s_1 ∈ W) ∧ 0_S ∉ O ∧ s_1 ∉ O ⇔ False,

which means (0_S ⊛ s_1)^{-1} is "illegal". We can also decide whether an element of U is in O. For instance, we have (s_1 ⊕ s_2)^{-1} ∉ O according to Rule v) in Definition 2.3, and

s_1 ⊕ s_2 ∈ O ⇔ s_1, s_2 ∈ O ⇔ False,  and  0_S ⊛ s_1 ∈ O ⇔ (0_S ∈ O ∧ s_1 ∈ W) ∨ (0_S ∈ W ∧ s_1 ∈ O) ⇔ True.

The following proposition shows that all expressions "equaling" zero are "legal", which is important.

Proposition 2.5. The set O is a subset of W.

Proof. It suffices to show that for all i ∈ N and all a ∈ O, if a has i operations in it, then a ∈ W. This can be proved inductively (on i) using Definition 2.3.

The proposition below gives another definition for the sets W and O.

Proposition 2.6. Set W_0 = S and O_0 = {0_S}, and define the recursive sequences {W_i} and {O_i} as follows:

W_{i+1} = W_i ∪ (W_i ⊕ W_i) ∪ (W_i ⊛ W_i) ∪ (R_+ ⊙ W_i) ∪ (W_i\O_i)^{-1},   (6)
O_{i+1} = O_i ∪ (O_i ⊕ O_i) ∪ (O_i ⊛ W_i) ∪ (W_i ⊛ O_i) ∪ ({0_R} ⊙ W_i) ∪ (R_+ ⊙ O_i),   (7)

where W_i ⊕ W_i means {a ⊕ b | a, b ∈ W_i}, (W_i\O_i)^{-1} means {a^{-1} | a ∈ W_i\O_i}, and the other terms are understood similarly. Then W = ∪_{i=0}^∞ W_i and O = ∪_{i=0}^∞ O_i for the sets W and O in Definition 2.3.

Proof. Set Ŵ = ∪_{i=0}^∞ W_i and Ô = ∪_{i=0}^∞ O_i. It suffices to show that they satisfy all the rules in Definition 2.3, since the pair (W, O) of subsets of U satisfying those rules is unique. It is clear that Ŵ and Ô satisfy all those rules except possibly the second one in rule v). To show that they also satisfy that rule, it suffices to have W_i\Ô = W_i\O_i for any i ∈ N. This is a direct corollary of Lemma 2.7.

Lemma 2.7. For any i ∈ N and the sets W_i, O_i and Ô = ∪_{i=0}^∞ O_i defined in Proposition 2.6, we have W_i ∩ Ô = O_i.

Proof. It is clear that W_i ∩ Ô ⊃ O_i since O_i ⊂ W_i and O_i ⊂ Ô. The opposite containment follows by induction on the index i; we omit the details. The trick is that any w ∈ W_k ∩ Ô is of one of the forms a ⊕ b, a ⊛ b, and r ⊙ a, and in every case we have a, b ∈ W_{k-1}. Using the rules in Definition 2.3 for the set Ô, one finds that the induction step works.

In the rest of the paper, some proofs will be carried out based on the definition of the sets W and O given in Proposition 2.6.
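For legal expressions, the least index i with x ∈ W_i in the stratification of Proposition 2.6 is simply the nesting depth of the expression tree. A small sketch (ours, reusing the tuple encoding from the previous snippet; it is meaningful only for expressions already known to be legal):

```python
def level(x):
    """Least i with x in W_i, assuming x is legal (x in W)."""
    if not isinstance(x, tuple):
        return 0                                  # W_0 = S
    if x[0] == '.':
        return 1 + level(x[2])
    if x[0] == 'inv':
        return 1 + level(x[1])
    return 1 + max(level(x[1]), level(x[2]))      # '+' and '*'

print(level(('inv', ('+', "s1", "s2"))))          # 2: it first appears in W_2
```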
2.2. Constructing an equivalence relation on the set of "legal" expressions

As in Example 2.1, we define in this subsection an equivalence relation R(S) (or simply R) on the set W so that the quotient W/R becomes a semialgebra of the fractions of S in which every nonzero element is invertible; this is why it is called the semialgebra-oriented equivalence relation (Definition 2.8). The principle is to ensure that R includes only necessary relations, so that R is as small as possible and the semialgebra of fractions W/R can then be as general as possible.

On the one hand, there are no semialgebra laws in the set of "legal" expressions W; e.g., w_1 = (a ⊕ b) ⊕ c and w_2 = a ⊕ (b ⊕ c) are not the same. On the other hand, from Example 2.1 we know that (a, b) ∈ R means a = b in the semialgebra of fractions W/R. Therefore, we will have w_1 = w_2 (and thus the associative law of addition) in W/R if and only if we include the pair (w_1, w_2) into the equivalence relation R. Since we expect W/R to be a semialgebra, all pairs (like (w_1, w_2) mentioned above) which stand for any semialgebra axioms should necessarily be included in R. These pairs are in the sets listed below:

A_1 = {((a ⊕ b) ⊕ c, a ⊕ (b ⊕ c)), ((a ⊛ b) ⊛ c, a ⊛ (b ⊛ c)), (a ⊛ (b ⊕ c), (a ⊛ b) ⊕ (a ⊛ c)), ((b ⊕ c) ⊛ a, (b ⊛ a) ⊕ (c ⊛ a)), (a ⊕ b, b ⊕ a), (0_S ⊕ a, a), (a ⊕ 0_S, a) | a, b, c ∈ W},
A_2 = {(1_S ⊛ a, a), (a ⊛ 1_S, a), (0_S ⊛ a, 0_S), (a ⊛ 0_S, 0_S) | a ∈ W},
A_3 = {(r ⊙ (a ⊕ b), (r ⊙ a) ⊕ (r ⊙ b)), ((r +_R t) ⊙ a, (r ⊙ a) ⊕ (t ⊙ a)), (r ⊙ (a ⊛ b), (r ⊙ a) ⊛ b), (r ⊙ (a ⊛ b), a ⊛ (r ⊙ b)), ((rt) ⊙ a, r ⊙ (t ⊙ a)), (1_R ⊙ a, a), (0_R ⊙ a, 0_S) | r, t ∈ R_+, a, b ∈ W}.   (8)

Similarly, since (a ⊛ b)^{-1} and b^{-1} ⊛ a^{-1} are different expressions in W, we need to include into R the axioms involving the inversion:

A_4 = {(a^{-1} ⊛ a, 1_S), (a ⊛ a^{-1}, 1_S), ((a ⊛ b)^{-1}, b^{-1} ⊛ a^{-1}), (1_S^{-1}, 1_S), (s ⊙ a^{-1}, ((1/s) ⊙ a)^{-1}) | a, b ∈ W\O, s ∈ R_{>0}}.   (9)

Besides, we also expect the identities in S to remain valid in W/R. For instance, suppose s_1 = x_1 * x_2 + 2, s_2 = x_1 * x_2 * x_1 * x_2 + 2 and s_3 = 2 · x_1 * x_2 + 1 are elements of the semialgebra S defined in Example 1.5; then s_1^2 = s_2 + 2s_3 is an identity in S. This identity should also be true in the semialgebra of the fractions of S. To make this explicit, we define

Q = {w ∈ W | w does not contain the formal inversion};

then any w ∈ Q can be evaluated in the semialgebra S: one takes off all the "circles" of the symbols ⊕, ⊛ and ⊙, turning them into +, * and · (the operations in S) respectively, wherever they occur in the expression w. In this way, the expression w results in an element of S, denoted by w_S. To illustrate, let s_2 and s_3 be as above; then w = s_2 ⊕ (2 ⊙ s_3) is an expression without the formal inversion and w_S = s_2 + 2s_3, which is an element of S. Now the condition that identities in S remain valid in W/R requires us to include into R the following set of pairs:

A_5 = {(a, b) ∈ Q × Q | a_S = b_S}.   (10)

Finally, the expressions in the set O defined in Definition 2.3 should be regarded as zero. Thus we need to include into R the pairs in the set below:

A_6 = {(a, 0_S) | a ∈ O} = O × {0_S}.   (11)

From the above, the pairs in the set ∪_{i=1}^{6} A_i should be included in R.
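The evaluation map w -> w_S underlying A_5 is a straightforward recursion on inversion-free expressions. The sketch below is our illustration only; it models S by the semialgebra of nonnegative floats, an assumption made purely for this example.

```python
def evaluate(w):
    """Evaluate an inversion-free expression (an element of Q) in S by
    stripping the formal operations down to the operations of S."""
    if not isinstance(w, tuple):
        return w                          # an element of S evaluates to itself
    op = w[0]
    if op == '+':
        return evaluate(w[1]) + evaluate(w[2])
    if op == '*':
        return evaluate(w[1]) * evaluate(w[2])
    if op == '.':
        return w[1] * evaluate(w[2])      # scalar multiplication
    raise ValueError("w must be inversion-free, i.e. lie in Q")

w = ('+', ('*', 2.0, 3.0), ('.', 0.5, 4.0))   # (2 * 3) + 0.5 . 4
print(evaluate(w))                             # 8.0
```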
However, to define operations in the semialgebra of fractions W/R, R needs to be closed under the formal operations ⊕, ⊛, ⊙ and ^{-1}. This means that
i) (a, b), (c, d) ∈ R ⇒ (a ⊕ c, b ⊕ d), (a ⊛ c, b ⊛ d) ∈ R,
ii) (a, b) ∈ R ⇒ (r ⊙ a, r ⊙ b) ∈ R, and
iii) (a, b) ∈ R ∧ a ∉ O ∧ b ∉ O ⇒ (a^{-1}, b^{-1}) ∈ R
for any a, b, c, d ∈ W and any r ∈ R_+. To see why the set R has to meet these conditions, suppose [a], [b] ∈ W/R are the equivalence classes of a, b ∈ W, respectively. Then it is natural to define the addition in W/R by [a] + [b] = [a ⊕ b]. Now "the relation R is closed under the formal addition ⊕" ensures that this addition is well-defined; otherwise, it seems impossible to define an addition in W/R. Note that if R_1, R_2 ⊂ W × W are two equivalence relations on W which are both closed under all the formal operations, then so is their intersection R_1 ∩ R_2. Therefore we have the following definition.

Definition 2.8. The semialgebra-oriented equivalence relation R(S) ⊂ W × W on the set W of legal formal rational expressions in elements of a semialgebra S is defined to be the minimal subset of W × W such that (1) it contains ∪_{i=1}^{6} A_i, (2) it is an equivalence relation, and (3) it is closed under the formal operations ⊕, ⊛, ⊙ and ^{-1}. In other words, if 𝓡 is the collection of all those subsets of W × W which meet these three conditions, then R = ∩_{T ∈ 𝓡} T.

The equivalence relation R is well-defined since W × W ∈ 𝓡, so 𝓡 is not empty.

Definition 2.9. For any w ∈ W, we denote by [w] the R-equivalence class containing w. The map a : W -> W/R, w -> [w], is called the generalized canonical map, while its restriction to S, i.e., the map S -> W/R, s -> [s], is called the canonical map. A subset B ⊂ W is called R-saturated if for any w ∈ W, w ∈ B implies [w] ⊂ B.

The following proposition provides another definition of the semialgebra-oriented equivalence relation.

Proposition 2.10. Set R_0 = (∪_{i=1}^{6} A_i) ∪ {(a, a) | a ∈ W} and define the recursive sequence {R_i}_{i ∈ N} by

R_{i+1} = R_i ∪ (R_i ⊕ R_i) ∪ (R_i ⊛ R_i) ∪ (R_+ ⊙ R_i) ∪ (R_i \ ((O × W) ∪ (W × O)))^{-1} ∪ rev(R_i) ∪ {(a, c) ∈ W × W | ∃ b ∈ W s.t. (a, b), (b, c) ∈ R_i},   (12)

with

R_i ⊕ R_i = {(a ⊕ c, b ⊕ d) | (a, b), (c, d) ∈ R_i},
R_i ⊛ R_i = {(a ⊛ c, b ⊛ d) | (a, b), (c, d) ∈ R_i},
R_+ ⊙ R_i = {(r ⊙ a, r ⊙ b) | r ∈ R_+, (a, b) ∈ R_i},
(R_i \ ((O × W) ∪ (W × O)))^{-1} = {(a^{-1}, b^{-1}) | a, b ∈ W\O, (a, b) ∈ R_i},
rev(R_i) = {(b, a) | (a, b) ∈ R_i}.   (13)

Then R = ∪_{i=0}^∞ R_i for the relation R defined in Definition 2.8.

Proof. Set R̂ = ∪_{i=0}^∞ R_i. By the definition of the R_i, we see that R̂ is an equivalence relation on W which contains the set ∪_{i=1}^{6} A_i and is closed under the formal operations ⊕, ⊛, ⊙ and ^{-1}. Hence R ⊂ R̂ by Definition 2.8. To prove the converse containment, it suffices to show R_i ⊂ R for any i ∈ N. This can be done inductively on the index i and involves only routine checks, so we omit it. The trick is to choose any p ∈ R_k and then prove p ∈ R: when p is of one of the forms (a ⊕ c, b ⊕ d), (a ⊛ c, b ⊛ d), (r ⊙ a, r ⊙ b), (a^{-1}, b^{-1}), (b, a) shown in equations (13), with (a, b), (c, d) ∈ R_{k-1}, the induction step works directly; otherwise, p = (a, c) with (a, b), (b, c) ∈ R_{k-1} for some b ∈ W, and in this case the induction step also works.

In the rest of the paper, we may use the definition of R given in Proposition 2.10 rather than the one in Definition 2.8 while conducting some proofs. For instance, we show in Lemma A.2 that the set O is R-saturated. With that lemma, the following proposition is straightforward; it is, however, important for defining the semialgebra of fractions in the next subsection.
Proposition 2.11. It holds that 0̄_S = O.

Proof. Since O × {0_S} ⊂ R, we have 0̄_S ⊃ O. Conversely, let x ∈ 0̄_S; then (x, 0_S) ∈ R. Clearly, (x, 0_S) ∈ R_i for some i ∈ N, and 0_S ∈ O. By Lemma A.2, we have x ∈ O.

Defining the semialgebra of the fractions of S

Set F = W/R, the set of equivalence classes in W. To turn it into a semialgebra, we define an addition, a multiplication, a scalar multiplication and an inversion on F as follows:

ā + b̄ = \overline{a ⊕ b},  ā * b̄ = \overline{a ⊛ b},  r · ā = \overline{r ⊙ a},  (c̄)^{-1} = \overline{c^{-1}},  ∀ a, b ∈ W, ∀ r ∈ R_+, ∀ c ∈ W\O.

Proposition 2.12. The operations given above are well-defined, and they turn F into a semialgebra in which every nonzero element is invertible.

Proof. The addition, multiplication and scalar multiplication are well-defined since R is closed under the formal operations ⊕, ⊛ and ⊙. The relations in the set A_1 ∪ A_2 ∪ A_3 ensure that all semialgebra axioms are valid in F; moreover, 0̄_S and 1̄_S are the identities of addition and multiplication, respectively. We need to show that the inversion is well-defined and that it is indeed "the inversion" for the multiplication defined here. Let e ∈ W and suppose that ē = c̄. By Proposition 2.11, c ∉ O implies c̄ ≠ 0̄_S. Hence ē ≠ 0̄_S, meaning that e ∉ O, and thus e^{-1} ∈ W. Moreover, \overline{e^{-1}} = \overline{c^{-1}} since R is closed under inversion. Therefore, the inversion is well-defined for any c ∉ O, i.e., for any c̄ ≠ 0̄_S. Finally, the relations in the set A_4 guarantee that the following equations hold for any ā, b̄ ≠ 0̄_S:

ā^{-1} * ā = 1̄_S = ā * ā^{-1},  (ā * b̄)^{-1} = b̄^{-1} * ā^{-1},

meaning that this is exactly the inversion for the multiplication "*". Therefore, we can state the following important definition:

Definition 2.13. The semialgebra of the fractions of a semialgebra S satisfying Assumption 1.3, denoted by F(S) (or simply by F), is the semialgebra W(S)/R(S) of Proposition 2.12.

The proposition below is clear since A_5 ⊂ R:

Proposition 2.14. The canonical map from S to F is a semialgebra homomorphism.

Remark 2.15. The semialgebra F is also zero-sum-free and zero-divisor-free:
• ā + b̄ = 0̄_S implies that a ⊕ b ∈ O. Hence a, b ∈ O, that is, ā = b̄ = 0̄_S;
• ā * b̄ = 0̄_S implies that a ⊛ b ∈ O. Hence a ∈ O or b ∈ O, that is, ā = 0̄_S or b̄ = 0̄_S;
• r · b̄ = 0̄_S implies that r ⊙ b ∈ O. Hence r = 0_R or b ∈ O, that is, r = 0_R or b̄ = 0̄_S.

The following proposition is natural.

Proposition 2.16. Suppose that S is a semialgebra satisfying Assumption 1.3 and that E ⊂ S is a sub-semialgebra with 0_S and 1_S as its additive and multiplicative identities. Let U(S), W(S), O(S) and R(S) be the sets defined in Definitions 2.2, 2.3 and 2.8 for the semialgebra S, and U(E), W(E), O(E) and R(E) the corresponding sets defined for the sub-semialgebra E. Then we have

U(E) ⊂ U(S),  W(E) ⊂ W(S),  O(E) ⊂ O(S),  W(E)\O(E) ⊂ W(S)\O(S),  R(E) ⊂ R(S). (14)

Moreover, there is a natural semialgebra homomorphism between the two semialgebras of fractions:

F(E) → F(S) : a_E(a) ↦ a_S(a),

where a ∈ W(E) is any "legal" formal expression in the elements of E, and a_E : W(E) → F(E) and a_S : W(S) → F(S) are the generalized canonical maps.

Proof. By Definition 2.2, it is clear that U(E) ⊂ U(S) since E ⊂ S. For any x ∈ U(E) ⊂ U(S), we denote by |x| the number of formal operations in x.
Then, to prove the second, third and fourth containments in (14), it suffices to show that for any i ∈ N and any x ∈ U(E) with |x| = i, we have

x ∈ W(E) implies x ∈ W(S);  x ∈ O(E) implies x ∈ O(S);  x ∈ W(E)\O(E) implies x ∈ W(S)\O(S). (15)

This can be done by induction on the index i; we omit the details as usual. The trick is that x is always of one of the forms a ⊕ b, a ⊛ b, r ⊙ a and a^{-1}, and in every case we have |a| < |x| and |b| < |x|, so the inductive step works. One should also notice that the implications in (15) sometimes "prove each other". For instance, suppose x = a^{-1} ∈ W(E) and we want to show x ∈ W(S). We have |a| < |x| and a ∈ W(E)\O(E); by the inductive assumption, a ∈ W(S)\O(S), and thus x = a^{-1} ∈ W(S).

Suppose A_1(S), ..., A_6(S) are the sets defined w.r.t. the semialgebra S in equations (8)-(11), and R_0(S) = (∪_{i=1}^{6} A_i(S)) ∪ {(a, a) | a ∈ W(S)} is as in Proposition 2.10. Similarly, we have A_1(E), ..., A_6(E) and R_0(E) = (∪_{i=1}^{6} A_i(E)) ∪ {(a, a) | a ∈ W(E)} defined for the semialgebra E. From the first four containments in (14), we see that A_i(E) ⊂ A_i(S) for any 1 ≤ i ≤ 6; thus R_0(E) ⊂ R_0(S). Using the description of the semialgebra-oriented equivalence relation given in Proposition 2.10, we have the sequences {R_i(S)} and {R_i(E)} such that R(S) = ∪_i R_i(S) and R(E) = ∪_i R_i(E). To prove R(E) ⊂ R(S), it suffices to show R_i(E) ⊂ R_i(S) for any i ∈ N, which can be done by induction on the index i, similarly to the proof of "R_i ⊂ R, ∀ i ∈ N" in Proposition 2.10. Now that we have R(E) ⊂ R(S), the map a_E(a) ↦ a_S(a) is well-defined and is clearly a semialgebra homomorphism.

Related work

The new definition of the semialgebra of fractions given in this section generalizes the one in [1, Proposition 11.5], where elements of the semiring of the fractions of a noncommutative semiring are only allowed to be of the form a^{-1} * b, whereas our definition allows the inversion to appear in any position and even to be nested: we have elements of the form a^{-1} * b * c^{-1} + d * p^{-1} * q and (a + b^{-1})^{-1}, etc., which are much more complicated. In [1, Chap. 18], the author also generalized the definition of the semiring of fractions of [1, Chap. 11] via the concept of a Gabriel filter of a semiring. Although our definition in this section generalizes the one in [1, Chap. 11], we do not know how it could be related to the one given in [1, Chap. 18], since the two look utterly different from each other. The advantage of our definition is that it is elementary; moreover, it is more convenient to prove the main theorem using our definition than via the Gabriel filter.

Another topic related to the content of this section is the theory of skew fields [13], which mainly studies the rings (or fields) of fractions of commutative and noncommutative rings. While many methods exist to construct skew fields of noncommutative rings, this section illustrates a particular way to construct a "semi-skew field" of a noncommutative semialgebra. We do not use the theory of skew fields directly, for two reasons: i) There are subtractions in those skew fields, which are not allowed in semirings and semialgebras; one would have to rule out subtraction while defining the semialgebra of fractions via the definition of the skew field, which is more troublesome than defining it from scratch. ii) One of the most important noncommutative skew fields is that of the fractions of noncommutative polynomials.
Two elements in such a skew field are equal if and only if their values coincide at every matrix tuple in a certain set. This definition of equality relies on other algebraic structures, but not on the skew field itself; that is, this kind of equality is not an "intrinsic" property. The advantage of our definition of equality (i.e., the set R) is that it relies only on the semialgebra S itself, which is "intrinsic". In Proposition B.4, we show that when the semialgebra S is commutative, the semialgebra F of its fractions defined in this section coincides with the corresponding concept in the commutative case, which can be obtained naturally from [1, Example 11.7]. Therefore, the new definition of the semialgebra of fractions generalizes the corresponding concept in the commutative case, indicating that our definition is reasonable.

Preordered Semialgebra and the Derived Preorder Relation on the Semialgebra of its Fractions

This section discusses how a preorder relation on a semialgebra S derives a relation on the semialgebra F = F(S) of the fractions of S. In a semialgebra, we are only concerned with preorders that are compatible with all the semialgebra operations (that is, which satisfy the implications in (1)). Denote by 𝒫 the set of such preorders on the semialgebra F(S); then the intersection of any family of elements of 𝒫 is still an element of 𝒫. On the other hand, an inequality in S is naturally expected to remain true in F. Therefore, we have the following definition:

Definition 3.1. Let (S, ≤) be a preordered semialgebra, and let F be the semialgebra of the fractions of S. Set I = {(x̄, ȳ) | x ≤ y in S}. Then the derived preorder "≼" on F w.r.t. the preorder "≤" is the minimal preorder which is compatible with all the semialgebra operations and contains the set I. That is,

{(a, b) ∈ F × F | a ≼ b} = ∩_{I ⊂ P ∈ 𝒫} P. (16)

Definition 3.2. For any a, b ∈ F, if there are k ∈ N_{≥1}, g_1, ..., g_k, h_1, ..., h_k ∈ F and A_1, ..., A_k, B_1, ..., B_k ∈ S such that A_i ≤ B_i for all i, and such that

a = Σ_{i=1}^{k} g_i * Ā_i * h_i,  b = Σ_{i=1}^{k} g_i * B̄_i * h_i (17)

in F, then we write "a ⋖ b".

The proposition below gives another perspective on the derived preorder:

Proposition 3.3. If a, b ∈ F, then a ≼ b if and only if there are c_1, ..., c_k ∈ F for some k ∈ N such that

a ⋖ c_1 ⋖ ··· ⋖ c_k ⋖ b. (18)

Proof. The "if" part: This is clear from the observation that for any two elements w_1, w_2 in F, w_1 ⋖ w_2 implies w_1 ≼ w_2, which follows from the fact that the derived preorder contains the set I and is compatible with all the semialgebra operations.

The "only if" part: Define a binary relation "≼′" on F by the condition that for any a, b ∈ F, a ≼′ b if and only if the inequalities (18) hold for some k and some c_i. Since k can be zero in this proposition, the binary relation ⋖ is contained in the binary relation ≼′. It suffices to show that the derived preorder is contained in the binary relation ≼′. By the minimality of the derived preorder in Definition 3.1, it suffices to show that the binary relation ≼′ is a preorder relation which is compatible with all the semialgebra operations and contains the set I. The relation ≼′ is reflexive since the relation ⋖ is, while it is transitive by its definition; thus it is a preorder relation. The set I is contained in the relation ⋖, which is in turn contained in the preorder ≼′. It is clear that the relation ⋖ is compatible with all the semialgebra operations, from which one proves that this is also true for the preorder ≼′.
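As a concrete illustration of Definition 3.2 (our example, with hypothetical g_i, h_i ∈ F), the following display shows one ⋖-step for k = 2, and recalls how Proposition 3.3 characterizes ≼ as the transitive closure of such steps.

```latex
% One step of the relation a \lessdot b from Definition 3.2 (here k = 2):
% the outer factors g_i, h_i are fixed, and each A_i is replaced by B_i.
\[
a = g_1\,\overline{A_1}\,h_1 + g_2\,\overline{A_2}\,h_2
\;\lessdot\;
g_1\,\overline{B_1}\,h_1 + g_2\,\overline{B_2}\,h_2 = b,
\qquad A_1 \le B_1,\ A_2 \le B_2 \text{ in } S.
\]
% Proposition 3.3: a \preceq b in F iff
% a \lessdot c_1 \lessdot \cdots \lessdot c_k \lessdot b
% for some finite chain c_1, \dots, c_k in F.
```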
The following corollary is interesting:

Corollary 3.4. If a ≼ b in F, then there is a w ∈ F such that a + w ⋖ b + w. That is, there are k ∈ N_{≥1}, g_1, ..., g_k, h_1, ..., h_k ∈ F and A_1, ..., A_k, B_1, ..., B_k ∈ S such that A_i ≤ B_i for all i, and

a + w = Σ_{i=1}^{k} g_i * Ā_i * h_i,  b + w = Σ_{i=1}^{k} g_i * B̄_i * h_i.

Proof. One observes that a ⋖ c ⋖ b implies a + c ⋖ c + b. Thus there is some w ∈ F (e.g., w = c) such that a + w ⋖ b + w. By reasoning inductively, one finds that condition (18) also ensures the existence of such a w.

The proposition below gives a recursive characterization of the derived preorder.

Proposition 3.5. For any g, h ∈ F, g ≼ h if and only if the pair (g, h) is in the set L = ∪_{i=0}^{∞} L_i with

L_0 = {(x̄, ȳ), (0̄_S, a), (a, a) | x, y ∈ S, x ≤ y, a ∈ F},
L_{i+1} = L_i ∪ (R_+ · L_i) ∪ (L_i + L_i) ∪ (F * L_i) ∪ (L_i * F) ∪ {(a, c) | ∃ b ∈ F such that (a, b), (b, c) ∈ L_i}, (19)

where R_+ · L_i = {(r · b_1, r · b_2) | (b_1, b_2) ∈ L_i, r ∈ R_+}, L_i + L_i = {(b_1 + c_1, b_2 + c_2) | (b_1, b_2) ∈ L_i, (c_1, c_2) ∈ L_i}, F * L_i = {(a * b_1, a * b_2) | (b_1, b_2) ∈ L_i, a ∈ F}, and L_i * F = {(b_1 * a, b_2 * a) | (b_1, b_2) ∈ L_i, a ∈ F}.

Proof. To prove that the derived preorder is contained in the binary relation L via its minimality, we need to show that L is a preorder relation that is compatible with all the semialgebra operations and contains the set I = {(x̄, ȳ) | x ≤ y in S}. It is reflexive since (a, a) ∈ L_0 for every a ∈ F, while its transitivity follows from the last component of the union in the definition of L_{i+1} in equations (19); therefore L is a preorder. Clearly, we have I ⊂ L_0 ⊂ L. The fact that L is compatible with all the semialgebra operations is straightforward if one notices the components (R_+ · L_i) ∪ (L_i + L_i) ∪ (F * L_i) ∪ (L_i * F) in the definition of L_{i+1}. Now we prove the converse, that is, every (g, h) ∈ L satisfies g ≼ h. It suffices to show that for every natural number i, every (g, h) ∈ L_i satisfies g ≼ h, which follows by induction on the index i. One thing that should be noticed while proving the i = 0 case is that 0̄_S ≼ a for every a ∈ F, since 0̄_S ≼ 1̄_S and a * 0̄_S ≼ a * 1̄_S.

In the rest of the paper, we sometimes use equations (19), i.e., the definition of the relation L, as the definition of the derived preorder. Recall that, for a commutative zero-divisor-free semiring K with preorder relation "≤", the derived preorder relation "≼" on the semiring K_fr of the fractions of K (see Appendix B) is defined as follows: for any x, y ∈ K and any a, b ∈ K\{0_K},

x/a ≼ y/b if and only if ∃ t ∈ K\{0_K}, x * b * t ≤ y * a * t ([3], page 13). (20)

If K is a commutative zero-sum-free semialgebra, then the derived preorder relation "≼" can naturally be defined as in (20), too. In Appendix C, Proposition C.2, we indicate that, when S is commutative and one takes K = S in (20), the derived preorder relation for F given in Definition 3.1 coincides with the definition in (20) for S_fr. This explains the motivation of Definition 3.1.

The following proposition is important for the proof of the main theorem:

Proposition 3.6. If u ∈ S is power universal in S (with respect to ≤), then ū is power universal in F (with respect to ≼).

Proof. We may assume that u ≠ 0_S, since otherwise both the preorder relation ≤ on S and the relation ≼ on F would be trivial, and in that case the property that we want to prove is trivially true. Since (S\{0_S}) ∩ O = ∅, u ≠ 0_S clearly implies λ · u ≠ 0_S in F for any fixed real number λ > 1. In the following, we first prove that λ · u is power universal in F, then indicate that u is power universal, too.
For any x ∈ W \O (or equivalently, any nonzero x ∈ F ), we need to show that there is a number k ∈ N such that (λ · u) k * x 1 S , x * (λ · u) k 1 S and (λ · u) k x.(21) Suppose that x ∈ W i for i ∈ N with W i defined as in Proposition 2.6, we then prove inductively on the index i: The "i = 0" case: When i = 0, x ∈ W 0 = S and x = 0 S . Since u is power universal in S, there is an integer k ∈ N such that inequalities (3) hold. By the definition of in equations (19), we also have u k * x 1 S , x * u k 1 S and u k x. Since λ > 1, λ · u = u + (λ − 1) · u u. Thus (λ · u) k u k , and we obtain the inequalities in (21). The induction step: Assume that for any 0 ≤ i ≤ ℓ (ℓ ∈ N) and any x ∈ W i \O, there is an integer k ∈ N such that the inequalities in (21) hold. We prove that this is also valid for i = ℓ + 1: Set x ∈ W ℓ+1 \O. Then there are several cases corresponding to the components in the definition of W ℓ+1 in equation (6): 1) x ∈ W ℓ : The conclusion follows directly from the inductive assumption. 2) x ∈ W ℓ ⊕ W ℓ : Then x = b + c with b, c ∈ W ℓ . Since x = 0 S , b = 0 S or c = 0 S . If one of b and c equals 0 S , say, b = 0 S , then x = c. According to the inductive assumption, for c ∈ W ℓ , there is an integer k ∈ N such that (21) hold with "x" replaced by "c". Therefore, inequalities in (21) hold for x itself as well. Now, set b = 0 S and c = 0 S , then we have k 1 , k 2 ∈ N such that (λ · u) k 1 * b 1 S , b * (λ · u) k 1 1 S , (λ · u) k 1 b, (λ · u) k 2 * c 1 S , c * (λ · u) k 2 1 S , (λ · u) k 2 c.(22) Setk = max{k 1 , k 2 }, and choose sufficiently large k ≥k satisfying λ k ≥ 2λk, then (λ · u) k * x (λ · u)k * x = (λ · u)k * (b + c) 1 S + 1 S 1 S . Similarly, we have x * (λ · u) k 1 S . On the other hand, (λ · u) k = λ k · u k (2λk) · uk = 2 · (λ · u)k b + c = x. 3) x ∈ W ℓ ⊛ W ℓ : Then x = b * c with b, c ∈ W ℓ \O and the inequalities (22) hold for some k 1 , k 2 ∈ N. Set k = k 1 + k 2 . Multiplying the rightmost inequalities in each rows of (22), we get (λ · u) k b * c = x. Multiplying the leftmost inequality in the first row and the middle inequality in the second row of (22), we obtain (λ · u) k 1 * b * c * (λ · u) k 2 1 S . Then, by multiplying (λ · u) −k 1 and (λ · u) −k 2 from the left and the right sides respectively, one derives x (λ · u) −k . From that, one obtains both (λ · u) k * x 1 S and x * (λ · u) k 1 S by simply multiplying (λ · u) k from the left and the right sides respectively. 4) x ∈ R + ⊙ W ℓ : In this case, x = r · b with b ∈ W ℓ \O and r is a positive real number. Hence there is an integer k 1 ∈ N such that the inequalities in the first row or (22) hold. Choose a sufficiently large integer m ∈ N satisfying λ m ≥ max{r, 1/r} and set k = m + k 1 . Then we have (λ · u) k * x (λ · 1 S ) m * (λ · u) k 1 * (r · b) (λ m r) · ((λ · u) k 1 * b) 1 R · 1 S = 1 S , and similarly x * (λ · u) k (r · b) * (λ · u) k 1 * (λ · 1 S ) m (λ m r) · (b * (λ · u) k 1 ) 1 R · 1 S = 1 S . Moreover, we have (λ·u) k = (λ·u) m * (λ·u) k 1 (λ·1 S ) m * b = λ m ·b r·b = x. 5) x ∈ (W ℓ \O ℓ ) −1 : Then x = b −1 with b ∈ W ℓ \O ℓ satisfying the inequal- ities in the first row of (22) for some integer k 1 ∈ N. From the rightmost inequality we obtain both (λ · u) k 1 * b −1 1 S and b −1 * (λ · u) k 1 1 S by multiplying b −1 from its right and left sides, respectively. From the leftmost one we obtain (λ · u) k 1 b −1 by multiplying b −1 from its right side. By now we have shown that λ · u is power universal in F since the inequalities in (21) hold. Now we show that u is also power universal. 
Apparently, (λ · 1_S)^k ≠ 0_S; hence there is k_0 ∈ N such that u^{k_0} ≥ (λ · 1_S)^k. Therefore u^{k_0} ≽ (λ · 1_S)^k in F. Using the inequalities in (21), we have u^{k_0+k} ≽ (λ · 1_S)^k * (λ^{-k} · x) = x, and u^{k_0+k} * x ≽ (λ · 1_S)^k * u^k * x = (λ · u)^k * x ≽ 1_S. Similarly, we also have x * u^{k_0+k} ≽ 1_S. Hence u is indeed power universal.

Extending R_+-Valued Homomorphisms from a Semialgebra to the Semialgebra of its Fractions

This section explains how a monotone homomorphism from the preordered semialgebra S to some other preordered semialgebra T (which shares some good properties) can be extended to the semialgebra F of the fractions of S. In particular, the extension is always guaranteed when T = R_+. The following lemma, due to [3], is straightforward:

Lemma 4.1. Suppose that (T, ≤_T) is a preordered semialgebra with nontrivial preorder relation ≤_T such that 1_T ≥_T 0_T, and that (E, ≤_E) is a preordered semialgebra with 1_E ≥_E 0_E and a power universal element u. Then for any monotone semialgebra homomorphism f : E → T and any nonzero x ∈ E, we have f(x) ≠ 0_T.

Proof. There is an integer k ∈ N satisfying

u^k * x ≥_E 1_E. (23)

Applying the homomorphism f to the inequality (23), we get f(u)^k * f(x) ≥_T 1_T. Assume that f(x) = 0_T; then we have 0_T ≥_T 1_T. For any y, z ∈ T, we obtain

y = y * 1_T ≤_T y * 0_T = z * 0_T ≤_T z * 1_T = z, (24)

which contradicts the assumption that ≤_T is nontrivial.

Let T be given as in Lemma 4.1 and suppose that (S, ≤) is a preordered semialgebra satisfying Assumption 1.2. We assume further, throughout the rest of the paper, that T is both zero-sum-free and zero-divisor-free, and that every nonzero element of T is invertible (e.g., if T = R_+, then all the assumptions on the semialgebra T are satisfied). For any monotone semialgebra homomorphism f : S → T, the following proposition defines a map f_W from the set of legal formal rational expressions W(S) (Definition 2.3) to the semialgebra T.

Proposition 4.2. Suppose f : S → T is a monotone semialgebra homomorphism, with the preordered semialgebras S and T described as above. Let {W_i} be the sequence defined in Proposition 2.6 such that W = ∪_i W_i. Then the recursive assignment below makes f_W a well-defined map from W to T:
1) For any x ∈ W_0 = S, set f_W(x) = f(x);
2) Otherwise, suppose that x ∈ W_ℓ with ℓ = min{i ∈ N | x ∈ W_i} > 0; then set

f_W(x) = f_W(a) + f_W(b), if x = a ⊕ b ∈ W_{ℓ-1} ⊕ W_{ℓ-1};
f_W(x) = f_W(a) * f_W(b), if x = a ⊛ b ∈ W_{ℓ-1} ⊛ W_{ℓ-1};
f_W(x) = r · f_W(a), if x = r ⊙ a ∈ R_+ ⊙ W_{ℓ-1};
f_W(x) = (f_W(a))^{-1}, if x = a^{-1} ∈ (W_{ℓ-1}\O_{ℓ-1})^{-1}. (25)

Proof. It is sufficient to prove (inductively on the index i) that for any i ∈ N and any x ∈ W_i, f_W(x) is a well-defined element of T, and f_W(x) ≠ 0_T whenever x ∉ O_i (thus the assignment in the last row of (25) is well-defined, due to the assumption that every nonzero element of T is invertible together with Lemma 4.1). If i = 0, then x ∈ W_0 = S and f_W(x) = f(x) is well-defined. When x ∉ O_0 = {0_S}, we have x ∈ S\{0_S}; then Lemma 4.1 indicates that f_W(x) = f(x) ≠ 0_T. Suppose that for any 0 ≤ i < k (k ∈ N_{≥1}) and any x ∈ W_i, f_W(x) is a well-defined element of T and f_W(x) ≠ 0_T whenever x ∉ O_i. We then show that this is also valid for i = k and any x ∈ W_i. Let x ∈ W_k; we may assume that x ∉ W_{k-1} (since, if x ∈ W_{k-1}, the inductive assumption already implies the conclusions that we need). Hence k ≥ 1 is the minimal natural number such that x ∈ W_k.
According to equation (6), there are several cases we need to check:

i) If x = a ⊕ b ∈ W_{k-1} ⊕ W_{k-1}, then f_W(x) = f_W(a) + f_W(b) is well-defined since f_W(a) and f_W(b) are well-defined. When x ∉ O_k, we have either a ∉ O_{k-1} or b ∉ O_{k-1}; hence one of f_W(a) and f_W(b) is nonzero in T. We conclude that f_W(x) = f_W(a) + f_W(b) is also nonzero, due to the fact that T is zero-sum-free.

ii) If x = a ⊛ b ∈ W_{k-1} ⊛ W_{k-1}, then f_W(x) = f_W(a) * f_W(b) is well-defined. When x ∉ O_k, we have a, b ∉ O_{k-1}. By the inductive assumption, both f_W(a) and f_W(b) are nonzero; hence f_W(x) = f_W(a) * f_W(b) is also nonzero (since T is zero-divisor-free).

iii) If x = r ⊙ a ∈ R_+ ⊙ W_{k-1}, then f_W(x) = r · f_W(a) is well-defined. When x ∉ O_k, we have r ≠ 0 and a ∉ O_{k-1}. By the inductive assumption, f_W(a) is nonzero in T; hence f_W(x) = r · f_W(a) is also nonzero (using again the fact that T is zero-divisor-free).

iv) If x = a^{-1} ∈ (W_{k-1}\O_{k-1})^{-1} with a ∈ W_{k-1}\O_{k-1}, then f_W(a) is well-defined and nonzero in T by the inductive assumption. Since every nonzero element of T is invertible, f_W(x) = (f_W(a))^{-1} is also well-defined. Moreover, it is nonzero, since (f_W(a))^{-1} = 0_T would imply 1_T = f_W(a) * (f_W(a))^{-1} = f_W(a) * 0_T = 0_T, which indicates that 1_T ≤_T 0_T ≤_T 1_T; this would result in the triviality of ≤_T, as shown in (24), contradicting one of the assumptions in Lemma 4.1.

Note that by formulae (25) we have

f_W(a ⊕ b) = f_W(a) + f_W(b),  f_W(a ⊛ b) = f_W(a) * f_W(b),  f_W(r ⊙ a) = r · f_W(a),  f_W(c^{-1}) = (f_W(c))^{-1}, (26)

for all a, b ∈ W, c ∈ W\O and r ∈ R_+, no matter what concrete value the letter ℓ in formulae (25) takes. Therefore, Proposition 4.2 can be rewritten concisely as follows:

Proposition 4.3. Suppose f : S → T is a monotone semialgebra homomorphism, with the preordered semialgebras S and T described as before. Then the recursive assignment below makes f_W a well-defined map from W to T:
1) For any x ∈ S, set f_W(x) = f(x);
2) If x ∈ W\S contains some formal operations, then set

f_W(x) = f_W(a) + f_W(b), if x = a ⊕ b with a, b ∈ W;
f_W(x) = f_W(a) * f_W(b), if x = a ⊛ b with a, b ∈ W;
f_W(x) = r · f_W(a), if x = r ⊙ a with r ∈ R_+, a ∈ W;
f_W(x) = (f_W(a))^{-1}, if x = a^{-1} with a ∈ W\O.

Proposition 4.4. For any monotone semialgebra homomorphism f : S → T, the map f_F : F → T, x̄ ↦ f_W(x) (∀ x ∈ W) is well-defined.
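Continuing the toy model from the sketch in Section 2 (ours, not the paper's code), the recursion of Propositions 4.2 and 4.3 can be mirrored directly. The function extend_f below reuses the hypothetical Expr trees from above, takes T = R_+, and relies on the guarantee (Lemma 4.1 together with the proof of Proposition 4.2) that inverted subexpressions evaluate to nonzero values.

```python
from typing import Callable

def extend_f(w: Expr, f: Callable[[float], float]) -> float:
    """f_W from Proposition 4.2: apply f to the leaves (elements of S),
    then replay the formal operations in T (here T = R_+, so ordinary
    arithmetic). Mirrors the four cases of the recursion (25)."""
    if isinstance(w, Add):
        return extend_f(w.left, f) + extend_f(w.right, f)
    if isinstance(w, Mul):
        return extend_f(w.left, f) * extend_f(w.right, f)
    if isinstance(w, Scale):
        return w.r * extend_f(w.arg, f)
    if isinstance(w, Inv):
        v = extend_f(w.arg, f)
        # For legal expressions, w.arg is outside O, so v != 0 (Lemma 4.1).
        return 1.0 / v
    return f(w)  # leaf in S

# Example with f the identity homomorphism: (2 (+) 3) (*) 4^{-1} -> 1.25.
w = Mul(Add(2.0, 3.0), Inv(4.0))
assert extend_f(w, lambda s: s) == (2.0 + 3.0) * (1.0 / 4.0)
```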
Proof. For any x, y ∈ W with x̄ = ȳ (that is, (x, y) ∈ R, with R as in Definition 2.8), we will show that f_W(x) = f_W(y). By Proposition 2.10, we have R = ∪_{i=0}^{∞} R_i with the R_i defined therein. Hence it is sufficient to prove that for any i ∈ N and any x, y ∈ W satisfying (x, y) ∈ R_i, we have f_W(x) = f_W(y). The proof is inductive on the index i.

The "i = 0" case: Note that R_0 = (∪_{j=1}^{6} A_j) ∪ {(a, a) | a ∈ W} as in Proposition 2.10; there are several sub-cases:

1. If (x, y) ∈ ∪_{j=1}^{4} A_j ⊂ R_0, then f_W(x) = f_W(y) follows from (26).

2. If (x, y) ∈ A_5, then x, y ∈ Q and x_S = y_S. We claim that, for any z ∈ Q, it holds that f_W(z) = f(z_S). Instead of giving a formal proof, we only provide a simple example to illustrate the claim: if z = s_1 ⊕ ((r ⊙ s_2) ⊛ s_3) ∈ Q for some elements s_1, s_2, s_3 ∈ S and a real number r ∈ R_+, then clearly we have f_W(z) = f(s_1) + ((r · f(s_2)) * f(s_3)) = f(s_1 + ((r · s_2) * s_3)) = f(z_S), where the first equality is due to the definition (25) of f_W and the identities in (26), while the second one is valid since f is a semialgebra homomorphism from S to T. Thus f_W(x) = f(x_S) = f(y_S) = f_W(y).

3. If (x, y) ∈ {(a, a) | a ∈ W}, then we have x = y and f_W(x) = f_W(y).

4. If (x, y) ∈ A_6, then we have x ∈ O and y = 0_S. Since f_W(0_S) = 0_T, we need to show that f_W(x) = 0_T for any x ∈ O. Clearly, f_W(x) = 0_T for any x ∈ O_0 = {0_S}. Assume that for any 0 ≤ î < k (k ∈ N_{≥1}) and any x ∈ O_î, we have f_W(x) = 0_T; we then prove that this is also valid for î = k. Set x ∈ O_k with x ∉ O_{k-1}. Then, by equation (7), we have x = a ⊕ b ∈ O_{k-1} ⊕ O_{k-1}, x = a ⊛ b ∈ (O_{k-1} ⊛ W_{k-1}) ∪ (W_{k-1} ⊛ O_{k-1}), or x = r ⊙ a ∈ (R_+ ⊙ O_{k-1}) ∪ ({0_R} ⊙ W_{k-1}). From the equations in (26), we see that f_W(x) = 0_T holds in all cases.

Hence f_W(x) = f_W(y) whenever (x, y) ∈ R_0.

The inductive case: Suppose that for any 0 ≤ i < ℓ (ℓ ∈ N_{≥1}) and any (x, y) ∈ R_i, we have f_W(x) = f_W(y); we then prove that this is also valid for i = ℓ. Assume without loss of generality that (x, y) ∈ R_ℓ\R_{ℓ-1}. There are several cases left:

1. If (x, y) = (x_1, y_1) ⊕ (x_2, y_2) ∈ R_{ℓ-1} ⊕ R_{ℓ-1}, then x = x_1 ⊕ x_2, y = y_1 ⊕ y_2 and (x_1, y_1), (x_2, y_2) ∈ R_{ℓ-1}. Thus f_W(x) = f_W(x_1) + f_W(x_2) = f_W(y_1) + f_W(y_2) = f_W(y).

2. Similarly, we can deal with the cases where (x, y) ∈ (R_{ℓ-1} ⊛ R_{ℓ-1}) ∪ (R_+ ⊙ R_{ℓ-1}).

3. If (x, y) = (x_1^{-1}, y_1^{-1}) ∈ (R_{ℓ-1}\((O × W) ∪ (W × O)))^{-1}, then x_1, y_1 ∉ O, x = x_1^{-1}, y = y_1^{-1} and (x_1, y_1) ∈ R_{ℓ-1}. Since x_1, y_1 ∉ O, neither f_W(x_1) nor f_W(y_1) is zero, according to the proof of Proposition 4.2. Thus both (f_W(x_1))^{-1} and (f_W(y_1))^{-1} are well-defined, and they are equal to each other since f_W(x_1) = f_W(y_1) by the inductive assumption.

4. If (x, y) ∈ rev(R_{ℓ-1}), then (y, x) ∈ R_{ℓ-1}, and the inductive assumption indicates that f_W(y) = f_W(x).

5. If there is z ∈ W such that (x, z), (z, y) ∈ R_{ℓ-1}, then f_W(x) = f_W(z) and f_W(z) = f_W(y), again by the inductive assumption.

The above discussion shows that f_F is a well-defined map from F to T.

Proposition 4.5. The map f_F : F → T is a monotone semialgebra homomorphism.

Proof. The fact that it is a semialgebra homomorphism directly follows from the identities in (26): e.g., f_F(x̄ + ȳ) = f_F(\overline{x ⊕ y}) = f_W(x ⊕ y) = f_W(x) + f_W(y) = f_F(x̄) + f_F(ȳ). The remaining three identities, corresponding to the last three rows in (26), can be derived similarly.

We now prove that f_F is monotone. Remember that the preorder relation ≼ on F is given by the set L ⊂ F × F defined in Proposition 3.5. We need to show that f_F(g) ≤_T f_F(h) for any (g, h) ∈ L = ∪_i L_i, with the L_i defined in equations (19). If (g, h) ∈ L_0, then g = 0̄_S, g = h, or g = x̄ and h = ȳ for some x, y ∈ S such that x ≤ y. The first two sub-cases are trivial, while the last one follows from f_F(x̄) = f_W(x) = f(x) ≤_T f(y) = f_W(y) = f_F(ȳ). Assume that for any 0 ≤ i < k (k ∈ N_{≥1}) and any (g, h) ∈ L_i, we have f_F(g) ≤_T f_F(h); we then prove the case i = k. We may assume that (g, h) ∉ L_{k-1}, thus

(g, h) ∈ L_k\L_{k-1} = ((R_+ · L_{k-1}) ∪ (L_{k-1} + L_{k-1}) ∪ (F * L_{k-1}) ∪ (L_{k-1} * F) ∪ {(g, h) | ∃ d ∈ F so that (g, d), (d, h) ∈ L_{k-1}}) \ L_{k-1}. (28)

The cases corresponding to the components in formula (28) can be dealt with separately, using the transitivity and the compatibility properties (1) of a preorder relation on a semialgebra. To illustrate, consider the case where (g, h) = (g_1, h_1) + (g_2, h_2) ∈ L_{k-1} + L_{k-1}. We have g = g_1 + g_2, h = h_1 + h_2 and

f_F(g) = f_F(g_1 + g_2) = f_F(g_1) + f_F(g_2) ≤_T f_F(h_1) + f_F(g_2) ≤_T f_F(h_1) + f_F(h_2) = f_F(h_1 + h_2) = f_F(h).
The two ≤_T's above are due to the inductive assumption and the first rule in (1). The other cases can be handled similarly; hence we omit the details.

Remark 4.6. For any monotone semialgebra homomorphism η : F → T such that η(x̄) = f_F(x̄) for all x ∈ S (i.e., it extends the monotone semialgebra homomorphism f), we can prove inductively that η = f_F. This indicates the universal property of the semialgebra F of the fractions of S.

Remark 4.7. Set T = R_+; then Propositions 4.4 and 4.5 indicate that any monotone semialgebra homomorphism f : S → R_+ can be extended to a monotone semialgebra homomorphism mapping from F to R_+.

From A Semialgebra to A Linear Space

In this section, we define a preordered R-linear space based on the semialgebra F. This definition was already given by [3]; we spell it out in detail for the completeness of the present paper. Define V = F − F to be the Grothendieck group [14, Chap. II] of the commutative monoid (F, +), with each element of V of the form a − b for some a, b ∈ F, and with a − b = a′ − b′ in V if and only if ∃ c ∈ F s.t. a + b′ + c = a′ + b + c in F. Then V becomes an R-linear space once we define a scalar multiplication on it as below:

r · (a − b) = r · a − r · b, if r ∈ R_{≥0};  r · (a − b) = (−r) · b − (−r) · a, if r ∈ R_{<0}.

It is straightforward to verify that this operation is well-defined. One also observes the following identities:

1_R · (a − b) = a − b,  (rs) · (a − b) = r · (s · (a − b)),  r · ((a − b) + (c − d)) = r · (a − b) + r · (c − d),  (r + s) · (a − b) = r · (a − b) + s · (a − b), (29)

with r, s ∈ R and a, b, c, d ∈ F. All of the above identities can be verified straightforwardly. While most of them can be checked concisely, this is not the case for the last one: one may need to consider six cases corresponding to the different signs of the real numbers r, s and r + s. To illustrate, we deal with the case where r ≥ 0, s < 0 and r + s < 0, and leave the rest to the reader. Since r + s < 0, (r + s) · (a − b) = (−r − s) · b − (−r − s) · a by definition. On the other hand, we have r · (a − b) + s · (a − b) = (r · a − r · b) + ((−s) · b − (−s) · a) = (r · a + (−s) · b) − (r · b + (−s) · a). Then it is sufficient to show that

(−r − s) · b − (−r − s) · a = (r · a + (−s) · b) − (r · b + (−s) · a) (in V)
⇐ (−r − s) · b + r · b + (−s) · a = (−r − s) · a + r · a + (−s) · b (in F)
⇐ (−s) · b + (−s) · a = (−s) · b + (−s) · a. (in F)

The last equality is trivially true. Hence the last identity in (29) is verified for the case r ≥ 0, s < 0 and r + s < 0. The identities in (29) show that V is indeed an R-linear space.

In the sequel we define a preorder relation on V based on the preorder relation ≼ on F. For any a, b, c, d ∈ F (or equivalently, for any a − b, c − d ∈ V), we write a − b ≼ c − d if and only if there is a g ∈ F so that a + d + g ≼ b + c + g in F. It is straightforward to prove that this relation is well-defined and is also a preorder. Furthermore, it is not hard to show that a − b ≼ c − d implies

(a − b) + (g − h) ≼ (c − d) + (g − h),  r · (a − b) ≼ r · (c − d), (30)

for any a, b, c, d, g, h ∈ F and any r ∈ R_+. These inequalities are similar to the ones given in (1). To conclude, the R-linear space V is preordered with the relation ≼, which is compatible with the addition and the scalar multiplication defined on it.

The Main Theorem and the Proof

This section is devoted to the proof of the main theorem. We state it again for convenience:

Theorem 1.4. Let (S, ≤) be a preordered semialgebra satisfying Assumptions 1.2 and 1.3, with a power universal element u ∈ S. Suppose that (F, ≼) is the preordered semialgebra of the fractions of S defined in Subsection 2.3, with "≼" as in Def. 3.1 and a : S → F, x ↦ x̄ the canonical map of Def. 2.9. Then, for every nonzero x, y ∈ S, the following are equivalent:
(a) f(x) ≥ f(y) for every monotone semialgebra homomorphism f : S → R_+.
(b) For every real number ε > 0, there is m ∈ N such that x̄ + Σ_{j=0}^{m} ε^{j+1} · ū^j ≽ ȳ. (31)
(c) For every r ∈ R_+ and every real number ε > 0, there is a polynomial p ∈ Q_+[X] such that p(r) ≤ ε and x̄ + p(ū) ≽ ȳ. (32)
(d) For every r ∈ R_+ and every real number ε > 0, there is a polynomial p ∈ Q_+[X] such that p(r) ≤ 1 + ε and p(ū) * x̄ ≽ ȳ. (33)

Proof. The proof of the implication "(b) ⇒ (c)" is almost the same as its counterpart in the proof of Theorem 3.1 in [3]. The proof of "(c) ⇒ (d)" is also similar to the corresponding part of the proof of Theorem 3.1 in [3], but since we are dealing with noncommutative semialgebras, there are some tiny differences.
So we rewrite the proof of "(c) ⇒ (d)" for clearness: Since x ∈ S and x = 0 S , x / ∈ O, we have x = 0 S . By Proposition 3.6, u is power universal in F . Hence there is an integer k ∈ N so that u k * x 1 S . For any r ∈ R + and any real number ǫ > 0, set ǫ ′ = ǫ r k , if r > 0, 1, if r = 0. Applying (c) to the numbers r and ǫ ′ , we obtain p 0 ∈ Q + [X] such that p 0 (r) ≤ ǫ ′ and p 0 (u) + x y. Set p = 1 + p 0 X k , then p(r) = 1 + p 0 (r)r k ≤ 1 + ǫ ′ r k ≤ 1 + ǫ and p(u) * x = x + p 0 (u) * u k * x x + p 0 (u) y. Hence (d) is deduced. "(d)⇒(a)": Let f : S → R + be a monotone semialgebra homomorphism, we need to show that f (x) ≥ f (y). By Remark 4.7, the map f F : F → R + , a → f W (a) (∀a ∈ W ) is a well-defined monotone semialgebra homomorphism. Setting r = f F (u) and applying f F to inequality (33), we obtain p(r)f F (x) ≥ f F (y), where p(r) ≤ 1 + ǫ, f F (x) = f W (x) = f (x) and f F (y) = f (y). Thus, we have (1 + ǫ)f (x) ≥ f (y) for any real number ǫ > 0. Taking ǫ → 0, we obtain f (x) ≥ f (y). "(a)⇒(b)": This part of the proof is almost the same as the proof of Theorem 3.1 in [3]. This is mainly due to the well-defined semialgebra of the fractions of a noncommutative semialgebra given in previous sections. We only sketch the proof of this part below. Assume that F is order-cancellative, i.e. g + w h + w implies g h (∀g, h, w ∈ F ). Let V be the linear space defined in Section 5. Define C = {v ∈ V | v 0 V } = {g − h ∈ V | g, h ∈ F, g h}, then C is a cone in V by (30). For any real number ǫ > 0, set N ǫ = {a ∈ F | ∃m ∈ N s.t. a m j=0 ǫ j+1 · u j }, and define N ǫ − N ǫ = {a − b ∈ V | a, b ∈ N ǫ }. Then, as in [3], using the set N ǫ − N ǫ as ǫ varies a basis of neighborhoods of 0 V makes V a locally convex topological vector space. Denoting by C the closure of C in that topology, we have x − y ∈ C ⇔ the condition (b) holds, where the proof of the direction "⇒" relies on the assumption that is order-cancellative. Denote by V * the dual space of V , i.e., the topological vector space consisting of all continuous linear (real-valued) functions on V and equipped with the weak-* topology. Define C * = ℓ ∈ V * | ℓ(C) ⊂ [0, ∞) , then C * is a closed convex cone in V * such that C * = conv(Er(C * )),(35) with Er(·) the set of all the extreme rays of a cone and conv(·) the closure of the convex hull of a set. In the following, we assume that (b) does not hold and deduce the negation of condition (a). According to (34), "(b) does not hold" means x−y / ∈ C. By the geometric Hahn-Banach theorem [15], there is an ℓ ∈ C * such that ℓ(x − y) < 0. Combining with the property in (35), one further claims that ∃f ∈ Er(C * ) ⊂ C * , f (x − y) < 0.(36) Then we have f 1 S − 0 S > 0 and the function f : S → R + , c → 1 f 1 S − 0 S f (c − 0 S )(37) is a monotone semialgebra homomorphism such thatf (x) <f (y), contradicting to the condition (a). If F is not order-cancellative, the method used in [3] works as well in the present case. In Observations 6.1-6.4, we verify the condition given in (34), the equation (35), the claim given in (36) and the claim that the functionf in (37) is a monotone homomorphism. These claims have already been given by Fritz in [3] and part of the proofs of them are also contained therein. However, we rewrite these proofs for completeness. Note that the proof of the equation (35) below (Observation 6.2) is provided by Fritz through private communications. Observation 6.1. x − y ∈ C ⇔ the condition (b) holds. Proof. The condition x − y ∈ C implies that ∀ǫ > 0, ∃v ∈ C s.t. 
v ∈ (x − y) + (N ǫ − N ǫ ). This means that there are c, d ∈ N ǫ such that (x − y) + (c − d) = v 0 V . By the assumption that F is order-cancellative, this implies x + c y + d y. Since c ∈ N ǫ , then (31) holds. Since the inequality (31) holds, we have x + m j=0 ǫ j+1 · u j − 0 S y − 0 S . Adding 0 S − y to the both sides, one obtains x + m j=0 ǫ j+1 · u j − y y − y = 0 V , which is equivalent to (x − y) + m j=0 ǫ j+1 · u j − 0 S ∈ C(38) with m j=0 ǫ j+1 · u j − 0 S ∈ N ǫ − N ǫ . By inequality (38), we know that there is a point in C which is also in the neighborhood (x − y) + (N ǫ − N ǫ ) for any ǫ > 0. Thus x − y ∈ C. Observation 6.2. C * = conv(Er(C * )). Proof. Proposition 21 in [16] indicates that any well-capped [17] closed convex cone in a locally convex topological vector space is the closed convex hull of its extreme rays. To prove (35), it is sufficient to show that C * is well-capped (since V * is already locally convex). In [3], Fritz defined the sets C * ǫ = ℓ ∈ C * | ℓ(N ǫ − 0 S ) is bounded and D ǫ = ℓ ∈ C * ǫ | ℓ(N ǫ − 0 S ) ⊂ [0, 1] , which satisfy the equation C * = ǫ>0 C * ǫ = ǫ>0 n∈N nD ǫ(39) and claimed that D ǫ was a cap [17] of C * (hence C * is well-capped by equation (39)). However, the proof of the condition "the set C * \D ǫ is convex", which is required in the definition of a cap, is omitted in [3]. The following proof of the condition is provided by Fritz through private communications: Note that m j=0 ǫ j+1 · u j ∈ N ǫ for any m ∈ N, we have D ǫ = ℓ ∈ C * ǫ ∀m ∈ N, ℓ m j=0 ǫ j+1 · u j − 0 S ≤ 1 . Then C * ǫ \D ǫ = ℓ ∈ C * ǫ ∃m ∈ N, ℓ m j=0 ǫ j+1 · u j − 0 S > 1 is convex. Set f i ∈ C * ǫ \D ǫ with m i such that f i m i j=0 ǫ j+1 · u j − 0 S > 1, i = 1, 2. For any µ 1 , µ 2 = 1 − µ 1 ∈ (0, 1) andm = max{m 1 , m 2 }, we have (µ 1 f 1 + µ 2 f 2 ) m j=0 ǫ j+1 · u j − 0 S = µ 1 f 1 m j=0 ǫ j+1 · u j − 0 S + µ 2 f 2 m j=0 ǫ j+1 · u j − 0 S ≥ µ 1 f 1 m 1 j=0 ǫ j+1 · u j − 0 S + µ 2 f 2 m 2 j=0 ǫ j+1 · u j − 0 S > 1.(40) Thus µ 1 f 1 + µ 2 f 2 ∈ C * ǫ \D ǫ . The set C * \D ǫ is also convex. Set f 1 , f 2 ∈ C * \D ǫ and µ 1 , µ 2 = 1 − µ 1 ∈ (0, 1). If one of f 1 and f 2 is not in C * ǫ (i.e., is not bounded over the set N ǫ − 0 S ), then neither is the functional µ 1 f 1 + µ 2 f 2 . Thus, µ 1 f 1 + µ 2 f 2 ∈ C * \C * ǫ ⊂ C * \D ǫ . On the contrary, if both f 1 and f 2 are in C * ǫ , then they are in C * ǫ \D ǫ . Now (40) implies µ 1 f 1 + µ 2 f 2 ∈ C * ǫ \D ǫ ⊂ C * \D ǫ . Observation 6.3. ∃f ∈ Er(C * ) s.t. f (x − y) < 0. Proof. This part was also omitted in [3], hence we give a proof here. Since there is an ℓ ∈ C * so that ℓ(x − y) < 0, the intersection H ∩ C * is not empty, where H = {φ ∈ V * | φ(x − y) < 0} is an open half-space in V * . It is sufficient to show that there is f ∈ Er(C * ) so that f ∈ H. Assume on the contrary that none of the extreme rays of C * is in H. By the equation (35), for any φ ∈ C * and any neighborhood O ∋ φ, there is φ ′ ∈ O ∩ conv(Er(C * )). Thus φ ′ = s i=1 µ i φ i for some s ∈ N, some φ i ∈ Er(C * ) and some real numbers µ i ∈ [0, 1] with s i=1 µ i = 1. Since φ i / ∈ H by the assumption, φ i (x − y) ≥ 0 for any 1 ≤ i ≤ s. Hence φ ′ (x − y) ≥ 0, i.e., φ ′ / ∈ H. This means that O ∩ (V * \H) = ∅ for any neighborhood O of φ. Since V * \H is clearly closed, φ ∈ V * \H. Noting that φ is an arbitrary element in C * , we conclude that C * ∩ H = ∅, which is a contradiction. Observation 6.4. Let f ∈ Er(C * ) be any extreme ray of C * , then f (1 S − 0 S ) > 0 andf defined in (37) is a monotone semialgebra homomorphism. Proof. 
We may assume, without loos of generality, that for any nonzero a ∈ F , 1 S + a = 0 S . Otherwise, there would be a nonzero a ∈ F so that 0 S = 1 S + a 1 S . Note that 1 S 0 S , one concludes that the preordder is trivial, as is shown in (24). Now the conditions (a)-(d) are all trivially true, so is the conclusion of the theorem. Set f F (w) = f (w − 0 S ) for any w ∈ F , then we have f F (1 S + a) −1 * w + f F a * (1 S + a) −1 * w = f F (w) by the linearity of f . Let g(w) = f F (1 S + a) −1 * w , h(w) = f F a * (1 S + a) −1 * w . then g, h are both linear and monotone on F since f F is. Set g V (b − d) = g(b) − g(d) and h V (b − d) = h(b) − h(d) for any b, d ∈ F (or equivalently, for any b − d ∈ V ) , then both g V and h V are well-defined on V . Moreover, a routine check indicates that they are monotone and linear maps from V e to R and g V + h V = f.(41) For any nonzero t ∈ F , the operator t * : b − d → t * b − t * d, V → V(42) is well-defined and continuous (Lemma 6.5). Noting that g V = f • (1 S + a) −1 * and h V = f • a * (1 S + a) −1 * , one observes that both g V and h V are continuous since f is. To conclude, we have g V , h V ∈ C * . By the equation (41) and the fact that f is an extreme ray, we conclude that there are nonnegative real numbers s 1 and s 2 such that g V = s 1 f and h V = s 2 f . As a ray of C * , f = 0. There is some b 0 − d 0 ∈ V such that f (b 0 − d 0 ) = 0. Thus g V (1 S + a) * b 0 − (1 S + a) * d 0 = f (b 0 − d 0 ) = 0 and f V (1 S + a) * a −1 * b 0 − (1 S + a) * a −1 * d 0 = f (b 0 − d 0 ) = 0, which means g V = 0 and h V = 0. Hence s 1 , s 2 ∈ R >0 . Therefore, we have h V = γg V for the positive number γ = s 2 /s 1 and h V (w − 0 S ) = γg V (w − 0 S ) for any w ∈ F , i.e., f F a * (1 S + a) −1 * w = γf F (1 S + a) −1 * w .(43) Replacing (1 S + a) −1 * w by w in (43) one obtains f F (a * w) = γf F (w)(44) for any w ∈ F (with γ in (44) depending on a). Taking w = 1 S in (44), we have f F (a) = γf F (1 S ).(45) Now 0 = f (b 0 −d 0 ) = f F (b 0 )−f F (d 0 ) indicates that f F (b 0 ) = 0 or f F (d 0 ) = 0. We may assume f F (b 0 ) = 0, then clearly b 0 = 0 S . Taking a = b 0 in (45), we have f F (b 0 ) = γf F (1 S ). Since f F (b 0 ) = 0, we obtain f F (1 S ) = 0. In fact, f F (1 S ) > 0 since f ∈ C * . From (45) we see that γ = f F (a)/f F (1 S ). Substituting that back to (44), we have f F (a * w) = f F (a) f F (1 S ) f F (w)(46) for any w ∈ F and any nonzero a ∈ F . Indeed, when a = 0 S , the identity (46) also holds: both of its two sides equal 0. Definef (c) = 1 f F (1 S ) f F (c) for any c ∈ S, then the identity (46) indicates thatf : S → R + preserves the multiplications. The facts that it also preserves the additions and the scalar multiplications, and that it is monotone are straightforward. Thus,f is a monotone semialgebra homomorphism. The following lemma (also claimed in [3]) is needed in the proof above. Lemma 6.5. For any nonzero t ∈ F , the map given in (42) is well-defined and continuous w.r.t. the topology in V . Proof. Suppose that b − d = b ′ − d ′ in V for some b, d, b ′ , d ′ in F . Then there is e ∈ F such that b + d ′ + e = b ′ + d + e in F . Hence t * b + t * d ′ + t * e = t * b ′ + t * d + t * e, which means t * b − t * d = t * b ′ − t * d ′ in V . Therefore, the map t * is well-defined. The map t * is clearly linear. To prove the continuity, it is sufficient to show that it is continuous at the point 0 V = 0 S − 0 S : Let k ∈ N such that u k t. For any ǫ > 0, we set δ = (min{ǫ, 1}) k+1 . 
Then, for any b ∈ N_δ, we have b ≼ Σ_{j=0}^{m} δ^{j+1} · u^j for some m ∈ N. On the other hand,

t * b ≼ u^k * (Σ_{j=0}^{m} δ^{j+1} · u^j) = Σ_{j=0}^{m} δ^{j+1} · u^{j+k} ≼ Σ_{j=0}^{m} (min{ε, 1})^{k+j+1} · u^{j+k} ≼ Σ_{j=0}^{m+k} (min{ε, 1})^{j+1} · u^j,

which means t * b ∈ N_{min{ε,1}} ⊂ N_ε. Therefore, for any b, d ∈ N_δ, we have t * b − t * d ∈ N_ε − N_ε. That is, t* is continuous at 0_V.

Remark 6.6. Theorem 1.4 is a noncommutative counterpart of Theorem 2.12 in [3], although in the noncommutative case an equivalent characterization of the preorder relation ≼ via the original relation ≤, similar to the one in (20), is presently not available.

Conclusion

We provide a noncommutative version of the Vergleichsstellensatz proved by Fritz in [3]. This is the first Vergleichsstellensatz for noncommutative semialgebras in the literature. It characterizes the relaxed preorder induced by monotone homomorphisms to R_+ by "asymptotic" inequalities in the semialgebra of the fractions. Finding out how it can be applied to other areas, such as probability theory and quantum information theory, or how it is related to classical noncommutative real algebraic geometry, would be an interesting topic for future work. As a byproduct, we provide a particular method to define the semialgebra of the fractions of a noncommutative semialgebra which generalizes the corresponding concept in the commutative case. One can also apply this method to noncommutative semirings, by discarding those expressions that contain (formal or non-formal) scalar multiplications in the definition.

A. Two lemmas needed in the proof of Proposition 2.11

The first lemma below characterizes when an expression in Q lies in O.

Lemma A.1. If w is an expression in Q, then w ∈ O if and only if w_S = 0_S.

Proof. "If": We prove inductively on the number ℓ of operations that occur in the expression w. When ℓ = 0, w ∈ W_0 = S; thus w = w_S. But w_S = 0_S, so w = 0_S ∈ O. Assume that for any w ∈ Q with the number of operations less than k (k ≥ 1), it holds that w_S = 0_S implies w ∈ O. We then prove that this also holds for those expressions w ∈ Q containing k operations. Since w ∈ Q does not contain any formal inversion, by the definition of W_{i+1} in equation (6), w is of the form w^{(1)} ⊕ w^{(2)}, w^{(1)} ⊛ w^{(2)} or r ⊙ w^{(1)}, with w^{(1)}, w^{(2)} ∈ W and r ∈ R_+. These cases are discussed separately:

If w = w^{(1)} ⊕ w^{(2)}, then the number of operations contained in w^{(j)} (j = 1, 2) is less than k. Moreover, w^{(1)}, w^{(2)} ∈ Q and w_S = w^{(1)}_S + w^{(2)}_S. Since w_S = 0_S, by the assumption that the semialgebra S is zero-sum-free, we see that w^{(1)}_S = w^{(2)}_S = 0_S. By the inductive assumption, we have w^{(1)}, w^{(2)} ∈ O. Suppose that w^{(1)} ∈ O_m and w^{(2)} ∈ O_n; then w^{(1)}, w^{(2)} ∈ O_{max{m,n}}. Thus we have w = w^{(1)} ⊕ w^{(2)} ∈ O_{max{m,n}+1} ⊂ O.

The case where w = w^{(1)} ⊛ w^{(2)} is similar to the above one, except that in this case we have w_S = w^{(1)}_S * w^{(2)}_S, which implies w^{(1)}_S = 0_S or w^{(2)}_S = 0_S via the assumption that S is zero-divisor-free. Again, by the inductive assumption, w^{(1)} ∈ O or w^{(2)} ∈ O. Hence w ∈ O_{n+1} if w^{(1)} ∈ O_n or w^{(2)} ∈ O_n.

Assume that w = r ⊙ w^{(1)}. If r = 0_R, then w ∈ O_{n+1} whenever w^{(1)} ∈ W_n for some n ∈ N. Suppose that r is a positive real number; then 0_S = w_S = r · w^{(1)}_S implies that w^{(1)}_S = 0_S, by the assumption on S. Hence the inductive assumption indicates that w^{(1)} ∈ O. Thus w ∈ O_{m+1} whenever w^{(1)} ∈ O_m for some m ∈ N.

"Only if": If w ∈ O_0, then w = 0_S; clearly, w_S = 0_S. Assume that for any 0 ≤ i < k (k ≥ 1), w ∈ O_i implies w_S = 0_S.
We then prove that w ∈ O_k also implies w_S = 0_S. If w ∈ O_{k-1}, then w_S = 0_S follows from the assumption. If w ∈ O_{k-1} ⊕ O_{k-1}, then w = w^{(1)} ⊕ w^{(2)} with w^{(1)}, w^{(2)} ∈ O_{k-1}; thus w_S = w^{(1)}_S + w^{(2)}_S = 0_S + 0_S = 0_S. If w ∈ O_{k-1} ⊛ W_{k-1}, then w = w^{(1)} ⊛ a with w^{(1)} ∈ O_{k-1} and a ∈ W_{k-1}; we have w_S = w^{(1)}_S * a_S = 0_S * a_S = 0_S. When w ∈ W_{k-1} ⊛ O_{k-1}, the proof is similar. It is straightforward to verify the remaining cases, with w ∈ {0_R} ⊙ W_{k-1} and with w ∈ R_+ ⊙ O_{k-1}.

The following lemma indicates that the set O is R-saturated.

Lemma A.2. Let the sequence {R_i} be as in Proposition 2.10. For any i ∈ N and any x ∈ W, if there is an expression y ∈ O such that (x, y) or (y, x) is in R_i, then x ∈ O.

Proof. We prove inductively on the index i. When i = 0, it is sufficient to show that if (a, b) ∈ R_0, then a ∈ O iff b ∈ O. Since R_0 = (∪_{j=1}^{6} A_j) ∪ {(a, a) | a ∈ W}, we need to discuss several cases separately. We need two obvious observations: i) 0_S ∈ O; ii) 1_S ∉ O. The latter is due to the assumption that 1_S ≠ 0_S in the semialgebra S (hence 1_S ∉ {0_S} = O_0) and the fact that 1_S ∉ O_ℓ for any integer ℓ > 0.

Using the rules for the set O in Definition 2.3 several times, one proves "a ∈ O iff b ∈ O" in each case where the pair (a, b) admits a particular form given by the elements of the set A_1 ∪ A_2 ∪ A_3. For example, if (a, b) ∈ A_1 and (a, b) = ((b̂ ⊕ ĉ) ⊛ â, (b̂ ⊛ â) ⊕ (ĉ ⊛ â)), then a ∈ O implies â ∈ O or b̂ ⊕ ĉ ∈ O. If â ∈ O, then both b̂ ⊛ â and ĉ ⊛ â are in O; hence b = (b̂ ⊛ â) ⊕ (ĉ ⊛ â) ∈ O. If b̂ ⊕ ĉ ∈ O, then b̂, ĉ ∈ O; thus b̂ ⊛ â, ĉ ⊛ â ∈ O and also b ∈ O. Conversely, b ∈ O implies b̂ ⊛ â, ĉ ⊛ â ∈ O, so either â ∈ O, or â ∉ O and b̂, ĉ ∈ O. If â ∈ O, then a = (b̂ ⊕ ĉ) ⊛ â ∈ O. If both b̂ and ĉ are in O, then b̂ ⊕ ĉ ∈ O, and thus a = (b̂ ⊕ ĉ) ⊛ â ∈ O.

This is another example: set (a, b) ∈ A_2 with (a, b) = (1_S ⊛ â, â). If â ∈ O, then 1_S ⊛ â ∈ O. If 1_S ⊛ â ∈ O, then we have â ∈ O, since 1_S ∉ O. The remaining cases that may occur when (a, b) ∈ A_1 ∪ A_2 ∪ A_3 can be dealt with similarly; hence we omit the discussion about them.

When (a, b) ∈ A_4, one finds the first rule in v) of Definition 2.3 useful. For example, if (a, b) = ((â ⊛ b̂)^{-1}, b̂^{-1} ⊛ â^{-1}) for some â, b̂ ∈ W\O, then since a = (â ⊛ b̂)^{-1} ∉ O, it is sufficient to prove that b̂^{-1} ⊛ â^{-1} ∉ O. Assume on the contrary that b̂^{-1} ⊛ â^{-1} ∈ O; then b̂^{-1} ∈ O or â^{-1} ∈ O, and both of these contradict that rule. As usual, we omit the discussion about the other cases that may occur when (a, b) ∈ A_4.

If (a, b) ∈ A_5, then a, b ∈ Q and a_S = b_S. By Lemma A.1, it is sufficient to show that a_S = 0_S if and only if b_S = 0_S; but this is clear since a_S = b_S. The cases with (a, b) ∈ A_6 ∪ {(a, a) | a ∈ W} are trivial. Thus the lemma holds for i = 0.

The inductive step: Now assume that the lemma holds for 0 ≤ i < k; we show that it also holds for i = k. For (x, y) ∈ R_k or (y, x) ∈ R_k, we consider first the case (x, y) ∈ R_k. By the definition (12) of R_k, there are several sub-cases. If (x, y) ∈ R_{k-1}, then x ∈ O by the inductive assumption. If (x, y) ∈ R_{k-1} ⊕ R_{k-1}, then x = x_1 ⊕ x_2 and y = y_1 ⊕ y_2 with (x_1, y_1), (x_2, y_2) ∈ R_{k-1}. Since y ∈ O, we have y_1, y_2 ∈ O. By the inductive assumption, x_1, x_2 ∈ O; thus x = x_1 ⊕ x_2 ∈ O. If (x, y) ∈ R_{k-1} ⊛ R_{k-1}, then x = x_1 ⊛ x_2 and y = y_1 ⊛ y_2 with (x_1, y_1), (x_2, y_2) ∈ R_{k-1}. Hence y ∈ O implies y_1 ∈ O or y_2 ∈ O. By the inductive assumption, x_1 ∈ O or x_2 ∈ O; thus x = x_1 ⊛ x_2 ∈ O. If (x, y) ∈ R_+ ⊙ R_{k-1}, then x = r ⊙ x_1 and y = r ⊙ y_1 with (x_1, y_1) ∈ R_{k-1}. When r = 0_R, x = r ⊙ x_1 ∈ O. When r is a positive real number, y = r ⊙ y_1 ∈ O implies y_1 ∈ O, which, by the inductive assumption, indicates that x_1 ∈ O; again, x = r ⊙ x_1 ∈ O. If (x, y) ∈ rev(R_{k-1}), that is, (y, x) ∈ R_{k-1}, then x ∈ O by the inductive assumption. Finally, if there is z ∈ W such that both (x, z) and (z, y) are in R_{k-1}, then, applying the inductive assumption to the pair (z, y), we have z ∈ O; applying it again to the pair (x, z), we conclude that x ∈ O. By now the proof of the "(x, y) ∈ R_k" case is done. The proof of the "(y, x) ∈ R_k" case is similar; hence we omit it.
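Lemma A.1, together with the remark below that membership in W or O is algorithmically decidable, suggests a simple recursive test. The sketch below (ours, not the paper's code) mirrors, on the toy Expr trees from Section 2, the closure rules for O described by equation (7) and rule v) of Definition 2.3; it is a plausible reading of those rules under the stated toy model.

```python
def in_O(w: Expr) -> bool:
    """Recursive membership test for O (the formal 'zeros'): a sum is in O
    iff both summands are; a product iff some factor is; a scaling iff the
    scalar is 0 or the argument is; an inversion never is (rule v))."""
    if isinstance(w, Add):
        return in_O(w.left) and in_O(w.right)
    if isinstance(w, Mul):
        return in_O(w.left) or in_O(w.right)
    if isinstance(w, Scale):
        return w.r == 0 or in_O(w.arg)
    if isinstance(w, Inv):
        return False              # expressions of the form a^{-1} are never in O
    return w == 0                 # leaf: only 0_S itself is in O_0

assert in_O(Scale(0.0, Add(1.0, 2.0)))   # 0 (.) (1 (+) 2) is a formal zero
assert not in_O(Add(0.0, 1.0))           # 0 (+) 1 is not
```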
B. Comparing definitions of semialgebras of the fractions in noncommutative and commutative cases

For a commutative zero-divisor-free semiring K, the semiring of its fractions is usually defined to be the quotient

K_fr = (K × (K\{0_K})) / ∼, (47)

with ∼ an equivalence relation on the set K × (K\{0_K}) such that (a_1, a_2) ∼ (b_1, b_2) if and only if there is t ∈ K\{0_K} such that a_1 * b_2 * t = a_2 * b_1 * t in K. An element of the set K_fr, for instance the equivalence class containing the pair (a_1, a_2) ∈ K × (K\{0_K}), is denoted by a_1/a_2. The addition and multiplication are defined as usual, and an element a_1/a_2 is invertible if and only if a_1 ≠ 0_K. This definition is equivalent to the one given in [1, Example 11.7].

It can be generalized to the semialgebra case: if K is a commutative zero-divisor-free semialgebra, then we can define the semialgebra of its fractions K_fr as in equation (47), with the same equivalence relation ∼. Using the same notation, we define the (nonnegative) scalar multiplication by r · (a_1/a_2) = (r · a_1)/a_2. Then one verifies without difficulty that this K_fr is indeed a semialgebra. We prove that when the semialgebra S (satisfying Assumption 1.3) is commutative, the semialgebra F of its fractions defined in Subsection 2.3 is isomorphic to the semialgebra S_fr = (S × (S\{0_S})) / ∼. We need the following result in the proof of Proposition B.4.

Proposition B.1. If S is commutative, so is the semialgebra F of its fractions.

Proof. It is sufficient to show that for any a, b ∈ W, (a ⊛ b, b ⊛ a) ∈ R. We prove this inductively on the number of operations in the expression a ⊛ b. In the following, we write w_1 ∼_R w_2 for (w_1, w_2) ∈ R.

If a ⊛ b contains only one operation, i.e., a, b ∈ W_0 = S, then both a ⊛ b and b ⊛ a are in Q and (a ⊛ b)_S = a_S * b_S = b_S * a_S = (b ⊛ a)_S. Thus (a ⊛ b, b ⊛ a) ∈ A_5 ⊂ R_0 ⊂ R. Now we assume that (c ⊛ d, d ⊛ c) ∈ R for any expression c ⊛ d which contains fewer operations than a ⊛ b does, and then prove that (a ⊛ b, b ⊛ a) ∈ R. We may assume that a ⊛ b contains at least two operations; that means a or b contains at least one operation. Without loss of generality, we further assume that a contains at least one operation. Hence a = x ⊕ y, x ⊛ y, r ⊙ x or z^{-1} for some x, y ∈ W, z ∈ W\O and r ∈ R_+.

If a = x ⊕ y, then a ⊛ b = (x ⊕ y) ⊛ b ∼_R (x ⊛ b) ⊕ (y ⊛ b) ∼_R (b ⊛ x) ⊕ (b ⊛ y) ∼_R b ⊛ (x ⊕ y) ∼_R b ⊛ a. The second "∼_R" is due to the inductive assumption (note that both x ⊛ b and y ⊛ b contain fewer operations than a ⊛ b does), while the other ones hold by the definition of R.

Similarly, if a = x ⊛ y, then a ⊛ b = (x ⊛ y) ⊛ b ∼_R x ⊛ (y ⊛ b) ∼_R x ⊛ (b ⊛ y) ∼_R (x ⊛ b) ⊛ y ∼_R (b ⊛ x) ⊛ y ∼_R b ⊛ (x ⊛ y) = b ⊛ a.

If a = r ⊙ x, then a ⊛ b = (r ⊙ x) ⊛ b ∼_R r ⊙ (x ⊛ b) ∼_R r ⊙ (b ⊛ x) ∼_R b ⊛ (r ⊙ x) = b ⊛ a.

If a = z^{-1} for some z ∈ W\O, then z ⊛ b has fewer operations than a ⊛ b does. Hence z ⊛ b ∼_R b ⊛ z, namely z̄ * b̄ = b̄ * z̄. Thus z̄^{-1} * z̄ * b̄ * z̄^{-1} = z̄^{-1} * b̄ * z̄ * z̄^{-1}, which gives b̄ * z̄^{-1} = z̄^{-1} * b̄. This means (z^{-1} ⊛ b, b ⊛ z^{-1}) ∈ R, and the inductive proof is complete.
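Here is a quick sketch (ours) of the commutative construction (47) with K = N, recovering Q_+ as in Example 2.1. Pairs (a_1, a_2) stand for a_1/a_2; since N is cancellative, the witness t in the definition of ∼ can be dropped, leaving the familiar cross-product test. All names are hypothetical.

```python
# Semiring of fractions of the commutative semiring K = N (equation (47)).
# A pair (a1, a2) with a2 != 0 represents the fraction a1/a2.

def frac_equal(p: tuple[int, int], q: tuple[int, int]) -> bool:
    """(a1, a2) ~ (b1, b2) iff a1*b2*t = a2*b1*t for some nonzero t;
    over N the witness t cancels, leaving the cross-product test."""
    (a1, a2), (b1, b2) = p, q
    assert a2 != 0 and b2 != 0
    return a1 * b2 == a2 * b1

def frac_add(p, q):
    (a1, a2), (b1, b2) = p, q
    return (a1 * b2 + a2 * b1, a2 * b2)   # usual common-denominator sum

def frac_mul(p, q):
    (a1, a2), (b1, b2) = p, q
    return (a1 * b1, a2 * b2)

def frac_inv(p):
    a1, a2 = p
    assert a1 != 0                        # only fractions with a1 != 0_K invert
    return (a2, a1)

# 2/4 equals 1/2, and (1/2) * (2/3) = 1/3:
assert frac_equal((2, 4), (1, 2))
assert frac_equal(frac_mul((1, 2), (2, 3)), (1, 3))
```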
The following lemma defines a map (which is key to the proof of Proposition B.4) from W to S_fr when S is commutative.

Lemma B.2. The recursive assignment below for G makes it a well-defined map from W to S_fr:
1) For any x ∈ W_0 = S, set G(x) = x/1_S ∈ S_fr;
2) Suppose that x ∈ W_ℓ for a minimal ℓ ∈ N with ℓ > 0; then set

G(x) = G(a) + G(b), if x = a ⊕ b ∈ W_{ℓ-1} ⊕ W_{ℓ-1};
G(x) = G(a) * G(b), if x = a ⊛ b ∈ W_{ℓ-1} ⊛ W_{ℓ-1};
G(x) = r · G(a), if x = r ⊙ a ∈ R_+ ⊙ W_{ℓ-1};
G(x) = (G(a))^{-1}, if x = a^{-1} ∈ (W_{ℓ-1}\O_{ℓ-1})^{-1},

with "+", "*", "·" and "^{-1}" the operations in S_fr.

Proof. It is sufficient to prove (inductively on the index i) the following statement: for any i ∈ N and any x ∈ W_i, G(x) is well-defined, and G(x) is invertible in S_fr whenever x ∉ O_i. The "i = 0" case is true. We then assume that this statement holds for i = k − 1 and prove it for i = k. Now that x ∈ W_k, there are several sub-cases:

If x ∈ W_{k-1}, then G(x) is well-defined. When x ∉ O_k, we also have x ∉ O_{k-1}; hence G(x) is invertible.

If x ∉ W_{k-1} but x = a ⊕ b ∈ W_{k-1} ⊕ W_{k-1}, then k is the minimal natural number such that x ∈ W_k. Hence G(x) = G(a) + G(b) for some a, b ∈ W_{k-1}, and G(x) is well-defined. When x = a ⊕ b ∉ O_k, at least one of a and b is not in O_{k-1}. Hence G(a) = a_1/a_2 and G(b) = b_1/b_2 with a_2, b_2 nonzero and one of a_1 and b_1 nonzero. Thus a_1 * b_2 + a_2 * b_1 ≠ 0_S follows from the assumption that S is zero-divisor-free and zero-sum-free, and G(x) = (a_1 * b_2 + a_2 * b_1)/(a_2 * b_2) is invertible.

If x ∉ W_{k-1} but x = a ⊛ b ∈ W_{k-1} ⊛ W_{k-1}, then G(x) = G(a) * G(b) is well-defined. When x = a ⊛ b ∉ O_k, neither a nor b is in O_{k-1}. Hence G(a) = a_1/a_2 and G(b) = b_1/b_2 with none of a_1, a_2, b_1 and b_2 equal to 0_S. Thus a_1 * b_1 ≠ 0_S, and G(x) = (a_1 * b_1)/(a_2 * b_2) is invertible.

If x ∉ W_{k-1} but x = r ⊙ a ∈ R_+ ⊙ W_{k-1}, then G(x) = r · G(a) is well-defined. When x = r ⊙ a ∉ O_k, r ≠ 0 and a ∉ O_{k-1}. Hence G(a) = a_1/a_2 with a_1 ≠ 0_S, and G(x) = (r · a_1)/a_2 is invertible.

If x ∉ W_{k-1} but x = a^{-1} ∈ (W_{k-1}\O_{k-1})^{-1}, then G(a) is well-defined and invertible. Thus G(x) = (G(a))^{-1} is also well-defined, and clearly G(x) is invertible.

The lemma below defines a push-forward of the map G from W to the semialgebra F of the fractions of S.

Lemma B.3. Let the sequence {R_i} be as in Proposition 2.10 and set x, y ∈ W. If (x, y) ∈ R, then G(x) = G(y). That is, x̄ = ȳ implies G(x) = G(y).

Proof. For (x, y) ∈ R_0, we prove that G(x) = G(y). From the definition of G it follows that for any a, b ∈ W, c ∈ W\O and r ∈ R_+,

G(a ⊕ b) = G(a) + G(b),  G(a ⊛ b) = G(a) * G(b),  G(r ⊙ a) = r · G(a),  G(c^{-1}) = (G(c))^{-1}. (48)

If (x, y) ∈ ∪_{i=1}^{4} A_i, then G(x) = G(y) follows from the above identities. If (x, y) ∈ A_5, i.e., x, y ∈ Q and x_S = y_S, then G(x) = x_S/1_S = y_S/1_S = G(y) (by using the identities above recursively, one sees that G(a) = a_S/1_S for any a ∈ Q). Thus G(x) = G(y) for any (x, y) ∈ R_0. We then assume that G(x) = G(y) for any (x, y) ∈ R_k, and prove this identity for R_{k+1}: if (x, y) ∈ R_k ∪ rev(R_k), then G(x) = G(y) follows directly from the assumption.

By Lemma B.3, the push-forward Ḡ : F → S_fr, w̄ ↦ G(w), is well-defined. If there is a semialgebra isomorphism from E to K, we say they are isomorphic to each other. We will see that Ḡ is a semialgebra isomorphism:

Proposition B.4. Define H : S_fr → F, a/b ↦ ā * b̄^{-1}, and let Ḡ be as above. Then H and Ḡ are semialgebra isomorphisms which are also compatible with the inversions, such that Ḡ ∘ H = id_{S_fr} and H ∘ Ḡ = id_F.

Proof. The map H is well-defined: for any a/b = c/d in S_fr, there is a nonzero t ∈ S so that a * d * t = b * c * t. Hence (a ⊛ d) ⊛ t ∼_R (b ⊛ c) ⊛ t, and therefore (ā * d̄) * t̄ = (b̄ * c̄) * t̄ in F. Since none of b, d, t is in O, their corresponding equivalence classes are invertible in F. Because F is commutative, we obtain ā * b̄^{-1} = c̄ * d̄^{-1}. Using, if necessary, the fact that F is commutative, one verifies directly that H is compatible with the additions, the multiplications, the scalar multiplications, and the inversions in S_fr and F. Moreover, H(0_S/1_S) = 0̄_S and H(1_S/1_S) = 1̄_S. Thus H is a semialgebra homomorphism compatible with the inversions.
Noting that, for any a, b ∈ W, Ḡ(ā + b̄) = Ḡ(\overline{a ⊕ b}) = G(a ⊕ b), and that G(a ⊕ b) = G(a) + G(b) = Ḡ(ā) + Ḡ(b̄) by equations (48), we see that Ḡ is compatible with the additions in F and S_fr. Similarly, we conclude that Ḡ is also compatible with the other three pairs of operations in F and S_fr. Moreover, Ḡ(0̄_S) = G(0_S) = 0_S/1_S and Ḡ(1̄_S) = G(1_S) = 1_S/1_S. Hence Ḡ is a semialgebra homomorphism compatible with the inversions.

Finally, since Ḡ(H(a/b)) = Ḡ(ā * b̄^{-1}) = Ḡ(\overline{a ⊛ b^{-1}}) = G(a ⊛ b^{-1}) = G(a) * (G(b))^{-1} = (a/1_S) * (1_S/b) = a/b for any (a, b) ∈ S × (S\{0_S}), we have Ḡ ∘ H = id_{S_fr}. To prove H ∘ Ḡ = id_F, it is sufficient to show that for any i ∈ N and any w ∈ W_i, it holds that H(G(w)) = w̄. This can be done inductively on the index i. When i = 0, w ∈ S; thus H(G(w)) = H(w/1_S) = w̄ * (1̄_S)^{-1} = w̄. We suppose that this claim holds for i = k and then prove it for i = k + 1. If w = a ⊕ b ∈ W_k ⊕ W_k with a, b ∈ W_k, set G(a) = a_1/a_2 and G(b) = b_1/b_2 for some (a_1, a_2), (b_1, b_2) ∈ S × (S\{0_S}); then we have

H(G(w)) = H(a_1/a_2 + b_1/b_2) = H((a_1 * b_2 + a_2 * b_1)/(a_2 * b_2)) = \overline{a_1 * b_2 + a_2 * b_1} * (\overline{a_2 * b_2})^{-1} = ā_1 * ā_2^{-1} + b̄_1 * b̄_2^{-1} = H(a_1/a_2) + H(b_1/b_2) = H(G(a)) + H(G(b)) = ā + b̄ = w̄,

where the second-to-last equality is due to the inductive assumption. The remaining cases, where w ∈ W_k ⊛ W_k, R_+ ⊙ W_k or (W_k\O_k)^{-1}, can be dealt with in a similar way. Hence H(G(w)) = w̄ for any w ∈ W, and therefore H ∘ Ḡ = id_F. Thus both H and Ḡ are semialgebra isomorphisms.

Example 2.1. To define Q_+ from N, there are 3 steps: i) Define the set of formal expressions U = {n ⊘ m | n, m ∈ N}, with ⊘ the formal division.
Using the rules for the set O in Definition 2.3 several times, one proves "a ∈ O iff b ∈ O" in each case where the pair (a, b) admits a particular form given by the elements of the set A_1 ∪ A_2 ∪ A_3. For example, if (a, b) ∈ A_1 and (a, b) = ((b̂ ⊕ ĉ) ⊛ â, (b̂ ⊛ â) ⊕ (ĉ ⊛ â)), then a ∈ O implies â ∈ O or b̂ ⊕ ĉ ∈ O. If â ∈ O, then both b̂ ⊛ â and ĉ ⊛ â are in O; hence b = (b̂ ⊛ â) ⊕ (ĉ ⊛ â) ∈ O. If b̂ ⊕ ĉ ∈ O, then b̂, ĉ ∈ O; thus b̂ ⊛ â, ĉ ⊛ â ∈ O and also b ∈ O. Conversely, b ∈ O implies b̂ ⊛ â, ĉ ⊛ â ∈ O. So â ∈ O, or â ∉ O and b̂, ĉ ∈ O. If â ∈ O, then a = (b̂ ⊕ ĉ) ⊛ â ∈ O. If both b̂ and ĉ are in O, then so is b̂ ⊕ ĉ; thus a = (b̂ ⊕ ĉ) ⊛ â ∈ O.

The condition a^{-1} ∉ O in Rule v) is quite natural, since anything of the form a^{-1} cannot be zero. Definition 2.3 does uniquely characterize two subsets of U, since there is an algorithm deciding whether a given expression a ∈ U is in W (or O) or not. Some examples are sufficient for the reader to see this: Example 2.4. Let s_i be some nonzero elements of S. Then we have …

Acknowledgement. We thank Tobias Fritz for the useful discussion and suggestions, which have helped the authors improve the paper a lot. This research is supported by the National Key Research Project of China 2018YFA0306702 and the National Natural Science Foundation of China 12071467.

The following proposition characterizes the derived preorder relation in the commutative case (see condition (20)) recursively:

Proposition C.1. Let K be a commutative zero-divisor-free semialgebra with preorder relation "≤" (such that 0_K ≤ 1_K) and the derived preorder relation "⪯" on the semialgebra K_fr of its fractions as in (20). Then for any g, h ∈ K_fr, g ⪯ h iff the pair (g, h) is in the set Γ = ∪_{i∈N} Γ_i.

Proof. "If": It is clear that every pair (g, h) in Γ_0 satisfies g ⪯ h by the condition (20). Noting that the preorder relation defined in condition (20) satisfies the implications in (1) and (2) (since the preorder relation "≤" does), that it is transitive, and that every pair (g, h) ∈ Γ_0 satisfies g ⪯ h, we see that every pair (g, h) ∈ Γ_1 also satisfies g ⪯ h. By induction on the index i of Γ_i, we see that every pair (g, h) ∈ Γ also satisfies g ⪯ h.

"Only if": Suppose that g = x/a ⪯ h = y/b for some x, y ∈ K and some a, b ∈ K\{0_K}; we need to show that (x/a, y/b) ∈ Γ. Since x/a ⪯ y/b, by condition (20) there is t ∈ K\{0_K} such that x * b * t ≤ y * a * t in K. Hence (x*b*t/1_K, y*a*t/1_K) ∈ Γ_0, thus (x/a, y/b) = (1_K/(a*b*t)) * (x*b*t/1_K, y*a*t/1_K) ∈ Γ_1 ⊂ Γ.

Note that the recursive definition of Γ above is of the same form as the equations in (19) once we identify the semialgebra F of the fractions of S
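To make the fraction construction concrete, the following is a small illustrative sketch in Python; it is our own addition, not part of the paper. It realizes the formal expressions n ⊘ m of Example 2.1 over the commutative semiring N, together with equality and the derived preorder of condition (20). The names Frac and le are ours; since N is cancellative, the witness t in both conditions can always be taken to be 1.

```python
# Illustrative sketch (not from the paper): formal fractions n ⊘ m over N,
# with equality and the derived preorder of condition (20).

class Frac:
    """A formal expression n ⊘ m with n, m in N and m nonzero."""

    def __init__(self, num: int, den: int):
        if den == 0:
            raise ValueError("denominator must be nonzero")
        self.num, self.den = num, den

    # n ⊘ m equals p ⊘ q iff there is a nonzero t with n*q*t = m*p*t;
    # over the cancellative semiring N, t = 1 suffices.
    def __eq__(self, other: "Frac") -> bool:
        return self.num * other.den == self.den * other.num

    def __add__(self, other: "Frac") -> "Frac":
        # (n ⊘ m) + (p ⊘ q) = (n*q + m*p) ⊘ (m*q)
        return Frac(self.num * other.den + self.den * other.num,
                    self.den * other.den)

    def __mul__(self, other: "Frac") -> "Frac":
        # (n ⊘ m) * (p ⊘ q) = (n*p) ⊘ (m*q)
        return Frac(self.num * other.num, self.den * other.den)


def le(g: Frac, h: Frac) -> bool:
    """Derived preorder of condition (20): x/a ⪯ y/b iff there is a
    nonzero t with x*b*t <= y*a*t; over N, t = 1 again suffices."""
    return g.num * h.den <= h.num * g.den


# 2 ⊘ 4 and 1 ⊘ 2 are identified, and 1⊘2 + 1⊘3 = 5⊘6:
assert Frac(2, 4) == Frac(1, 2)
assert Frac(1, 2) + Frac(1, 3) == Frac(5, 6)
assert le(Frac(1, 3), Frac(1, 2))
```

For a general zero-divisor-free semialgebra the witness t cannot be dropped, and the scalar multiplication would be added in the same fashion, sending r ⊙ (n ⊘ m) to (r · n) ⊘ m, consistent with G(r ⊙ a) = r · G(a) above.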
J. S. Golan, Semirings and their Applications, Springer Science & Business Media, 1999.
T. Fritz, Abstract Vergleichsstellensätze for preordered semifields and semirings II, arXiv preprint arXiv:2112.05949v2 (2021).
T. Fritz, A generalization of Strassen's Positivstellensatz, Communications in Algebra 49 (2) (2021) 482-499.
V. Strassen, The asymptotic spectrum of tensors, Journal für die reine und angewandte Mathematik 384 (1988) 102-152.
J. Zuiddam, The asymptotic spectrum of graphs and the Shannon capacity, Combinatorica 39 (5) (2019) 1173-1184.
T. Fritz, Abstract Vergleichsstellensätze for preordered semifields and semirings I, arXiv preprint arXiv:2003.13835v3 (2021).
P. Vrana, A generalization of Strassen's theorem on preordered semirings, Order (2021) 1-20.
Y. Li, J. Zuiddam, Quantum asymptotic spectra of graphs and noncommutative graphs, and quantum Shannon capacities, IEEE Transactions on Information Theory 67 (1) (2020) 416-432.
C. Perry, P. Vrana, A. H. Werner, The semiring of dichotomies and asymptotic relative submajorization, IEEE Transactions on Information Theory 68 (1) (2021) 311-321.
G. Bunth, P. Vrana, Asymptotic relative submajorization of multiple-state boxes, Letters in Mathematical Physics 111 (4) (2021) 1-23.
R. Robere, J. Zuiddam, Amortized circuit complexity, formal complexity measures, and catalytic algorithms, in: 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), IEEE, 2022, pp. 759-769.
T. Fritz, The asymptotic comparison of random walks on topological abelian groups, arXiv preprint arXiv:2004.13655v2 (2021).
P. M. Cohn, Skew fields: Theory of general division rings, Cambridge University Press, 1995.
M. Karoubi, K-theory: An introduction, Springer, Berlin, Heidelberg, 1978.
W. Rudin, Functional analysis, 2nd ed., International Series in Pure and Applied Mathematics, McGraw-Hill, Inc., New York (1991).
[]
[ "Artificial prediction markets present a novel op-portunity for human-AI collaboration", "Artificial prediction markets present a novel op-portunity for human-AI collaboration" ]
[ "Tatiana Chakravorti ", "Vaibhav Singh ", "Sarah Rajtmajer ", "Michael Mclaughlin ", "Robert Fraleigh ", "Christopher Griffin ", "Anthony Kwasnica ", "David Pennock [email protected] ", "C Lee Giles ", "Tatiana Chakravorti ", "Vaibhav Singh ", "Sarah Rajtmajer ", "Michael Mclaughlin ", "Robert Fraleigh ", "Christopher Griffin ", "Anthony Kwasnica ", "David Pennock ", "C Lee Giles ", "\nPennsylvania State University State College\nUSA\n", "\nPennsylvania State University State College\nUSA\n", "\nPennsylvania State University State College\nUSA\n", "\nPennsylvania State University State College\nUSA\n", "\nPennsylvania State University State College\nUSA\n", "\nPennsylvania State University State College\nUSA\n", "\nPennsylvania State University State College\nUSA\n", "\nRutgers University New Jersey\nUSA\n", "\nPennsylvania State University State College\nUSA\n" ]
[ "Pennsylvania State University State College\nUSA", "Pennsylvania State University State College\nUSA", "Pennsylvania State University State College\nUSA", "Pennsylvania State University State College\nUSA", "Pennsylvania State University State College\nUSA", "Pennsylvania State University State College\nUSA", "Pennsylvania State University State College\nUSA", "Rutgers University New Jersey\nUSA", "Pennsylvania State University State College\nUSA" ]
[ "IFAAMAS" ]
Despite high-profile successes in the field of Artificial Intelligence, machine-driven technologies still suffer important limitations, particularly for complex tasks where creativity, planning, common sense, intuition, or learning from limited data is required. These limitations motivate effective methods for human-machine collaboration. Our work makes two primary contributions. We thoroughly experiment with an artificial prediction market model to understand the effects of market parameters on model performance for benchmark classification tasks. We then demonstrate, through simulation, the impact of exogenous agents in the market, where these exogenous agents represent primitive human behaviors. This work lays the foundation for a novel set of hybrid human-AI machine learning algorithms.
10.5555/3545946.3598915
[ "https://export.arxiv.org/pdf/2211.16590v1.pdf" ]
254,095,953
2211.16590
701129fbb7ae40a02611787048cddcabaf2a9217
Artificial prediction markets present a novel opportunity for human-AI collaboration

Tatiana Chakravorti, Vaibhav Singh, Sarah Rajtmajer, Michael Mclaughlin, Robert Fraleigh, Christopher Griffin, Anthony Kwasnica (Pennsylvania State University, State College, USA), David Pennock (Rutgers University, New Jersey, USA), C. Lee Giles (Pennsylvania State University, State College, USA)

In Proc. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), May 29 - June 2, 2023, London, United Kingdom. IFAAMAS.

Keywords: prediction markets, machine learning, artificial intelligence, human-AI collaboration

Despite high-profile successes in the field of Artificial Intelligence, machine-driven technologies still suffer important limitations, particularly for complex tasks where creativity, planning, common sense, intuition, or learning from limited data is required. These limitations motivate effective methods for human-machine collaboration. Our work makes two primary contributions. We thoroughly experiment with an artificial prediction market model to understand the effects of market parameters on model performance for benchmark classification tasks. We then demonstrate, through simulation, the impact of exogenous agents in the market, where these exogenous agents represent primitive human behaviors. This work lays the foundation for a novel set of hybrid human-AI machine learning algorithms.

INTRODUCTION

A body of work on artificial prediction markets is emerging. These are numerically simulated markets, populated by artificial agents (bot-traders), for the purpose of supervised learning of probability estimators [8]. While nascent, this literature has demonstrated the plausibility of using a trained market as a supervised learning algorithm, achieving performance comparable to standard approaches on simple classification tasks [7,8,35,53]. In fact, these results are sensible given the deep mathematical connections between prediction markets and learning [1,14,16]. Like other machine learning algorithms, the functioning of an artificial prediction market depends on several researcher-determined parameters: number of agents, liquidity, and initial cash, alongside parameters related to the training process. It is as yet unclear in which scenarios performance is robust or brittle to these settings. Prior work has observed that artificial markets may suffer from lack of participation [62]. That is, like their human counterparts in traditional prediction markets, agents may not invest in the market if they do not have sufficient information [5,63,69]; in practice, this occurs when the asset representing a test data point is too dissimilar to training examples. In our view, the most promising opportunity afforded by artificial prediction markets is eventual human-AI collaboration: a market framework should theoretically support human traders participating alongside agents to evaluate outcomes.
Whether and how artificial prediction markets might benefit from this hybrid scenario is an open question. The work we undertake here provides, through simulation, initial support for this opportunity in the context of a simple artificial market and primitive human behaviors. Our work is framed by two primary research questions. RQ1: How does performance of a simple artificial prediction market depend on hyper-parameter selection? RQ2: What impact does the inclusion of exogenous agents representing simple (human-like) behaviors have on market performance? Our findings support those of recent prior work indicating the promise of artificial prediction markets for classification tasks. We demonstrate the sensitivity of this approach to hyper-parameter selection and highlight, in particular, the role of liquidity in moderating performance. Finally, we demonstrate the exciting opportunity for hybrid prediction markets to serve as a framework for human-AI collaboration. We suggest that this approach may be particularly valuable in contexts where machine learning falls short (e.g., lack of training data, complex tasks) and where human-only approaches are either undesirable or infeasible.

RELATED WORK

Our work builds upon and contributes to two primary literatures, namely, work on artificial prediction markets and work on collaborative human-AI technologies.

Artificial Prediction Markets. Prediction markets are simple futures markets used to aggregate disperse information into efficient forecasts of uncertain future events [31,48,74,75]. Specifically, market participants buy and sell contracts that pay out based on the outcomes of future events. Market prices generated from these contracts can be understood as a collective prediction among market participants. Prediction markets have been successfully used, e.g., for forecasting election outcomes [9], sports betting [66], forecasting infectious disease activity [58], and aggregating employee wisdom in corporate settings [19,27]. Artificial prediction markets are a variation on this idea, wherein numerically simulated markets populated by trained agents (bot-traders) are used for the purpose of supervised learning of probability estimators [7,8]. In initial formulations by Barbu and Lay [8,43,44], each agent is represented as a budget and a simple betting function. During training, each agent's budget is updated based on the accuracy of its predictions over a training dataset. The authors found that these markets outperformed standard approaches on benchmark classification and regression tasks. Later, Storkey and colleagues [67,68] developed the so-called machine learning market, also for the purpose of classification. In their formulation, each agent purchases contracts in order to maximize a utility function. Most recently, Nakshatri et al. [53] proposed an artificial prediction market wherein agent purchase logic is defined geometrically, in particular, by a convex semi-algebraic set in feature space. Time-varying asset prices affect the structure of the semi-algebraic sets, leading to time-varying agent purchase rules. Agent parameters are trained using an evolutionary algorithm. The authors show that their approach has desirable properties, e.g., the market satisfies certain universal approximation properties, and there exist sufficient conditions for convergence. Our work builds on this approach. Like their human-populated counterparts, artificial prediction markets have found a number of real-world applications [7,35].
Ongoing theoretical work has offered support for these promising experimental findings, highlighting the mathematical connections between artificial markets and machine learning [1,14,15,34].

Human-AI Collaboration. Despite high-profile successes in the field of artificial intelligence (AI) [10,33,41,79], machine-driven solutions still suffer important limitations, particularly for complex tasks where creativity, common sense, intuition, or learning from limited data is required [4,30,37,38,42,47,51]. Both the promises and challenges of AI have motivated work on human-machine collaboration [20,54,59,73,77]. The hope is that we can eventually develop hybrid systems that bring together human intuition and machine rationality to effectively and efficiently tackle today's grand challenges. Recent work on hybrid intelligence systems has demonstrated the feasibility and highlighted the potential of integrating human input into AI systems [38], or even, of human-AI collaboration [72]. The spectrum of these efforts ranges from accounting for human factors in technology design [6,13,32] to efficiently utilizing human inputs for training data [3], in applications as diverse as business [52,65], civic welfare [25], criminal justice [70], and healthcare [45,61,71]. The work we describe here brings together the bodies of prior work on artificial prediction markets and hybrid intelligence, proposing hybrid prediction markets for direct integration of human wisdom into the deployment of a machine learning algorithm.

DATA

We consider three classification tasks. The first two are benchmark tasks used broadly to compare the performance of machine learning algorithms. The third is the task of classifying scientific research outcomes as replicable or not replicable, a challenging, complex task on which both machine learning algorithms [2,56,76,78] and human assessment [11,12,22,26,28,29] have achieved respectable but not excellent performance. The replication prediction task, we suggest, is an example of the type of problem well-suited to hybrid human-AI approaches.

Benchmark Machine Learning Datasets. The Iris dataset [24] was one of the earliest datasets used for the evaluation of classification methodologies. The dataset contains three classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the others; the latter are not linearly separable from one another. For evaluation using the binary market, we have combined the latter two classes (iris virginica and iris versicolor). Prior approaches for classification of the Iris dataset based on support vector classification [50], random forest classification [17,49], and logistic regression [57] have reported 100% or near-100% accuracy on the task. In addition to the Iris dataset, we consider the Heart Disease dataset [36]. The Heart Disease dataset is also a multivariate dataset used to benchmark classification algorithms. Fourteen patient attributes are used to predict the presence or absence of heart disease. Random forest [64], Xgboost [60], and logistic regression [21] achieve performance just under 90% accuracy, while support vector classification achieves 86% [60].

Replication Studies Outcomes. In the last decade, several large-scale replication projects have been undertaken across psychology, economics, political science, cancer biology and other domains [11,12,18,23,39,40,55].
Amongst their important impacts, these studies have created small ground-truth datasets of replication study outcomes that can be used to train and test automated approaches for replication prediction. Specifically, we use the dataset and extracted features considered by [62] for ease of comparison. The dataset contains 192 findings in the social and behavioral sciences, each labeled either Replicable or Not Replicable, and a set of 41 features extracted from each associated paper representing bibliometric, venue-related, author-related, statistical and semantic information. See [76] for further detail on feature extraction processes. Of note, the authors in [62] achieve 89.4% accuracy, remarkable for the task of replication prediction. However, accuracy is calculated based on the approximately one-third of the test data that gets evaluated by the market. Because agent participation is voluntary and agents do not participate if they do not have sufficient information about a test point, some (or much) of the data can be left unclassified. Our work uses the same data and market structure described in [62]. This allows us to explore the effects of hyper-parameters (RQ1) and the inclusion of exogenous agents (RQ2) on these performance/participation trade-offs.

PREDICTION MARKET MODEL

We use as a base model the artificial binary prediction market described in [53]. The state of the prediction market is defined by a pair of integers q_t = (q_0, q_1) ∈ Z²₊ giving the number of units of the two asset classes that have been sold. For simplicity we refer to the assets as 0 and 1. Traders are agents A = {a_1, ..., a_n} who buy Assets 0 and 1 using policies {π_1, ..., π_n}. Also following [53], we assume for simplicity that agents cannot sell. If agent a_j's purchase policy is conditioned on exogenous information x ∈ X ⊆ R^m, then π_j : (q_t, x) ↦ (Δq_0, Δq_1), and agent a_j purchases Δq_0 units of Asset 0 and Δq_1 units of Asset 1, thus causing a state update. In what follows, we assume that agents specialize in the purchase of either Asset 0 or Asset 1, so that if Δq_0 > 0, then Δq_1 = 0. Asset prices are computed using a logarithmic market scoring rule (LMSR):

p_0 = exp(βq_0) / (exp(βq_0) + exp(βq_1)),  p_1 = exp(βq_1) / (exp(βq_0) + exp(βq_1)).

This is the softmax function of (βq_0, βq_1). Liquidity β adjusts the price change given a change in asset quantities [46]. The fact that prices vary as a function of q_t ensures that the policy need not take spot price into consideration explicitly. It is often more convenient to work in units of 1/β, as β can become arbitrarily close to zero. Experimental results are therefore reported for this quantity as the liquidity factor. To start the market, all agents may purchase assets at time t = 0. After this, we assume that agents arrive at the market with arrival rate λ and inter-arrival time governed by an exponential distribution. This allows us to avoid scenarios in the hybrid setting where the synthetic traders swamp the market. The LMSR imposes a market maker price, so that actual trade costs are given by:

C_0(Δq_0) = (1/β) log{ (exp[β(q_0 + Δq_0)] + exp[βq_1]) / (exp[βq_0] + exp[βq_1]) },
C_1(Δq_1) = (1/β) log{ (exp[βq_0] + exp[β(q_1 + Δq_1)]) / (exp[βq_0] + exp[βq_1]) }.

Here C_i(Δq_i) is the cost to a trader for purchasing Δq_i units of Asset i (with i ∈ {0, 1}) at time t. For small values of β (large values of 1/β) the cost of purchase approaches the spot price [53].
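To make the pricing rule concrete, here is a minimal numerical sketch in Python. It is our own illustration rather than the authors' code; the function names (lmsr_prices, lmsr_cost) are our choices, and the liquidity parameter β is written as beta.

```python
# Minimal sketch of LMSR spot prices and market-maker costs (our own
# illustration, not the paper's implementation).

import math

def lmsr_prices(q0: float, q1: float, beta: float):
    """Spot prices (p0, p1): the softmax of (beta*q0, beta*q1)."""
    z0, z1 = math.exp(beta * q0), math.exp(beta * q1)
    return z0 / (z0 + z1), z1 / (z0 + z1)

def lmsr_cost(q0: float, q1: float, dq0: float, dq1: float, beta: float):
    """Cost charged by the market maker for buying (dq0, dq1) units."""
    before = math.log(math.exp(beta * q0) + math.exp(beta * q1))
    after = math.log(math.exp(beta * (q0 + dq0)) + math.exp(beta * (q1 + dq1)))
    return (after - before) / beta

# For small beta (large liquidity factor 1/beta), the cost of one share
# approaches the spot price:
p0, p1 = lmsr_prices(10, 12, beta=0.01)
print(p1, lmsr_cost(10, 12, 0, 1, beta=0.01))  # nearly equal
```

With β = 0.01 (liquidity factor 100), buying a single share costs almost exactly the spot price, consistent with the limit noted above; for larger β, the same purchase moves the price, and hence the cost, noticeably.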
Agent purchase logic is governed by a time-varying bank value B_j(t) and a characteristic function f_j : R^m × R² × R^k → R used to reason about the information x; agent a_j's decision to buy an asset in class i is governed by:

Δq_i = u( σ[f_j(x, q_t; θ_j)] − p_i ) · u( B_j(t) − p_i ).   (1)

Here σ : R → [0, 1] is a sigmoid function and u(·) is the unit step function, defined to be 0 at 0. The expression σ[f_j(x, q_t)] defines the value Agent j places on Asset i as a function of the market state (and hence spot prices) and the information in the external information x. If Agent j places more value on Asset i than its present price p_i, then u{σ[f_j(x, q_t)] − p_i} = 1, and Δq_i = 1 just in case the agent has sufficient funds, given by u[B_j(t) − p_i]. That is, Agent j purchases a share of Asset i. Notice we assume that agents may buy one share of an asset at a time. This both simplifies the agent logic and would prevent the agents from out-competing humans in the market in the hybrid scenario. The vector θ_j is a set of parameters that defines the specific outputs of f_j and thus affects the agent purchase logic. Let Θ be the matrix of all parameter vectors for the agents. After running for T time units with input information x, the spot price for Asset 1 is p_1(x; Θ). If we are given input information {x_1, ..., x_M} with class information {y_1, ..., y_M}, then training the market is the process of solving:

min_Θ (1/M) Σ_{j=1}^{M} ( p_1(x_j; Θ) − y_j )².

This problem is solved in [53] using a genetic algorithm, to obtain a market that can classify external information x ∈ X. At the close of the market, the price of each asset is taken as a proxy for the market's confidence in the corresponding outcome. In our binary market model, there are two mutually exclusive possible outcomes, and so the (normalized) prices should sum to 1. In this way, the market can be used for regression or classification. In the three examples we consider here, the market is used for classification. A separate market is run for each point in the test set, and the asset with the higher price is considered the market's classification decision for that test point. We note, critically, that based on this model, agent participation is voluntary, and the decision to participate is driven by Δq_i = 1 from Equation (1). If this condition is not met by any agent during the course of the market, there will be no market activity and thus no classification decision for that test point. The authors in [62] have noted that this may occur frequently, particularly in cases where the training dataset is small or points in the test set are significantly different from the training data. Accordingly, we calculate accuracy and F1 based on the scored subset of the data, while also reporting the percentage of scored test data as a performance metric.

The artificial prediction market includes five hyper-parameters that are not optimized by the genetic algorithm discussed in [53]: (1) Agent inter-arrival rate (λ); (2) Agent initial bank value (B(0)); (3) Market liquidity factor (1/β); (4) Market running time (T), or duration; (5) Number of generations in the genetic (training) algorithm. As such, these parameters are researcher-determined and warrant further study (RQ1). Our first set of experiments, described below, explores the specific roles of agent inter-arrival rate (λ), agent initial bank value (B(0)) (or, "cash"), and market liquidity (1/β) on performance. We explore the robustness of performance to the selection of these hyper-parameters, highlighting accuracy and F1 score but also trade-offs with agent participation. In the experiments that follow, the genetic algorithm is trained over five generations. The objective function of the genetic algorithm minimizes the root mean square error of the estimated score.
Agent performance is evaluated based on profit; non-profitable agents are deleted from the pool. The ten most profitable agents are retained and, amongst them, the seven most profitable agents are selected for mutation and crossover.

EXPERIMENTAL DESIGN

The following experiments support the two primary research questions we have put forward. First, we capture the effects of different combinations of hyper-parameters on market performance (RQ1). Second, we explore the impact of exogenous agents not trained through the evolutionary training process, but rather adopting one of a set of three simple purchasing rules meant to represent primitive human inputs (RQ2).

Market robustness to hyper-parameters. We study the effects of inter-arrival rate λ, agent initial bank value B(0) (or, "cash"), and market liquidity factor 1/β on artificial market performance. As mentioned, the number of generations is fixed at five during training, while market duration is fixed at 20. These parameters were fixed (vs. manipulated) to avoid combinatorial complexity during this initial study; however, they should be further studied in future work. In practice, we have found these values to be sufficient for market behavior to converge while also offering reasonable run time. The liquidity factor is tested for the set of values {5, 10, 20, 50, 75, 100, 150, 200, 300}; initial cash is tested for {1, 2, 3, 4, 5, 10, 20}; λ is tested for {0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1.0}. Our experiments consider all combinations of these hyper-parameter values, 441 total, and measure corresponding performance in terms of accuracy, F1 score, and percentage of scored test points. Performance for each hyper-parameter set is determined based on 5-fold cross validation with 80/20 train/test splits, and performance metrics are averaged over the folds. From these outcomes, we select best- and worst-performing hyper-parameter sets to be used for downstream analyses. This process is outlined in Figure 1.

Market behavior with exogenous agents. We introduce three classes of exogenous agents representing simple, fundamental behaviors, which operate fully separately from the agent logic and feature-based training protocol used for the other agents in the market. These classes of behavior are intended to represent behavioral primitives that, in combination, would underlie the actions of human participants in a hybrid scenario. The first, ground truth agents (GT), have perfect knowledge of the correct outcome and always buy contracts corresponding to the correct outcome whenever they have the opportunity to participate (which is moderated by their arrival rate, λ). The second are ground truth inverse agents (GTinv). These agents also know the correct outcome but always buy contracts corresponding to the incorrect outcome whenever they have an opportunity to participate. This scenario is equivalent to the case where agents are simply certain but incorrect in their forecasts. Finally, our third class of agents are random agents, which purchase contracts corresponding to one or the other outcome randomly. Understanding that the decisions of human participants in the hybrid prediction market would not fall squarely into these three categories, these simulations are intended to draw initial boundaries around the impacts human participants might have on the performance of an artificial market, depending on the complexity of the task; e.g., there are some tasks which are very easy for humans but difficult for algorithms, wherein we would expect near-perfect performance from human participants.
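To fix intuition, the following toy simulation (our own sketch, not the authors' implementation; all names are ours) shows how these three behavioral primitives move LMSR prices in a single binary market. Trained-agent logic and exponential inter-arrival times are omitted; each exogenous trader simply buys one share per arrival.

```python
# Toy illustration (our own sketch) of GT, GTinv and random exogenous
# traders pushing LMSR prices in one binary market.

import math
import random

def prices(q0: int, q1: int, beta: float):
    """LMSR spot prices: softmax of (beta*q0, beta*q1)."""
    z0, z1 = math.exp(beta * q0), math.exp(beta * q1)
    s = z0 + z1
    return z0 / s, z1 / s

def run_market(true_outcome: int, behavior: str, n_trades: int = 50,
               beta: float = 0.1, seed: int = 0):
    rng = random.Random(seed)
    q = [0, 0]  # shares sold of Asset 0 and Asset 1
    for _ in range(n_trades):
        if behavior == "GT":        # always buy the correct asset
            asset = true_outcome
        elif behavior == "GTinv":   # always buy the incorrect asset
            asset = 1 - true_outcome
        else:                       # "random": buy either asset
            asset = rng.randrange(2)
        q[asset] += 1               # one share per arrival
    return prices(q[0], q[1], beta)

# The closing price of Asset 1 is the market's confidence in outcome 1.
for b in ("GT", "GTinv", "random"):
    p0, p1 = run_market(true_outcome=1, behavior=b)
    print(b, round(p1, 3))
```

Running it drives the closing price of Asset 1 toward 1 under GT traders, toward 0 under GTinv traders, and leaves it near 0.5 under random traders, mirroring the qualitative effects reported in the results below.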
Our experiments measure the impact of exogenous agents on market performance, measured, as before, by accuracy, F1 score, and percentage of scored test points. Because exogenous agents are not trained, they are not subject to the genetic algorithm. Rather, exogenous agents are added directly to the agent pool during test. We test the impact of adding varying numbers of agents from each class. We specify this number as a percentage of the total agent pool. Specifically, we test hybrid market performance with the inclusion of {0.1%, 0.5%, 1%} GT and GTinv agents. We test hybrid market performance with the inclusion of random agents accounting for {1%, 5%, 10%, 50%} of the total agent pool. Random agents are included at a higher rate given the comparatively lesser impact they have on asset prices. All RQ2 experiments are run with an 80/20 train/test split. This process is diagrammed in Supplemental Materials. All experiments with exogenous agents are based on the third of the three datasets studied in RQ1 for hyper-parameter assessment, namely, the replication outcomes data. This is the type of task where we would expect the greatest gain from human-AI collaboration. Namely, this is an extremely challenging task for which (1) neither machine learning nor human judgement alone is likely to guarantee satisfactory performance, and for which (2) algorithmic and human assessments likely consider very different information/feature sets.

RESULTS

Following, we detail experimental findings in support of RQ1 and RQ2, respectively.

Market robustness to hyper-parameters. Market robustness to hyper-parameter settings is explored for the Iris and Heart Disease benchmark classification tasks, and for the prediction of replication studies outcomes. These experiments offer the opportunity to compare/contrast the impact of hyper-parameters across three contexts.

6.1.1 Iris classification. Figure 2 highlights average F1 score over 441 combinations of initial cash, λ, and liquidity factor. Generally, better F1 scores are obtained when initial cash ranges between 1 and 4 and when liquidity is greater than 100. The choice of λ does not appear to significantly impact F1 score. The best F1 of 0.91 is achieved for {liquidity factor = 300, λ = 1.0, initial cash = 1}. In this setting, accuracy is 0.94 and 100% of the data is scored. Tables 1, 2, and 3 report market performance holding each one of the three hyper-parameters fixed and varying the other two. Performance metrics are averaged over 5 folds. The data suggest that market performance increases as liquidity increases and decreases with initial cash, while the effect of λ reveals no clear pattern. Figure 3 shows F1 vs. accuracy for different values of initial cash. F1 score increases with accuracy, and the best performance for both is achieved when initial cash is 1. Similar plots of F1 vs. accuracy for liquidity and λ are provided in Supplemental Materials.

6.1.2 Heart Disease classification. Figure 4 shows average F1 score for all 441 hyper-parameter combinations. Performance is generally poorer than for the Iris classification task, and there is also not as clear a region of best performance in hyper-parameter space. The highest F1 of 0.71 is achieved for {liquidity factor = 50, λ = 0.05, initial cash = 20}. In this setting, accuracy is 0.66 and 99.67% of the data is scored (exactly one test point is left unscored by the market). As above, we report performance for varying liquidity in Table 4, holding λ and cash fixed.
Similar tables are provided for varying λ and cash in Supplemental Materials, as are plots of F1 vs. accuracy for λ, cash and liquidity. Similar to our finding on the Iris dataset, liquidity appears to be the primary driver of performance gains and losses on the Heart Disease classification task. Figure 5 shows the average F1 vs. accuracy for all combinations, from which we took subsets to show the more in-depth impact of liquidity. Of note, despite modest F1 and accuracy scores for this task, the percentage of scored test points is very high. This stands in contrast to results on the replication prediction task, which follows. In other words, agents in this case are sufficiently confident (have learned from sufficiently similar points in the training dataset) to invest. However, they are incorrect. In the replication prediction task that follows, agents are not sufficiently confident and do not invest; i.e., they "know what they don't know".

6.1.3 Replication outcomes prediction. Finally, we explore the impact of hyper-parameter selection in the context of replication outcomes prediction. Figure 6 gives average F1 scores over all hyper-parameter combinations. The best F1 of 0.84 is achieved for {liquidity factor = 5, λ = 0.05, initial cash = 1}. In this setting, accuracy is 0.79 and 36% of the test data is scored. Figure 7 provides another view of this data via the density plot of F1 and accuracy scores across all hyper-parameter sets. Tables detailing F1, accuracy and percentage of scored test points, varying individual hyper-parameters, are provided in Supplemental Materials. Figure 8 shows the average F1 vs. accuracy scores on the replication prediction task, for varying liquidity. As was the case with both benchmark classification tasks, liquidity appears to drive performance here too. In this case, market performance improves as the liquidity factor decreases. There are no clear best- or worst-performing values for initial cash and λ; supporting plots are shared in Supplemental Materials.

As noted, the artificial prediction market algorithm struggles with agent participation on the replication prediction task. The hyper-parameter set associated with the highest F1 score leaves 63% of the data unscored. In fact, all except two hyper-parameter combinations leave more than 40% of the test data unscored (see Supplemental Materials). The liquidity factor here too appears to play a critical role.

Table 5: Average number of agents participating in each market, of 1080 total agents, for varying liquidity, cash and λ, over all combinations of the other two parameters.

Table 5 provides the average number of participating agents per market, for fixed values of each hyper-parameter. Liquidity has the greatest impact on participation. Liquidity controls the magnitude of shifts in asset price with each buy/sell. Agents' participation depends on movements in asset price, and as such, this behavior is in line with expectations.

Market behavior with exogenous agents. Our experiments in support of RQ2 introduce simulated, exogenous agents representing ground truth (GT), ground truth inverse (GTinv), and random behavioral primitives into the market. These additional agents operate outside of the training process and, as such, represent complementary actions that may underlie simple human participant inputs. Exogenous agents are introduced into the general agent pool and are subject to the same arrival rate, λ, as trained agents. Our simulations with exogenous agents are run over the replication prediction task as baseline. In particular, we consider the five best- and five worst-performing hyper-parameter settings, sorted by F1 score (Table 6). We use these 10 markets as baselines to study the impacts on performance of including GT, GTinv and random agents in the market.
Changes to F1 scores after the introduction of each of the three exogenous agent populations, in varying amounts, into the replication prediction markets are detailed in Table 7. Gains in accuracy follow similarly; see Supplemental Materials.

Figure 9: F1 with different percentages of added GT agents for replication data.

6.2.1 Introduction of GT agents into the agent pool. Notably, the inclusion of even a very small population of GT agents improves market performance substantially. Figure 9 shows the incremental improvements in F1 score derived with as little as 0.1% GT agents, i.e., 1 GT agent for each 1000 trained agents in the pool, for each of the five best hyper-parameter settings from the baseline replication outcomes experiments. Inclusion of 0.5% GT agents brings F1 up to 1.0 in two cases of the five, and over 0.9 in all five.

6.2.2 Introduction of GTinv agents into the agent pool. The introduction of GTinv agents into the agent pool has an even greater impact on F1; see Figure 10. The addition of just 0.1% GTinv agents drops average F1 below 0.35 for all 10 markets, while inclusion of 1% GTinv agents brings the best-performing baseline market from an F1 of 0.84 to an F1 of 0.09. The significant impact of very small numbers of GT and GTinv agents has important implications for the promise of future hybrid prediction markets with human participants. For a task which a human participant would very likely perform accurately but for which scaling is a concern, for example, this suggests that a trained, artificial prediction market might perform very well with minimal human input.

6.2.3 Introduction of random agents into the agent pool. Finally, and as an additional baseline, we experiment with the inclusion of agents who randomly buy assets corresponding to one or the other outcome. Because the impact of these agents is relatively lesser in magnitude than that of GT and GTinv agents, we experiment with adding more of them into the agent pool. In all cases, the inclusion of exogenous random agents into the agent pool degrades performance. However, in many cases, the change in performance is modest. These results are detailed in Supplemental Materials.

Impact of exogenous agents on (trained) agent participation. The inclusion of exogenous agents into the agent pool has impact beyond their own asset purchases. The investments of exogenous agents in the market drive asset prices above and/or below where they were in the baseline case, and in doing so have the effect of increasing participation amongst the trained agents in the pool. Table 8 gives the average number of participating trained agents, for each of the 10 markets under study in RQ2, for each experimental condition. These trends are visualized in additional plots in Supplemental Materials. We note that increased participation is similarly observed for the inclusion of GT and GTinv agents. Given the losses in F1 and accuracy when GTinv agents are present, it is clear that increased participation of trained agents is not necessarily a goal. However, these findings highlight the possible impacts, both good and bad, of participation exogenous to the trained artificial market.

CONCLUSIONS

The comprehensive study of a simple artificial prediction market we undertake here highlights a promising new machine learning algorithm, which achieves respectable performance on benchmark machine learning tasks but which, we argue, affords unique opportunities for human-AI collaboration. The performance of this very simple, initial market model is encouraging. There is likely great room for improvement: other agent training schemes may be more efficient than the genetic algorithm; more sophisticated agent logic can likely be devised; and agents need not be homogeneous; rather, specialized agent populations may be trained with different and complementary expertise.
These improvements, building on an already-functional baseline algorithm, may offer new avenues for creative artificial intelligence. Beyond the potential of an artificial prediction market as an AI, future work should take the next step and introduce human participants into a hybrid prediction market model. This process will require research into the best mechanisms and practices for human-agent collaboration in the context of markets. E.g., should agents and human participants be given the same amount of cash? What is the appropriate duration of such a market? At what rate should agents be permitted to transact? Which tasks are best suited to hybrid intelligence? Ultimately, one goal might be to train a class of agents in the presence of human participants but be able to deploy those agents offline for scalability.

Figure 1: RQ1 experimental architecture.
Figure 2: Average F1 score on the Iris classification task, plotted in hyper-parameter space.
Figure 3: Average F1 score vs. accuracy on the Iris classification task, for varying initial cash.
Figure 4: Average F1 score on the Heart Disease classification task, plotted in hyper-parameter space.
Figure 5: Average F1 score vs. accuracy on the Heart Disease classification task, for varying liquidity.
Figure 6: Average F1 score on the replication prediction task, plotted in hyper-parameter space.
Figure 7: Density plot for F1 and accuracy scores on the replication prediction task.
Figure 8: Average F1 score vs. accuracy with different values of liquidity for replication data.
Figure 10: F1 with different percentages of added GT inverse agents for replication data.

Supplemental figure captions:
Figure 1: Average F1 score vs. accuracy on the Iris classification task, for varying liquidity.
Figure 2: Average F1 score vs. accuracy on the Iris classification task, for varying λ.
Figure 4: Average F1 score vs. accuracy on the Heart Disease classification task, for varying initial cash.
Figure 5: Average F1 score vs. accuracy with different values of λ for replication data.
Figure 6: Average F1 score vs. accuracy with different values of initial cash for replication data.
Figure 7: Plot of liquidity, λ, initial cash vs. scored % for replication data.
Figure 10: Agent participation with different percentages of added GT agents for replication data.
Figure 11: Agent participation with different percentages of added GT inverse agents for replication data.

Table 1: Average F1 score and accuracy on the Iris classification task, varying initial cash.
Cash  Liquidity  λ    Accuracy  F1    Scored %
1     300        1.0  0.94      0.91  100
2     300        1.0  0.81      0.58  100
3     300        1.0  0.81      0.64  100
4     300        1.0  0.87      0.76  100
5     300        1.0  0.76      0.42  100
10    300        1.0  0.75      0.35  100
20    300        1.0  0.75      0.37  100

Table 2: Average F1 score and accuracy on the Iris classification task, varying λ.
λ      Liquidity  Cash  Accuracy  F1    Scored %
0.010  300        1     0.87      0.79  100
0.025  300        1     0.91      0.86  100
0.050  300        1     0.88      0.82  100
0.100  300        1     0.87      0.79  100
0.250  300        1     0.87      0.80  100
0.500  300        1     0.87      0.78  100
1.000  300        1     0.94      0.91  100

Table 3: Average F1 score and accuracy on the Iris classification task, varying liquidity.
Liquidity  λ    Cash  Accuracy  F1    Scored %
5          1.0  1     0.67      0.00  100
10         1.0  1     0.67      0.00  100
20         1.0  1     0.67      0.03  100
50         1.0  1     0.75      0.33  100
75         1.0  1     0.85      0.72  100
100        1.0  1     0.86      0.74  100
150        1.0  1     0.86      0.76  100
200        1.0  1     0.88      0.81  100
300        1.0  1     0.94      0.91  100

Table 4: Average F1 score and accuracy on the Heart Disease classification task, varying liquidity.
Liquidity  λ     Cash  Accuracy  F1    Scored %
5          0.05  20    0.59      0.54  100
10         0.05  20    0.54      0.56  99.67
20         0.05  20    0.60      0.66  99.01
50         0.05  20    0.66      0.71  99.67
75         0.05  20    0.61      0.64  99.67
100        0.05  20    0.58      0.62  99.34
150        0.05  20    0.58      0.59  99.01
200        0.05  20    0.57      0.62  99.34
300        0.05  20    0.59      0.61  99.34

Table 6: Five best- and worst-performing hyper-parameter settings for replication prediction.
Liquidity  λ     Cash  F1    Accuracy  Scored %
5          0.05  1     0.84  0.79      36
10         0.05  10    0.84  0.76      35
5          1     4     0.83  0.75      37
5          0.1   1     0.83  0.77      36
5          0.1   2     0.83  0.75      37
150        0.25  4     0.69  0.58      35
100        0.05  2     0.68  0.57      35
150        0.5   20    0.68  0.58      35
10         0.5   3     0.65  0.66      55
75         0.1   2     0.64  0.65      52

Table 7: Average F1 scores on 10 replication prediction markets, for different types and sizes of exogenous agent populations.
None  GT 0.1%  GT 0.5%  GT 1%  GTinv 0.1%  GTinv 0.5%  GTinv 1%  Random 1%  Random 5%  Random 10%  Random 50%
0.84  0.93     1        1      0.34        0.23        0.09      0.79       0.81       0.80        0.82
0.84  0.91     0.96     0.97   0.34        0.31        0.28      0.74       0.76       0.75        0.75
0.83  0.94     1        1      0.32        0.25        0.06      0.79       0.81       0.83        0.78
0.83  0.90     0.95     0.99   0.33        0.28        0.24      0.76       0.74       0.82        0.77
0.83  0.88     0.91     0.94   0.33        0.32        0.29      0.75       0.77       0.79        0.73
0.69  0.89     0.94     0.96   0.34        0.31        0.28      0.76       0.77       0.81        0.77
0.69  0.91     0.97     0.96   0.34        0.29        0.29      0.78       0.77       0.77        0.78
0.68  0.90     0.95     0.96   0.33        0.29        0.29      0.78       0.78       0.79        0.76
0.66  0.89     0.90     0.94   0.33        0.33        0.29      0.76       0.76       0.79        0.74
0.65  0.90     0.91     0.94   0.34        0.32        0.30      0.77       0.75       0.81        0.74
Table 8: Agent participation with different percentages of GT and GTinv agents (1080 total agents).
None   GT 0.1%  GT 0.5%  GT 1%  GTinv 0.1%  GTinv 0.5%  GTinv 1%
28.35  31.33    32.21    36.91  29.54       37.06       46.79
22.62  25.97    29.42    36.82  25.07       31.11       37.82
23.66  22.93    29.98    35.39  27.08       32.16       36.54
28.56  30.63    36.16    37.90  28.17       35.64       48.34
24.70  27.51    31.99    34.64  28.04       33.67       44.20
41.12  44.56    47.82    52.02  42.80       44.23       53.81
40.01  39.89    43.83    48.46  41.27       42.57       51.36
41.90  42.06    46.83    51.55  42.88       45.02       51.24
26.93  28.02    29.56    35.72  28.17       34.77       41.71
44.47  37.49    40.50    46.66  38.66       42.20       49.84

Supplemental Materials: Artificial prediction markets present a novel opportunity for human-AI collaboration. In Proc. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), London, United Kingdom, May 29 - June 2, 2023, IFAAMAS, 3 pages.

Table 1: Average F1 score and accuracy on the Heart Disease classification task, varying λ.
λ      Liquidity  Cash  Accuracy  F1    Scored %
0.01   50         20    0.55      0.64  99.67
0.025  50         20    0.56      0.64  99.67
0.05   50         20    0.66      0.71  99.67
0.1    50         20    0.60      0.66  100
0.25   50         20    0.58      0.67  99.67
0.5    50         20    0.60      0.65  99.34
1.0    50         20    0.58      0.63  99.67

Table 2: Average F1 score and accuracy on the Heart Disease classification task, varying initial cash.
Cash  Liquidity  λ     Accuracy  F1    Scored %
1     50         0.05  0.47      0.38  100
2     50         0.05  0.55      0.56  100
3     50         0.05  0.58      0.64  100
4     50         0.05  0.59      0.64  100
5     50         0.05  0.58      0.61  99.01
10    50         0.05  0.60      0.63  99.34
20    50         0.05  0.66      0.71  99.67

Figure 3: Average F1 score vs. accuracy on the Heart Disease classification task, for varying λ.

Replication prediction.
Table 3: Average F1 score and accuracy on the replication classification task, varying initial cash.
Cash  Liquidity  λ     Accuracy  F1    Scored %
1     5          0.05  0.79      0.84  35.86
2     5          0.05  0.65      0.75  35.86
3     5          0.05  0.67      0.76  35.86
4     5          0.05  0.66      0.74  37.24
5     5          0.05  0.70      0.78  36.55
10    5          0.05  0.66      0.76  36.55
20    5          0.05  0.67      0.75  36.55

Table 4: Average F1 score and accuracy on the replication classification task, varying λ.
λ      Liquidity  Cash  Accuracy  F1    Scored %
0.01   5          1     0.74      0.80  36.55
0.025  5          1     0.76      0.82  35.86
0.05   5          1     0.79      0.84  35.86
0.1    5          1     0.77      0.83  35.86
0.25   5          1     0.73      0.79  37.24
0.5    5          1     0.75      0.80  36.55
1.0    5          1     0.73      0.79  37.24

Table 5: Average F1 score and accuracy on the replication classification task, varying liquidity.
Liquidity  λ     Cash  Accuracy  F1    Scored %
5          0.05  1     0.79      0.84  35.86
10         0.05  1     0.74      0.81  36.55
20         0.05  1     0.62      0.70  36.55
50         0.05  1     0.64      0.72  37.24
75         0.05  1     0.65      0.72  35.17
100        0.05  1     0.71      0.76  35.17
150        0.05  1     0.71      0.77  39.31
200        0.05  1     0.69      0.75  39.31
300        0.05  1     0.74      0.78  39.31

Table 6: Average accuracy scores on 10 replication prediction markets, for different types and sizes of exogenous agent populations.
None  GT 0.1%  GT 0.5%  GT 1%  GTinv 0.1%  GTinv 0.5%  GTinv 1%  Random 1%  Random 5%  Random 10%  Random 50%
0.79  0.92     1        1      0.27        0.14        0.05      0.76       0.77       0.77        0.78
0.76  0.90     0.95     0.97   0.27        0.21        0.17      0.67       0.68       0.67        0.70
0.75  0.94     1        1      0.26        0.14        0.03      0.76       0.77       0.80        0.77
0.77  0.89     0.95     0.99   0.26        0.18        0.14      0.70       0.66       0.77        0.73
0.75  0.87     0.90     0.94   0.27        0.23        0.18      0.67       0.70       0.74        0.68
0.58  0.88     0.93     0.95   0.26        0.21        0.17      0.70       0.70       0.76        0.71
0.57  0.90     0.96     0.95   0.25        0.18        0.18      0.73       0.70       0.71        0.72
0.58  0.89     0.94     0.95   0.25        0.19        0.17      0.73       0.71       0.73        0.74
0.65  0.88     0.89     0.93   0.25        0.26        0.18      0.71       0.68       0.74        0.69
0.64  0.89     0.90     0.94   0.27        0.23        0.19      0.72       0.68       0.76        0.70
[ "Welfare and Fairness in Multi-objective Reinforcement Learning", "Welfare and Fairness in Multi-objective Reinforcement Learning" ]
[ "Zimeng Fan ", "Nianli Peng [email protected] ", "Muhang Tian [email protected] ", "Brandon Fain [email protected] ", "Zimeng Fan ", "Nianli Peng ", "Muhang Tian ", "Brandon Fain ", "\nDuke University Durham\nNCUSA\n", "\nDuke University Durham\nNCUSA\n", "\nDuke University Durham\nNCUSA\n", "\nDuke University Durham\nNCUSA\n" ]
[ "Duke University Durham\nNCUSA", "Duke University Durham\nNCUSA", "Duke University Durham\nNCUSA", "Duke University Durham\nNCUSA" ]
[ "Proc. of the 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023)" ]
We study fair multi-objective reinforcement learning in which an agent must learn a policy that simultaneously achieves high reward on multiple dimensions of a vector-valued reward. Motivated by the fair resource allocation literature, we model this as an expected welfare maximization problem, for some non-linear fair welfare function of the vector of long-term cumulative rewards. One canonical example of such a function is the Nash Social Welfare, or geometric mean, the log transform of which is also known as the Proportional Fairness objective. We show that optimizing the expected Nash Social Welfare, even approximately, is computationally intractable even in the tabular case. Nevertheless, we provide a novel adaptation of Q-learning that combines non-linear scalarized learning updates and non-stationary action selection to learn effective policies for optimizing nonlinear welfare functions. We show that our algorithm is provably convergent, and we demonstrate experimentally that our approach outperforms techniques based on linear scalarization, mixtures of optimal linear scalarizations, or stationary action selection for the Nash Social Welfare objective.

CCS CONCEPTS: • Computing methodologies → Reinforcement learning.
DOI: 10.48550/arxiv.2212.01382
arXiv: 2212.01382 (https://export.arxiv.org/pdf/2212.01382v3.pdf)
1 INTRODUCTION

Suppose a logistics company announces they will deploy a reinforcement learning agent that optimizes for completed deliveries. It works, and the number of deliveries increases. Some days later, the company begins receiving complaints that delivery service to some locations has actually gotten much worse than before the AI deployment. The company assures its customers that the AI is learning and things will improve. But in another week, the situation is the same. Desperate, the company's engineers boost the reward weights associated with deliveries to those locations, only to find that now other locations are being neglected.

This is an example where data-driven algorithmic systems may be generally quite performant but nonetheless fail on structured subsets of input. This is well known in reinforcement learning, where extensive "reward shaping" is sometimes necessary to achieve desired behavior. In the opening example, the desired behavior is a policy that achieves high delivery service rates at all customer locations.
However, standard reinforcement learning, in which the reward signal is a scalar value and the goal is to maximize total discounted reward, might naturally learn a policy that prioritizes "easy"-to-optimize regions (perhaps clusters of many tightly packed locations with many deliveries) at the expense of more difficult ways to achieve reward. Furthermore, because standard techniques rely on learning a stationary policy, the policy continues to prioritize the same customers day after day. Addressing this problem may require problem-specific fine-tuning of rewards, and even then "fixing" the original problem can introduce new ones.

In this paper, we take a different approach. We study nonlinear welfare optimization in the context of Multi-objective Reinforcement Learning (MORL). A Multi-Objective Markov Decision Process (MOMDP) is a Markov Decision Process where rewards are vectors instead of scalars. The components of this vector can be viewed either as different criteria like cost and time or, as we interpret them, as individual utilities of "users" to whom the learning agent should be fair. In the opening example, the customers are the users, and the vector reward tracks how well the learning agent is doing at optimizing deliveries for each user separately.

The solution to a MOMDP is a policy that seeks to maximize some function of the cumulative reward vector. Due to the linearity of expectation, linear functions maximizing some weighted arithmetic mean of the cumulative reward vector are the simplest to use. However, for any particular selection of weights, the resulting policies may be undesirable from a fairness perspective, as any linear function may ignore the utility of some users. For instance, for equal weights, a policy that gives user 1 a utility of 10 and user 2 a utility of 0 is preferred over another policy that yields a utility of 4 for both. We therefore study a more general class of welfare functions, with a particular emphasis on nonlinear welfare functions that optimize for fairness and efficiency.

Optimizing a nonlinear welfare function in a MOMDP is a substantial algorithmic challenge. The Bellman optimality principles [6, 27] no longer hold, and stationary policies, in which the action selection depends only on the current state and not history, are no longer necessarily optimal. This is quite intuitive for fairness. For example, if an AI personal assistant is tasked with grocery shopping for a household with competing preferences over desserts, it is relevant to the current decision whether one member of the household got their most or least favorite dessert in previous weeks, as the agent may wish to be fair to its users across time. Though these examples are toys, there are many real-world decision problems in which a learning agent may need to simultaneously prioritize more than a single utility or goal in a balanced and fair way. In telecommunications and wireless networking, one may want to allocate bandwidth in a way that balances the quality of service in many different locations. In autonomous driving, one may want to balance vehicle speed and passenger comfort [14].

1.1 Contributions and Outline

Sections 2 and 3 introduce related work and preliminaries for MOMDPs. Our results are as follows:

(1) In Section 4 we introduce and characterize the problem of optimizing expected welfare for fairness in a MOMDP. We specifically focus on nonlinear welfare functions, with the Nash Social Welfare (NSW) as our canonical example of a fair welfare function.
(2) In Section 5 we give a reduction in Theorem 5.2 to show that optimizing expected NSW is computationally intractable, even in the tabular setting. We further show that stationary policies cannot, in general, guarantee high approximation to optimality as the number of dimensions of reward grows.

(3) On the positive side, also in Section 5, we define Algorithm 1, Welfare Q-Learning, which adapts model-free Q-learning in two important ways to optimize non-linear welfare functions: (1) nonlinear learning updates, and (2) non-stationary action selection. We show in Theorem 5.3 that our algorithm is provably convergent.

(4) In Section 6 we deploy our algorithm in two simulated environments to optimize expected NSW. Our algorithm substantially outperforms the following baselines: (1) optimal [for NSW] linear scalarization, (2) optimal [for NSW] mixtures of optimal policies in each dimension, and (3) stationary action selection on our algorithm's learned Q-table.

2 RELATED WORK

Multi-objective reinforcement learning (MORL) algorithms include single-policy and multi-policy methods [18]. Single-policy methods use a scalarization function to reduce the problem to scalar optimization for a single policy. The simplest form is linear scalarization, applying a weighted sum on the Q vector [20]. Multi-policy methods search for a set of policies that approximate the Pareto frontier of the problem. For instance, the convex hull value-iteration algorithm [5] computes the deterministic stationary policies on the convex hull of the Pareto front. Pareto Q-learning [20] integrates temporal difference algorithms with Pareto dominance relations to learn a set of Pareto dominating policies. Stochastic mixture policy [28] combines multiple deterministic base policies with a convex combination, choosing a base policy with a given probability at the start of each episode. We focus on single-policy methods with nonlinear scalarization, as the size of the Pareto frontier may grow exponentially with the dimensionality of the problem, and because the Pareto frontier may not be well-approximated by its convex hull for nonlinear welfare functions.

Fairness in reinforcement learning has been recently considered, beginning with [15] in a scalar setting. More directly related to our work, [25] investigated the (Deep) MORL problem of learning a fair policy to optimize the Generalized Gini Social Welfare function using nonlinear scalarization. [1] studied a similar problem and considered maximizing concave welfare functions generally and Nash welfare specifically, showing an optimal approach to optimizing the welfare of expected rewards in the tabular setting and an extension to the function approximation setting. Our work differs from these in two major ways: (1) we seek to optimize the expected welfare, rather than the welfare of expected rewards (see Section 4), which is fundamentally more challenging computationally (see Section 5), and (2) we learn a non-stationary policy, as stationary policies may be far from optimal for optimizing expected (nonlinear) welfare (see Section 5 and [23]).

We formulate our treatment of welfare functions in Section 4 based on considerations from the resource allocation literature [21]. Our canonical example of a fair welfare function, the Nash Social Welfare (NSW), derives from Nash's solution to the bargaining game [22] and its n-player extension [19]. Its log transform is commonly known as the proportional fairness objective.
More recent studies have shown NSW maximization provides outstanding fairness guarantees when allocating both divisible and indivisible goods [8].

3 PRELIMINARIES

Multi-objective Markov Decision Process. A Multi-objective Markov Decision Process (MOMDP) consists of a finite set $\mathcal{S}$ of states, a starting state $s_1 \in \mathcal{S}$, a finite set $\mathcal{A}$ of actions (we let $\mathcal{A}(s)$ denote the subset of actions available in state $s$), and probabilities $\mathcal{P}_{s,a,s'} \in [0,1]$ that determine the probability of transitioning to state $s'$ from state $s$ after taking action $a$. Probabilities are normalized so that $\sum_{s'} \mathcal{P}_{s,a,s'} = 1$ for all $s$ and $a \in \mathcal{A}(s)$. We also have a reward function $\mathbf{r}(s,a) : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d$ for taking action $a$ in state $s$. Some states may be terminal, meaning they transition only to themselves and yield $\mathbf{0}$ reward. Each of the $d$ dimensions of the reward vector corresponds to one of the multiple objectives that are to be maximized.

At each time step $t$, the agent observes state $s_t \in \mathcal{S}$, takes action $a_t \in \mathcal{A}(s_t)$, and receives a reward vector $\mathbf{r}_t = \mathbf{r}(s_t, a_t) \in \mathbb{R}^d$. The environment, in turn, transitions into $s_{t+1}$ with probability $\mathcal{P}_{s_t, a_t, s_{t+1}}$. Where clear from context, we will often omit the subscript and simply write the immediate reward vector as $\mathbf{r}$. A trajectory is a sequence of state, action, reward tuples $\tau = (s_1, a_1, \mathbf{r}_1), (s_2, a_2, \mathbf{r}_2), \ldots, (s_T, a_T, \mathbf{r}_T)$. A trajectory that begins in the starting state $s_1$ and ends in a terminal state defines an episode. For a discounting factor $\gamma \in [0,1)$, the discounted cumulative return of a trajectory is the vector

$\mathbf{R}(\tau) = \sum_{t=1}^{\infty} \gamma^{t-1} \mathbf{r}_t .$

A stationary policy is a function $\pi(a \mid s) : \mathcal{S} \times \mathcal{A} \to [0,1]$ that forms a probability distribution such that $\sum_{a \in \mathcal{A}(s)} \pi(a \mid s) = 1$ for all $s$. Such a policy is stationary since the probability with which an action is selected depends only on the current state. More generally, a policy (not necessarily stationary) is a function $\pi(a \mid s, \tau)$ that may additionally depend on a given trajectory $\tau$ (intuitively, the history prior to reaching state $s$).

An action value function is defined as the expected total reward starting from $s$, taking action $a$, and following policy $\pi$ thereafter:

$Q^{\pi}(s, a) := \mathbb{E}_{\tau \sim \pi}\left[ \sum_{k=0}^{\infty} \gamma^{k} \mathbf{r}_{t+k} \;\middle|\; s_t = s,\, a_t = a \right].$

The value function of a policy $\pi$ from state $s$ is defined by:

$V^{\pi}(s) := \mathbb{E}_{\tau \sim \pi}\left[ \sum_{k=0}^{\infty} \gamma^{k} \mathbf{r}_{t+k} \;\middle|\; s_t = s \right].$

Our algorithms will aim to solve the learning problem by finding an estimate of $Q^{\pi}(s,a)$ and $V^{\pi}(s)$. We denote such estimates as $Q(s,a)$ and $V(s)$, respectively.
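As a concrete illustration of these definitions, the following is a minimal sketch of ours (not from the paper) computing the discounted cumulative return vector $\mathbf{R}(\tau)$ of a trajectory, assuming rewards are given as $d$-dimensional NumPy arrays.

```python
import numpy as np

def discounted_return(rewards, gamma):
    """Compute R(tau) = sum_t gamma^(t-1) r_t for a list of d-dimensional
    reward vectors r_1, ..., r_T (index t = 0 below corresponds to r_1)."""
    R = np.zeros_like(np.asarray(rewards[0], dtype=float))
    for t, r in enumerate(rewards):
        R += (gamma ** t) * np.asarray(r, dtype=float)
    return R

# Example: a d = 2 trajectory of three steps.
print(discounted_return([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]], gamma=0.9))
# -> [1.81 1.71]
```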
4 PROBLEM FORMULATION

Our goal is to learn a policy $\pi$ that maximizes $\mathbb{E}_{\tau \sim \pi}[\mathbf{R}(\tau)]$ in all dimensions. To make this optimization objective concrete, we must specify a scalarization function $W : \mathbb{R}^d \to \mathbb{R}$. In fair reinforcement learning, we think of each of the $d$ dimensions of the reward vector as corresponding to a distinct user to whom the learning agent wishes to be fair. The scalarization function can thus be thought of as a welfare function over the users, and the learning agent is a welfare maximizer. For a given welfare function $W$, our goal is then to compute a policy $\pi^*$ that maximizes expected welfare:

$\pi^* = \arg\max_{\pi} \mathbb{E}_{\tau \sim \pi}\left[ W(\mathbf{R}(\tau)) \right]. \quad (1)$

4.1 Welfare Axioms

Here we describe desirable properties of welfare functions in terms of general outcomes as vectors $\mathbf{u}$. One can think of these vectors as possible discounted cumulative reward vectors corresponding to different policies. In the fair division literature [21], the most basic requirement of a welfare function is monotonicity.

Definition 4.1. $W$ satisfies Monotonicity if and only if for all $\mathbf{u}$ and $\mathbf{u}'$ with $u_i \ge u'_i$ for all $i$ and $u_j > u'_j$ for some $j$, we have $W(\mathbf{u}) > W(\mathbf{u}')$.

Intuitively, monotonicity specifies that, all else equal, one prefers to increase a user's utility. Related is Pareto optimality, that an outcome should be efficient in the sense of not being dominated by any other outcome.

Definition 4.2. An outcome $\mathbf{u}$ satisfies Pareto optimality if there is no other outcome $\mathbf{u}'$ such that $u'_i \ge u_i$ for all users $i$, and at least one inequality is strict. $W$ satisfies Pareto optimality if any $\mathbf{u}$ (within some feasible space) that maximizes $W$ is Pareto optimal.

Welfare functions should also satisfy symmetry, or indifference towards permutations of the input [21]. This is the most basic form of a fairness guarantee, that different users are treated similarly.

Definition 4.3. $W$ satisfies Symmetry if for every permutation $\sigma$ of its inputs $\mathbf{u}$, $W(\sigma(\mathbf{u})) = W(\mathbf{u})$.

A family of welfare functions satisfying the above properties are the generalized mean $p$-welfare functions [4, 21], where

$W_p(\mathbf{u}) = \left( \frac{1}{d} \sum_{i=1}^{d} u_i^{\,p} \right)^{1/p}.$

For instance, when $p = 1$, we have the utilitarian welfare function [7], the arithmetic mean of utilities. Note that the utilitarian social welfare may not be suitable for ensuring fairness on outcomes.

Monotonicity, Pareto optimality, and Symmetry are merely minimal requirements for a welfare function. One may wish to introduce a stronger axiom such as the Pigou-Dalton Principle [9]. This principle states that a one-to-one transfer of utility (or rewards in the MOMDP) from a better-off user to a worse-off user should increase the overall welfare. Formally: for all $\mathbf{u}, \mathbf{u}'$ equal except for $u_i = u'_i + \epsilon$ and $u_j = u'_j - \epsilon$, where $u'_j - u'_i > \epsilon > 0$, we have $W(\mathbf{u}) > W(\mathbf{u}')$. In other words, more equal distributions of utility are preferred. Functions that satisfy this formulation of fairness are often concave, capturing the diminishing marginal returns of increasing the utility of a user who already enjoys high utility relative to other users. Among the generalized mean $p$-welfare functions, the Pigou-Dalton Principle is satisfied for all $p < 1$. Our algorithm is designed to maximize welfare functions in this class.

4.2 Nash Social Welfare Function

The extreme case of a fair welfare function is the generalized mean $p$-welfare function where $p \to -\infty$, which corresponds to the egalitarian welfare function [24] that maximizes the minimum utility (and subject to that, optimizes the next smallest, and so forth). In between the extremes of the utilitarian and egalitarian social welfare functions, we specifically focus on the Nash Social Welfare (NSW) function as our canonical example of a fair welfare function that also balances efficiency with fairness [8, 12, 16, 22]:

$\mathrm{NSW}(\mathbf{u}) = \left( \prod_{i=1}^{d} u_i \right)^{1/d}. \quad (2)$

NSW is simply the geometric mean of utilities, and its log transform is also known as the proportional fairness objective. Note that NSW is the generalized mean $p$-welfare function where $p \to 0$. In addition to the previous desirable properties, NSW also enjoys the property of being scale invariant, meaning that the $\arg\max$ of NSW is invariant under scaling of a given dimension of reward. From a practical perspective, this means that the relative scales of utility or reward for each dimension are not significant and do not need to be tuned during reward shaping.

Though we focus on NSW as our canonical example, we note that other reasonable welfare functions exist. For example, in multi-objective optimization, several works have studied Ordered Weighted Average (OWA) operators, a family of operators that contains many types of means [31].
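The sketch below implements the generalized mean family $W_p$ and NSW (its $p \to 0$ limit) and checks the Pigou-Dalton property numerically. It is our illustration, not the paper's code; the function names are ours, and strictly positive utilities are assumed so that the geometric mean is well defined.

```python
import numpy as np

def generalized_mean(u, p):
    """W_p(u) = ((1/d) sum_i u_i^p)^(1/p); p = 0 is the geometric mean (NSW).
    Assumes strictly positive utilities."""
    u = np.asarray(u, dtype=float)
    if p == 0:
        return float(np.exp(np.mean(np.log(u))))  # limit as p -> 0
    return float(np.mean(u ** p) ** (1.0 / p))

def nsw(u):
    return generalized_mean(u, p=0)

# Pigou-Dalton: transferring 1 unit of utility from the better-off user to
# the worse-off user raises NSW (p = 0 < 1) but leaves the utilitarian
# welfare (p = 1) unchanged.
print(nsw([9.0, 1.0]), nsw([8.0, 2.0]))                              # 3.0 < 4.0
print(generalized_mean([9.0, 1.0], 1), generalized_mean([8.0, 2.0], 1))  # 5.0 == 5.0
```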
4.3 Expected Welfare

In contrast to some prior work [1, 25], we focus on optimizing the expected welfare $\mathbb{E}_{\tau \sim \pi}[W(\mathbf{R}(\tau))]$ rather than the welfare of the expectation $W(\mathbb{E}_{\tau \sim \pi}[\mathbf{R}(\tau)])$. Note that for any concave welfare function, including NSW, Jensen's inequality [11] implies

$\mathbb{E}_{\tau \sim \pi}\left[ W(\mathbf{R}(\tau)) \right] \le W\left( \mathbb{E}_{\tau \sim \pi}\left[\mathbf{R}(\tau)\right] \right). \quad (3)$

We optimize for the lower bound (which turns out to be a more computationally challenging objective, see Section 5) in order to avoid treating policies as "fair" that are unfair in every particular episode and satisfy fairness only across several episodes on average.

Example (Expected Welfare). Consider the example diagrammed in Figure 1 with $d = 2$ users. Suppose we want to learn a policy that maximizes NSW. There is a stochastic policy $\pi_1$ that yields discounted cumulative reward of $(1, 0)$ with probability $0.5$ and $(0, 1)$ with probability $0.5$. There is also a deterministic policy $\pi_2$ that yields $(0.5 - \epsilon, 0.5 - \epsilon)$ (where $\epsilon > 0$ is small). The NSW of the expected reward under $\pi_1$ is $0.5$, even though with probability 1, the NSW of every trajectory generated by $\pi_1$ is $0$. By contrast, the NSW of $\pi_2$ is always $0.5 - \epsilon$. Maximizing the expected welfare, our optimization problem would prefer $\pi_2$.

This example shows the intuition for why we choose to maximize $\mathbb{E}_{\tau \sim \pi}[W(\mathbf{R}(\tau))]$. We seek to find a policy that generates trajectories with high expected welfare, a stronger property than generating high welfare of expected rewards. As we see in the next section, the problem is also computationally more challenging.

Figure 1: Policy $\pi_2$ deterministically yields $(0.5 - \epsilon, 0.5 - \epsilon)$, while policy $\pi_1$ yields $(1, 0)$ or $(0, 1)$, each with probability $0.5$.

5 OPTIMIZING WELFARE

In general, one cannot provably and efficiently optimize all fair welfare functions. We first demonstrate that finding a policy that maximizes the NSW is APX-hard, implying one cannot get an arbitrarily close approximation efficiently, even in the tabular setting. We note that the same is not true for optimizing the NSW of expected rewards, for which the optimal stochastic policy can be computed efficiently [1]. Our argument follows via a reduction from the problem of allocating indivisible goods, in which $m$ items must be partitioned among $d$ users, where user $i$ has utility $v_{i,j} \ge 0$ for good $j$ and their utility for multiple goods is the sum of their utilities for the individual goods.

Lemma 5.1. [17] It is APX-hard to compute an indivisible allocation of goods optimizing the NSW.

From this we can show the following impossibility.

Theorem 5.2. Computing the policy that maximizes $\mathrm{NSW}(\mathbf{R})$ is APX-hard, even in a deterministic environment.

Proof. We reduce from the problem of allocating a set of indivisible items to users with additive utilities. Given such an allocation problem, consider the following MOMDP. Find an arbitrary enumeration $\{g_1, g_2, \ldots, g_m\}$ of the items. These are the states of the MOMDP. At each time step, the environment transitions from $g_j$ to $g_{j+1}$, beginning at $g_1$ and with $g_m$ as a terminal state. In each state $g_j$, the agent has available actions $\{a_1, \ldots, a_d\}$, where taking action $a_i$ in state $g_j$ corresponds to allocating item $j$ to user $i$ and receives a reward vector with $v_{i,j}$ in dimension $i$ and $0$ elsewhere.

Consider a policy $\pi$ that is $c$-approximately optimal on the objective. The indivisible allocation corresponding to each user $i$ receiving the set of goods $j$ for which $\pi$ chooses $a_i$ is also $c$-approximately optimal on the indivisible allocation problem. By Lemma 5.1, there exists a constant $c > 1$ such that it is NP-hard to approximate the NSW optimal allocation of indivisible items to agents with additive utilities within a factor $c$. It must also be NP-hard to approximate the NSW optimal policy in a MOMDP within a factor $c$. □
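To make the reduction concrete, the sketch below builds the chain MOMDP from a $d \times m$ utility matrix. The data layout is our illustration (not the paper's code), and discounting is ignored ($\gamma = 1$) so that a policy's return vector is exactly the utility vector of the corresponding allocation.

```python
import numpy as np

def allocation_momdp(v):
    """v: d x m matrix of utilities v[i, j] of user i for item j.
    Returns reward[j, i] = the d-vector for taking action a_i (allocate
    item j to user i) in chain state g_j; states are g_1 -> ... -> g_m."""
    d, m = v.shape
    reward = np.zeros((m, d, d))
    for j in range(m):
        for i in range(d):
            reward[j, i, i] = v[i, j]  # v[i, j] in dimension i, 0 elsewhere
    return reward

# A deterministic policy pi: state index -> action index is an allocation,
# and its return vector equals the allocation's utility vector.
v = np.array([[3.0, 0.0, 2.0],
              [1.0, 4.0, 1.0]])
reward = allocation_momdp(v)
pi = [0, 1, 0]               # items 1 and 3 to user 0, item 2 to user 1
R = sum(reward[j, pi[j]] for j in range(3))
print(R)                     # [5. 4.]
```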
5.1 Non-stationary Welfare Q-Learning

We now present our algorithm, Welfare Q-Learning (Algorithm 1), which implements a variant on Q-learning [30], a model-free temporal-difference learning algorithm [26]. Our algorithm differs in two major ways from standard Q-learning. (1) Q-table updates are chosen to maximize the (potentially) nonlinear $W$, and each value $Q(s,a)$ is a vector in $\mathbb{R}^d$ corresponding to an estimate of the future reward vector possible that is welfare maximal. (2) Action selection is non-stationary. We keep track of the discounted cumulative reward vector within a trajectory so far, and select the action that maximizes total estimated welfare, including that already accumulated and future estimates.

We show experimentally in Section 6 that both of these changes are crucial to achieving high expected welfare for fair welfare functions such as NSW. We show in Theorem 5.3 that the algorithm still provably converges even with the nonlinear learning updates on a vector-valued Q-table.

To see the intuition for the significance of non-stationary action selection for optimizing the expected welfare of a nonlinear welfare function, consider an environment in which, after a random transition, the agent must later choose between complementary actions, each rewarding only one of two users. A stationary policy cannot condition on which rewards have already been accumulated, so the expected welfare resulting from such a policy is strictly less than 1. On the other hand, an optimal non-stationary policy $\pi^*$ that keeps track of accumulated rewards is able to choose the correct complementary action at $s_3$ depending on the random transition at $t = 1$, so that $\mathbb{E}_{\tau \sim \pi^*}\left[\mathrm{NSW}[\mathbf{R}]\right] = 1$.

This example shows why, when optimizing for fairness, one should keep track of the discounted cumulative reward allocated to each user thus far. A stationary policy does not distinguish which users have received higher or lower rewards on a given trajectory thus far. Our approach seeks to be greedy on the sum of this discounted cumulative reward and future estimates of reward stored in the Q-table. In this manner, users that have not received much reward so far within an episode are prioritized in action selection.

It is worth noting that, in order to incorporate information from both the past and the future, we need a consistent accounting of discounting for both terms, to ensure our agent has correct information from both past and future when deciding a fair policy:

$\mathbf{R}_t = \mathbf{r}_1 + \gamma \mathbf{r}_2 + \gamma^2 \mathbf{r}_3 + \cdots + \gamma^{t-1} \mathbf{r}_t$

$\gamma^{t}\, Q(s, a) = \mathbb{E}\left[ \gamma^{t} \mathbf{r}_{t+1} + \gamma^{t+1} \mathbf{r}_{t+2} + \cdots \;\middle|\; s_t = s,\, a_t = a \right].$

It is also important that each value in our learned Q-table stores the vector of estimated future rewards rather than simply a scalar estimate of the welfare achievable, as the true greedy objective is the welfare of the sum of these vectors, not the sum of the welfare of the two. Because of this, our Q-table has size $|\mathcal{S}| \times |\mathcal{A}| \times d$, where $d$ is the dimension of the reward. Indeed, as we show in Section 6, this does result in decreased convergence rates of the algorithm for larger $d$.
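A minimal sketch of Welfare Q-Learning combining the two ingredients above (the nonlinear update and non-stationary action selection). It is our reconstruction under assumptions, not the authors' reference implementation: we assume a gym-style environment with integer states whose `step` returns `(state, reward_vector, done, info)`, and ε-greedy exploration.

```python
import numpy as np

def welfare_q_learning(env, W, n_states, n_actions, d,
                       episodes=100, gamma=0.95, alpha=0.1, eps=0.1):
    Q = np.zeros((n_states, n_actions, d))        # vector-valued Q-table
    for _ in range(episodes):
        s, done, t = env.reset(), False, 0
        R_acc = np.zeros(d)                       # discounted reward so far
        while not done:
            if np.random.rand() < eps:            # exploration
                a = np.random.randint(n_actions)
            else:
                # Non-stationary action selection: greedy on the welfare of
                # accumulated reward plus the discounted future estimate.
                a = max(range(n_actions),
                        key=lambda b: W(R_acc + (gamma ** t) * Q[s, b]))
            s2, r, done, _ = env.step(a)          # r is a d-vector
            r = np.asarray(r, dtype=float)
            # Nonlinear learning update: bootstrap from the successor action
            # whose Q-vector is welfare-maximal (the "optimal filter" of
            # Definition 5.6 below).
            b_star = max(range(n_actions), key=lambda b: W(Q[s2, b]))
            target = r + (0.0 if done else gamma) * Q[s2, b_star]
            Q[s, a] += alpha * (target - Q[s, a])
            R_acc += (gamma ** t) * r
            s, t = s2, t + 1
    return Q
```

For NSW one could pass, e.g., a smoothed welfare such as `W = lambda u: np.sum(np.log(np.maximum(u, 0.0) + 1e-8))`, matching the log transform used in the experiments of Section 6.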
Convergence. We now argue that the Q values of our algorithm converge to an interpretable set of values.

Theorem 5.3. The Q-table of Algorithm 1 converges to a unique fixed point $Q^*$.

Proof. The proof follows from Banach's Fixed-Point Theorem [3], which guarantees the existence and uniqueness of the fixed point of a contraction map on a complete metric space. The update step of Algorithm 1 can be seen as applying, in expectation, an operator on the Q-table. To apply the fixed-point theorem, we define a distance $\mathrm{dist}$ between multi-objective Q-tables and show that this operator is a contraction. The Generalized Banach Fixed-Point Theorem therefore implies that Algorithm 1 will converge toward a unique fixed point of this operator. Proofs for the two technical lemmas are provided in the full version [13].

Definition 5.6 (Optimal filter). The optimal filter $\mathcal{H}$ acts on a multi-objective Q-table by

$(\mathcal{H} Q)(s) = \arg\max_{a' \in \mathcal{A}} W(Q(s, a')),$

where $\arg$ takes the multi-objective value corresponding to the maximum, i.e., $Q(s, a'')$ such that $a'' \in \arg\max_{a' \in \mathcal{A}} W(Q(s, a'))$.

We can define an optimality operator $\mathcal{T}$ in terms of the optimal filter.

Definition 5.7. The optimality operator $\mathcal{T}$ is defined as

$(\mathcal{T} Q)(s, a) = \mathbf{r}(s, a) + \gamma\, \mathbb{E}_{s' \sim \mathcal{P}(\cdot \mid s, a)}\left[ (\mathcal{H} Q)(s') \right].$

Note that in the algorithm, at each iteration, we sample from $\mathcal{P}(\cdot \mid s, a)$ to make an update. If the learning rate $\alpha_t$ satisfies the usual Robbins-Monro type conditions, namely $\sum_t \alpha_t = \infty$ and $\sum_t \alpha_t^2 < \infty$, the update at each iteration is, in expectation, applying the optimality operator $\mathcal{T}$. Thus, to show convergence, it suffices to show that iteratively applying $\mathcal{T}$ on any $Q$ leads to a unique Q-table.

Lemma 5.8 (The optimality operator is a contraction). Let $Q, Q'$ be any two multi-objective Q-value functions; then

$\mathrm{dist}(\mathcal{T} Q, \mathcal{T} Q') \le \gamma\, \mathrm{dist}(Q, Q'),$

where $\gamma \in [0,1)$ is the discount factor of the underlying MOMDP.

Finally, since in our design the distance is a well-defined metric, to prove convergence to a unique fixed point we will use the Generalized Banach Fixed-Point Theorem as in [32].

Lemma 5.9 (Generalized Banach Fixed-Point [32]). Given that $\mathcal{T}$ is a contraction mapping with Lipschitz coefficient $\gamma$ on the complete pseudo-metric space $\langle \mathcal{Q}, \mathrm{dist} \rangle$, there exists $Q^*$ such that $\lim_{n \to \infty} \mathrm{dist}(\mathcal{T}^n Q, Q^*) = 0$ for any $Q \in \mathcal{Q}$.

It follows from the Lemmas that there exists $Q^*$ such that $\lim_{n \to \infty} \mathrm{dist}(\mathcal{T}^n Q, Q^*) = 0$ for any $Q \in \mathcal{Q}$. Note that if the distance between two tables is 0, the tables are equal. In other words, iteratively applying the optimality operator $\mathcal{T}$ to a multi-objective Q-table will converge toward a unique Q-table. Since the update step in Algorithm 1 is applying $\mathcal{T}$ in expectation, the algorithm also converges toward a unique Q-table. This concludes the proof of Theorem 5.3. □

Note that the convergence result is not dependent on any particular welfare function $W$, but applies generally. Next, we provide an interpretation of the unique fixed point $Q^*$ of our algorithm. Note that the Bellman optimality conditions are not satisfied for nonlinear welfare functions, precluding the typical interpretation of the optimal Q-table. We nevertheless show that a very similar interpretation can be given, in which $Q^*(s, a)$ provides a lower bound estimate on the discounted cumulative reward vector that is achievable after taking action $a$ in state $s$ and then optimizing for $W$.

Lemma 5.11. Let $\pi^*_{Q^*}$ denote greedy stationary action selection with respect to $Q^*$ (Definition 5.10). Then for all $s, a$,

$Q^*(s, a) = \mathbf{r}(s, a) + \gamma\, \mathbb{E}_{s' \sim \mathcal{P}(\cdot \mid s, a)}\left[ V^{\pi^*_{Q^*}}(s') \right].$

Proof. Since $Q^*$ is a fixed point of $\mathcal{T}$, it suffices to show that

$\arg\max_{a' \in \mathcal{A}} W(Q^*(s', a')) = V^{\pi^*_{Q^*}}(s').$

Expanding the LHS and RHS recursively, writing $a^*_1 \in \arg\max_{a'} W(Q^*(s', a'))$ for the greedy action at $s'$ (see Definition 5.10), and using the definition of $\arg$ in Definition 5.6, we get

$\arg\max_{a' \in \mathcal{A}} W(Q^*(s', a')) = Q^*(s', a^*_1) = (\mathcal{T} Q^*)(s', a^*_1) = \mathbf{r}(s', a^*_1) + \gamma\, \mathbb{E}_{s'' \sim \mathcal{P}(\cdot \mid s', a^*_1)}\left[ \arg\max_{a'' \in \mathcal{A}} W(Q^*(s'', a'')) \right]$

and

$V^{\pi^*_{Q^*}}(s') = \mathbb{E}_{\tau \sim (\mathcal{P}, \pi^*_{Q^*})}\left[ \sum_{t=1}^{\infty} \gamma^{t-1} \mathbf{r}(s_t, a^*_t) \;\middle|\; s_1 = s' \right] = \mathbf{r}(s', a^*_1) + \gamma\, \mathbb{E}_{s'' \sim \mathcal{P}(\cdot \mid s', a^*_1)}\left[ V^{\pi^*_{Q^*}}(s'') \right].$

Both sides satisfy the same recursion and therefore coincide. This completes the proof of Lemma 5.11. □

Lemma 5.11 implies that each entry $Q^*(s, a)$ represents what the agent could actually expect to receive as total discounted future reward in expectation if one performs action $a$ in initial state $s$, then follows greedy stationary action selection using the Q-table. This is essentially the same interpretation as in traditional scalar Q-learning, except that there is no optimality guarantee of the result for non-linear welfare optimization. Intuitively, these entries only serve as an estimate of the lower bound of total discounted rewards that are achievable in the future. Algorithm 1 combines these estimates with keeping track of the discounted cumulative rewards up to a given point, in order to greedily optimize for the welfare of the sum of the two vectors. This is the essential intuition behind Algorithm 1.
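For a known tabular model, the optimality operator of Definition 5.7 can be iterated directly. Below is a small sketch of ours (the names and layout are assumptions, not the paper's code) that also illustrates the fixed-point iteration behind Theorem 5.3.

```python
import numpy as np

def apply_T(Q, r, P, gamma, W):
    """One application of the optimality operator T.
    Q, r: arrays of shape (S, A, d); P: (S, A, S) transition probabilities."""
    S, A, d = Q.shape
    # Optimal filter H: for each state, the Q-vector of a welfare-maximal action.
    HQ = np.stack([Q[s, max(range(A), key=lambda a: W(Q[s, a]))]
                   for s in range(S)])
    # (T Q)(s, a) = r(s, a) + gamma * sum_{s'} P[s, a, s'] * (H Q)(s')
    return r + gamma * np.einsum('sax,xd->sad', P, HQ)

# Iterating T from any initial Q converges to the unique fixed point Q*
# (Lemmas 5.8 and 5.9), e.g.:
#   for _ in range(1000):
#       Q = apply_T(Q, r, P, gamma, W=lambda u: np.prod(u))
```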
6 EXPERIMENTS

We run experiments under two simulated environments diagrammed in Figure 3. Our results demonstrate that (a) Welfare Q-Learning is effective in finding policies with high expected welfare compared with other baselines, (b) the rate of convergence depends on $d$, the dimensionality of the reward space, and (c) linear scalarization and mixture policies are generally inadequate for optimizing fair welfare functions. All results for all algorithms are obtained by averaging the NSW and utilitarian welfare of $\mathbf{R}$ for each episode over 50 runs. For all the experiments on both environments, each unit on the $x$-axis corresponds to an episode, which equals 10000 timesteps or action selections in the environment. The duration of a timestep is the same for all methods.

6.1 Metrics, Methods, and Baseline Algorithms

All of our experiments attempt to optimize NSW (results for other welfare functions are provided in the full version [13]). We measure the NSW function on the reward $\mathbf{R}$ earned thus far. As elaborated in Section 4, NSW satisfies all of our basic desiderata plus scale invariance, and is an intermediate welfare function between the extremes of egalitarian and utilitarian social welfare. For comparison, we also show utilitarian social welfare (the arithmetic mean of reward vectors) alongside NSW.

The geometric mean can be numerically unstable. In experiments we work with the log transform of NSW. That is, instead of maximizing $\mathrm{NSW}(\mathbf{R}) = (\prod_{i=1}^{d} R_i)^{1/d}$, we equivalently maximize $\sum_{i=1}^{d} \ln(R_i + \lambda)$, where $\lambda > 0$ is included as a smoothing factor in case $R_i = 0$. Due to the nature of the NSW function, the NSW of rewards with negative elements is undefined, or alternatively can be defined as $-\infty$. Thus, our scope of exploration is restricted to policies that yield all non-negative accumulated rewards.

6.1.1 Baseline Algorithms. We compare our algorithm against three baselines.

(1) Optimal Linear Scalarization. A simple MORL technique is to apply linear scalarization on the Q-table [29]. Given weights $w \in \mathbb{R}^d$, where $\sum_{i=1}^{d} w_i = 1$ for $d$ objectives, the scalarized value is $\sum_{i=1}^{d} w_i\, Q_i(s, a)$, where $Q_i(s, a)$ is the Q-value for the $i$th objective. For each time step, this scalarized value is treated by the algorithm as the objective, used both for $\epsilon$-greedy action selection and for the learning updates of $Q(s, a)$. We chose the weights that performed best on the NSW objective, as determined by a grid search through combinations of $w$. We include this algorithm as a baseline to demonstrate the limitations of linear scalarization on non-linear objectives and to show the importance of our non-linear learning updates in Algorithm 1.

(2) Stationary Policy. Algorithm 1, Welfare Q-Learning, learns a particular Q-table corresponding to a (potentially) non-linear welfare function, then performs non-stationary action selection. By contrast, we also show the results if one performs stationary $\epsilon$-greedy action selection on the same learned Q-table. That is, the stationary policy algorithm does not consider the accumulated reward vector $\mathbf{R}_t$ in its action selection. We include this algorithm to show the importance of non-stationary action selection.

(3) Optimal Mixture Policy. [28] proposed the idea of combining multiple Pareto optimal base policies into a single mixture policy. We chose our base policies as those that optimize each dimension of the reward vector independently. The algorithm then uses one of these policies for $\delta$ time steps, then switches to the next. To determine the optimal value of $\delta$ for optimizing NSW, we used a grid search, and we use the resulting optimal value $\delta^*$. This baseline examines the effectiveness of intuitive approaches (combining optimal policies for each user) for optimizing fairness.
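A sketch of the mixture baseline (3), with names of our choosing: $d$ base policies, each optimal for one reward dimension, applied in rotation with a switching interval $\delta$ tuned by grid search ($\delta^* = 227$ timesteps in the taxi environment of Section 6.2).

```python
def mixture_action(base_policies, state, t, delta):
    """base_policies: list of d functions state -> action, one per reward
    dimension; switch to the next base policy every `delta` timesteps."""
    active = (t // delta) % len(base_policies)
    return base_policies[active](state)
```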
6.2 Taxi Environment

Description. Inspired by the taxi toy example problem for single-objective RL [10], we designed a multi-objective taxi environment (see the full version [13] for a detailed description). In this grid world, our agent is a taxi driver whose goal is to deliver passengers from their origins to their destinations. There are d origin-destination pairs, one for each dimension of reward, and the agent earns reward in dimension i when dropping off a passenger from origin-destination pair i. There is an unlimited number of passengers for each origin-destination pair, but the taxi can take only one passenger at a time. This constraint forces the objectives to conflict, so our agent's fairness performance becomes more important: it should provide its delivery service to each origin successfully and fairly over time, without ignoring origins that are more difficult to serve (such as origin/destination pair 3 in Figure 3a).

Results. Results are shown in Figure 4. Welfare Q-Learning achieves the maximum average NSW score among all the algorithms, and still manages to achieve the second-highest utilitarian score. We observe that our non-stationary policy outperforms the stationary policy on the same Q-table for both the NSW and the utilitarian score. Note that a stationary policy that optimizes NSW in this environment must essentially make a large loop, always taking each origin-destination pair in turn, whereas a non-stationary policy can selectively optimize a single origin-destination pair for several time steps before switching to another pair. Linear scalarization has the lowest average NSW, since there simply does not exist a set of weights that produces accumulated rewards in all dimensions. It achieves the highest utilitarian score, since it favors completing deliveries for the closest origin/destination pairs (such as pair 2 in Figure 3a). The mixture policy performs generally well, but slightly below Welfare Q-Learning; this is because an optimal fair policy in this environment does have the structure of alternating between optimizing different dimensions one at a time. This is not, however, true in general, as is seen in the next environment. Although the mixture policy converges quickly (each dimension independently is very easy to optimize), this performance is also subject to finding the optimal interval I* for the taxi environment (227 timesteps) via a search through the parameter space, which involves a computational cost not reflected in the figures. For Welfare Q-Learning, we observe an inverse correlation between the dimensionality of the reward space and the rate of convergence, as shown in Figure 4c. A possible explanation is that the increase in dimensionality increases the size of the Q-table, which is of size |S| × |A| × d; thus, more updates are needed to converge. A sketch of the non-stationary action selection used here follows.
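The non-stationary selection rule below is our reading of "optimizing the welfare of the sum of the two vectors"; in particular, the gamma**t weighting of the Q-estimate is an assumption of ours, not a detail confirmed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_action(Q, s, R_acc, t, W, gamma=0.99, eps=0.1):
    # epsilon-greedy, non-stationary action selection: pick the action a
    # maximizing W(R_acc + gamma**t * Q[s, a]).  Because R_acc (the reward
    # accumulated so far) enters the criterion, the same state s can yield
    # different actions over time -- this is the non-stationarity.
    n_actions = Q.shape[1]
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    scores = [W(R_acc + (gamma ** t) * Q[s, a]) for a in range(n_actions)]
    return int(np.argmax(scores))
```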
6.3 Resource Gathering (RG) Environment

Description. Building on the resource-collection domain [5], we modified the environment to include more complexity and randomness, and we also restructured it as a multi-objective setting (see the full version [13] for a detailed description). In this grid world, our agent collects three types of resources (gold, gem, and sword) that spawn randomly at different locations and also disappear with a given probability. The dimensions of reward correspond to the resources; that is, gathering a given resource earns reward in that dimension and 0 in the others. The goal of our agent is to collect as many resources as possible while maintaining a balance between the different types of resources.

Results. Welfare Q-Learning achieves the maximum average NSW score among all the algorithms. Although linear scalarization achieves a reasonable utilitarian score, it fails to satisfy fairness, as it largely ignores one of the dimensions/resource types. The diminishing performance of linear scalarization on NSW is presumably due to the algorithm increasingly coming to optimize for some dimensions while ignoring others. The mixture policy achieves the utilitarian score most comparable to the non-stationary policy, yet a substandard NSW score. Unlike in the taxi environment, the optimal policy in RG is not characterized by optimizing each objective sequentially, which results in ignoring certain types of nearby resources.

An additional desirable property of optimizing NSW is scale invariance: the optimizing policy is invariant with respect to changes in the scale of a dimension of reward. We demonstrate this empirically by comparing the distribution over resources gathered using Nash Welfare Q-Learning (Welfare Q-Learning with NSW as the welfare function) versus linear scalarization in Figure 5c. When all resources are worth the same amount of reward in their respective dimensions (Figure 5c, top), the NSW agent achieves a balanced distribution between the three types of resources, while the linearly scalarized agent tends to neglect swords in the third dimension. To rectify this, one might try to rescale the rewards (Figure 5c, bottom) so that the value of swords is scaled to 50; but then the scalarized policy changes drastically to gather swords and ignore gems. The NSW agent, however, is immune to scaling of this kind, retaining the same general distribution of reward in both cases. In other words, fair welfare (and especially NSW) optimization in a MOMDP may be more robust to specifications of the environment as compared to techniques based on linear scalarization; the toy computation below illustrates the effect.
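This is a toy illustration of ours, not the experiment itself: rescaling one reward dimension flips the ranking under equal-weight linear scalarization but leaves the NSW ranking unchanged (the geometric mean scales every candidate by the same factor).

```python
import numpy as np

def nsw(R):                      # Nash social welfare: geometric mean
    return np.prod(np.asarray(R, float)) ** (1.0 / len(R))

def util(R):                     # utilitarian welfare / equal-weight scalarization
    return float(np.mean(R))

a = np.array([10.0, 10.0, 10.0])   # balanced over gold / gem / sword
b = np.array([16.0, 16.0, 0.5])    # neglects the third resource (sword)
scale = np.array([1.0, 1.0, 5.0])  # e.g., swords rescaled from 10 to 50 per unit

print(nsw(a) > nsw(b),  nsw(scale * a) > nsw(scale * b))    # True  True
print(util(a) > util(b), util(scale * a) > util(scale * b)) # False True
```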
7 FUTURE WORK

While we know that exact optimization of, say, NSW is intractable, we do not know what provable approximation factors might be achievable in tabular MORL. Additionally, we observe a dependence between the dimensionality and the convergence rate of our algorithm, but we do not know whether this "curse of dimensionality" is fundamental to nonlinear welfare optimization in MORL. Finally, though even the tabular setting is challenging for nonlinear welfare optimization in MORL, we believe the intuition of non-stationary action selection coupled with nonlinear learning updates can be extended to the function approximation setting and combined with deep neural network representations.

1 ENVIRONMENT DESCRIPTIONS

1.1 Taxi

In this environment, the agent is a taxi driver who is trying to deliver multiple passengers from their origins to their destinations. For simplicity, we assume there is an infinite number of passengers at each origin, and the task is modeled as a continuing task and therefore has no terminal state. The state space contains information about the location of the taxi, whether there is a passenger currently in the taxi, as well as the destination of the passenger in the taxi. Our agent has six actions: drive north, south, east, or west, and pick up or drop off a passenger. The dimension of the objectives is the number of origin-destination pairs, which can be set arbitrarily as a parameter of the environment. At each time step, the agent receives a reward of r = 0 for movement, r = −10 for an illegal action (dropping or picking at incorrect locations), and, for a correct delivery, a reward of 30 in the dimension of the origin location and 0 in the others. We also restrict the taxi to carry only one passenger at a time. This constraint enforces conflicting objectives, where the delivery of one passenger implies ignoring the others. Under this particular setting, the agent's fairness performance becomes more important: it should provide its delivery service to each location successfully and fairly over time within each episode, without ignoring certain locations.

1.2 Resource Gathering (RG)

The RG domain is a 5 × 5 grid world where the agent collects three types of resources (gold, gem, and sword) spawned randomly at different locations and disappearing with a small probability of 10%; with a probability of 99.99%, a new map is generated randomly, indicating the newly updated locations of the resources. Our agent has four actions: traveling up, down, left, and right in the four cardinal directions. The reward encodes the value achieved for each resource type (determined by the quantity, 1, of resources and the reward value of the resource type). Under equal rewards, the agent receives a reward of 10 in one dimension for gathering one resource of the corresponding type. We trained the agent with non-stationary action selection in a discounted continuing task, evaluated the agent over 10000 steps with resources of equal rewards and of scaled rewards, and recorded the accumulated rewards for each resource type. The goal of our agent is to collect as many resources as possible while maintaining a balance between the different types of resources.

2 EXPERIMENTAL RESULTS FOR OTHER WELFARE FUNCTIONS

We run experiments for Welfare Q-Learning based on p-welfare and egalitarian welfare functions. We choose a range of values of p in [−1, 1], and record the performance of each with respect to the NSW and utilitarian scores.

3 PROOF OF CONVERGENCE FOR WELFARE Q-LEARNING

In this section, we provide a convergence proof for our multi-objective algorithm. The proof is based on the well-known Banach Fixed-Point Theorem, which guarantees the existence and uniqueness of the fixed point of a contraction map on a complete metric space. Generalizing this theorem slightly, we can view the value functions of reinforcement learning as elements of a metric space, so that finding the optimal value or policy amounts to finding the fixed point of a certain contraction on that space. To do this, we (i) define a well-defined metric on the space of Q-functions; (ii) show that the optimality operator is a contraction; and finally (iii) apply the Generalized Banach Fixed-Point Theorem.

Theorem 3.1. For discount factor γ ∈ [0, 1), the Q values of Welfare Q-Learning converge.
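The p-welfare functions referenced in Section 2 above are, under our reading, the generalized p-means (in the spirit of [4]); the paper's exact definition may differ. A minimal sketch:

```python
import numpy as np

def p_welfare(R, p):
    # Generalized p-mean welfare: ((1/d) * sum_i R_i**p) ** (1/p).
    # p = 1 is utilitarian, p -> 0 recovers NSW (geometric mean),
    # and p -> -inf approaches egalitarian (min) welfare.
    R = np.asarray(R, dtype=float)
    if p == 0:                                  # NSW limit
        return np.prod(R) ** (1.0 / len(R))
    return float(np.mean(R ** p)) ** (1.0 / p)

R = np.array([4.0, 1.0, 9.0])
for p in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(p, round(p_welfare(R, p), 3))
# Smaller p weights the worst-off dimension more heavily.
```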
5.1.1 Non-stationarity. Consider the MOMDP diagrammed in Figure 2. There are d users. |A(s_1)| = 1, and at t = 1 the environment transitions from s_1 to s_2^i with probability 1/d and reward 0 ∈ R^d. At t = 2, the environment transitions from s_2^i to s_3 with probability 1 and a reward vector that has 1 at the i-th component and 0 elsewhere. At t = 3, the agent gets to choose from d actions, a_1, a_2, ..., a_d, which yield the associated rewards (1, ..., 1, 0), (1, ..., 0, 1), ..., (0, ..., 1, 1), respectively. A stationary policy cannot achieve expected NSW greater than 1/d on this MOMDP, even though a non-stationary policy can achieve expected NSW of 1. Without loss of generality, assume a stationary stochastic policy chooses action a_j with probability p_j for j = 1, ..., d, such that Σ_j p_j = 1. Then the expected Nash social welfare is Σ_j p_j · (1/d) = 1/d, since the accumulated reward vector contains a zero coordinate (and hence has NSW 0) unless the chosen action a_j matches the realized index i.

Algorithm 1 Welfare Q-Learning
1: Parameters: learning rate α ∈ (0, 1], discount factor γ ∈ [0, 1), exploration rate ε > 0, welfare function W
2: Require: initialize Q(s, a) arbitrarily for all s ∈ S, a ∈ A(s), except Q(s̄, ·) ← 0 for terminal states s̄
3: for each episode do ...

Figure 2: Example of a stochastic MOMDP in which the expected Nash social welfare of stationary policies shrinks to 0 as the dimension of rewards increases; the intermediate reward vectors are (1, 0, ..., 0), (0, 1, ..., 0), ..., (0, 0, ..., 1).

Theorem 5.3. For discount factor γ ∈ [0, 1), the Q values of Welfare Q-Learning converge.

Definition 5.4. Define d on the space of Q-tables Q by d(Q, Q′) := max_{s ∈ S, a ∈ A, i ∈ {1,...,d}} |Q_i(s, a) − Q′_i(s, a)|.

Lemma 5.5. ⟨Q, d⟩ is a complete metric space.

Next, we define the optimality filter H.

Definition 5.6. The optimality filter H is an operator defined as (H Q)(s) = arg_W max_{a′ ∈ A} W(Q(s, a′)).

Definition 5.10. Define a policy π*_Q given a Q-table Q as follows: at a given state s, let π*_Q(s) = a* = arg max_a W(Q(s, a)), and let r(s, a) be the immediate reward of performing action a in state s. Then the value function corresponding to π*_Q is

V^{π*}_Q(s) = E_{τ∼(P,π*_Q)} [ Σ_{t=1}^∞ γ^{t−1} r(s_t, a*_t) | s_1 = s ].

Lemma 5.11 (Interpreting the Fixed-Point). Let Q* be the unique fixed point of the algorithm, i.e., Q* = T Q*; then Q*(s, a) = r(s, a) + γ E_{s′∼P(·|s,a)} [ V^{π*}_{Q*}(s′) ].

Proof. Since Q* is the fixed point of the algorithm, Q* = T Q*. Expanding this using the definition of T, we get Q*(s, a) = r(s, a) + γ E_{s′∼P(·|s,a)} [ arg_W max_{a′ ∈ A(s′)} W(Q*(s′, a′)) ]. So, in turn, it suffices to show that arg_W max_{a″ ∈ A} W(Q*(s″, a″)) is equal to V^{π*}_{Q*}(s″). Repeat this argument until the agent is in one of the terminal states s̄. Note that Q(s̄, ·) = 0, i.e., the Q-values for the terminal states are zero for all actions, so arg_W max_{a ∈ A} W(Q*(s̄, a)) = 0 = V^{π*}_{Q*}(s̄).

Figure 1: Example MOMDP. Dotted lines represent trajectories generated by π_1, solid lines by π_2.
Figure 3: Simulated Environments.
Figure 4: Experiment Results for the Taxi Environment; the non-stationary policy is Welfare Q-Learning.
Figure 5: Experiment Results for the RG Environment; the non-stationary policy is Welfare Q-Learning. Panels: (a) Online Performance (NSW); (b) Online Performance (Utilitarian); (c) Distribution of Resources for Equal (Top) and Scaled (Bottom) Rewards.
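Tying the recovered fragments together, a minimal tabular skeleton consistent with the Algorithm 1 parameters above might look as follows. This is our sketch under assumptions (a per-objective Q-learning update toward the welfare-greedy successor action, and an assumed non-stationary selection rule), not the authors' implementation; the env interface is hypothetical.

```python
import numpy as np

def welfare_q_learning(env, W, alpha=0.1, gamma=0.95, eps=0.1,
                       episodes=100, horizon=10_000):
    # env is assumed to expose: n_states, n_actions, n_objectives,
    # reset() -> s, and step(a) -> (s_next, r_vec, done).
    Q = np.zeros((env.n_states, env.n_actions, env.n_objectives))
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s, R_acc = env.reset(), np.zeros(env.n_objectives)
        for t in range(horizon):
            if rng.random() < eps:
                a = int(rng.integers(env.n_actions))
            else:  # non-stationary: welfare of accumulated + estimated reward
                a = int(np.argmax([W(R_acc + (gamma ** t) * Q[s, b])
                                   for b in range(env.n_actions)]))
            s2, r, done = env.step(a)
            # welfare-greedy successor action (the optimality filter H)
            b_star = int(np.argmax([W(Q[s2, b]) for b in range(env.n_actions)]))
            Q[s, a] += alpha * (r + gamma * Q[s2, b_star] - Q[s, a])  # vector update
            R_acc += r
            s = s2
            if done:
                break
    return Q
```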
Figure 1 (appendix): Visualization of the Taxi grid world; the orange circle is the taxi, origins are blue squares, destinations are red squares, with numbers indicating the corresponding origin/destination pairs.
Figure 2 (appendix): Visualization of the RG grid world.
Figure 3 (appendix): Experimental Results for Other Welfare Functions.

The Taxi environment, in itemized form:
(1) State space: contains information about the location of the taxi on the grid, whether there is a passenger in the taxi, and the destination of the passenger in the taxi.
(2) Action space: move north, south, east, or west; pick up passenger; drop off passenger.
(3) Reward function: r_i = 30 and r_j = 0 for all j ≠ i if the passenger from origin i in the taxi is dropped correctly at its destination; r = −10 if an invalid pick or drop is performed; r = 0 if a ∈ {north, south, east, west} or a is a valid pick or drop.

Footnotes: (1) In general we may have a distribution over starting states; we assume a single starting state for ease of exposition. (2) For simplicity of exposition, we assume rewards are deterministic.

So the triangle inequality holds for d. It is easy to verify from the definition that d(Q, Q) = 0 and d(Q, Q′) = d(Q′, Q). So d(·) is indeed a well-defined metric. □

Remark. It is easy to show that the metric space ⟨Q, d⟩ is complete.

Next, similar to scalarized Q-learning, we design an optimality filter H, defined by (H Q)(s) = arg_W max_{a′ ∈ A} W(Q(s, a′)), where arg_W takes the multi-objective value corresponding to the maximum, i.e., Q(s, a″) such that a″ ∈ arg max_{a′ ∈ A} W(Q(s, a′)), and W is a welfare function of interest. Using the definition of the optimality filter, we can then write the optimality operator T in terms of the optimality filter: (T Q)(s, a) = r(s, a) + γ E_{s′∼P(·|s,a)} [(H Q)(s′)].

Remark. Note that in the algorithm, at each iteration, we sample from P(·|s, a) to make an update. If the learning rate α_t satisfies the usual Robbins-Monro type conditions, namely Σ_t α_t = ∞ and Σ_t α_t² < ∞, the update at each iteration is, in expectation, an application of the optimality operator T. Thus, to show convergence, it suffices to show that iteratively applying T to any Q leads to a unique Q-table.

In the chain of inequalities establishing the contraction property, step (1) is due to |E[·]| ≤ E[|·|] ≤ max |·|. For step (2), let a′ be the action chosen to maximize the value of W(Q(s′, ·)) for the state and component of interest; the step then arises from the w.l.o.g. assumption that W(Q(s′, a′)) − max_{a″} W(Q′(s′, a″)) ≥ 0, so that the whole expression inside |·| is nonnegative and W(Q(s′, a′)) − W(Q′(s′, a′)) ≥ 0; the last two terms can be discarded since W(Q′(s′, a′)) ≤ max_{a″} W(Q′(s′, a″)). Step (3) holds since the maximum over the specific a′ is upper-bounded by the maximum over all a″, for any a′ and any W(·). This completes the proof that T is a contraction. □

Finally, since in our design the distance d is a well-defined metric, to prove convergence to a unique fixed point we use the Generalized Banach Fixed-Point Theorem.

Lemma 3.4 (Generalized Banach Fixed-Point Theorem). Given that T is a contraction mapping with Lipschitz coefficient γ on the complete pseudo-metric space ⟨Q, d⟩, there exists Q* such that lim_{n→∞} d(T^n Q, Q*) = 0 for any Q ∈ Q.

Since our distance d is a well-defined metric (Lemma 3.2), ⟨Q, d⟩ is a complete metric space, and hence in particular a complete pseudo-metric space. Moreover, T is a contraction (Lemma 3.3). So it follows from Lemma 3.4 that there exists Q* such that lim_{n→∞} d(T^n Q, Q*) = 0 for any Q ∈ Q. In other words, iteratively applying the optimality operator T to any multi-objective Q-table converges to a unique table.
Since in Welfare Q-Learning the update at each iteration is, in expectation, an application of T, the algorithm is convergent. This concludes the proof of Theorem 3.1. □

REFERENCES

[1] Mridul Agarwal, Vaneet Aggarwal, and Tian Lan. 2022. Multi-Objective Reinforcement Learning with Non-Linear Scalarization. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS '22). IFAAMAS, Richland, SC, 9-17.
[2] Lucas N. Alegre, Florian Felten, El-Ghazali Talbi, Grégoire Danoy, Ann Nowé, Ana L. C. Bazzan, and Bruno C. da Silva. 2022. MO-Gym: A Library of Multi-Objective Reinforcement Learning Environments. In Proceedings of the 34th Benelux Conference on Artificial Intelligence (BNAIC/Benelearn 2022).
[3] Stefan Banach. 1922. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundamenta Mathematicae 3, 1 (1922), 133-181.
[4] Siddharth Barman, Arindam Khan, and Arnab Maiti. 2022. Universal and Tight Online Algorithms for Generalized-Mean Welfare. Proceedings of the AAAI Conference on Artificial Intelligence 36, 5 (2022), 4793-4800. https://doi.org/10.1609/aaai.v36i5.20406
[5] Leon Barrett and Srini Narayanan. 2008. Learning all optimal policies with multiple criteria. In Proceedings of the 25th International Conference on Machine Learning. 41-47.
[6] Richard Bellman. 1966. Dynamic programming. Science 153, 3731 (1966), 34-37.
[7] Ken Binmore. 1998. Egalitarianism versus utilitarianism. Utilitas 10, 3 (1998), 353-367.
[8] Ioannis Caragiannis, David Kurokawa, Hervé Moulin, Ariel D. Procaccia, Nisarg Shah, and Junxing Wang. 2019. The unreasonable fairness of maximum Nash welfare. ACM Transactions on Economics and Computation (TEAC) 7, 3 (2019), 1-32.
[9] Erio Castagnoli and Pietro Muliere. 1990. A note on inequality measures and the Pigou-Dalton principle of transfers. In Income and Wealth Distribution, Inequality and Poverty. Springer, 171-182.
[10] Thomas G. Dietterich. 2000. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research 13 (2000), 227-303.
[11] Rick Durrett. 2019. Probability: Theory and Examples. Vol. 49. Cambridge University Press.
[12] Brandon Fain, Kamesh Munagala, and Nisarg Shah. 2018. Fair allocation of indivisible public goods. In Proceedings of the 2018 ACM Conference on Economics and Computation. 575-592.
[13] Zimeng Fan, Nianli Peng, Muhang Tian, and Brandon Fain. 2022. Welfare and Fairness in Multi-objective Reinforcement Learning. https://doi.org/10.48550/ARXIV.2212.01382
[14] Tianmeng Hu, Biao Luo, and Chunhua Yang. 2021. Multi-objective optimization for autonomous driving strategy based on Deep Q Network. Discover Artificial Intelligence 1, 1 (2021), 1-11. https://doi.org/10.1007/s44163-021-00011-3
[15] Shahin Jabbari, Matthew Joseph, Michael Kearns, Jamie Morgenstern, and Aaron Roth. 2017. Fairness in reinforcement learning. In International Conference on Machine Learning. PMLR, 1617-1626.
[16] Mamoru Kaneko and Kenjiro Nakamura. 1979. The Nash social welfare function. Econometrica (1979), 423-435.
[17] Euiwoong Lee. 2017. APX-hardness of maximizing Nash social welfare with indivisible items. Information Processing Letters 122 (2017), 17-20. https://doi.org/10.1016/j.ipl.2017.01.012
[18] Chunming Liu, Xin Xu, and Dewen Hu. 2014. Multiobjective reinforcement learning: A comprehensive overview. IEEE Transactions on Systems, Man, and Cybernetics: Systems 45, 3 (2014), 385-398.
[19] R. Duncan Luce and Howard Raiffa. 1989. Games and Decisions: Introduction and Critical Survey. Courier Corporation.
[20] Kristof Van Moffaert, Madalina M. Drugan, and Ann Nowé. 2013. Hypervolume-based multi-objective reinforcement learning. In International Conference on Evolutionary Multi-Criterion Optimization. Springer, 352-366.
[21] Hervé Moulin. 2004. Fair Division and Collective Welfare. MIT Press.
[22] John F. Nash Jr. 1950. The bargaining problem. Econometrica (1950), 155-162.
[23] Wlodzimierz Ogryczak, Hanan Luss, Michał Pióro, Dritan Nace, and Artur Tomaszewski. 2014. Fair optimization and networks: A survey. Journal of Applied Mathematics 2014 (2014).
[24] Amartya Sen. 2018. Collective Choice and Social Welfare. Harvard University Press.
[25] Umer Siddique, Paul Weng, and Matthieu Zimmer. 2020. Learning Fair Policies in Multi-Objective (Deep) Reinforcement Learning with Average and Discounted Rewards. In Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 119), Hal Daumé III and Aarti Singh (Eds.). PMLR, 8905-8915. https://proceedings.mlr.press/v119/siddique20a.html
[26] Richard S. Sutton. 1988. Learning to predict by the methods of temporal differences. Machine Learning 3, 1 (1988), 9-44.
[27] Richard S. Sutton and Andrew G. Barto. 1998. Introduction to Reinforcement Learning.
[28] Peter Vamplew, Richard Dazeley, Ewan Barker, and Andrei Kelarev. 2009. Constructing stochastic mixture policies for episodic multiobjective reinforcement learning tasks. In Australasian Joint Conference on Artificial Intelligence. Springer, 340-349.
[29] Kristof Van Moffaert, Madalina M. Drugan, and Ann Nowé. 2013. Scalarized multi-objective reinforcement learning: Novel design techniques. In 2013 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL). IEEE, 191-199.
[30] Christopher John Cornish Hellaby Watkins. 1989. Learning from Delayed Rewards. Ph.D. thesis.
[31] Ronald R. Yager. 1993. Families of OWA operators.
Fuzzy Sets and Systems 59, 2 (1993), 125-148. https://doi.org/10.1016/0165-0114(93)90194-M
[32] Runzhe Yang, Xingyuan Sun, and Karthik Narasimhan. 2019. A Generalized Algorithm for Multi-Objective Reinforcement Learning and Policy Adaptation. Curran Associates Inc., Red Hook, NY, USA.
[ "https://github.com/MuhangTian/Fair-MORL-AAMAS" ]
[ "Mobility-Aware Coded Storage and Delivery", "Mobility-Aware Coded Storage and Delivery" ]
[ "Emre Ozfatura [email protected] \nDepartment of Electrical and Electronic Engineering\nInformation Processing and Communications Lab\nImperial College London\n\n", "Deniz Gündüz [email protected] \nDepartment of Electrical and Electronic Engineering\nInformation Processing and Communications Lab\nImperial College London\n\n" ]
[ "Department of Electrical and Electronic Engineering\nInformation Processing and Communications Lab\nImperial College London\n", "Department of Electrical and Electronic Engineering\nInformation Processing and Communications Lab\nImperial College London\n" ]
[]
Content caching at small-cell base stations (SBSs) is a promising method to mitigate the excessive backhaul load and delay, particularly for on-demand video streaming applications. A cache-enabled heterogeneous cellular network architecture is considered in this paper, where mobile users connect to multiple SBSs during a video downloading session, and the SBSs request files, or fragments of files, from the macro-cell base station (MBS) according to the user requests they receive. A novel content storage and delivery scheme that exploits coded storage and coded delivery jointly is introduced to reduce the load on the backhaul link from the MBS to the SBSs. It is shown that the proposed caching scheme, by exploiting user mobility, provides a significant reduction in the number of sub-files required while also reducing the backhaul load when the cache capacity is large. Overall, for practical scenarios in which the number of sub-files that can be created is limited (by the file size or the protocol overhead), the proposed coded caching and delivery scheme decidedly outperforms other known alternatives.
10.1109/tcomm.2020.2981454
[ "https://arxiv.org/pdf/1804.01903v2.pdf" ]
4,611,254
1804.01903
9d6ae874301e1f0c95b2dedf3e4bc395172a830a
Mobility-Aware Coded Storage and Delivery

Emre Ozfatura and Deniz Gündüz
Information Processing and Communications Lab, Department of Electrical and Electronic Engineering, Imperial College London

Abstract—Content caching at small-cell base stations (SBSs) is a promising method to mitigate the excessive backhaul load and delay, particularly for on-demand video streaming applications. A cache-enabled heterogeneous cellular network architecture is considered in this paper, where mobile users connect to multiple SBSs during a video downloading session, and the SBSs request files, or fragments of files, from the macro-cell base station (MBS) according to the user requests they receive. A novel content storage and delivery scheme that exploits coded storage and coded delivery jointly is introduced to reduce the load on the backhaul link from the MBS to the SBSs. It is shown that the proposed caching scheme, by exploiting user mobility, provides a significant reduction in the number of sub-files required while also reducing the backhaul load when the cache capacity is large. Overall, for practical scenarios in which the number of sub-files that can be created is limited (by the file size or the protocol overhead), the proposed coded caching and delivery scheme decidedly outperforms other known alternatives.

I. INTRODUCTION

Due to the popularity of on-demand video streaming services, such as YouTube and Netflix, video dominates the Internet traffic [1], [2]. A promising solution for mitigating the excessive video traffic and reducing the latency of video streaming, particularly in cellular networks, is storing popular contents at the network edge. There are two prominent approaches used extensively in the literature to reduce the backhaul load in cellular networks, namely coded storage and coded delivery. In a broad sense, coded storage is designed from the perspective of the users, and allows them to efficiently receive a file from multiple access points without worrying about overlapping bits, which can be considered a "liquidification" of content. Maximum distance separable (MDS) and fountain codes have been studied extensively, for example, in multi-access downlink scenarios, e.g., a static user downloading content from multiple small-cell base stations (SBSs) [3]-[6], mobile users (MUs) connecting to different SBSs sequentially to download content [7]-[11], or MUs utilizing device-to-device (D2D) communication opportunities [12], [13]. Coded delivery, on the other hand, is designed from the perspective of the server, which utilizes the caches of the users to seek multicasting opportunities in order to reduce the amount of data it needs to transmit to the users to satisfy their demands [14]-[24]. Coded delivery schemes consist of two phases. In the placement phase, files are divided into sub-files, and each user stores a certain subset of the sub-files. In the delivery phase, the server carefully constructs multicast messages as XORed combinations of the requested sub-files; each user recovers its request from the multicast messages together with its own cache contents. We note that the multicast gain increases with the number of users; that is, the higher the number of users, the lower the per-user delivery rate.
However, to achieve the promised gain, the number of sub-files has to increase exponentially with the number of users, which is considered one of the main challenges facing the implementation of coded delivery in practice [25]. To remedy this limitation of coded delivery, coded caching and delivery designs that require low subpacketization levels have become an active research area. In [27], [28], the authors show that a particular family of bipartite graphs, namely Ruzsa-Szemerédi graphs, can be used to construct a coded caching scheme in which, for a sufficiently large number of nodes K, the number of sub-files scales linearly with K. Unfortunately, however, the large-K assumption limits the practical applicability of the proposed coded caching design. Alternatively, in [29] a novel coded caching scheme based on linear block codes is introduced, and it is shown that the number of sub-files can be reduced dramatically with a small increase in the delivery rate for any practical value of K. It is further shown that, using different linear block codes, different delivery rates can be achieved with different numbers of sub-files for fixed K, M and N values, allowing some flexibility for implementation. In [30], coded caching designs based on linear block codes are constructed using so-called placement delivery arrays (PDAs). We note that all the aforementioned works seek a coded caching design that reduces the number of sub-files with a minimum sacrifice in the delivery rate.

In this paper, we show that the mobility pattern of the users can be utilized to reduce the number of required sub-files, by introducing a novel coded storage and delivery scheme designed to take into account the random mobility patterns of the users. The proposed scheme divides the SBSs into smaller groups according to the mobility patterns of the users, and applies coded delivery to each group of SBSs independently. MDS-coded storage is used to guarantee that the MUs can collect useful information from any of the SBSs they connect to until they recover their requested files. We introduce an efficient grouping strategy by exploiting the analogy with the well-known frequency reuse pattern problem [31].

Fig. 1: Illustration of the static user access models studied in [14] and [26], respectively.

In [26], a hierarchical network, in which a macro-cell base station (MBS) serves multiple cache-equipped SBSs through a shared link while each user is connected to L SBSs, is analyzed, and it is shown that a lower delivery rate compared to [14] is achievable (please see Fig. 1 for an illustration of the models considered in [14] and [26]). Although it is not highlighted explicitly in [26], the existence of a multi-access pattern, i.e., each user accessing L SBSs, reduces the number of sub-files as well. A similar observation is made in [32] by leveraging multiple transmission antennas instead of user mobility. Considering a MBS with L transmit antennas, the users are divided into groups of L, so that the channel from the MBS to each group becomes an L × L channel with L transmit antennas and L single-antenna users. For the delivery phase, a two-level coding scheme is used: the outer code is designed considering each group of users as a single virtual user, and their L antennas as a single virtual antenna, and the coded caching scheme of [14] is applied to these virtual users. Hence, each user in the same group caches the same sub-files.
The inner code, for each group, is designed according to the L × L transmission channel from the server to the L users in this group.

The rest of the paper is organized as follows. The system model is introduced in Section II, and the proposed coded storage and delivery scheme is presented in Section III. The performance of the proposed scheme is evaluated and compared numerically with the coded delivery scheme in [14] in Section IV. Finally, in Section V we conclude the paper with a summary of our main contributions and potential future research directions.

Notations. Throughout the paper, for a positive integer N, the set {1, ..., N} is denoted by [N]. We use ⊕ to denote the bit-wise XOR operation, while C(j, i) denotes the binomial coefficient, i.e., the number of i-element subsets of a set with j elements.

II. SYSTEM MODEL

A. Network model

We consider a cellular network architecture that consists of one MBS and K SBSs, SBS_1, ..., SBS_K. The MBS has access to a content library of N files, W_1, ..., W_N, each of size F bits. Each SBS is equipped with a cache memory of MF bits. The SBSs are connected to the MBS through a shared wireless backhaul link. Hence, when a user requests a content from a SBS, the SBS first checks its content cache. If the requested content is fully cached, the SBS directly delivers the corresponding file. If the requested content is not cached at all, or only partially cached, the remaining parts of the content are first transferred from the MBS to the SBS over the backhaul link. We assume that all the demands from the MBS are delivered simultaneously over a shared error-free backhaul connection. In this paper, we assume that all the files in the library are requested by the users with the same probability, i.e., W_n is requested by a MU with probability 1/N, n ∈ [N]. Under this assumption, if N is large compared to K, which is the case in realistic scenarios, it is safe to assume that all the users request different contents. For instance, when K = 30 and N = 10000, the probability that every user requests a different file is approximately 0.957 (see the check below). This assumption has been widely adopted in the coded caching literature, as it also represents the worst-case scenario. We also assume that the number of users in the network is limited by the number of SBSs; hence, there are K users in the network, U_1, ..., U_K, and the requests of the users are denoted by the demand vector d ≜ (d_1, ..., d_K). The placement phase, during which the caches of the SBSs are filled, takes place before the demand vector d is revealed. The required delivery rate for the backhaul link, R(M, d), is defined as the minimum number of bits that must be transmitted over the shared link, normalized by the file size, for a given normalized cache capacity M, in order to satisfy all the user demands d. Since we assume that each user requests a different file, we simply use R(M) to denote the worst-case delivery rate instead of R(M, d). The delivery rate over the backhaul link from the MBS to the SBSs depends on the access model of the users. In this paper, we are particularly interested in the single access model with mobility, in which a MU is connected to exactly one SBS at a particular time instant; however, due to mobility, it connects to multiple SBSs over time.
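The distinct-request probability quoted in the network model above is a birthday-problem product, P = Π_{k=0}^{K−1} (N − k)/N; a two-line check (ours) reproduces it:

```python
import math

K, N = 30, 10_000
p_distinct = math.prod((N - k) / N for k in range(K))
print(round(p_distinct, 3))   # 0.957
```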
To better motivate and explain our model and results, we will first explain the previously studied access models in the literature, and then provide a detailed explanation of the considered single access model with mobility.

B. User access models

1) Static single access model: In this model, it is assumed that each user is connected to exactly one SBS, as illustrated in Fig. 1a. This corresponds to the shared link problem introduced in [14]. The caching and coded delivery method introduced in [14] works as follows. For t ≜ MK/N ∈ Z, in the placement phase, all the files are cached at level t; that is, W_n, n ∈ [N], is divided into C(K, t) non-overlapping sub-files of equal size, and each sub-file is cached by a distinct subset of t SBSs. Then, each sub-file can be identified by a subset I, where I ⊆ [K] and |I| = t, such that sub-file W_{n,I} is cached by SBS_k, k ∈ I. In the delivery phase, for each subset S ⊆ [K], |S| = t + 1, all the requests of the SBSs in S can be served simultaneously by the MBS via multicasting

⊕_{s ∈ S} W_{d_s, S\{s}}.   (1)

Thus, with a single multicast message the MBS can deliver t + 1 sub-files, achieving a multicasting gain of t + 1. Accordingly, the achievable delivery rate is R(M) = (K − t)/(t + 1). We emphasize that the promised coded caching gain is obtained by dividing each file into C(K, t) sub-files, a number that grows exponentially with K. This limits the potential gain in practice for finite-size files. The delivery rate for a cache memory M that results in a non-integer t value can be obtained as a linear combination of the delivery rates of the two nearest M values for which the corresponding t values are integers. This is achieved by memory sharing between the caching and delivery schemes for those two M values. In the rest of the paper we consider integer t values unless otherwise stated.

2) Static multi-access model: In this model, each user connects to multiple SBSs. A particular case of this problem is studied in [26], where each user connects to L SBSs following a certain cyclic pattern: user U_k connects to SBS_k, ..., SBS_{k+L−1 mod K}, k ∈ [K]. The case of L = 2 is illustrated in Fig. 1b. In [26], the authors divide the SBSs into L groups, where the l-th group consists of G_l ≜ {SBS_k : k mod L = l}. Then, the coded delivery scheme in [14] is adapted to this setting as follows. In the placement phase, each file is divided into L equal-size disjoint fragments, where W^l_n denotes the l-th fragment of file W_n. Then, for each l ∈ [L], all the fragments in W^l ≜ (W^l_1, ..., W^l_N) are cached by the SBSs in G_l. For the placement within a particular group G_l, the same caching scheme is used as in the static single access model, with K̂ = K/L SBSs, each with a normalized cache size of M̂ = ML. Therefore, each fragment of each file is cached at level t ≜ K̂M̂/N = MK/N; i.e., sub-file W^l_{n,I}, where I ⊆ {k : k mod L = l} and |I| = t, is cached by SBS_k, k ∈ I. Similarly, the coded delivery phase is executed for each G_l, l ∈ [L], separately. The coded delivery algorithm for this model is given in Algorithm 1, and the corresponding delivery rate is found as R(M) = (K − Lt)/(t + 1). We note that the delivery rate decreases with L, the number of SBSs each user connects to. The number of sub-files each file is divided into is L × C(K/L, t) for this scheme, which provides a significant reduction in the subpacketization.
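The placement and delivery structure of [14] described above can be made concrete combinatorially; the sketch below (ours) manipulates sub-file index sets rather than actual bits.

```python
from itertools import combinations

def man_scheme(K, t):
    # Placement: every file is split into C(K, t) sub-files, one per t-subset I
    # of the SBSs; SBS_k caches sub-file I iff k is in I.
    # Delivery: each (t+1)-subset S yields one multicast message XOR-ing, for
    # every k in S, the sub-file of SBS_k's request indexed by S minus {k}.
    placement = {k: [I for I in combinations(range(K), t) if k in I]
                 for k in range(K)}
    multicasts = [[(k, tuple(sorted(set(S) - {k}))) for k in S]
                  for S in combinations(range(K), t + 1)]
    return placement, multicasts

placement, multicasts = man_scheme(K=4, t=2)
# K = 4, t = 2: C(4,2) = 6 sub-files per file; each of the C(4,3) = 4 multicast
# messages serves t+1 = 3 requests at once, for rate (K - t)/(1 + t) = 2/3.
print(len(placement[0]), len(multicasts))   # 3 sub-files cached per SBS, 4 messages
```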
3) Single access model with mobility: In this model, MUs connect to different SBSs during the downloading period of a single file, i.e., a video file or a group of frames, depending on the time scales. However, unlike in the previous model, each MU is connected only to the nearest SBS at any time instant. We consider equal-length time slots, whose duration corresponds to the minimum time a MU remains connected to the same SBS. We assume that each SBS is capable of transmitting B bits to a MU within one time slot. Hence, a file of size F bits can be downloaded in T = F/B slots. We define the mobility path of a user as the sequence of small cells visited during these T time slots. For instance, for K = 7 and T = 3, (SBS_2, SBS_3, SBS_4) is one such mobility path. We further assume that during a video downloading session of T time slots each MU is connected to exactly T different SBSs, which we call the high mobility assumption.

C. Problem definition

Our aim is to minimize the normalized delivery rate over the backhaul link under the single access model with the high mobility assumption for the MUs. In addition to reducing the normalized backhaul delivery rate, we also want to reduce the number of sub-files used in the delivery phase, in order to obtain a practically viable caching strategy. We first note that the single access model with mobility can be treated similarly to the static single access model in the following way: each file is divided into T disjoint fragments, and each fragment is considered as a separate file, so that the size of the file library and the size of the caches are scaled to NT and MT, respectively. Then the placement phase is executed as in [14] according to caching level t = K(MT)/(NT) = KM/N, the delivery at each time slot can also be executed as in [14], and R(M) = (K − t)/(t + 1) is still achievable, with T × C(K, t) sub-files. Below, we present an alternative caching and delivery scheme that reduces the number of required sub-files considerably.

The approach introduced in [26] is an efficient method to reduce both the number of sub-files and the normalized delivery rate of the backhaul link when the users connect to the SBSs in a uniform manner, as described in the previous section. However, this method is not applicable when the users do not follow the prescribed access patterns. Instead, MDS-coded caching can be employed when the users are mobile, or access the SBSs with non-uniform patterns [7], [8], [33]. The key advantage of MDS-coded caching at the SBSs is to reduce the amount of data that needs to be cached at each SBS for each file. Consider the following simple example with K = 4 SBSs, where a MU can connect to any 3 of them. In this case, each file is divided into 3 fragments, and these are encoded into 4 fragments through a (4,3) MDS code. Each SBS caches a different fragment, so that a MU that connects to any three SBSs can recover the file. In this example, each SBS needs to cache only one fragment of each file, equivalently 1/3 of the original file. Accordingly, under the high mobility assumption with a given T = F/B, it is sufficient to store only a 1/T portion of each file at each SBS. Hence, if M ≥ N/T, the normalized delivery rate over the backhaul link can be reduced to zero via MDS-coded storage, which means that all the user requests can be delivered locally. Otherwise, only an MT/N portion of each file can be cached and delivered locally using MDS-coded storage at the SBSs.
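For the (4,3) MDS example above, a single-parity-check code already suffices (a (k+1, k) parity code is MDS): with parity fragment p = f1 ⊕ f2 ⊕ f3, any three of the four stored fragments recover the file. A toy sketch of ours, with placeholder fragment contents:

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Split a file into 3 equal fragments and add one XOR parity fragment.
f1, f2, f3 = b'abc', b'def', b'ghi'
p = xor(xor(f1, f2), f3)                 # (4,3) single-parity MDS code
stored = {1: f1, 2: f2, 3: f3, 4: p}     # one fragment per SBS

# A MU that visits SBSs 2, 3, 4 (and so misses f1) still recovers the file:
f1_rec = xor(xor(stored[4], stored[2]), stored[3])
assert f1_rec == f1
```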
The main drawback of MDS-coded storage is that coded delivery techniques cannot be applied directly to MDS-coded files, since the multicasting gain of coded delivery stems from the overlaps among the cached sub-files at different SBSs. Hybrid designs that leverage coded delivery and coded caching techniques together have been studied previously for different network setups [33]-[35].

III. SOLUTION APPROACH

In this section, we introduce a new hybrid design, which utilizes both MDS-coded caching and coded delivery to satisfy the demands of MUs under the single access model with mobility, and analyze its performance under the high mobility assumption. For the sake of exposition, we first consider a special class of mobility patterns for which the coded delivery technique in [26] can be applied directly.

A. Special case: Linear topology

Consider a particular mobility scenario in which a user's mobility path is determined by its direction and the first SBS it connects to. This can model, for example, MUs on a train connecting to SBSs located along the rail tracks in a known order. In this special case, MUs can be considered as moving on a line, as illustrated in Fig. 2. Although a MU is connected only to the nearest SBS at any time instant, the coded delivery technique introduced in [26] for the static multi-access model can be applied in this special case. For a given file size F and SBS transmission rate B, each file is divided into T = F/B equal-size disjoint fragments, i.e., W_n ≜ (W_{n,1}, ..., W_{n,T}), n ∈ [N]. Similarly, the set of all SBSs is divided into T disjoint groups, denoted by G_1, ..., G_T, where the SBSs in each group cache only one fragment of each file, using the placement scheme in [14]; that is, fragment W_{n,l} is cached across the SBSs in group G_l. For a MU to be able to recover all the fragments of its requested file, the SBSs should be grouped such that any mobility path visits exactly one SBS from each group. Grouping the SBSs can be considered as a coloring problem, where the SBSs are colored using T different colors such that any T adjacent SBSs have different colors. An example for T = 2 is illustrated in Fig. 2, where any two neighboring SBSs have different colors. The delivery phase is executed at each time slot separately for each group of SBSs G_l, l ∈ [T]. Hence, for the special case of linear topology, the achievable delivery rate for the backhaul link is R(M) = (K − tT)/(1 + t), with T × C(K/T, t) sub-files, where t = KM/N is integer-valued as before.

B. General case: Two-dimensional topologies

In this subsection, we consider a general path model in which the users move on a two-dimensional grid and each SBS covers a disjoint, equal-size hexagonal cell, as illustrated in Fig. 3. As opposed to the one-dimensional path model, in the two-dimensional path model it may not be possible to group all the SBSs using only T colors while ensuring that on any path of length T a MU connects to exactly one SBS from each group. For a given path length T, we say that the SBSs are L-colorable if there is a coloring of the SBSs with L colors such that any mobility path of length T consists of T SBSs with different colors. Note that we must have L ≥ T. The following theorem states the achievable delivery rate over the backhaul link for an L-colorable network.

Theorem 1. For an L-colorable network, the delivery rate

R(M) = (K − tL)/(1 + t)   (2)

is achievable using T × C(K/L, t) sub-files, for integer values of t ≜ KMT/(NL).

Proof. In the placement phase, each file is divided into T disjoint fragments of equal size. These are then encoded into L fragments using an (L, T) MDS code.
Hence, any T fragments out of the total L are sufficient to decode the original file. Consequently, each group of SBSs (SBSs with the same color) caches a different coded fragment using the placement scheme in [14]. The overall delivery phase consists of T identical consecutive delivery steps, each executed in one time slot, such that in each step a coded fragment is delivered to each MU; having received T fragments by the end of the T steps, each MU can recover the requested file. To this end, we focus on a single delivery step. The number of SBSs in each group, labeled with the same color, is K̂ = K/L. The coded delivery phase for a particular group at a particular time slot is identical to the static single access model with K̂ SBSs, each with a cache memory of M̂ = MT files; hence, the corresponding per-group delivery rate, normalized by the fragment size, is (K̂ − t)/(1 + t), where t ≜ K̂M̂/N = KMT/(NL). Summing over the L groups and accounting for the T delivery steps, each involving fragments of size F/T, yields the rate in (2). □

Lemma 1. If t = KMT/(NL) is not an integer, then the following rate is achievable by memory sharing:

R(M) = ( γ (K/L − ⌈t⌉)/(⌈t⌉ + 1) + (1 − γ) (K/L − ⌊t⌋)/(⌊t⌋ + 1) ) L,   (3)

where γ ≜ t − ⌊t⌋.

A simple example for T = 2 is illustrated in Fig. 3. As one can observe from Fig. 3, three colors are sufficient to group the SBSs so that on any mobility path of length two a MU always connects to two SBSs with different colors. Hence, in the placement phase of the given example, each file is initially divided into two fragments, and these fragments are then encoded, using a (3,2) MDS code, into 3 coded fragments, labeled with the colors green, red, and blue. All the SBSs in the same group (i.e., those with the same color) cache the fragments that have been assigned the same color. Then, at each time slot, the coded delivery phase is executed for each group of SBSs independently.

Remark 1. We remark that the delivery rate achieved for random paths is higher than the one achieved for a deterministic path, since the number of colors L needed is in general greater than T, and the delivery rate in (2) increases with L. Hence, in the single access model with mobility, the objective is to identify the minimum L such that the network is L-colorable. In the following section we study the optimal coloring strategy and the corresponding backhaul delivery rate.

C. Cell coloring

For a given mobility path length T, our objective is to color the cells using the minimum number of distinct colors while ensuring that, on any mobility path, each color is encountered at most once. In general, for any given cell structure and mobility length T, the cell coloring problem can be modeled as a vertex coloring problem: consider K SBSs with disjoint cells; regard each cell as a vertex of a graph G, and add an edge between vertices k and j if there is a mobility path of length T that contains both cell k and cell j. Once the graph G is constructed, the chromatic number γ(G) of this graph gives the minimum L such that the network is L-colorable (a minimal prototype of this construction is sketched below). In general, this vertex coloring problem, i.e., finding the chromatic number γ(G), is NP-complete. Hence, finding the optimal coloring, i.e., the minimum L, may not be feasible in a large network. Therefore, within the scope of this paper, we focus on the scaling behavior of the cell coloring problem, i.e., for a given mobility path length T, the minimum L as K goes to infinity. Let us first limit our focus to hexagonal cells. This problem is analogous to the well-known frequency reuse pattern problem in cellular networks [31].
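The coloring formulation can be prototyped directly. The sketch below (ours, for a toy linear topology where the optimum is easy to see) builds the conflict relation -- two cells conflict if some path of length T visits both, i.e., they are within T−1 hops in the cell-adjacency graph -- and colors it with a first-fit greedy heuristic (greedy coloring is not optimal in general).

```python
def greedy_coloring(K, T, neighbors):
    # Color cells so that no mobility path of length T sees a repeated color.
    def within(k, hops):                      # cells reachable in <= hops steps
        reach, frontier = {k}, {k}
        for _ in range(hops):
            frontier = {n for f in frontier for n in neighbors[f]} - reach
            reach |= frontier
        return reach - {k}

    conflicts = {k: within(k, T - 1) for k in range(K)}
    color = {}
    for k in range(K):                        # first-fit greedy
        used = {color[j] for j in conflicts[k] if j in color}
        color[k] = min(c for c in range(K) if c not in used)
    return color

# Linear topology: cell k is adjacent to cells k-1 and k+1.
K, T = 10, 3
line = {k: [j for j in (k - 1, k + 1) if 0 <= j < K] for k in range(K)}
print(greedy_coloring(K, T, line))   # exactly T = 3 colors: 0,1,2,0,1,2,...
```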
In this problem, the same frequency is allocated to multiple cells in order to use the limited available spectrum efficiently while limiting interference. We note that the utilization of frequency reuse patterns in the coded delivery framework has been previously studied in [16] for limiting the interference in device-to-device communication. The cells operating on the same frequency are called co-channel cells. The co-channel cell locations are determined according to a given distance constraint (the distance between the centers of two co-channel cells). In [31], a frequency reuse pattern (or, equivalently, a co-channel cell pattern) is defined via the integer-valued shift parameters i and j in the following way: starting from a cell, "move i cells along any chain of hexagons; turn counter-clockwise 60 degrees; move j cells along the chain that lies on this new heading". A frequency reuse pattern example with i = 2 and j = 1 is illustrated in Fig. 4. When co-channel cells are identified with the same color, the pattern of cells with different colors, such that the whole network is a repetition of this pattern, is called the cluster; this is also illustrated in Fig. 4. It is shown that, using a reuse pattern with shift parameters i and j, the two nearest co-channel cells are separated by a distance D = √(3C) (scaled with the cell diameter), where C = i² + j² + ij is the cluster size, i.e., the total number of different frequencies used in the network. We remark that when the nearest co-channel cells are separated by a distance D = √(3C) according to the reuse pattern with shift parameters i and j, a user in a particular cell must visit at least i + j cells (including the current cell) to reach the nearest co-channel cell, and by definition no two of these cells can be co-channel cells. Therefore, the frequency reuse pattern problem is analogous to our problem, where the length of the mobility path T corresponds to i + j, and the cluster size C corresponds to the number of colors L. In our problem, we want to minimize the number of colors L = T² − ij for a given mobility path length T = i + j. Hence, we use the reuse pattern (i, j) with i = ⌈T/2⌉ and j = ⌊T/2⌋, which maximizes the product ij and thus minimizes the number of colors L. The scaling behavior of the clusters with respect to T is illustrated in Fig. 5.

Remark 2. We observe that when the reuse pattern (i, j) with i = ⌈T/2⌉ and j = ⌊T/2⌋ is used for a given T, then within the corresponding cluster it is possible to reach any cell from any other cell in at most T steps, so that each cluster corresponds to a complete graph.

Theorem 2. For a network of SBSs with hexagonal cells and a given mobility path length T, the minimum L such that the network is L-colorable is given by

L_min = 3n², if T = 2n,  and  L_min = 3n² + 3n + 1, if T = 2n + 1,   (4)

for some positive integer n.

Proof. It is clear that L_min is achievable by using the reuse pattern described above. On the other hand, since each cluster corresponds to a complete graph, its chromatic number equals the cluster size L_min, which implies that it is not possible to color the network with fewer than L_min colors. □

We remark that this analysis can be extended to different network topologies. For instance, if we consider a square grid, the given reuse pattern can be modified by simply using 90-degree turns instead of 60-degree turns. An example of a reuse pattern with i = 2, j = 2 is illustrated in Fig. 6 for a square grid. The check below cross-validates formula (4).
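Formula (4) can be cross-checked against the cluster-size expression C = T² − ij with the balanced split i = ⌈T/2⌉, j = ⌊T/2⌋ (a short verification of ours):

```python
def L_min_hex(T):
    # Minimum number of colors for hexagonal cells, eq. (4).
    n, odd = divmod(T, 2)
    return 3 * n * n + (3 * n + 1 if odd else 0)

for T in range(2, 9):
    i, j = (T + 1) // 2, T // 2           # balanced shift parameters
    assert L_min_hex(T) == T * T - i * j  # cluster size C = i^2 + j^2 + i*j
```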
Corollary 1. For a network of SBSs with square cells and a given mobility path length T, the minimum L such that the network is L-colorable is given by

L_min = 2n²,           if T = 2n,
L_min = 2n² + 2n + 1,  if T = 2n + 1,   (5)

for some positive integer n.

The scaling behavior of the clusters in a square cell topology is illustrated in Fig. 7. We remark that as the number of neighboring cells increases, the cluster size also increases. For instance, for a mobility path length T = 4, the corresponding cluster size is 12 for hexagonal cells, whereas it is 8 for square cells. Recall that the delivery rate of the mobility-aware coded delivery scheme increases with the cluster size L. Hence, we can conclude that the mobility-aware scheme performs better when there is less randomness in the set of cells a MU can move into along its mobility path.

IV. NUMERICAL RESULTS

For the simulations, we consider two network topologies with K = 24 and K = 48 SBSs with hexagonal cells, respectively. We consider a mobility path of length T = 2. Hence, the cells are colored according to the reuse pattern (i = 1, j = 1) with a total of L = 3 colors, as in Fig. 3. We compare the performance of our mobility-aware coded delivery scheme with the coded delivery scheme of [14] in terms of two metrics: the number of required sub-files and the normalized backhaul delivery rate. For each topology, we analyze the performance of these schemes for two different storage capacities, M/N = 1/4 and M/N = 1/8. The numerical results are presented in Table I. In the first group of simulations we analyze the network topology with K = 24 SBSs and observe that our mobility-aware coded delivery scheme reduces the number of sub-files dramatically. When M/N = 1/8, our mobility-aware coded delivery scheme incurs a 12.5% increase in the delivery rate, while reducing the number of sub-files by a factor of roughly 72. The more interesting results are observed when the storage capacity is higher, i.e., M/N = 1/4. In this case, the proposed mobility-aware coded delivery scheme outperforms the original coded delivery scheme in both performance metrics. At first glance this might be counterintuitive, since there is a trade-off between the delivery rate and the number of sub-files [29]. However, the mobility-aware approach not only utilizes the multicasting gain, but also the multi-access gain, which is clearly visible in the deterministic path scenario. In this network setting, the number of required sub-files goes from about 269 000 down to 140. In the second group of simulations, we analyze the network topology with K = 48 SBSs. When M/N = 1/8, our mobility-aware coded delivery scheme results in a 20% increase in the delivery rate, while reducing the number of sub-files by approximately four orders of magnitude. Hence, thanks to the proposed approach, coded delivery with caching becomes practically realizable with only a 20% increase in the delay. At this point, one can argue that the number of sub-files could also be reduced by simply clustering the SBSs to obtain two sub-networks with K/2 SBSs each, and then applying the coded delivery scheme to each sub-network independently. Indeed, the clustering approach does reduce the number of sub-files significantly; however, it leads to a further increase in the backhaul delivery rate. The results with the clustering approach, assuming two clusters, each consisting of K/2 = 24 SBSs, are included in Table I. The sub-file counts quoted above can be reproduced as shown in the sketch below.
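The sub-file counts in Table I can be reproduced from binomial coefficients. One caveat: the exact counting convention is not spelled out in the text, and the factor T in front of the binomial below is inferred from the reported numbers, so treat it as our assumption:

```python
from math import comb

def subfiles_baseline(K, M_over_N, T=2):
    # Conventional scheme of [14]; count inferred from Table I: T * C(K, t)
    t = round(K * M_over_N)
    return T * comb(K, t)

def subfiles_mobility(K, L, M_over_N, T=2):
    # Mobility-aware scheme: each of the T fragments of a file is split
    # into C(K/L, t) sub-files, with t = K*M*T/(N*L)
    Kbar = K // L
    t = round(K * M_over_N * T / L)
    return T * comb(Kbar, t)

assert subfiles_baseline(24, 1/8) == 4048
assert subfiles_mobility(24, 3, 1/8) == 56      # ratio ~72, as noted above
assert subfiles_baseline(24, 1/4) == 269192     # reported as 2.69e5
assert subfiles_mobility(24, 3, 1/4) == 140
```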
We note that when there are two clusters, the corresponding delivery rate is simply the sum of the delivery rates of the two clusters. Hence, the coded delivery scheme with two clusters uses the same number of sub-files as the coded delivery scheme for the network topology with K = 24 SBSs; however, the delivery rate is doubled. One can easily observe that for both M/N = 1/8 and M/N = 1/4 our mobility-aware coded delivery scheme outperforms the coded delivery scheme with two clusters in terms of both performance metrics. We also observe that the mobility-aware coded delivery approach becomes more efficient compared to the other two schemes particularly when the storage capacity is high. To highlight this fact, for T = 2, consider the extreme point M/N = 1/2. In this case the backhaul delivery rate reduces to zero, while the number of sub-files is only two. We remark that a more sophisticated scheme, such as the one utilizing the erasure code design in [29], can also be applied to seek a balance between the number of sub-files and the delivery rate. To this end, we consider the scenario of Example 9 in [29], where there are K = 60 SBSs with hexagonal cells and M/N = 1/5. Similarly to the previous setup, we consider T = 2.

TABLE I: Comparison of the proposed mobility-aware coded storage and delivery scheme with the conventional coded delivery scheme of [14] and a coded delivery scheme with clustering, in terms of the number of required sub-files and the normalized delivery rate.

M/N | Coded delivery method                    | Number of sub-files | Normalized delivery rate
1/8 | Coded delivery [14], K = 24              | 4048                | 5.25
    | Mobility-aware coded delivery, K = 24    | 56                  | 6
1/4 | Coded delivery [14], K = 24              | 2.69 × 10^5         | 2.57
    | Mobility-aware coded delivery, K = 24    | 140                 | 2.4
1/8 | Coded delivery [14], K = 48              | 2.45 × 10^7         | 6
    | Coded delivery, K = 48, with clustering  | 4048                | 10.5
    | Mobility-aware coded delivery, K = 48    | 3640                | 7.2
1/4 | Coded delivery [14], K = 48              | 1.39 × 10^11        | 2.77
    | Coded delivery, K = 48, with clustering  | 2.69 × 10^5         | 5.14
    | Mobility-aware coded delivery, K = 48    | 2.57 × 10^4         | 2.66

TABLE II: Comparison of the proposed mobility-aware coded storage and delivery scheme with the conventional coded delivery scheme of [14] and a coded delivery scheme using block code designs, in terms of the number of required sub-files and the normalized delivery rate.

Coded delivery method                       | Number of sub-files | Normalized delivery rate
Coded delivery [14]                         | 2.8 × 10^12         | 3.69
Mobility-aware coded delivery               | 251940              | 4
Coded delivery using (12,8) block code [29] | 2.34 × 10^6         | 5.33
Coded delivery with two clusters            | 1.18 × 10^6         | 6.85

In this setup, the original coded delivery scheme achieves a slightly lower delivery rate than the mobility-aware scheme, but with approximately 10^7 times more sub-files. To illustrate the efficiency of the mobility-aware scheme, we can limit the number of sub-files to be less than 10^7 and then compare the achievable delivery rates. The performances of the clustering method and of the block code design in [29] under this subpacketization constraint are shown in Table II. One can observe that the proposed mobility-aware caching scheme outperforms both the clustering scheme and the block code design in terms of the delivery rate under the subpacketization constraint. At this point, it is worth emphasizing that the performance of the mobility-aware scheme depends on the mobility length T; thus, to reduce the number of sub-files further, for each coded fragment the placement scheme in [29] is used instead of the one in [14]. The numbers in Table II can be reproduced as in the sketch below.
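A short sketch reproducing the Example 9 scenario. The floor/ceiling placement in the memory-sharing branch is our reconstruction of formula (3), and the factor T in the sub-file counts is inferred from the tables, as noted earlier:

```python
from fractions import Fraction
from math import comb, floor, ceil

def mobility_rate(K, L, MN, T):
    """Backhaul rate of the mobility-aware scheme (Theorem 1 / Lemma 1).
    MN is the normalized cache size M/N, passed as a Fraction."""
    t = K * MN * T / L                    # t = K*M*T/(N*L)
    Kbar = Fraction(K, L)
    if t.denominator == 1:                # integer t: Theorem 1 directly
        return float((Kbar - t) * L / (1 + t))
    g = ceil(t) - t                       # gamma in Lemma 1
    return float((g * (Kbar - floor(t)) / (floor(t) + 1)
                  + (1 - g) * (Kbar - ceil(t)) / (ceil(t) + 1)) * L)

def baseline_rate(K, MN):                 # scheme of [14], integer t assumed
    t = K * MN
    return float((K - t) / (1 + t))

# Scenario of Example 9 in [29]: K = 60 hexagonal cells, M/N = 1/5, T = 2
K, L, T, MN = 60, 3, 2, Fraction(1, 5)
t0, t1 = int(K * MN), int(K * MN * T / L)
print(baseline_rate(K, MN), T * comb(K, t0))           # 3.69..., ~2.8e12
print(mobility_rate(K, L, MN, T), T * comb(K // L, t1))  # 4.0, 251940
```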
V. CONCLUSIONS

We have introduced a novel MDS-coded storage and coded delivery scheme that adapts its caching strategy to the mobility patterns of the users. Our scheme exploits a coloring scheme for the SBSs, inspired by the frequency reuse patterns in cellular networks, which have been extensively studied in the past to reduce interference. The files in the library are divided into sub-files, which are MDS-coded and stored in the SBS caches, allowing users to satisfy their demands from multiple SBSs on their path under a high mobility assumption. We have shown that the proposed strategy achieves a significant reduction in the number of sub-files. Particularly when the number of sub-files that can be created is limited, either due to the finite file size or in order to limit the complexity of the caching and delivery scheme, the proposed scheme provides significant gains in the backhaul load. We are currently extending our analysis to the case of non-uniform file popularity.

This work was supported in part by the Marie Sklodowska-Curie Action SCAVENGE (grant agreement no. 675891), and by the European Research Council (ERC) Starting Grant BEACON (grant agreement no. 725731). This paper was presented in part at the 2018 ITG Workshop on Smart Antennas in Bochum, Germany.

Displaced floats, condensed to their captions:
Algorithm 1: Delivery phase for the static multi-access model.
Fig. 1: (a) Single access model; (b) multi-access model with uniform access pattern (MBS, SBSs, users, backhaul link).
Fig. 2: Linear topology: moving along a line.
Fig. 3: File fragmentation and associated cell coloring.
Theorem 1. For given values of N, M, K, T, and t = KMT/(NL), if the network is L-colorable, then the delivery rate R(M) = (K − tL)/(1 + t) is achievable over the backhaul link. For non-integer t values, Lemma 1 gives the corresponding achievable rate.
Fig. 4: Frequency reuse pattern with i = 2, j = 1 and the corresponding cluster.
Fig. 5: Scaling behavior of clusters for hexagonal cells. (a) Clusters for T = 2, 4, 6 are illustrated with yellow, yellow and red, and all three colors, respectively. (b) Clusters for T = 3, 5, 7, likewise.
Fig. 6: Cell coloring in a square grid according to the reuse pattern with i = 2, j = 2.
Fig. 7: Scaling behavior of clusters for square cells; panels (a) and (b) as in Fig. 5.
Note: the cache capacity is normalized with respect to the size of a fragment, which is 1/L of the original file.

REFERENCES

[1] Sandvine, "The global Internet phenomena report," White Paper, Oct. 2018.
[2] Cisco, "Cisco visual networking index: Forecast and methodology, 2017-2022," White Paper, Nov. 2018.
[3] K. Shanmugam, N. Golrezaei, A. G. Dimakis, A. F. Molisch, and G. Caire, "Femtocaching: Wireless content delivery through distributed caching helpers," IEEE Trans. Inf. Theory, vol. 59, Dec. 2013.
[4] X. Xu and M. Tao, "Modeling, analysis, and optimization of coded caching in small-cell networks," IEEE Trans. Commun., vol. 65, no. 8, pp. 3415-3428, Aug. 2017.
[5] J. Liao, K. K. Wong, Y. Zhang, Z. Zheng, and K. Yang, "Coding, multicast, and cooperation for cache-enabled heterogeneous small cell networks," IEEE Trans. Wireless Commun., vol. 16, no. 10, pp. 6838-6853, Oct. 2017.
[6] S. Zhang, P. He, K. Suto, P. Yang, L. Zhao, and X. Shen, "Cooperative edge caching in user-centric clustered mobile networks," IEEE Trans. Mobile Comput., vol. 17, no. 8, pp. 1791-1805, Aug. 2018.
[7] K. Poularakis and L. Tassiulas, "Code, cache and deliver on the move: A novel caching paradigm in hyper-dense small-cell networks," IEEE Trans. Mobile Comput., vol. 16, Mar. 2017.
[8] E. Ozfatura and D. Gündüz, "Mobility and popularity-aware coded small-cell caching," IEEE Commun. Lett., vol. 22, no. 2, pp. 288-291, Feb. 2018.
[9] T. Liu, S. Zhou, and Z. Niu, "Mobility-aware coded-caching scheme for small cell network," in Proc. IEEE Int. Conf. Commun. (ICC), May 2017, pp. 1-6.
[10] E. Ozfatura, T. Rarris, D. Gündüz, and O. Ercetin, "Delay-aware coded caching for mobile users," in Proc. IEEE 29th Annu. Int. Symp. Personal, Indoor and Mobile Radio Commun. (PIMRC), Sep. 2018, pp. 1-5.
[11] E. Ozfatura and D. Gündüz, "Mobility-aware coded storage and delivery," in Proc. WSA 2018; 22nd Int. ITG Workshop on Smart Antennas, Mar. 2018, pp. 1-6.
[12] M. Chen, Y. Hao, L. Hu, K. Huang, and V. K. N. Lau, "Green and mobility-aware caching in 5G networks," IEEE Trans. Wireless Commun., vol. 16, no. 12, pp. 8347-8361, Dec. 2017.
[13] T. Deng, G. Ahani, P. Fan, and D. Yuan, "Cost-optimal caching for D2D networks with user mobility: Modeling, analysis, and computational approaches," IEEE Trans. Wireless Commun., vol. 17, no. 5, pp. 3082-3094, May 2018.
[14] M. A. Maddah-Ali and U. Niesen, "Fundamental limits of caching," IEEE Trans. Inf. Theory, vol. 60, no. 5, May 2014.
[15] M. A. Maddah-Ali and U. Niesen, "Decentralized coded caching attains order-optimal memory-rate tradeoff," IEEE/ACM Trans. Netw., vol. 23, no. 4, Aug. 2015.
[16] M. Ji, G. Caire, and A. F. Molisch, "Fundamental limits of caching in wireless D2D networks," IEEE Trans. Inf. Theory, vol. 62, no. 2, pp. 849-869, Feb. 2016.
[17] Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, "The exact rate-memory tradeoff for caching with uncoded prefetching," IEEE Trans. Inf. Theory, vol. 64, no. 2, pp. 1281-1296, Feb. 2018.
[18] M. M. Amiri and D. Gündüz, "Fundamental limits of coded caching: Improved delivery rate-cache capacity tradeoff," IEEE Trans. Commun., vol. 65, no. 2, pp. 806-815, Feb. 2017.
[19] M. Ji, A. M. Tulino, J. Llorca, and G. Caire, "Order-optimal rate of caching and coded multicasting with random demands," IEEE Trans. Inf. Theory, vol. 63, no. 6, Jun. 2017.
[20] J. Zhang, X. Lin, and X. Wang, "Coded caching under arbitrary popularity distributions," IEEE Trans. Inf. Theory, vol. 64, no. 1, pp. 349-366, Jan. 2018.
[21] E. Ozfatura and D. Gündüz, "Uncoded caching and cross-level coded delivery for non-uniform file popularity," in Proc. IEEE Int. Conf. Commun. (ICC), May 2018, pp. 1-6.
[22] A. Ramakrishnan, C. Westphal, and A. Markopoulou, "An efficient delivery scheme for coded caching," in Proc. 27th Int. Teletraffic Congress (ITC), Washington, DC, USA: IEEE Computer Society, 2015, pp. 46-54.
[23] K. Shanmugam, M. Ji, A. M. Tulino, J. Llorca, and A. G. Dimakis, "Finite-length analysis of caching-aided coded multicasting," IEEE Trans. Inf. Theory, vol. 62, no. 10, pp. 5524-5537, Oct. 2016.
[24] M. M. Amiri, Q. Yang, and D. Gündüz, "Coded caching for a large number of users," in Proc. IEEE Inf. Theory Workshop (ITW), Sep. 2016, pp. 171-175.
[25] G. Paschos, E. Bastug, I. Land, G. Caire, and M. Debbah, "Wireless caching: Technical misconceptions and business barriers," IEEE Commun. Mag., vol. 54, no. 8, pp. 16-22, Aug. 2016.
[26] J. Hachem, N. Karamchandani, and S. N. Diggavi, "Coded caching for multi-level popularity and access," IEEE Trans. Inf. Theory, vol. 63, no. 5, May 2017.
[27] K. Shanmugam, A. M. Tulino, and A. G. Dimakis, "Coded caching with linear subpacketization is possible using Ruzsa-Szemerédi graphs," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Jun. 2017, pp. 1237-1241.
[28] K. Shanmugam, A. G. Dimakis, J. Llorca, and A. M. Tulino, "A unified Ruzsa-Szemerédi framework for finite-length coded caching," in Proc. 51st Asilomar Conf. Signals, Systems, and Computers, Oct. 2017, pp. 631-635.
[29] L. Tang and A. Ramamoorthy, "Coded caching schemes with reduced subpacketization from linear block codes," IEEE Trans. Inf. Theory, vol. 64, no. 4, pp. 3099-3120, Apr. 2018.
[30] M. Cheng, J. Jiang, and Y. Yao, "A novel recursive construction for coded caching schemes," CoRR, vol. abs/1712.09090, 2017. [Online]. Available: http://arxiv.org/abs/1712.09090
[31] V. H. M. Donald, "Advanced mobile phone service: The cellular concept," The Bell System Technical Journal, vol. 58, no. 1, pp. 15-41, Jan. 1979.
[32] E. Lampiris and P. Elia, "Adding transmitters dramatically boosts coded-caching gains for finite file sizes," IEEE J. Sel. Areas Commun., vol. 36, no. 6, pp. 1176-1188, Jun. 2018.
[33] N. Mital, D. Gündüz, and C. Ling, "Coded caching in a multi-server system with random topology," in Proc. IEEE Wireless Commun. and Netw. Conf. (WCNC), Apr. 2018, pp. 1-6.
[34] Y. P. Wei and S. Ulukus, "Novel decentralized coded caching through coded prefetching," in Proc. IEEE Inf. Theory Workshop (ITW), Nov. 2017, pp. 1-5.
Yener, "Coded caching for combination networks with cache-aided relays," in 2017 IEEE International Symposium on Information Theory (ISIT), June 2017, pp. 2433-2437.
Topological Interpretation of Interactive Computation

Emanuela Merelli (Department of Computer Science, University of Camerino, Italy) and Anita Wasilewska (Department of Computer Science, Stony Brook University, Stony Brook, NY, USA)

Keywords: Persistent Turing machine, topological environment, topological Turing machine.

Abstract. It is a great pleasure to write this tribute in honor of Scott A. Smolka on his 65th birthday. We revisit the Goldin-Smolka hypothesis that the persistent Turing machine (PTM) captures the intuitive notion of sequential interactive computation. We propose a topological setting to model the abstract concept of environment, and we use it to define the notion of a topological Turing machine (TTM) as a universal model for interactive computation and a possible model for concurrent computation.

1 Introduction

In 2004, Scott A. Smolka worked with Dina Goldin and colleagues on a formal framework for interactive computing; the persistent Turing machine (PTM) was at the heart of their formalization [1,2,3]. A PTM is a Turing machine (TM) dealing with persistent sequential interactive computation, a class of computations that are (possibly infinite) sequences of non-deterministic 3-tape TMs. A computation is called sequential interactive because it continuously interacts with its environment by alternately accepting an input string on the input-tape and computing on the work-tape a corresponding output string to be delivered on the output-tape. The computation is persistent, meaning that the content of the work-tape persists from one computation step to the next, thereby ensuring a memory function. The definition of the PTM was based on Peter Wegner's interaction theory, developed to embody distributed network programming. Interaction is more powerful than rule-based algorithms for computer problem solving, overturning the prevailing view that all computing is expressible as algorithms [4,5]. Since in this framework interactions are more powerful than rule-based algorithms, they are not expressible by an initial state described in finite terms. Therefore, one of the four of Robin Gandy's principles (or constraints) for computability is violated, as stated in [6]. The need to relax such constraints suggests that interactive systems may have a richer behavior than algorithms, or that algorithms should be seen from a different perspective. Although the PTM makes the first effort to build a TM that accepts infinite input, we strongly support the idea that the interaction model should also include a formal characterization of the notion of environment. In this paper, we focus on Smolka et al.'s original point of view on persistent and interactive computation. We revisit and formalize a concept of computational environment for the PTM, following Avi Wigderson's machine-learning paradigm in [7]: many new algorithms simply create themselves, with relatively little intervention from humans, mainly through interaction with massive data. We use the notion of computational environment to define a class of abstract computable functions as sets of relations between the inputs and outputs of a PTM. The computational environment depends on time and space; it can evolve, and so the effectiveness of these functions depends on a given moment and a given context. The computational environment is defined in terms of an ambient space. The ambient space is a generalization of the notion of ambient manifold introduced in [8] to describe the topological quantum computation model.
We do it in such a way that the infinite computation can be reduced to a set of relations, constrained within its ambient space by loops of non-linear interactions. Figure 1 shows the synthesis of this concept. The ambient space and the PTM can be thought of as a mathematical representation of complex systems, merely defined as systems composed of many non-identical elements, constituent agents living in an environment and entangled in loops of non-linear interactions. We build a topological PTM to model both the behavior of an interactive machine and its computational environment. The main idea of the generalization is that the output-tape is forced to be connected to the input-tape through a feedback loop. The latter can be modeled in such a way that the input string is affected by the last output string and by the current state of the computational environment. A state of a topological PTM becomes a set of input and output relations constrained to an environment whose geometric representation formally defines the context of the computation. If many topological PTMs share the same computational environment, the computation becomes a stream of interactions of concurrent processes, which at higher dimension can be seen as a collection of streams, such as an n-string braid as examined in topological quantum information [8]. In this scenario, the computational environment, envisaged as a discrete geometric space, may even evolve while computations take place. The informal description given above depicts the environment. We define it as follows. Given a PTM, let X be a set of its input and output strings. The computational environment depends on time and space; here, time is represented by a collection of steps. For each step i in time, we define an equivalence relation ∼_i on X such that for each input_i in X there exists an operator f_i with f_i(input_i) = output_i. In a classical Turing machine the set of operators f_i is called the rules or transformations. Our goal is to build an environment where this set of functions f_i can be discovered. Each element of X represents a transition from one state of the machine to the next, guided by the operator f_i (unknown to the model) constrained over the computational environment. The mathematical objects we are looking for should reflect the collective properties of the set X in a natural way, so as to support the discovery of the set of operators f_i. These operators allow us to represent X as a union of quotient spaces of the sets of equivalence classes X/∼_i of all the feasible relations hidden in X. The resulting functional matrix of the f_i, also called the interaction matrix, represents the computational model, or what we call the learnt algorithm [9]. In order to characterize the set of operators {f_i}, we decided to analyze the set X of environmental data by persistent homology, a procedure used in topological data analysis (TDA). TDA is a subarea of computational topology that filters the optimal space among simplicial complexes [10]. A simplicial complex can be seen as a generalization of the notion of graph, where the relations are not binary between vertices, but n-ary among simplices. A simplex expresses any relation among points.
For example, a 0-dimensional simplex is a unary relation of a single point, a 1-dimensional simplex is a binary relation of two points (a line), a 2-dimensional simplex is a three-point relation (a full triangle), and so on. For the interested reader, Appendix 1 gives some useful definitions from algebraic and computational topology. Although a simplicial complex allows us to shape the environment as a discrete topological space, the new model of PTM also requires expressing the feedback loop between the output at step i and the input at the next step i + 1 of the computation. To this end, we follow a recent approach proposed in the context of big data and complex systems for embedding a set of correlation functions (e.g., the encoding of a given data set) into a field theory of data, which relies on a topological space naturally identified with a simplicial complex [11]. The resulting mathematical structure is a fiber bundle [12], whose components are summarized in Figure 1. The framework consists of three topological spaces B, H, G, and one projection map π: the base space B, the set of input/output strings embedded in a simplicial complex; the fiber H, the set of all possible computations (the set of f_i) constrained by the gauge transformations over the base point P_B of the fiber; the total space G of the fiber bundle, obtained as the product of the other two spaces (G = B × H); and the projection map π : G → G/H that allows us to go from the total space G to the base space B, obtained as a quotient space of the fiber H.

Fig. 1. Example of topological interpretation of computation. The base space B is a two-dimensional handlebody of genus 3, such as a trifold. The small red circles around some points of the fiber space H indicate the presence of states that make the computation inconsistent. The violet lines over the base space B show the corresponding unfeasible paths to be avoided due to the topological constraints imposed by the base space. The non-linear transformation of the fiber states, induced by the projection map π over the simplicial complex, guarantees the choice of admissible paths with respect to the topology of the base space. The lines marked with a black cross correspond to inconsistent states of the system, which do not exist in the topological interpretation. The picture at the bottom right corner is an example of computation; it refers to the notion of contextuality [18], informally a family of data (a piece of information) which is locally consistent but globally inconsistent.

In Figure 1, the projection map π is represented by dashed lines and is used to discover whether the geometry of the base space constrains the ongoing computation, in order to predict and avoid unfeasible transformations (the red lines in the figure). In our model, the obstructions that characterize the ambient space and constrain the computation are represented by the presence of n-dimensional holes (n > 1) in the geometry of the topological space. In our framework the holes represent the lack of specific relations among inputs and outputs of the topological Turing machine. This means that the topological space, in our representation a simplicial complex, has a non-trivial topology. As an example, in Figure 1 the base space B is a two-dimensional handlebody of genus 3. A small sketch of the defining property of simplicial complexes is given below.
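To make the notion concrete, here is a minimal sketch of the face-closure property of simplicial complexes (Definition 13 in Appendix 1); the function name and the toy complexes are ours:

```python
from itertools import combinations

def is_simplicial_complex(simplices):
    """Check face closure: every non-empty face of a simplex is present."""
    S = {frozenset(s) for s in simplices}
    return all(frozenset(f) in S
               for s in S for k in range(1, len(s))
               for f in combinations(s, k))

# A hollow triangle: three 0-simplices and three 1-simplices (edges),
# but no 2-simplex; the missing face is a one-dimensional hole.
hollow = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
filled = hollow + [(0, 1, 2)]
assert is_simplicial_complex(hollow) and is_simplicial_complex(filled)
```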
The formal description of the proposed approach rests on three pillars: i) algebraic and computational topology, for modeling the environment as a simplicial space B; ii) field theory, to represent the total space G of the machine as a system of global coordinates that changes according to the position P_B of the observer with respect to the reference space H; and iii) formal languages, to enforce the semantic interpretation of the system behaviour into a logical space of geometric forms, in terms of operators f_i that here we call correlation functions in the space of the fiber H. Consequently, an effective PTM is nothing but a change of coordinates, consistently performed at each location according to the field action representing the language recognized by the machine. While the algorithmic aspect of a computation expresses the effectiveness of the computation, the topological field theory constrains the effectiveness of a computation to a specific environment where the computation may take place at a certain time in space. It is apt to recall here Landin's metaphor of the ball and the plane, introduced to describe the existence of a double link between a program and a machine [13]: "One can think of the ball as a program and the plane as the machine on which it runs. ... the situation is really quite symmetric; each constrains the other" [14]. Alan Turing himself, in his address to the London Mathematical Society in 1947, said "... if a machine is expected to be infallible, it cannot also be intelligent" [15]. It is becoming a general view that intelligence benefits from interaction and evolves with something similar to adaptability checking [9]. Accordingly, the PTM and its topological interpretation seem to be a good starting point for modeling concurrent processes as interactive TMs [19], also considering that the set of PTMs turns out to be isomorphic to a general class of effective transition systems, as proved by Smolka et al. in [1]. This result allows one to make the hypothesis that the PTM captures the intuitive notion of sequential interactive computation [2], in analogy to the Church-Turing hypothesis that relates Turing machines to algorithmic computation.

What is computation? Turing, Church, and Kleene independently formalized the notion of computability with the notions of Turing machine, λ-calculus, and partial recursive functions, respectively. The Turing machine manipulates strings over a finite alphabet, the λ-calculus manipulates λ-terms, and µ-recursive functions manipulate natural numbers. The Church-Turing thesis states that every effective computation can be carried out by a Turing machine, or equivalently that a certain informal concept (algorithm) corresponds to a certain mathematical object (Turing machine) [16]. The justification lies in the fact that the three notions of computability are formally equivalent. In particular, the Turing machine is a model of computation consisting of a finite-state control unit with an unbounded tape used to memorize strings of symbols. A deterministic sequence of computational steps transforms a finite input string into the output string. For each step of the computation, a Turing machine contains all the information for processing input into output, an algorithmic way of computing a function; these are the functions that are effectively computable. The universal TM is the basic model of all effectively computable functions, formally defined by a mathematical description.
Definition 1 (Turing machine). A Turing machine (TM) is M = ⟨Q, Σ, P⟩, where
- Q is a finite set of states;
- Σ is a finite alphabet containing the blank symbol #; L and R are special symbols;
- P ⊆ Q × Σ × Σ × Q × {L, R} is the set of configurations of M.
A computation is a chain of elements of P such that the last one cannot be linked to any possible configuration of P. The multi-tape Turing machine is a TM equipped with an arbitrary number k of tapes and corresponding heads.

Definition 2 (k-tape Turing machine). A non-deterministic k-tape TM is a quadruple ⟨Q, Σ, P, s₀⟩, where
- Q is a finite set of states; s₀ ∈ Q is the initial state and h ∉ Q is the halting state;
- Σ is a finite alphabet containing the blank symbol #; L and R are special symbols;
- P ⊆ Q × Σ^k × (Q ∪ {h}) × (Σ ∪ {L, R})^k is the set of configurations.
The machine makes a transition from its current configuration (state) to a new one (possibly the halt state h). For each of the k tapes, either a new symbol is written at the current head position or the position of the head is shifted by one location to the left (L) or right (R).

The above definitions of TMs do not take into account the notion of environment; the input is implicitly represented in the configurations P of the machine M modulo feasible relations. The objective of this contribution is to represent the environment explicitly, in such a way that the admissible relations are naturally determined. Our view is supported by a recent, even though not formal, definition of computation: computation is the evolution process of some environment via a sequence of simple and local steps [7]. A computational environment is the base space over which the process of transformation of an input string happens. For the TM, an environment is any configuration of P of a machine M, from the initial one to the final one. It is a closed set, represented by the functional matrix, whose feasible relations must be known a priori to ensure the algorithmic character of the computation. Indeed, in a TM the environment does not evolve; it remains unchanged during the computation. If we consider the environment as an open set, the set of configurations may change along the way due to the computation, and accordingly the set of feasible relations may change. As Section 3 describes, one way to capture this variation is to associate a topology to the space of all possible configurations and use the global invariants of the space to classify the relations into categories whose elements are isomorphic to those of some model of computation, such as the TM. In this setting, the local steps (feasible relations), i.e. the functional matrix, are affected by the global topology. As a consequence, the evolution of an environment corresponds to a change of the topological invariants. The classical TM is then equivalent to working with a space of states whose topology is invariant, which allows the process of transformation to run linearly, while an interactive computation takes into account the non-linearity of the computation due to the structure of the transformations characterizing it. The non-linearity is implied by the topology of the base space B, and induced by the semi-direct product factorization of the transformation group, the simplicial analog of the mapping class group, denoted by G_MC. In the viewpoint of computation as a process, the global context induces non-linear interactions among the processes, affecting the semantic domain of the computation.
The semantic object associated to a TM, that is, the function that the TM computes, or the formal language that it accepts, becomes an interactive transition system for a PTM. In the topological setting it changes into the pair ⟨function, structure⟩, entangled as a unique object. The function represents the behavior, and the structure the context. They are formally represented by the fiber subgroup in the semi-direct product form of the group of computations (connected to process algebra), denoted by G_AC, and by G_MC, the group of self-mappings of the topological space (the environment self-transformation algebra, i.e. the automorphisms which leave the topology invariant), quotiented by the set of feasible relations. The new semantic object, a gauge group G = G_AC ∧ G_MC, provides another way to understand the meaning of contextuality [17], as a tool to distinguish effective computations from interactive computations, that is, to identify configurations that are locally consistent but globally inconsistent, as shown in Figure 1 and informally summarised in the following sentence: contextuality arises where we have a family of data which is locally consistent but globally inconsistent. Section 3 introduces the new interpretation. We leave the formal definition and full formalization of the theory corresponding to the group of computations for an evolving environment as future work.

2 Interactive Computation

In this section, we recall the definition of the persistent Turing machine (PTM) as defined by Smolka et al. in [1], and the related notion of environment introduced in their earlier work [2]. We introduce the definitions needed to support the construction of a new topological model that is a generalization of the PTM. The new model allows one to re-interpret the classic scheme of computability, which envisages a unique and complete space of problems. The PTM provides a new way of interpreting TM computation, based on dynamic stream semantics (comparable to the behavior of a linear system). A PTM is a non-deterministic 3-tape TM (N3TM) that performs an infinite sequence of classical TM computations. Each such computation starts when the PTM reads an input from its input-tape and ends when the PTM produces an output on its output-tape. The additional work-tape retains its content from one computational step to the next to realize persistence.

Definition 3 (Smolka-Goldin persistent Turing machine). A persistent Turing machine (PTM) is an N3TM having a read-only input-tape, a read/write work-tape, and a write-only output-tape. Let w_i and w_o denote the contents of the input and output tapes, respectively, w and w′ the contents of the work-tape, and # the empty content; then
- an interaction stream is an infinite sequence of pairs (w_i, w_o) representing a computation that transforms w_i into w_o;
- a macrostep of a PTM is a computation step, denoted w --(w_i/w_o)--> w′, that starts with w and ends with w′ on the work-tape and transforms w_i into w_o;
- a PTM computation is a sequence of macrosteps; w --(w_i/µ)--> s_div denotes a macrostep of a computation that diverges (a non-terminating computation); s_div is a special state into which every divergent computation falls, and µ is a special output symbol signifying divergence, µ ∉ Σ.

Moreover, the definition of the interactive transition system (ITS), equipped with three notions of behavioral equivalence (ITS isomorphism, interactive bisimulation, and interaction stream equivalence), allows them to determine the equivalence of PTMs. A minimal macrostep simulator is sketched below.
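As an illustration of Definition 3, here is a minimal macrostep-loop sketch in Python; the helper `macrostep` and the toy answering machine are our own stand-ins, not the authors' construction, and divergence handling is omitted:

```python
def ptm_stream(macrostep, inputs, w=""):
    """Run a PTM as a sequence of macrosteps: the work-tape content w
    persists between steps; each step maps (w, w_i) to (w', w_o)."""
    for w_i in inputs:                  # possibly infinite input stream
        w, w_o = macrostep(w, w_i)
        yield (w_i, w_o)                # element of the interaction stream

# Toy example: an "answering machine" that records the current input and
# replays the previous one, so its behavior is history-dependent.
def answering_machine(w, w_i):
    return w_i, "echo:" + w             # new work-tape, output from old one

print(list(ptm_stream(answering_machine, ["a", "b", "c"])))
# [('a', 'echo:'), ('b', 'echo:a'), ('c', 'echo:b')]
```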
Definition 4 (Interactive transition system). Given a finite alphabet Σ not containing µ, an ITS over Σ is a triple ⟨S, m, r⟩, where
- S ⊆ Σ* ∪ {s_div} is the set of states;
- m ⊆ S × Σ* × S × (Σ* ∪ {µ}) is the transition relation;
- r denotes the initial state.
It is assumed that all the states in S are reachable from r. Intuitively, a transition ⟨s, w_i, s′, w_o⟩ of an ITS states that when the machine is in the state s and has received the input string w_i from the environment, the ITS transits to state s′ and outputs w_o. Unfortunately, space economy forces us to omit most of the results; we only recall Theorem 24, Theorem 32 and Thesis 50 (renumbered below as Theorem 1, Theorem 2 and Thesis 1, respectively), and refer the reader eager for more information to the original article [1].

Theorem 1. The structures ⟨M, =_ms⟩ and ⟨T, =_iso⟩ are isomorphic.

Theorem 1 states that there exists a one-to-one correspondence between the class M of PTMs, up to macrostep equivalence =_ms, and the class T of ITSs, up to isomorphism =_iso.

Theorem 2. If a PTM M has unbounded nondeterminism, then M diverges.

Theorem 2 states that a PTM M diverges if there exist some w ∈ reach(M) and w_i ∈ Σ* such that there is an infinite number of w_o ∈ Σ* ∪ {µ} and w′ ∈ Σ* ∪ {s_div} with w --(w_i/w_o)--> w′.

Thesis 1. Any sequential interactive computation can be performed by a PTM.

Like the Church-Turing Thesis, Thesis 1 cannot be proved. Informally, each step of a sequential interactive computation, corresponding to a single input/output-pair transition, is algorithmic; therefore, by the Church-Turing Thesis, each step is computable by a TM. A sequential interactive computation may be history-dependent, so state information must be maintained between steps. A PTM is just a TM that maintains state information on its work-tape between two steps. Thus, any sequential interaction machine can be simulated by a PTM with possibly infinite input.

The PTM environment. In her earlier work [2], D. Goldin proposed a notion of environment to highlight that the class of behaviors captured by the TM, the class of algorithmic behaviors, is different from that represented by the PTM model, the sequential interactive behaviors. The conceptualization of the environment provides the observational characterization of PTM behaviors given by the input-output streams. In fact, given two different environments O₁ and O₂ and a PTM M, the behavior of M observed by interacting with O₁ can be different from that observed by interacting with O₂. Also, given two machines M₁ and M₂ and one environment O, if the behaviors of the two machines are equal (one can be reduced to the other), they must be equivalent in O. This claim gives the go-ahead to Theorem 3. Any environment O induces a partitioning of M into equivalence classes whose members appear behaviorally equivalent in O; the set of equivalence classes is denoted by β_O. Indeed, the equivalence of the behaviors of two PTMs can be expressed by the language represented in the set of all interaction streams. Let B(M) denote the operator that extracts the behavior of a given machine M, and O(M) a mapping that associates any machine M to the class of behaviors feasible for the environment O. Therefore, each machine can be classified by analyzing its interaction streams with the two operators B and O.

Definition 5 (Environment). Given a class M of PTMs and a set β_O of suitable domains, that is, the set of equivalence classes of feasible behaviours, an environment O is a mapping from machines to some domains, O : M → β_O, and the following property holds: for all M₁, M₂ ∈ M, if B(M₁) = B(M₂) then O(M₁) = O(M₂). When O(M₁) ≠ O(M₂), we say that M₁ and M₂ are distinguishable in O; otherwise, we say that M₁ and M₂ appear equivalent in O.

Theorem 3. Let Θ denote the set of all possible environments. The environments in Θ induce an infinite expressiveness hierarchy of PTM behaviors, with TM behaviors at the bottom of the hierarchy.

So far, we have assumed that all input streams are feasible. However, this is not a reasonable assumption for the context in which interactive machines normally run. Typically an environment can be constrained and limited by some obstructions when generating the output streams. In our view, this is the case when the space of all possible configurations lies on a topological space with non-trivial topology. In order to contribute to this theory, in the following we tackle the issue of specifying these constraints and relating them to the PTM model.

3 Topological Interpretation of Interactive Computation

Topological environment. This section deals with the notion of topological environment as an integral part of the model of topological computation. In a classical TM the environment is not represented (Definition 1), whereas in a PTM the environment is a mapping between the class of PTMs and their feasible domains. As described above, the two functions B and O permit one to identify the behavior of a PTM by observing its stream of interactions.
In this case the environment O is a static mapping that associates machines with equivalent behavior B(M) to the same equivalence class; the environment plays the role of an observer. In our approach the environment is part of the system and evolves together with the behavior of the machine over the time steps i. The environment constrains the behavior of a PTM, while the output generated by the machine affects the evolution of the environment. To detect dynamic changes in the environment, we propose a dynamic analysis of the set of all the interaction streams available at any single PTM computation step i. Since interaction streams are infinite sequences of pairs of the form (w_i, w_o) representing the input and output strings of a PTM computation step i, we use the set P of PTM configurations to represent them. The resulting model of computation consists of two components, entangled and coexisting during the interactive computation: a functional unit of computation and a self-organizing memory. In our model, the infinite input of the PTM should be seen as the feedback loop of a dynamic system. Its functional behavior is represented by a class T of ITSs constrained by the information contained in the self-organizing memory associated with the notion of topological environment. The data structure used to store information is the simplicial complex S_P, that is, a topological space S constructed over the set P of PTM configurations. S_P is equipped with a finite presentation in terms of homology groups whose relations are fully representable. In this view, the PTM functional behavior can be determined by S_P modulo ITS isomorphism. We operate in a discrete setting where full information about topological spaces is inherent in their simplicial representation. Appendix 1 provides some useful definitions from algebraic and computational topology.

Definition 6 (Topological environment). Given the set of PTM configurations P_i available at a given time i, the topological environment is the simplicial complex S_{P_i} constructed over P_i.

The topological environment S_P, like any topological space, is equipped with a set of invariants that are important for understanding the characteristics of the space. For the sake of simplicity we will refer to the topological space as a continuous space. The n-dimensional holes, the language of paths, the homology and the genus are topological invariants. The n-dimensional holes are determined during the process of filtration, called persistent homology, that is used to construct a topological space starting from a set of points. The number of holes and their associated dimensions are determined by the homology structure, fully represented by the homology groups associated with a topological space. The homology is a topological invariant of the space; it is always preserved by homeomorphisms of the space. A path in a topological space S is a continuous function f : [0, 1] → S from the unit interval to S. Paths are oriented: f(0) is the starting point and f(1) is the end-point; if we label the starting point v and the end-point v′, we call f a path from v to v′, as shown in Figure 2(a). Two paths a and b, that is, two continuous functions from a topological space S to a topological space S′, are homotopic if one can be continuously deformed into the other. Being homotopic is an equivalence relation on the set of all continuous functions from S to S′. The homotopy relation is compatible with function composition.
Therefore, it is interesting to study the effect of the existence of holes (in any dimension) in a topological space S (for simplicity the discussion treats S as a 2D surface) built from the space of configurations P, where a sequential interactive computation takes place as a sequential composition of paths. Figures 2(b) and 2(c) show the composition of two paths a and b, and the proof that they are not homotopic, respectively. Given two cycle paths a and b with a common point x, if the composition of the two paths, ab or ba, is not commutative, the two composed paths are not equivalent. In this case, the two cycle paths a and b can be considered the generators of a topological space with one 2-dimensional hole, as shown in Figure 3. Each generator represents a distinct class of paths: [a], the paths going around the neck, and [b], those around the belt of the torus, respectively.

Computable functions and topological space. We start by taking into account those classes of problems whose computable functions are defined over a space S endowed with a trivial topology, which is a vector space. Figure 4 shows how an algorithmic computation A, associated with the function f_A : S → S, evolves over S, representing the space of the states. Each state v is defined by a vector that moves over S driven by the configurations of the TM. In Figure 4, from left to right, the first two pictures represent a successful computation and a computation with an infinite loop, respectively. When the algorithm moves the vector towards a boundary, see the last picture, the computation is deadlocked. This happens because S has not been defined globally; the boundary breaks the translational symmetry. If we allow the boundary to disappear by adding an extra relation, global in nature, we obtain a global topology that is not trivial: the space is characterized by a non-empty set of n-dimensional holes (n ≥ 2). Figure 5 shows how the computation that deadlocks on the plane could have succeeded if the manifold of the space were a torus. Figure 6 shows how we can transform a rectangle, a 2-dimensional space S homeomorphic to a 2-manifold with boundary, into a cylinder and then into a torus by adding two relations among the generators of the manifold P, which will be proved to be without boundary. Hence, we proceed to analyze those classes of problems whose computable functions are defined over a space S endowed with a non-trivial topology. Consider the class of functions F_S effectively computable over a space S; to each single function f_A ∈ F_S and each couple of points v, v′ ∈ S, we associate the computation f_A(v) = v′ as a path that connects the two points v and v′ in the space S. The path can be semantically interpreted as an interaction stream. In Figure 7, the first two pictures from left to right show that a closed path π in a surface that starts and ends at a fixed point P_B is homotopic to 0; it means that any such π can be reduced to the point P_B. The class of behavioral equivalence of π, denoted by [π], belongs to a space or subspace with trivial topology, g = 0 (g is the genus). The other pictures show irreducible paths belonging to a space with topological genus g ≠ 0. E.g., if g = 1, i.e. the space is a torus, there are three different classes of behaviors: i) the set of closed paths homotopic to 0.
In this case, we are given a local interpretation and we are not aware that at the global level the genus can be different from 0; ii) the set of closed paths homotopic to the first generator a of the homology group of the topological space S: the cycle fixed at the base point P_B can be used to reduce, by a continuous deformation, any path going around the neck of the torus to a; iii) similarly to the previous set, the paths homotopic to the second generator b of S: the cycle fixed at the base point P_B goes around the belt of the torus. The last picture shows the composition of paths.

The interpretation of interaction streams over S_P is indeed nothing but their identification with elements of the path algebra corresponding to a quiver representation of the transformation group G of S, say Q (or, more generally, a set of quivers over some arbitrary ring). The different ways to reach any point p ∈ P from P_B generate a path algebra A whose elements are words describable in a language L. Any point of P can be related to any other point by a group element. By selecting a point p₀ of P as the unique base point, every point of P is in one-to-one correspondence with an element of the group G_MC ≈ MCG, the simplicial analog of the mapping class group. G_MC is a group of transformations which do not change the information hidden in the data, such as the group of diffeomorphisms that do not change the topology of the base space. The MCG is an algebraic invariant of a topological space, that is, a discrete group of symmetries. Since the algebras manipulate the data, the transformations applied to the space are processes carried on through the fiber, which is the representation space of the process algebra. Whenever Q can give a representation of the algebra, the algebra can be exponentiated to a group G_AP, and hence to a gauge group. We now have all the ingredients for defining a fiber bundle enriched with a group G = G_AP ∧ G_MC, called the gauge group (see Figure 1). Summarizing, the fiber bundle is the mathematical structure that allows us to represent a computation and its context (the environment) as a unique model. In terms of the TM, the context represents the transition function, also called the functional matrix. While the algorithmic aspect of a computation expresses the effectiveness of the computation, the topology provides a global characterization of the environment. Both the computation and the environment can be represented as groups (algebras), and their interaction is captured by the set of accessible transformations of the semi-direct product of the two groups, subject to the restrictions imposed by the topology. Incidentally, it is this set of constraints, together with the semi-direct product structure, that implies the non-linearity of the process.

Proposition 1. If G is automatic, the associated language L is regular.

Since the representations of G can then be constructed in terms of quivers Q with relations induced by the corresponding path algebra induced by PTMs, the syntax of L is fully contained in T and its semantics in M.

Definition 8 (Constrained interactive computation). An interactive computation is constrained if it is defined over a topological space S_P and is an element of the language of paths of S_P.

Theorem 4. Any constrained interactive computation is an effective computation for a TTM.

Thesis 2. Any concurrent computation can be performed by a TTM.

Final remarks. In 2013, Terry Tao in his blog [20] posted this question: is there any computable group G which is Turing complete in the sense that the halting problem for
Fig. 7. A class of behaviors over a torus: α closed paths, λ a path around the neck, µ a path around the belt, γ a complex path.

Appendix 1: Definitions of Algebraic and Computational Topology

Definition 9 (Topology) A topology on a set X is a family T ⊆ 2^X such that:
- if S_1, S_2 ∈ T, then S_1 ∩ S_2 ∈ T (equivalently: if S_1, S_2, ..., S_n ∈ T, then ∩_{i=1}^{n} S_i ∈ T);
- if {S_j | j ∈ J} ⊆ T, then ∪_{j∈J} S_j ∈ T;
- ∅, X ∈ T.

Definition 10 (Topological space) The pair (X, T) of a set X and a topology T is a topological space. We will often use the notation X for a topological space, with T being understood.

Definition 11 (Simplices) Let u_0, u_1, ..., u_k be points in R^d. A point x = Σ_{i=0}^{k} λ_i u_i is an affine combination of the u_i if the λ_i sum to 1. The affine hull is the set of affine combinations. It is a k-plane if the k+1 points are affinely independent, by which we mean that any two affine combinations x = Σ_{i=0}^{k} λ_i u_i and y = Σ_{i=0}^{k} µ_i u_i are the same iff λ_i = µ_i for all i. The k+1 points are affinely independent iff the k vectors u_i − u_0, for 1 ≤ i ≤ k, are linearly independent. In R^d we can have at most d linearly independent vectors and therefore at most d+1 affinely independent points. A k-simplex is the convex hull of k+1 affinely independent points, σ = conv{u_0, u_1, u_2, ..., u_k}. Its dimension is dim σ = k. Any subset of affinely independent points is again affinely independent and therefore also defines a simplex of lower dimension.

Definition 12 (Face) A face of σ is the convex hull of a non-empty subset of the u_i, and it is proper if the subset is not the entire set. We sometimes write τ ≤ σ if τ is a face of σ, and τ < σ if it is a proper face of σ. Since a set of k+1 points has 2^{k+1} subsets, including the empty set, σ has 2^{k+1} − 1 faces, all of which are proper except for σ itself. The boundary of σ, denoted bd σ, is the union of all proper faces, and the interior is everything else.

Definition 13 (Simplicial complex) A simplicial complex is a finite collection of simplices K such that σ ∈ K and τ ≤ σ implies τ ∈ K, and σ, σ_0 ∈ K implies that σ ∩ σ_0 is either empty or a face of both. (Figure: a simplicial complex and its simplices (left); a collection of simplices that is not a valid simplicial complex (right).)

Definition 14 (Filtration) A filtration of a complex K is a nested sequence of subcomplexes, ∅ = K^0 ⊆ K^1 ⊆ K^2 ⊆ ... ⊆ K^m = K. We call a complex K with a filtration a filtered complex.

Definition 15 (Chain group) Let F be a field. The k-th chain group of a simplicial complex K is (C_k(K), +), the F-linear space on the oriented k-simplices, where [σ] = −[τ] if σ = τ as sets and σ and τ have different orientations. An element of C_k(K) is a k-chain, Σ_q n_q [σ_q], with n_q ∈ F and σ_q ∈ K.

Definition 16 (Boundary homomorphism) Let K be a simplicial complex and σ ∈ K, σ = [v_0, v_1, ..., v_k]. The boundary homomorphism ∂_k : C_k(K) → C_{k−1}(K) is ∂_k σ = Σ_i (−1)^i [v_0, v_1, ..., v̂_i, ..., v_k], where the hat indicates that v_i is deleted from the sequence.

Definition 17 (Cycle and boundary) The k-th cycle group is Z_k = ker ∂_k. A chain that is an element of Z_k is a k-cycle. The k-th boundary group is B_k = im ∂_{k+1}. A chain that is an element of B_k is a k-boundary. We also call boundaries bounding cycles, and cycles not in B_k non-bounding cycles.
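To make Definitions 15-17 concrete, the following minimal sketch (our own illustration, with real coefficients and NumPy as the linear-algebra backend) builds the boundary matrices of the triangle on vertices {0, 1, 2}, checks the fundamental identity ∂_k ∘ ∂_{k+1} = 0, and computes the ranks from which the homology groups and Betti numbers defined next are obtained.

```python
import numpy as np

# Boundary matrices for the triangle on vertices {0, 1, 2}.
# Columns of d1 are the 1-simplices [0,1], [0,2], [1,2]; rows are the vertices.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])
# The single 2-simplex [0,1,2]: d2 [0,1,2] = [1,2] - [0,2] + [0,1].
d2 = np.array([[ 1],
               [-1],
               [ 1]])

assert np.all(d1 @ d2 == 0)  # the boundary of a boundary is zero

def betti(dk, dk1, n_k):
    """beta_k = dim ker(dk) - rank(dk1) = (n_k - rank dk) - rank dk1."""
    rk = np.linalg.matrix_rank(dk) if dk.size else 0
    rk1 = np.linalg.matrix_rank(dk1) if dk1.size else 0
    return n_k - rk - rk1

# Filled triangle: contractible, so beta0 = 1 and beta1 = 0.
print(betti(np.zeros((0, 3)), d1, 3))   # beta0 = 1 (d0 is the zero map)
print(betti(d1, d2, 3))                 # beta1 = 0
# Hollow triangle (drop the 2-simplex): a circle, so beta1 = 1.
print(betti(d1, np.zeros((3, 0)), 3))   # beta1 = 1
```

For the filled triangle this yields β_0 = 1 and β_1 = 0, while dropping the 2-simplex (a hollow triangle, i.e., a circle) yields β_1 = 1.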
Definition 18 (Homology group) The k-th homology group is H_k = Z_k / B_k = ker ∂_k / im ∂_{k+1}. If z_1 = z_2 + B_k, with z_1, z_2 ∈ Z_k, we say z_1 and z_2 are homologous and denote this by z_1 ∼ z_2.

Definition 19 (k-th Betti number) The k-th Betti number β_k of a simplicial complex K is the dimension of the k-th homology group of K. Informally, β_0 is the number of connected components, β_1 is the number of two-dimensional holes or "handles", β_2 is the number of three-dimensional holes or "voids", etc.

Definition 20 (Invariant) A topological invariant is a property of a topological space which is invariant under homeomorphisms. Betti numbers are topological invariants.

Definition 21 (Genus) The genus is a topological invariant of a closed (orientable) surface. The connected sum of g tori is called a surface with genus g; the genus refers to how many holes the surface has. As an example, a torus is homeomorphic to a sphere with a handle. Both of them have just one hole (handle). The sphere has g = 0 and the torus has g = 1.

Given two environments O_1 and O_2 and a PTM M, the behavior of M observed by interacting with the environment O_1 can be different from that observed by interacting with O_2. Also, given two machines M_1 and M_2 and one environment O, if the behaviors of the two machines are equal (one can be reduced to the other), they must be equivalent in O. This claim gives the go-ahead to Theorem 3. Any environment O induces a partitioning of M into equivalence classes whose members appear behaviorally equivalent in O; the set of equivalence classes is denoted by β_O. Indeed, the equivalence of the behaviors of two PTMs can be expressed by the language represented in the set of all interaction streams. Let B(M) denote the operator that extracts the behavior of a given machine M, and O(M) a mapping that associates any machine M with the class of the behaviors feasible for the environment O. Therefore, each machine can be classified by analyzing its interaction streams with the two operators, B and O.

Definition 5 (Environment) Given a class M of PTMs and a set of suitable domains β_O, that is, the set of equivalence classes of feasible behaviours, an environment O is a mapping from machines to domains, O : M → β_O, and the following property holds: ∀ M_1, M_2 ∈ M, if B(M_1) = B(M_2) then O(M_1) = O(M_2). When O(M_1) ≠ O(M_2), we say that M_1 and M_2 are distinguishable in O; otherwise, we say that M_1 and M_2 appear equivalent in O.

Theorem 3 Let Θ denote the set of all possible environments. The environments in Θ induce an infinite expressiveness hierarchy of PTM behaviors, with TM behaviors at the bottom of the hierarchy.

Fig. 2. a) homotopic paths a ∼ b; b) composition of paths ab; c) non-homotopic paths ab ≁ ba.
Fig. 3. From cyclic paths to the generators of a space S.
Fig. 4. a) a successful computation; b) a computation with an infinite loop; c) a computation with a deadlock.
Fig. 5. A deadlocked computation on the plane may succeed over a space with non-trivial topology.
Fig. 6. The pictures (A-D) summarize the main steps to transform a space S of PTMs into a topological space S_P. The construction is obtained by gluing together (putting in relation) the two boundaries of the space S, a and b respectively, which become the generators a and b of the new space S_P.
The topological space S_P, finite but unbounded, naturally supports the notion of the environment of a PTM.

Definition 7 (Topological Turing machine) A Topological Turing machine (TTM) is a group G consisting of all interaction streams generated by the group of PTMs entangled with the group of all transformations of the topological space S_P preserving the topology. Formally, G = G_AP ∧ G_MC, where G_AP is the group of PTMs and G_MC is the simplicial analog of the mapping class group.

https://www.ias.edu/ideas/mathematics-and-computation

Acknowledgements E. M. thanks Mario Rasetti for bringing her to conceive a new way of thinking about computer science and for numerous and lively discussions on topics related to this article; and Samson Abramsky with his group for insightful conversations on the topological interpretation of contextuality and contextual semantics. E. M. and A. W. thank the anonymous referees for suggesting many significant improvements.

References
1. D.Q. Goldin, S.A. Smolka, P.C. Attie, E.L. Sonderegger. Turing machines, transition systems, and interaction. Information and Computation 194, 2004; an earlier version appeared in ENTCS, Vol. 52, No. 1, Elsevier, 2001.
2. D. Goldin. Persistent Turing Machines as a Model of Interactive Computation. LNCS, Vol. 1762, Springer, 2000.
3. D.Q. Goldin, S.A. Smolka, P. Wegner. Interactive Computation: The New Paradigm. Springer, 2006.
4. P. Wegner. Why Interaction is More Powerful Than Algorithms. CACM, Vol. 40, No. 5, ACM, 1997.
5. P. Wegner. Interactive foundations of computing. TCS, Vol. 192, Elsevier, 1998.
6. R. Gandy. Church's Thesis and Principles for Mechanisms. In J. Barwise, H. J. Keisler and K. Kunen, eds., The Kleene Symposium, North-Holland Publishing Company, 1980.
7. A. Wigderson. Mathematics and Computation. IAS, Draft: March 2018.
8. S. Garrone, A. Marzuoli, M. Rasetti. Spin networks, quantum automata and link invariants. Journal of Physics: Conference Series 33, 2006.
9. E. Merelli, M. Pettini, M. Rasetti. Topology driven modeling: the IS metaphor. Natural Computing, Vol. 14, No. 3, 2015.
10. G. Carlsson. Topology and data. Bulletin of the American Mathematical Society, Vol. 46, No. 2, 2009.
11. M. Rasetti, E. Merelli. Topological Field Theory of Data: mining data beyond complex networks. In P. Contucci and Laganà, eds., Advances in Disordered Systems, Random Processes and Some Applications. Cambridge University Press, 2016.
12. N. Steenrod. The Topology of Fiber Bundles.
Princeton Mathematical Series, Princeton University Press, 1951.
13. P.J. Landin. A Program Machine Symmetric Automata Theory. Machine Intelligence, Vol. 5, eds. Meltzer and Michie, Edinburgh University Press.
14. S. Abramsky. An algebraic characterisation of concurrent composition. arXiv:1406.1965v1, 2014.
15. A. M. Turing. Lecture to the London Mathematical Society, 20 February 1947. Quoted in B. E. Carpenter and R. W. Doran (eds.), A. M. Turing's ACE Report of 1946.
16. H. Lewis, C.H. Papadimitriou. Elements of the Theory of Computation. 2nd Ed., Prentice Hall, 1998.
17. S. Abramsky. Contextuality: At the Borders of Paradox. In Categories for the Working Philosopher, ed. Elaine Landry, 2017.
18. S. Abramsky. Contextual Semantics: From Quantum Mechanics to Logic, Databases, Constraints, and Complexity. arXiv:1406.7386v1, 2014.
19. S. Abramsky. What are the Fundamental Structures of Concurrency? We still don't know! Electronic Notes in Theoretical Computer Science, Vol. 162, 2006.
20. MathOverflow. https://mathoverflow.net/questions/88368/can-a-group-be-a-universal-turing-machine
[]
[ "Adversarial Robustness Assessment of NeuroEvolution Approaches", "Adversarial Robustness Assessment of NeuroEvolution Approaches" ]
[ "Inês Valentim [email protected] \nUniversity of Coimbra\nCISUC\nDEICoimbraPortugal\n", "Nuno Lourenço \nUniversity of Coimbra\nCISUC\nDEICoimbraPortugal\n", "Nuno Antunes \nUniversity of Coimbra\nCISUC\nDEICoimbraPortugal\n" ]
[ "University of Coimbra\nCISUC\nDEICoimbraPortugal", "University of Coimbra\nCISUC\nDEICoimbraPortugal", "University of Coimbra\nCISUC\nDEICoimbraPortugal" ]
[]
NeuroEvolution automates the generation of Artificial Neural Networks through the application of techniques from Evolutionary Computation. The main goal of these approaches is to build models that maximize predictive performance, sometimes with an additional objective of minimizing computational complexity. Although the evolved models achieve competitive results performance-wise, their robustness to adversarial examples, which becomes a concern in security-critical scenarios, has received limited attention. In this paper, we evaluate the adversarial robustness of models found by two prominent NeuroEvolution approaches on the CIFAR-10 image classification task: DENSER and NSGA-Net. Since the models are publicly available, we consider white-box untargeted attacks, where the perturbations are bounded by either the L2 or the L∞-norm. Similarly to manually-designed networks, our results show that when the evolved models are attacked with iterative methods, their accuracy usually drops to, or close to, zero under both distance metrics. The DENSER model is an exception to this trend, showing some resistance under the L2 threat model, where its accuracy only drops from 93.70% to 18.10% even with iterative attacks. Additionally, we analyzed the impact of pre-processing applied to the data before the first layer of the network. Our observations suggest that some of these techniques can exacerbate the perturbations added to the original inputs, potentially harming robustness. Thus, this choice should not be neglected when automatically designing networks for applications where adversarial attacks are prone to occur.
10.1109/cec55065.2022.9870202
[ "https://arxiv.org/pdf/2207.05451v1.pdf" ]
250,451,473
2207.05451
d82552ee0f6340b6ba6a524e26dd93205faf7bfa
Adversarial Robustness Assessment of NeuroEvolution Approaches

Inês Valentim [email protected], University of Coimbra, CISUC, DEI, Coimbra, Portugal
Nuno Lourenço, University of Coimbra, CISUC, DEI, Coimbra, Portugal
Nuno Antunes, University of Coimbra, CISUC, DEI, Coimbra, Portugal

Index Terms: Adversarial Examples, Convolutional Neural Networks, NeuroEvolution, Robustness

NeuroEvolution automates the generation of Artificial Neural Networks through the application of techniques from Evolutionary Computation. The main goal of these approaches is to build models that maximize predictive performance, sometimes with an additional objective of minimizing computational complexity. Although the evolved models achieve competitive results performance-wise, their robustness to adversarial examples, which becomes a concern in security-critical scenarios, has received limited attention. In this paper, we evaluate the adversarial robustness of models found by two prominent NeuroEvolution approaches on the CIFAR-10 image classification task: DENSER and NSGA-Net. Since the models are publicly available, we consider white-box untargeted attacks, where the perturbations are bounded by either the L2 or the L∞-norm. Similarly to manually-designed networks, our results show that when the evolved models are attacked with iterative methods, their accuracy usually drops to, or close to, zero under both distance metrics. The DENSER model is an exception to this trend, showing some resistance under the L2 threat model, where its accuracy only drops from 93.70% to 18.10% even with iterative attacks. Additionally, we analyzed the impact of pre-processing applied to the data before the first layer of the network. Our observations suggest that some of these techniques can exacerbate the perturbations added to the original inputs, potentially harming robustness. Thus, this choice should not be neglected when automatically designing networks for applications where adversarial attacks are prone to occur.

I. INTRODUCTION
The design of Artificial Neural Networks (ANNs) is a time-consuming trial-and-error process that requires domain expertise. The design choices affect one another, and this interdependence must be considered when dealing with increasingly complex models which often reach thousands or millions of parameters. To alleviate these issues, NeuroEvolution (NE) uses evolutionary approaches to automate the search for better topologies and parametrization of ANNs. Models designed through NE achieve competitive results in comparison with manually-designed ANNs, sometimes even surpassing their performance [1], [2], [3], [4], [5]. Thus, it is reasonable to consider that these models can be applied in real-world scenarios. However, their adoption in such cases means that concerns other than predictive performance must be addressed. In the particular case of security-critical systems, such as autonomous vehicles or malware detectors, one such concern is the models' vulnerability to adversarial examples [6], [7]. These malicious examples are usually crafted by adding small perturbations to the original inputs, causing the attacked model to produce an incorrect output [8], [9]. Restrictions are often imposed on the maximum perturbation that can be added, so as to try to keep the adversarial example and the original input indistinguishable [8], [9]. In the image domain, the perturbations are often bounded by some Lp-norm [8], [10].
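As a concrete illustration of such norm bounds, the short sketch below (a minimal example using NumPy; the image and the perturbation are randomly generated stand-ins rather than data from this paper) checks a perturbation against the two budgets used later in this work, ε = 8/255 under the L∞-norm and ε = 0.5 under the L2-norm.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((3, 32, 32))                      # stand-in CIFAR-10 image in [0, 1]
delta = rng.uniform(-8 / 255, 8 / 255, x.shape)  # an L-inf bounded perturbation
x_adv = np.clip(x + delta, 0.0, 1.0)             # keep a valid data range

d = (x_adv - x).ravel()
print(np.abs(d).max() <= 8 / 255)                # True: within the L-inf budget
print(np.linalg.norm(d, 2))                      # its L2 norm, compared against eps = 0.5
```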
ANNs designed by humans have been widely studied and are known to be vulnerable to these attacks, but it is still unclear to what extent ANNs obtained through the application of NE suffer from this vulnerability. Our goal is to empirically evaluate the resistance of ANNs designed by NE to adversarial examples, with a focus on image classification tasks and Convolutional Neural Networks (CNNs). We consider threat models where the attacker's goal is simply to cause a misclassification of a given instance, i.e., we perform untargeted attacks, and where the attacker has full access to the models, i.e., we perform white-box attacks. Similar to previous work in adversarial machine learning, we consider L2- and L∞-bounded perturbations [11], and white-box attacks that maximize the loss function while constraining the perturbations to a pre-defined budget under each distance metric. Our analysis is performed by attacking pre-trained models for the CIFAR-10 dataset [12], made publicly available by the authors of the corresponding NE approaches. We consider DENSER [1] and two variants of NSGA-Net models [3], all of which achieve an accuracy above 93% on clean images. These models, and their training, do not incorporate any defense mechanism against adversarial examples. Thus, any robustness that is shown can likely be attributed to architectural aspects of the ANNs. Moreover, the NE approaches only seek to maximize predictive performance, while directly or indirectly minimizing computational complexity. Our results show that these evolved models are susceptible to adversarial attacks, similar to hand-designed ANNs. Using iterative methods, like the basic iterative method (BIM) [13] and the projected gradient descent (PGD) attack [14], their accuracy drops dramatically, to values below 0.25% under both distance metrics. DENSER deviates from this behavior and shows some resistance to L2 perturbations, keeping its accuracy at 21.76% under a BIM attack with 100 iterations. When reducing the iterations to 50 but incorporating 10 random restarts, the accuracy falls further, to 18.10%. On the other hand, the models that use the architectures found by NSGA-Net, especially when applied to the NASNet search space, are more robust to the single-step attacks of our experiments. We could also identify distinct patterns in the misclassifications produced by the models, even under the same attack and when the accuracy of all of them is zero. For instance, some misclassify the adversarial examples of a class into a small subset of classes, while the misclassifications of other models are much more spread out. Furthermore, we warn about the choice of data pre-processing procedures and the effect of such transformations on the perturbations added to the clean inputs. Certain techniques can exacerbate these perturbations before the data reaches the first layer of the network, which in a way makes it easier for the attacker to succeed in generating an adversarial example under a certain perturbation budget. For this reason, it might be worth including this component in the search process of NE approaches. Moreover, all training conditions, including any model-specific pre-processing step, should be clearly specified when evaluating adversarial robustness, especially when comparing models from different sources. The remainder of this paper is structured as follows. Section II provides an overview of NE approaches and adversarial machine learning, and presents relevant work on the intersection of the two fields.
Section III describes our experimental setup in terms of datasets, models, adversarial attacks, and evaluation metrics. Section IV presents the results of the experimental campaign and discusses our main findings. Section V concludes the paper.

II. BACKGROUND AND RELATED WORK
In this section, we overview key concepts and methods from the fields of NeuroEvolution (NE) and adversarial machine learning, after which we present relevant work on the intersection of these two areas of research. We mainly focus on image classification and Convolutional Neural Networks (CNNs).

A. NeuroEvolution
Neural Architecture Search (NAS) is the field which deals with designing the optimal architecture of an ANN in an automated way. Different search strategies have been proposed, including Reinforcement Learning [15], [16], [17], Evolutionary Computation [1], [2], [3], [4], [5], [18], as well as gradient-based methods [19]. NeuroEvolution (NE) approaches, which are the main focus of this work, refer to those which apply techniques from Evolutionary Computation. Especially since 2017, remarkable results have been achieved by several NE approaches that automate the search for CNNs [1], [2], [3], [4], [5], [18]. Two such proposals, DENSER [1] and NSGA-Net [3], are described in more detail in what follows. In DENSER [1], each candidate solution is represented at two levels, the genetic algorithm (GA) level and the dynamic structured grammatical evolution (DSGE) [20] level. The GA level encodes the macro structure of the ANN (layers and the order in which they are connected) and any additional hyperparameters (learning strategy and data augmentation procedures, for instance) as a sequence of evolutionary units. Each evolutionary unit stores a non-terminal symbol later used at the DSGE level, as well as the minimum and maximum number of times the unit can be used. At the DSGE level, the parameters of the evolutionary units (e.g., layer type, number of filters, filter size, activation functions) are encoded through a context-free grammar. DENSER uses crossover and mutation operators at both levels. NSGA-Net [3] is explicitly designed to solve a multi-objective optimization problem. It uses the NSGA-II algorithm [21] to maintain a trade-off frontier between candidate solutions that maximize classification performance but minimize computational complexity (defined by the number of floating-point operations in the forward pass). Each candidate solution is represented by a bit string that encodes computational blocks (referred to as phases). In turn, each computational block can be regarded as a directed acyclic graph of several nodes, with each node representing a single operation (e.g., convolution, pooling, batch-normalization) or a sequence of operations; a sketch of this style of decoding is given below. This representation follows the method from [18] with a minor modification. Similar to DENSER, NSGA-Net also makes use of crossover and mutation operators. Additionally, at the end of the evolutionary process, ANNs are designed by sampling phases from a Bayesian Network which models the relationship between these computational blocks as seen in the architectures found during the search procedure. In practice, NSGA-Net constrains the search space based on prior knowledge of successful architectures. Namely, the number of phases and the maximum number of nodes in each phase are set to 3 and 6, respectively, and the changes in spatial resolution between phases are also fixed. Moreover, each node encompasses the same sequence of operations.
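To illustrate, here is a hedged sketch of this style of bit-string decoding; the pair ordering and the helper name decode_phase are our own choices for illustration, and the minor modification introduced by NSGA-Net on top of the encoding of [18] is omitted.

```python
def decode_phase(bits, n_nodes):
    """Decode a phase bit string into DAG edges: node j receives input from
    node i (i < j). One bit per ordered pair (i, j), grouped by target node,
    in the style of the Genetic CNN / NSGA-Net phase encoding."""
    pairs = [(i, j) for j in range(1, n_nodes) for i in range(j)]
    assert len(bits) == len(pairs), "expected n_nodes * (n_nodes - 1) / 2 bits"
    return [(i, j) for (i, j), b in zip(pairs, bits) if b == 1]

# Example: a phase with 4 nodes needs 6 connection bits.
edges = decode_phase([1, 0, 1, 0, 1, 1], n_nodes=4)
print(edges)  # [(0, 1), (1, 2), (1, 3), (2, 3)]
```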
When it comes to DENSER, the main restriction is at the GA level, in how the different layers can be connected (i.e., layers are connected sequentially, without skip-connections). In the original work, the search space includes CNNs of up to 40 hidden layers (at most 30 convolution or pooling layers, followed by a maximum of 10 fully-connected layers).

B. Adversarial Machine Learning
Adversarial examples are maliciously crafted so as to make an attacked model produce incorrect outputs [9], [22]. Throughout the years, several methods have been proposed in the literature to craft such attacks, under different threat models. These threat models can be defined based on the goals, knowledge, and capabilities of the adversary [10]. A distinction can be made based on the adversary's knowledge about the model: architecture and parameters, training algorithm and training data, randomness at test-time, and allowed level of query access [23]. In this work, we focus on white-box attacks, where the adversary has full access to the model. A further distinction can be made between untargeted and targeted attacks. In the particular case of image classification tasks, the goal of untargeted attacks is simply to make the model predict a class different from the true label of a given instance, while the goal of targeted attacks is to make the model produce a misclassification into some desired class [8]. Formally, x is an input with correct label y, and C is the classifier under attack. In the untargeted setting, an adversarial example x_adv is such that C(x_adv) ≠ y. On the other hand, a targeted attack would aim at crafting x_adv, given a target t ≠ y, such that C(x_adv) = t. An adversarial example is usually obtained by adding some perturbation δ to a benign image [22]. Constraints are usually imposed on the capabilities of the adversary in terms of the maximum perturbation that can be added, so that x_adv remains close to the original input and its true label remains unchanged [10]. In the image domain, a common approach is to use Lp-norms for those bounds, such that ‖x − x_adv‖_p ≤ ε, where ε is the perturbation budget and usually p ∈ {0, 1, 2, ∞} [10], [24]. Optimization-based methods are an important category of white-box attacks. Some of such methods try to minimize the perturbation, while others try to maximize some loss function (typically the cross-entropy loss) [23], [24]. We present some of the latter methods assuming an untargeted setting. The fast gradient sign method (FGSM) [6] is a one-step gradient-based attack optimized for the L∞-norm, which generates an adversarial example as x_adv = x + ε · sign(∇_x L(x, y)), where ∇_x L(x, y) is the gradient of the cross-entropy loss with respect to the input image. When this method is optimized for the L2-norm, we obtain the fast gradient method (FGM), which generates an adversarial example as x_adv = x + ε · ∇_x L(x, y) / ‖∇_x L(x, y)‖_2. In both cases, the perturbed image should be clipped in order to maintain a valid data range.
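A minimal PyTorch sketch of these two single-step attacks follows; the function name, the batch shape, and the assumption that inputs live in [0, 1] are ours, and the model is any differentiable classifier returning logits.

```python
import torch

def fgsm(model, loss_fn, x, y, eps, norm="inf"):
    """Single-step attack: FGSM under the L-inf norm, FGM under the L2 norm.
    Assumes image batches of shape (B, C, H, W) with values in [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    if norm == "inf":
        step = grad.sign()                     # FGSM: sign of the gradient
    else:
        g = grad.flatten(1).norm(p=2, dim=1).clamp_min(1e-12)
        step = grad / g.view(-1, 1, 1, 1)      # FGM: L2-normalized gradient
    # Add the scaled step and clip back to the valid data range.
    return (x + eps * step).clamp(0.0, 1.0).detach()

# Usage: x_adv = fgsm(model, torch.nn.CrossEntropyLoss(), x, y, eps=8 / 255)
```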
We present the methods that follow for L∞ adversaries, but these definitions can be easily adapted to the case where L2 bounds are imposed, similar to what is done with FGM. A straightforward extension of the FGSM attack is to take multiple small steps (with step size α) and clip the result by ε at each iteration; the clipping operation also takes into account the valid range of data values. This results in the basic iterative method (BIM) [13], which can be defined as x_adv^0 = x, x_adv^{i+1} = clip_{x,ε}(x_adv^i + α · sign(∇_x L(x_adv^i, y))). The projected gradient descent (PGD) method [14] is another iterative attack which only differs from BIM in how x_adv^0 is set: in the case of PGD, instead of starting from the original input, a random perturbation bounded by ε is generated and added to x. In an attempt to stabilize the update directions and escape from local optima, the momentum iterative fast gradient sign method (MI-FGSM) [24] incorporates momentum [25] into the BIM method. The Auto-PGD (APGD) method [26] is a variation of the PGD attack which adjusts the step size in an automated way. The authors of APGD also proposed an alternative to the cross-entropy loss called the difference of logits ratio (DLR) loss. In addition to being invariant to shifts of the logits, the DLR loss is rescaling invariant [26].
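The iterative variants can be sketched in the same style; with random_start=False the loop below reduces to BIM, and with random_start=True it corresponds to PGD (a hedged sketch for the L∞ case, with the naming and the [0, 1] input range assumed by us).

```python
import torch

def pgd(model, loss_fn, x, y, eps, alpha, n_iter, random_start=True):
    """L-inf BIM/PGD: repeatedly step along the gradient sign, projecting
    onto the eps-ball around the original input and onto [0, 1]."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    if random_start:  # PGD starts from a random point inside the eps-ball
        x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project onto the L-inf ball
        x_adv = x_adv.clamp(0.0, 1.0)                       # keep a valid data range
    return x_adv.detach()
```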
Adversarial robustness evaluations typically consist of performing adversarial attacks to obtain an upper bound on the robustness of a model [11]. One resorts to this approximation since computing robustness exactly is usually intractable [10], [11]. Several defenses against adversarial examples have been proposed in the literature. Such defenses tend to be designed to be robust to one specific threat model [10]. Adversarial training [14], and variants like ensemble adversarial training [27], have shown the most promising results when it comes to increasing the robustness of models. The main idea behind these methods is to incorporate adversarial examples in the training procedure of a model. Many of the other methods were circumvented shortly after their proposal [23], [28].

C. Related Work
RobustBench [11] aims at benchmarking adversarial robustness under different threat models, namely Lp-robustness with p ∈ {2, ∞} and common image corruptions [29]. In addition to a leaderboard aggregating the evaluations of several robustness-enhancing proposals, a Model Zoo containing pre-trained models from top entries of the leaderboard is also available. While the primary goal of RobustBench is to assess defenses against adversarial attacks, we focus on the general robustness of a model from an architectural point of view. In [30], a comparison is made between CNNs and more recent architectures, such as the Vision Transformer [31] and the MLP-Mixer [32], which have achieved promising results in computer vision tasks. This work relates to ours in that architectural differences are at the core of the analysis, but we solely focus on CNNs and try to establish a comparison between hand-crafted models and models that result from an evolutionary search. The experiments conducted by [33] are closer to our approach, since the authors also look at adversarial robustness from an architectural perspective and include both manually-designed architectures and architectures found by NAS approaches (such as an NSGA-Net model) in their analysis. In contrast to this work, we focus only on NeuroEvolution methods. Moreover, we consider not only L∞-robustness but also L2-robustness, and we try to perform attacks that bring the accuracy of at least one of the target models to zero under each threat model. There is also a growing body of work that uses NAS approaches, including NE, to find robust models [34], [35], [36]. However, and as pointed out by [33], adversarial training is often incorporated in these studies. Thus, it is difficult to assess the role of architectural aspects in the robustness exhibited by the models. In the case of [36], robustness is explicitly included in the objective function, once again making it difficult to understand whether the models found by NE are inherently more robust than the ones crafted by humans.

III. EXPERIMENTAL SETUP
In this section, we detail the methodology followed to evaluate the adversarial robustness of the models. We conduct all the experiments on the CIFAR-10 dataset [12], which consists of 32 × 32 RGB images divided into 10 classes. The training set has 50000 images while the test set has 10000, with an equal number of examples from each class. The original pixel values are in [0, 255], but we always operate on pixels modeled as real numbers, by applying a pre-processing step to normalize the values to the interval [0, 1].

A. Target Models
We evaluate models designed by two NE approaches: DENSER [1] and NSGA-Net [3]. This choice was mainly based on two criteria: firstly, models had to be directly trained on the CIFAR-10 dataset, and secondly, pre-trained models (i.e., network weights) had to be publicly available, so as to introduce as little bias as possible from our end. In fact, all the attacked models are pre-trained and publicly available. We use these models directly, without re-training them, but describe some relevant differences in the training procedures. The WRN-28-10 architecture [37], a manually-designed wide residual network, is used as the baseline model in our experiments for a variety of reasons: its performance on CIFAR-10 is similar to that of the models from the two NE approaches, the work which proposes NSGA-Net [3] also uses it as a baseline, and some of the defenses from the RobustBench leaderboard [11] are based on this architecture. In particular, we use the pre-trained model from the Model Zoo of RobustBench 1, which was trained with the 50000 training images of CIFAR-10, without any data augmentation. Besides converting the pixel values to [0, 1] as previously described, no further pre-processing is applied to the data. In what concerns DENSER [1], we select the network that achieved the best accuracy on the CIFAR-10 test set over 10 evolutionary runs. The models resulting from 5 independent training runs of this network are publicly available 2, but, again, we solely attack the one with the highest test accuracy. In these training runs, the original work used the complete training set of CIFAR-10 and applied a data augmentation method which includes padding, horizontal flipping, and random crops (similar to what is done in [5]). In addition to converting the pixel values to real numbers, the data is expected to be centered around zero before being fed to the first layer of the network. Following [1], this is accomplished through the removal of the mean pixel value per location and color channel (calculated on the entire training set). As far as the NSGA-Net approach [3] is concerned, we conduct experiments with a pre-trained model from the macro search space described in Section II-A (NSGA-M), as well as three variants of the architecture obtained by using the cells found by NSGA-Net on the NASNet micro search space [38] (NSGA-mA, NSGA-mB, and NSGA-mC, with an increasingly higher number of model parameters, as shown in Table I).
In the original work, cutout [39] was used to train these models, together with a data augmentation strategy similar to the one adopted by DENSER, which includes padding, random crops, and horizontal flipping. For the three models from the micro search space, the scheduled path dropout technique [38] was also adopted, together with an auxiliary head classifier, whose loss is aggregated with the loss from the main network [3]. After converting the pixel values to real numbers, the data is expected to be normalized using pre-calculated means and standard deviations for each color channel. Further details about these architectures and the training procedure can be found in the original paper [3] and the source code repository 3. An overview of the size of the models used in our experiments, as given by the number of trainable parameters, is presented in Table I.

B. Threat Models and Attacks
Since all models are publicly available, we consider the scenario in which the attacker has full access to the target model, i.e., we perform white-box attacks. Furthermore, we consider untargeted attacks, where the adversarial perturbations are bounded by ε = 8/255 under the L∞-norm or, in the case of the L2-norm, by ε = 0.5. The perturbation budgets were chosen based on threat models used in previous works, namely in the RobustBench benchmark [11]. We focus on attacks that craft adversarial examples by solving a constrained optimization problem, instead of attacks that aim at finding minimal adversarial perturbations. Therefore, we attack the chosen models using different configurations (number of iterations and number of random initializations) of FGSM / FGM, BIM, and PGD. For the iterative attacks (i.e., BIM and PGD), we set the step size to α = ε/4. We use the Python implementations of the attacks from the Adversarial Robustness Toolbox (ART) library [40].

3 https://github.com/ianwhale/nsga-net
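For reference, a sketch of how such an attack can be configured with ART is shown below; model, x_test, and y_test are assumed to be a trained PyTorch network and NumPy test arrays, and the exact keyword names may vary slightly across ART versions.

```python
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import ProjectedGradientDescent

# Wrap a trained PyTorch model so that ART can attack it.
classifier = PyTorchClassifier(
    model=model,                      # assumed: a trained torch.nn.Module
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),           # attacks operate in [0, 1]
)

eps = 8 / 255                         # L-inf budget of the threat model
attack = ProjectedGradientDescent(
    estimator=classifier,
    norm=np.inf,
    eps=eps,
    eps_step=eps / 4,                 # step size alpha = eps / 4
    max_iter=50,
    num_random_init=10,               # i.e., the PGD-50-10 configuration
)
x_adv = attack.generate(x=x_test)     # x_test: np.ndarray of clean images
acc = (classifier.predict(x_adv).argmax(1) == y_test).mean()
```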
C. Baseline Performance
The accuracy of the models on the clean examples of the test set is shown in Table I. For a fair comparison, we only generate adversarial examples for samples that initially receive a correct classification by the model under evaluation. Nevertheless, when reporting the models' accuracy on adversarially generated samples, we consider the complete test set of CIFAR-10. It is important to mention that an untargeted attack is considered to be successful if the model produces a misclassification, regardless of the predicted class. For this reason, no perturbation needs to be added to a sample that is already incorrectly classified. While the DENSER models are implemented in Keras / TensorFlow 2, the baseline and the NSGA-Net models are in PyTorch. We reiterate that all models were trained using a standard procedure and no defensive method was applied.

IV. RESULTS AND DISCUSSION
The accuracy of the models on the adversarial examples generated under the threat model that considers L∞-robustness is shown in Table II. We present the results for the FGSM attack, for FGSM with 10 random initializations (FGSM-10), and for the BIM attack with 10 and 50 iterations (BIM-10 and BIM-50, respectively). In this case, the attacks operate in [0, 1], and any additional model-specific pre-processing is applied to the images after the adversarial perturbations are added. A brief perusal of the results reveals that the DENSER model is the most susceptible to L∞ attacks. Even in the case of single-step attacks like FGSM, its accuracy falls below 10% when random restarts are incorporated. On the other hand, the models that result from the application of NSGA-Net to the NASNet search space are the most resistant to single-step attacks. In fact, they achieve higher accuracy on the adversarially perturbed images than the baseline model. Nevertheless, given enough iterations, the accuracy of all models drops to zero under this threat model. This suggests that these NE approaches do not seem to find L∞-robust models, at least if that objective is not explicitly included in the evolutionary process. Table II also shows the adversarial accuracy of the models under the threat model that considers L2-bounded perturbations. We present the results for the FGM attack, for the BIM attack with 10, 50, and 100 steps (BIM-10, BIM-50, and BIM-100, respectively), as well as for the PGD attack with 10 random restarts and 50 iterations (PGD-50-10). We again report the results when the attacks operate in [0, 1] and any additional model-specific pre-processing is applied after the adversarial perturbations are added to the images. The strongest L2 attacks in our analysis (BIM-100 and PGD-50-10) bring the accuracy of the models to below 1% and, in the case of NSGA-M and NSGA-mB, to zero. Surprisingly, and contrary to what was observed with the L∞ attacks, this does not hold true for the DENSER model, whose accuracy just drops to around 20% under this threat model. However, the DENSER model remains the most susceptible under the single-step FGM attack, while the NSGA-Net models from the search space of NASNet remain the most robust. Moreover, a comparison between the three NSGA-Net models from the micro search space reveals that NSGA-mB is the least robust of the three, even though it is more complex (i.e., it has a higher number of parameters) than NSGA-mA. This is observed under both distance metrics, but the susceptibility of NSGA-mB is particularly higher than that of NSGA-mA and NSGA-mC under the BIM-10 attack with L2 perturbations. Unlike NSGA-mA and NSGA-mC, NSGA-mB does not use Squeeze-and-Excitation blocks. Thus, it seems that Squeeze-and-Excitation may help improve the robustness of an ANN, a hypothesis worth investigating in future work. The discrepancies in the relative robustness of the models between the two distance metrics demand further analysis. Namely, it would be of interest to understand what aspect of the DENSER model makes it more L2-robust and why it does not seem to help the model against L∞ attacks.

A. Impact of Data Pre-Processing
In Section III, we described the different pre-processing steps applied to the images before they reach the first layer of each model. Contrary to WRN-28-10 and DENSER, the pre-processing for the NSGA-Net models changes the scale of the data (i.e., the difference between the maximum and the minimum values of a feature is larger than 1). Therefore, the NSGA-Net models perceive the adversarial perturbations as approximately 4 times larger than the perturbation budget of the threat model. To show this effect, we craft adversarial examples after all the pre-processing steps have been applied to the data, instead of operating in the [0, 1] range. By doing so, the perturbation budget refers to the distance in the space in which the first layer of a network expects the data to be. The results of this experiment are shown in Table III, for both the L∞ and the L2 attacks. The baseline model is excluded from this analysis, since it only requires the data to be in [0, 1], without centering or standardizing it.
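The magnitude of this effect can be checked with a few lines; the per-channel standard deviations below are commonly used CIFAR-10 values and are assumptions on our part, since the exact constants come from the training set.

```python
import numpy as np

# Assumed CIFAR-10 per-channel standard deviations used for standardization.
std = np.array([0.2470, 0.2435, 0.2616])

eps = 8 / 255              # L-inf budget in the original [0, 1] pixel space
# After x' = (x - mean) / std, a perturbation delta becomes delta / std:
print(eps / std)           # ~[0.127, 0.129, 0.120], well above the 8/255 budget
print((eps / std) / eps)   # amplification per channel: ~[4.05, 4.11, 3.82]
```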
We can see that, as expected, there is no significant difference between these results and those from Table II regarding the DENSER model. On the other hand, the robustness of the NSGA-Net models appears to be much higher, especially in the case of L2-bounded perturbations. This shows that the choice of data pre-processing should not be neglected when designing networks for scenarios where adversarial attacks may be of concern. In the particular case of NE approaches, one might even consider including this design choice in the evolutionary process. Additionally, works that focus on robustness evaluations should clearly specify the conditions under which the models were trained, including any model-specific pre-processing step. We also evaluate the impact of converting the pixel values back to integers, such that each value is between 0 and 255. We just consider the case in which the attacks operate in the range [0, 1] and any additional pre-processing is applied after the perturbations have been added to the images. Therefore, we multiply each pixel value by 255 and round to the nearest even integer. We then re-apply all pre-processing steps required by the model and report the accuracy on the post-processed examples. Table IV shows the results for the attacks under both distance metrics. For the L∞ attacks, differences are mainly detected when random restarts are incorporated (FGSM-10). The attack success rate slightly deteriorates, but the differences are of less than 0.25% and seem negligible. As far as the L2-bounded attacks are concerned, the largest differences occur with FGM, but these also seem negligible (always less than 0.25%). The success of the attacks is mostly affected by this post-processing procedure when their target is an NSGA-Net model from the NASNet search space.

B. Analyzing Misclassifications through Confusion Matrices
To complement our analysis, we looked into the confusion matrices of each model under different attacks. Even when an attack brings the accuracy of all models to zero (e.g., the L∞-constrained BIM-50 attack), different patterns in their misclassifications can be observed. As shown in Fig. 1a, under an L∞-bounded BIM-50 attack, WRN-28-10 produces misclassifications for each class that are spread out across the remaining classes. Moreover, it seems to favor mainly two classes, bird and cat, with most examples being misclassified as such. Under the L2-constrained BIM-100 attack (Fig. 1b), the misclassifications of classes that represent a means of transportation (airplane, automobile, ship, and truck) are more clustered together. Fig. 2a shows that, under the L∞-constrained BIM-50 attack, the predictions of the DENSER model can be clearly grouped into clusters, with most examples from one class being misclassified into a smaller subset of the other classes than with the baseline model. Images that represent an animal are mainly misclassified as another animal, while images of a means of transportation are misclassified as another vehicle. The confusion matrix of DENSER under the BIM-100 attack constrained by the L2-norm is shown in Fig. 2b. Contrary to the L∞ attack, the BIM-100 attack is unable to decrease the accuracy to zero, and so some images are correctly classified. Notwithstanding, the misclassifications follow a pattern similar to that shown in Fig. 2a. The automobile and the truck classes are the most difficult to attack under this threat model, while it is easier to cause a misclassification of airplane instances. According to Fig. 3a, and similar to the baseline model, most examples are also misclassified as bird and cat with NSGA-M. However, misclassifications of a single class are less spread out between the remaining classes, especially in the case of bird, cat, ship, and truck.
The three NSGA-Net models from the NASNet search space show similar patterns in their misclassifications. The main distinguishing factor is the spread of the misclassifications of each class: NSGA-mB misclassifies the majority of the examples from one class into fewer classes than NSGA-mC (check, for instance, the ship class), and NSGA-mA is in the middle of the spectrum. Similar to NSGA-M, most misclassifications of these three models also fall into the cat class (especially with NSGA-mA). However, the second most predicted class is dog and not bird. The confusion matrices for the BIM-100 attack with L2-bounded perturbations exhibit similar patterns. In comparison with the BIM-50 attack with L∞ constraints, fewer examples are misclassified by the three models from the micro search space as belonging to the cat class, especially in the case of NSGA-mB. With NSGA-M and NSGA-mB, some changes are also observed in the misclassification of examples that originally belong to the ship class.

V. CONCLUSION AND FUTURE WORK
Artificial Neural Networks designed through evolution achieve competitive results with respect to predictive performance, but the study of their adversarial robustness is limited. In this work, we assessed the L∞- and L2-robustness of models found by NeuroEvolution approaches for the CIFAR-10 classification task, under white-box untargeted attacks. No defense against adversarial examples was incorporated in the models or in their training. Our results show that, similar to human-designed networks, the accuracy of these evolved models usually drops to zero (or close to zero). The main exception occurs with the DENSER model, which shows some resistance to L2 attacks, with its accuracy dropping to 18.10% under a PGD attack. We identified distinct patterns in the misclassifications produced by the models: in some cases, the adversarial examples from one class are misclassified into a small subset of classes, while with other models the misclassifications are much more spread out between classes. Furthermore, the choice of data pre-processing techniques must not be neglected when automatically designing CNNs. We have shown that certain procedures can exacerbate the adversarial perturbations before they reach the first layer of the network, this way potentially jeopardizing robustness. However, extending current NE approaches so as to include this design choice in their search is not always straightforward (e.g., NSGA-Net). We plan on studying the L2-robustness of the DENSER model, so as to understand if it can be attributed to some architectural feature. That invaluable knowledge could be leveraged to build robust models. We tried to be as faithful as possible to the original works under analysis, which came with some disadvantages. Namely, each model was trained under slightly different configurations, including distinct data pre-processing approaches. It would be interesting to re-train all the models under the exact same conditions, so as to make sure that the observed differences truly arise from architectural aspects. Although previous work suggests that higher network capacity allows for robustness improvements, analyzing the relationship between the adversarial robustness and the computational complexity of a model still needs further investigation. Since we solely analyzed pre-trained models for CIFAR-10, a clear extension to our work would be to consider other, more complex, datasets. Future work also comprises assessing the adversarial robustness of models found by NE approaches under additional threat models, such as transfer and universal attacks.

Fig. 1. Confusion matrices for the WRN-28-10 model under two attacks.
Fig. 2. Confusion matrices for the DENSER model under two attacks.
Fig. 3. Confusion matrices for the NSGA-Net models under the BIM-50 attack with L∞-bounded perturbations. The results under BIM-100 with L2-bounded perturbations revealed similar patterns.

TABLE I: Overview of the models in terms of number of parameters and accuracy on the clean examples of the CIFAR-10 test set.

Model      Number of Parameters   Clean Accuracy
WRN-28-10  36.48M                 94.78%
DENSER     10.81M                 93.70%
NSGA-M     3.37M                  96.27%
NSGA-mA    1.97M                  97.57%
NSGA-mB    2.20M                  97.78%
NSGA-mC    4.05M                  97.98%

TABLE II: Accuracy on the CIFAR-10 test set when the attacks operate in [0, 1]. The highest reported accuracy under each attack is in bold.

               WRN-28-10  DENSER  NSGA-M  NSGA-mA  NSGA-mB  NSGA-mC
L∞, ε = 8/255
FGSM           28.85%     16.37%  35.08%  52.09%   51.86%   55.06%
FGSM-10        11.03%     6.19%   9.28%   25.02%   22.49%   26.92%
BIM-10         0.02%      0.00%   0.00%   0.16%    0.00%    0.02%
BIM-50         0.00%      0.00%   0.00%   0.00%    0.00%    0.00%
L2, ε = 0.5
FGM            47.61%     44.76%  48.51%  61.34%   60.61%   64.06%
BIM-10         2.01%      30.76%  0.23%   3.04%    0.73%    2.57%
BIM-50         0.16%      24.13%  0.00%   0.26%    0.01%    0.35%
BIM-100        0.09%      21.76%  0.00%   0.12%    0.00%    0.23%
PGD-50-10      0.08%      18.10%  0.00%   0.11%    0.00%    0.21%

TABLE III: Accuracy on the CIFAR-10 test set when all model-specific pre-processing is applied to the original inputs before performing the attacks. The highest reported accuracy for each attack is in bold.

               DENSER  NSGA-M  NSGA-mA  NSGA-mB  NSGA-mC
L∞, ε = 8/255
FGSM           16.37%  46.65%  60.61%   61.57%   63.64%
FGSM-10        6.16%   40.82%  58.41%   58.88%   60.97%
BIM-10         0.00%   2.70%   12.22%   8.36%    12.70%
BIM-50         0.00%   0.81%   6.87%    3.86%    6.96%
L2, ε = 0.5
FGM            44.75%  67.96%  78.11%   77.70%   80.12%
BIM-10         30.77%  25.98%  48.60%   44.80%   49.61%
BIM-50         24.13%  17.57%  41.63%   37.43%   42.04%
BIM-100        21.76%  16.86%  40.72%   36.46%   41.09%
PGD-50-10      18.10%  15.99%  40.10%   35.38%   40.05%

TABLE IV: Accuracy on the CIFAR-10 test set when the attacks operate in [0, 1], but the generated images are post-processed. The highest reported accuracy for each attack is in bold.

               WRN-28-10  DENSER  NSGA-M  NSGA-mA  NSGA-mB  NSGA-mC
L∞, ε = 8/255
FGSM           28.85%     16.38%  35.08%  52.09%   51.86%   55.06%
FGSM-10        11.15%     6.28%   9.45%   25.19%   22.70%   27.13%
BIM-10         0.02%      0.00%   0.00%   0.16%    0.00%    0.02%
BIM-50         0.00%      0.00%   0.00%   0.00%    0.00%    0.00%
L2, ε = 0.5
FGM            47.69%     44.86%  48.62%  61.46%   60.82%   64.24%
BIM-10         2.02%      30.77%  0.23%   3.10%    0.75%    2.67%
BIM-50         0.16%      24.13%  0.00%   0.26%    0.01%    0.38%
BIM-100        0.09%      21.76%  0.00%   0.12%    0.00%    0.24%
PGD-50-10      0.08%      18.10%  0.00%   0.14%    0.01%    0.23%

1 https://github.com/RobustBench/robustbench
2 https://github.com/fillassuncao/denser-models

ACKNOWLEDGMENTS
This work is partially funded by national funds through the FCT - Foundation for Science and Technology, I.P., within the scope of the project CISUC - UID/CEC/00326/2020 and by the European Social Fund, through the Regional Operational Program Centro 2020.
It is also partially supported by the project METRICS (POCI-01-0145-FEDER-032504), co-funded by FCT and by the Fundo Europeu de Desenvolvimento Regional (FEDER) through Portugal 2020 - Programa Operacional Competitividade e Internacionalização (POCI). The first author is partially funded by FCT under the individual grant UI/BD/151047/2021.

REFERENCES
[1] F. Assunção, N. Lourenço, P. Machado, and B. Ribeiro, "DENSER: Deep evolutionary network structured representation," arXiv preprint arXiv:1801.01563, 2018.
[2] H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu, "Hierarchical representations for efficient architecture search," in International Conference on Learning Representations, 2018.
[3] Z. Lu, I. Whalen, V. Boddeti, Y. Dhebar, K. Deb, E. Goodman, and W. Banzhaf, "NSGA-Net: Neural architecture search using multi-objective genetic algorithm," in Proceedings of the Genetic and Evolutionary Computation Conference, 2019, pp. 419-427.
[4] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le, "Regularized evolution for image classifier architecture search," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, 2019, pp. 4780-4789.
[5] M. Suganuma, S. Shirakawa, and T. Nagao, "A genetic programming approach to designing convolutional neural network architectures," in Proceedings of the Genetic and Evolutionary Computation Conference, 2017, pp. 497-504.
[6] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," in International Conference on Learning Representations, 2015. [Online]. Available: http://arxiv.org/abs/1412.6572
[7] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," in International Conference on Learning Representations, 2014. [Online]. Available: http://arxiv.org/abs/1312.6199
[8] N. Carlini and D. A. Wagner, "Towards evaluating the robustness of neural networks," in 2017 IEEE Symposium on Security and Privacy (SP). IEEE Computer Society, 2017, pp. 39-57.
[9] Y. Dong, Q. Fu, X. Yang, T. Pang, H. Su, Z. Xiao, and J. Zhu, "Benchmarking adversarial robustness on image classification," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 318-328.
[10] N. Carlini, A. Athalye, N. Papernot, W. Brendel, J. Rauber, D. Tsipras, I. Goodfellow, A. Madry, and A. Kurakin, "On evaluating adversarial robustness," arXiv preprint arXiv:1902.06705, 2019.
[11] F. Croce, M. Andriushchenko, V. Sehwag, E. Debenedetti, N. Flammarion, M. Chiang, P. Mittal, and M. Hein, "RobustBench: a standardized adversarial robustness benchmark," arXiv preprint arXiv:2010.09670, 2020.
[12] A. Krizhevsky, "Learning multiple layers of features from tiny images," University of Toronto, Tech. Rep., 2009.
[13] A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," arXiv preprint arXiv:1607.02533, 2016.
[14] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, "Towards deep learning models resistant to adversarial attacks," in International Conference on Learning Representations, 2018.
[15] H. Pham, M. Guan, B. Zoph, Q. Le, and J. Dean, "Efficient neural architecture search via parameters sharing," in International Conference on Machine Learning, 2018.
[16] Z. Zhong, J. Yan, W. Wu, J. Shao, and C. Liu, "Practical block-wise neural network architecture generation," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 2423-2432.
[17] B. Zoph and Q. V. Le, "Neural architecture search with reinforcement learning," arXiv preprint arXiv:1611.01578, 2016.
[18] L. Xie and A. Yuille, "Genetic CNN," in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1388-1397.
[19] H. Liu, K. Simonyan, and Y. Yang, "DARTS: Differentiable architecture search," in International Conference on Learning Representations, 2019.
[20] N. Lourenço, F. Assunção, F. B. Pereira, E. Costa, and P. Machado, "Structured grammatical evolution: A dynamic approach," in Handbook of Grammatical Evolution, C. Ryan, M. O'Neill, and J. J. Collins, Eds. Springer, 2018, pp. 137-161.
CollinsSpringerN. Lourenço, F. Assunção, F. B. Pereira, E. Costa, and P. Machado, "Structured grammatical evolution: A dynamic approach," in Handbook of Grammatical Evolution, C. Ryan, M. O'Neill, and J. J. Collins, Eds. Springer, 2018, pp. 137-161. A fast elitist nondominated sorting genetic algorithm for multi-objective optimisation: NSGA-II," in Parallel Problem Solving from Nature -PPSN VI, ser. Lecture Notes in Computer Science. K Deb, S Agrawal, A Pratap, T Meyarivan, M. Schoenauer, K. Deb, G. Rudolph, X. Yao, E. Lutton, J. J. M. Guervós, and H. SchwefelSpringerK. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, "A fast elitist non- dominated sorting genetic algorithm for multi-objective optimisation: NSGA-II," in Parallel Problem Solving from Nature -PPSN VI, ser. Lec- ture Notes in Computer Science, M. Schoenauer, K. Deb, G. Rudolph, X. Yao, E. Lutton, J. J. M. Guervós, and H. Schwefel, Eds., vol. 1917. Springer, 2000, pp. 849-858. Exploiting excessive invariance caused by norm-bounded adversarial robustness. J Jacobsen, J Behrmann, N Carlini, F Tramèr, N Papernot, arXiv:1903.10484arXiv preprintJ. Jacobsen, J. Behrmann, N. Carlini, F. Tramèr, and N. Papernot, "Exploiting excessive invariance caused by norm-bounded adversarial robustness," arXiv preprint arXiv:1903.10484, 2019. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. A Athalye, N Carlini, D Wagner, International Conference on Machine Learning. A. Athalye, N. Carlini, and D. Wagner, "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples," in International Conference on Machine Learning, 2018. Boosting adversarial attacks with momentum. Y Dong, F Liao, T Pang, H Su, J Zhu, X Hu, J Li, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, "Boosting adversarial attacks with momentum," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 9185- 9193. On the importance of initialization and momentum in deep learning. I Sutskever, J Martens, G Dahl, G Hinton, International Conference on Machine Learning. I. Sutskever, J. Martens, G. Dahl, and G. Hinton, "On the importance of initialization and momentum in deep learning," in International Conference on Machine Learning, 2013. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. F Croce, M Hein, International Conference on Machine Learning. F. Croce and M. Hein, "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks," in International Conference on Machine Learning, 2020. Ensemble adversarial training: Attacks and defenses. F Tramèr, A Kurakin, N Papernot, I Goodfellow, D Boneh, P Mcdaniel, International Conference on Learning Representations. F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, "Ensemble adversarial training: Attacks and defenses," in International Conference on Learning Representations, 2018. On adaptive attacks to adversarial example defenses. F Tramèr, N Carlini, W Brendel, A Madry, Conference on Neural Information Processing Systems (NeurIPS). 2020F. Tramèr, N. Carlini, W. Brendel, and A. Madry, "On adaptive attacks to adversarial example defenses," in Conference on Neural Information Processing Systems (NeurIPS), 2020. Benchmarking neural network robustness to common corruptions and perturbations. 
D Hendrycks, T Dietterich, International Conference on Learning Representations. D. Hendrycks and T. Dietterich, "Benchmarking neural network ro- bustness to common corruptions and perturbations," in International Conference on Learning Representations, 2019. Robustness comparison of Vision Transformer and MLP-Mixer to CNNs. P Benz, C Zhang, S Ham, A Karjauv, I S Kweon, CVPR 2021 Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges. 2021P. Benz, C. Zhang, S. Ham, A. Karjauv, and I. S. Kweon, "Robustness comparison of Vision Transformer and MLP-Mixer to CNNs," CVPR 2021 Workshop on Adversarial Machine Learning in Real-World Com- puter Vision Systems and Online Challenges (AML-CV), 2021. An image is worth 16x16 words: Transformers for image recognition at scale. A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, T Unterthiner, M Dehghani, M Minderer, G Heigold, S Gelly, J Uszkoreit, N Houlsby, International Conference on Learning Representations. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, "An image is worth 16x16 words: Trans- formers for image recognition at scale," in International Conference on Learning Representations, 2021. MLP-Mixer: An all-MLP architecture for vision. I O Tolstikhin, N Houlsby, A Kolesnikov, L Beyer, X Zhai, T Unterthiner, J Yung, A Steiner, D Keysers, J Uszkoreit, M Lucic, A Dosovitskiy, arXiv:2105.01601arXiv preprintI. O. Tolstikhin, N. Houlsby, A. Kolesnikov, L. Beyer, X. Zhai, T. Un- terthiner, J. Yung, A. Steiner, D. Keysers, J. Uszkoreit, M. Lucic, and A. Dosovitskiy, "MLP-Mixer: An all-MLP architecture for vision," arXiv preprint arXiv: 2105.01601, 2021. On adversarial robustness: A neural architecture search perspective. C Devaguptapu, D Agarwal, G Mittal, P Gopalani, V N Balasubramanian, Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021. the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021C. Devaguptapu, D. Agarwal, G. Mittal, P. Gopalani, and V. N. Bala- subramanian, "On adversarial robustness: A neural architecture search perspective," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 152-161. When NAS meets robustness: In search of robust architectures against adversarial attacks. M Guo, Y Yang, R Xu, Z Liu, D Lin, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition CVPR. M. Guo, Y. Yang, R. Xu, Z. Liu, and D. Lin, "When NAS meets robustness: In search of robust architectures against adversarial attacks," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recog- nition CVPR, 2020, pp. 628-637. Evolutionary search for adversarially robust neural networks. M Sinn, M Wistuba, B Buesser, M.-I Nicolae, M Tran, Safe Machine Learning workshop at ICLR. M. Sinn, M. Wistuba, B. Buesser, M.-I. Nicolae, and M. Tran, "Evolu- tionary search for adversarially robust neural networks," Safe Machine Learning workshop at ICLR, 2019. Evolving robust neural architectures to defend from adversarial attacks. D V Vargas, S Kotyan, arXiv:1906.11667arXiv preprintD. V. Vargas and S. Kotyan, "Evolving robust neural architectures to defend from adversarial attacks," arXiv preprint arXiv:1906.11667, 2019. Wide residual networks. S Zagoruyko, N Komodakis, BMVC. S. Zagoruyko and N. Komodakis, "Wide residual networks," in BMVC, 2016. [Online]. 
Available: http://arxiv.org/abs/1605.07146 Learning transferable architectures for scalable image recognition. B Zoph, V Vasudevan, J Shlens, Q V Le, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le, "Learning transferable architectures for scalable image recognition," in 2018 IEEE/CVF Con- ference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 8697-8710. Improved regularization of convolutional neural networks with cutout. T Devries, G W Taylor, arXiv:1708.04552arXiv preprintT. DeVries and G. W. Taylor, "Improved regularization of convolutional neural networks with cutout," arXiv preprint arXiv:1708.04552, 2017. . M.-I Nicolae, M Sinn, M N Tran, B Buesser, A Rawat, M Wistuba, V Zantedeschi, N Baracaldo, B Chen, H Ludwig, I Molloy, B Edwards, arXiv:1807.01069Adversarial robustness toolbox v1.2.0," arXiv preprintM.-I. Nicolae, M. Sinn, M. N. Tran, B. Buesser, A. Rawat, M. Wistuba, V. Zantedeschi, N. Baracaldo, B. Chen, H. Ludwig, I. Molloy, and B. Edwards, "Adversarial robustness toolbox v1.2.0," arXiv preprint arXiv:1807.01069, 2018.
[ "https://github.com/ianwhale/nsga-net", "https://github.com/RobustBench/robustbench", "https://github.com/fillassuncao/denser-models" ]
[ "CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training", "CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training" ]
[ "Jianmin Bao [email protected] \nUniversity of Science\nTechnology of China\n", "Dong Chen \nMicrosoft Research\n\n", "Houqiang Li \nUniversity of Science\nTechnology of China\n\nMicrosoft Research\n\n", "Gang Hua [email protected] \nMicrosoft Research\n\n" ]
[ "University of Science\nTechnology of China", "Microsoft Research\n", "University of Science\nTechnology of China", "Microsoft Research\n", "Microsoft Research\n" ]
[]
We present variational generative adversarial networks, a general learning framework that combines a variational auto-encoder with a generative adversarial network, for synthesizing images in fine-grained categories, such as faces of a specific person or objects in a category. Our approach models an image as a composition of label and latent attributes in a probabilistic model. By varying the fine-grained category label fed into the resulting generative model, we can generate images in a specific category with randomly drawn values on a latent attribute vector. Our approach has two novel aspects. First, we adopt a cross entropy loss for the discriminative and classifier network, but a mean discrepancy objective for the generative network. This kind of asymmetric loss function makes the GAN training more stable. Second, we adopt an encoder network to learn the relationship between the latent space and the real image space, and use pairwise feature matching to keep the structure of generated images. We experiment with natural images of faces, flowers, and birds, and demonstrate that the proposed models are capable of generating realistic and diverse samples with fine-grained category labels. We further show that our models can be applied to other tasks, such as image inpainting, super-resolution, and data augmentation for training better face recognition models.
10.1109/iccv.2017.299
[ "https://arxiv.org/pdf/1703.10155v2.pdf" ]
206,771,102
1703.10155
e6f2adec83ba6a945f7659ff3a9bdd0c39969123
CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training

Jianmin Bao (University of Science and Technology of China, [email protected]), Dong Chen (Microsoft Research), Houqiang Li (University of Science and Technology of China), Gang Hua (Microsoft Research, [email protected])

Abstract. We present variational generative adversarial networks, a general learning framework that combines a variational auto-encoder with a generative adversarial network, for synthesizing images in fine-grained categories, such as faces of a specific person or objects in a category. Our approach models an image as a composition of label and latent attributes in a probabilistic model. By varying the fine-grained category label fed into the resulting generative model, we can generate images in a specific category with randomly drawn values on a latent attribute vector. Our approach has two novel aspects. First, we adopt a cross entropy loss for the discriminative and classifier network, but a mean discrepancy objective for the generative network. This kind of asymmetric loss function makes the GAN training more stable. Second, we adopt an encoder network to learn the relationship between the latent space and the real image space, and use pairwise feature matching to keep the structure of generated images. We experiment with natural images of faces, flowers, and birds, and demonstrate that the proposed models are capable of generating realistic and diverse samples with fine-grained category labels. We further show that our models can be applied to other tasks, such as image inpainting, super-resolution, and data augmentation for training better face recognition models.

1. Introduction

Building effective generative models of natural images is one of the key problems in computer vision. It aims to generate diverse realistic images by varying some latent parameters according to the underlying natural image distributions. A desired generative model therefore needs to capture the underlying data distribution. This is often a very difficult task, since a collection of image samples may lie on a very complex manifold. Nevertheless, recent advances in deep convolutional neural networks have spawned a series of deep generative models [14,12,8,31,29,34,15,4,33,6] that have made tremendous progress, largely due to the capability of deep networks in learning representations.

Building on top of the success of these recent works, we want to go one step further and generate images of fine-grained object categories. For example, we want to be able to synthesize images for a specific identity (Figure 1), or produce a new image of a specified species of flowers or birds, and so on.

Figure 1. Synthesized images using our CVAE-GAN model at high resolution (128x128) for different classes. The generated samples are realistic and diverse within a class.

Inspired by CVAE [34] and VAE/GAN [15], we propose a general learning framework that combines a variational auto-encoder with a generative adversarial network under a conditioned generative process to tackle this problem. However, we find this naïve combination insufficient in practice. The results from VAE are usually blurry, and the discriminator can easily classify them as "fake"; even when they look remarkably good for face images, the gradient vanishing problem still exists. Thus, the generated images are very similar to the results from using VAE alone. In this paper, we propose a new objective for the generator.
Instead of using the same cross entropy loss as the discriminator network, the new objective requires the generator to generate data that minimize the $\ell_2$ distance of the mean feature to the real data. For multi-class image generation, the generated samples of one category also need to match the average feature of real data of that category, since the feature distance and the separability are positively correlated. This solves the gradient vanishing problem to a certain extent. Such an asymmetric loss function also partially helps prevent the mode collapse problem, in which all outputs move toward a single point, making the training of GAN more stable.

Although using mean feature matching reduces the chance of mode collapse, it does not completely solve the problem. Once mode collapse occurs, gradient descent is unable to separate identical outputs. To keep the diversity of generated samples, we take advantage of the combination of VAE and GAN. We use an encoder network to map the real image to the latent vector. Then the generator is required to reconstruct the raw pixels and match the features of the original images for a given latent vector. In this way, we explicitly set up the relationship between the latent space and the real image space. Because of the existence of these anchor points, the generator is enforced to emit diverse samples. Moreover, the pixel reconstruction loss is also helpful for maintaining structure, such as a straight line or a facial structure in an image.

As shown in Figure 2 (g), our framework consists of four parts: 1) the encoder network E, which maps the data sample x to a latent representation z; 2) the generative network G, which generates the image x' given a latent vector; 3) the discriminative network D, which distinguishes real/fake images; and 4) the classifier network C, which measures the class probability of the data. These four parts are seamlessly cascaded together, and the whole pipeline is trained end-to-end. We call our approach CVAE-GAN.

Once the CVAE-GAN is trained, it can be used in different applications, e.g., image generation, image inpainting, and attribute morphing. Our approach estimates a good representation of the input image, and the generated image appears more realistic. We show that it outperforms CVAE, CGAN, and other state-of-the-art methods. Compared with GAN, the proposed framework is much easier to train and converges faster and more stably in the training stage. In our experiments, we further show that the images synthesized from our models can be applied to other tasks, such as data augmentation for training better face recognition models.

2. Related work

Conventional wisdom and early research on generative models, including Principle Component Analysis (PCA) [40], Independent Component Analysis (ICA) [10], and the Gaussian Mixture Model (GMM) [46,27,37], all assume a simple formation of data. They have difficulty modeling complex patterns of irregular distributions. Later works, such as the Hidden Markov Model (HMM) [35], Markov Random Field (MRF) [19], and restricted Boltzmann machines (RBMs) [9,32], discriminatively train generative models [39], limiting their results to texture patches, digit images, or well aligned faces, due to a lack of effective feature representations. There have been many recent developments of deep generative models [14,12,8,31,29,15,4,33,6].
Since deep hierarchical architectures allow them to capture complex structures in the data, all these methods show promising results in generating natural images that are far more realistic than those of conventional generative models. Among them are three main themes: the Variational Auto-encoder (VAE) [12,31], the Generative Adversarial Network (GAN) [8,29,33], and Autoregression [14].

Figure 2. Illustration of the structure of (a) VAE [12,31], (b) GAN [8], (c) VAE/GAN [15], (d) CVAE [34], (e) CGAN [18], (f) PPGN [23] and (g) the proposed CVAE-GAN. Here x and x' are the input and generated image; E, G, C, D are the encoder, generative, classification, and discriminative networks, respectively; z is the latent vector; y is a binary output which represents real/synthesized image; c is the condition, such as an attribute or class label.

VAE [12,31] pairs a differentiable encoder network with a decoder/generative network. A disadvantage of VAE is that, because of the injected noise and imperfect element-wise measures such as the squared error, the generated samples are often blurry. The Generative Adversarial Network (GAN) [8,29,33] is another popular generative model. It simultaneously trains two models: a generative model to synthesize samples, and a discriminative model to differentiate between natural and synthesized samples. However, the GAN model is hard to converge in the training stage, and the samples generated from GAN are often far from natural. Recently, many works have tried to improve the quality of the generated samples. For example, the Wasserstein GAN (WGAN) [2] uses the Earth Mover Distance as an objective for training GANs, and McGAN [20] uses mean and covariance feature matching. These methods need to limit the range of the parameters of the discriminator, which decreases its discriminative power. Loss-Sensitive GAN [28] learns a loss function which can quantify the quality of generated samples and uses this loss function to generate high-quality images. There are also methods which try to combine GAN and VAE, e.g., VAE/GAN [15] and adversarial autoencoders [17]. They are closely related to and partly inspired our work.

VAEs and GANs can also be trained to conduct conditional generation, e.g., CVAE [34] and CGAN [18]. By introducing additional conditionality, they can handle probabilistic one-to-many mapping problems. Recently there have been many interesting works based on CVAE and CGAN, including conditional face generation [7], Attribute2Image [47], text to image synthesis [30], forecasting from static images [42], and conditional image synthesis [25]. All of them achieve impressive results. Generative ConvNet [44] demonstrates that a generative model can be derived from the commonly used discriminative ConvNet. Dosovitskiy et al. [5] and Nguyen et al. [22] introduce methods that generate high quality images from features extracted from a trained classification model. PPGN [23] performs exceptionally well in generating samples by using gradient ascent and a prior on the latent space of a generator.

Autoregression [14] follows a different idea. It uses autoregressive connections to model images pixel by pixel. Its two variants, PixelRNN [41] and PixelCNN [26], also produce excellent samples.

Our model differs from all these models. As illustrated in Figure 2, we compare the structure of the proposed CVAE-GAN with all these models.
Besides the difference in structure, more importantly, we take advantage of both statistic and pairwise feature matching to make the training process converge faster and more stably.

3. Our Formulation of CVAE-GAN

In this section, we introduce the proposed CVAE-GAN networks. As shown in Figure 3, our proposed method contains four parts: 1) the encoder network E; 2) the generative network G; 3) the discriminative network D; and 4) the classification network C.

Figure 3. Illustration of our network structure. Our model contains four parts: 1) the encoder network E; 2) the generative network G; 3) the classification network C; and 4) the discriminative network D. Please refer to Section 3 for details.

The function of networks E and G is the same as that in the conditional variational auto-encoder (CVAE) [34]. The encoder network E maps the data sample x to a latent representation z through a learned distribution P(z|x, c), where c is the category of the data. The generative network G generates the image x' by sampling from a learned distribution P(x|z, c). The function of networks G and D is the same as that in the generative adversarial network (GAN) [8]. The network G tries to learn the real data distribution from the gradients given by the discriminative network D, which learns to distinguish between "real" and "fake" samples. The function of network C is to measure the posterior P(c|x).

However, the naïve combination of VAE and GAN is insufficient. Recent work [1] shows that the training of GAN suffers from gradient vanishing or instability problems in network G. Therefore, we keep the training process of networks E, D, and C the same as in the original VAE [12] and GAN [8], and propose a new mean feature matching objective for the generative network G to improve the stability of the original GAN. Even with the mean feature matching objective, there is still some risk of mode collapse. So we use the encoder network E and the generative network G to obtain a mapping from real samples x to the synthesized samples x'. By using the pixel-wise $\ell_2$ loss and pairwise feature matching, the generative model is enforced to emit diverse samples and to generate structure-preserving samples.

In the following sections, we begin by describing the mean feature matching based GAN (Section 3.1). Then we show that mean feature matching can also be used in conditional image generation tasks (Section 3.2). After that, we introduce pairwise feature matching using an additional encoder network (Section 3.3). Finally, we analyse the objective of the proposed method and provide the implementation details of the training pipeline (Section 3.4).

3.1 Mean feature matching based GAN

In traditional GANs, the generator G and a discriminator D compete in a two-player minimax game. The discriminator tries to distinguish real training data from synthesized data, and the generator tries to fool the discriminator. Concretely, the network D tries to minimize the loss function

$$L_D = -\mathbb{E}_{x\sim P_r}[\log D(x)] - \mathbb{E}_{z\sim P_z}[\log(1 - D(G(z)))], \qquad (1)$$

while network G tries to minimize $L_{GD} = -\mathbb{E}_{z\sim P_z}[\log D(G(z))]$.

In practice, the distributions of "real" and "fake" images may not overlap with each other, especially at the early stage of the training process. Hence, the discriminative network D can separate them perfectly. That is, we always have $D(x) \to 1$ and $D(x') \to 0$, where $x' = G(z)$ is the generated image. Therefore, when updating network G, the gradient $\partial L_{GD}/\partial D(x') \to -\infty$, so the training process of network G will be unstable. Recent works [1,2] also theoretically show that training GAN often has to deal with the unstable gradient of G.
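To make the instability concrete, the following is a minimal PyTorch sketch (not the authors' code) of the two losses in Eq. (1); `D`, `G`, `real`, and `noise` are placeholder objects.

```python
import torch

def gan_losses(D, G, real, noise, eps=1e-8):
    fake = G(noise)
    # L_D = -E[log D(x)] - E[log(1 - D(G(z)))]  (Eq. 1)
    loss_D = -(torch.log(D(real) + eps).mean()
               + torch.log(1.0 - D(fake.detach()) + eps).mean())
    # L_GD = -E[log D(G(z))]; when D separates real/fake perfectly,
    # D(fake) -> 0 and the gradient w.r.t. D(fake) becomes unbounded,
    # which is the instability discussed above.
    loss_GD = -torch.log(D(fake) + eps).mean()
    return loss_D, loss_GD
```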
To address this problem, we propose using a mean feature matching objective for the generator. The objective requires the center of the features of the synthesized samples to match the center of the features of the real samples. Let $f_D(x)$ denote features on an intermediate layer of the discriminator; G then tries to minimize the loss function

$$L_{GD} = \frac{1}{2}\left\| \mathbb{E}_{x\sim P_r} f_D(x) - \mathbb{E}_{z\sim P_z} f_D(G(z)) \right\|_2^2. \qquad (2)$$

In our experiments, for simplicity, we choose the input of the last Fully Connected (FC) layer of network D as the feature $f_D$. Combining the features of multiple layers could marginally improve the converging speed. In the training stage, we estimate the mean feature using the data in a minibatch. We also use moving historical averages to make it more stable. Therefore, in the training stage, we update network D using Eq. 1 and update network G using Eq. 2. Using this asymmetric loss for training GAN has the following three advantages: 1) since Eq. 2 increases with the separability, the $\ell_2$ loss on the feature center solves the gradient vanishing problem; 2) when the generated images are good enough, the mean feature matching loss becomes zero, making the training more stable; 3) compared with WGAN [2], we do not need to clip the parameters, so the discriminative power of network D can be kept.

3.2 Mean Feature Matching for Conditional Image Generation

In this section, we introduce mean feature matching for conditional image generation. Supposing we have a set of data belonging to K categories, we use the network C to measure whether an image belongs to a specific fine-grained category. Here we use a standard method for classification. The network C takes x as input and outputs a K-dimensional vector, which is then turned into class probabilities using a softmax function. The output of each entry represents the posterior probability P(c|x). In the training stage, the network C tries to minimize the softmax loss

$$L_C = -\mathbb{E}_{x\sim P_r}[\log P(c|x)]. \qquad (3)$$

For the network G, if we used a similar softmax loss function as in Eq. 3, it would suffer from the same gradient instability problem as described in [1]. Therefore, we propose using the mean feature matching objective for the generative network G. Let $f_C(x)$ denote features on an intermediate layer of the classification network; then G tries to minimize

$$L_{GC} = \frac{1}{2}\sum_{c}\left\| \mathbb{E}_{x\sim P_r} f_C(x) - \mathbb{E}_{z\sim P_z} f_C(G(z, c)) \right\|_2^2. \qquad (4)$$

Here, we choose the input of the last FC layer of network C as the feature for simplicity. We also tried combining features of multiple layers; it only marginally improves the ability of network G to preserve the identity. Since there are only a few samples belonging to the same category in a minibatch, it is necessary to use moving averages of features for both real and generated samples.
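A minimal sketch of how the mean feature matching objective of Eq. (2) might look in PyTorch; the feature dimension, the momentum value, and the class name are our assumptions, not taken from the paper.

```python
import torch

class MeanFeatureMatching:
    """Matches the minibatch mean of fake features to a moving average
    of the real feature center, as in Eq. (2)."""
    def __init__(self, feat_dim, momentum=0.9):
        self.center_real = torch.zeros(feat_dim)
        self.m = momentum

    def __call__(self, feat_real, feat_fake):
        # Moving historical average of the real feature center; detached
        # because only G is updated through this loss.
        batch_center = feat_real.mean(dim=0).detach()
        self.center_real = self.m * self.center_real + (1 - self.m) * batch_center
        # 1/2 || E f(x) - E f(G(z)) ||_2^2, estimated on the minibatch.
        return 0.5 * (self.center_real - feat_fake.mean(dim=0)).pow(2).sum()
```

For the conditional loss of Eq. (4), one such matcher would be kept per class c (on the features $f_C$), and the per-class terms summed.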
3.3 Pairwise Feature Matching

Although using mean feature matching can prevent all outputs from moving toward a single point, thus reducing the likelihood of mode collapse, it does not completely solve this problem. Once mode collapse occurs, the generative network outputs the same images for different latent vectors, so gradient descent will not be able to separate these identical outputs. Moreover, despite the generated samples and real samples having the same feature center, they may have different distributions.

In order to generate diverse samples, DCGAN [29] uses Batch Normalization, McGAN [20] uses both mean and covariance feature statistics, and Salimans et al. [33] use minibatch discrimination. They are all based on using multiple generated examples. Different from these methods, we add an encoder network E to obtain a mapping from the real image x to the latent space z. Therefore, we explicitly set up the relationship between the latent space and the real image space. Similar to VAE, for each sample the encoder network outputs the mean and (log) covariance of the latent vector, i.e., $\mu$ and $\epsilon$. We use the KL loss to reduce the gap between the prior P(z) and the proposal distribution, i.e.,

$$L_{KL} = \frac{1}{2}\left( \mu^T\mu + \mathrm{sum}\big(\exp(\epsilon) - \epsilon - 1\big) \right). \qquad (5)$$

We can then sample the latent vector $z = \mu + r \odot \exp(\epsilon)$, where $r \sim \mathcal{N}(0, I)$ is a random vector and $\odot$ represents element-wise multiplication. After obtaining a mapping from x to z, we obtain the generated image x' with network G. Then, we add an $\ell_2$ reconstruction loss and a pairwise feature matching loss between x and x',

$$L_G = \frac{1}{2}\left( \|x - x'\|_2^2 + \|f_D(x) - f_D(x')\|_2^2 + \|f_C(x) - f_C(x')\|_2^2 \right), \qquad (6)$$

where $f_D$ and $f_C$ are the features of an intermediate layer of the discriminative network D and the classification network C, respectively.

3.4 Objective of CVAE-GAN

The goal of our approach is to minimize the following loss function:

$$L = L_D + L_C + \lambda_1 L_{KL} + \lambda_2 L_G + \lambda_3 L_{GD} + \lambda_4 L_{GC}, \qquad (7)$$

where the exact forms of each of the terms are presented in Eqs. 1-6. Every term of the above formula is meaningful. $L_{KL}$ is only related to the encoder network E. It represents whether the distribution of the latent vector is as expected. $L_G$, $L_{GD}$ and $L_{GC}$ are related to the generative network G. They represent whether the synthesized image is similar to the input training sample, to the real images, and to other samples within the same category, respectively. $L_C$ is related to the classification network C and represents the capability of the network to classify images from different categories, and $L_D$ is related to the discriminative network and represents how good the network is at distinguishing between real/synthesized images. All these objectives are complementary to each other, and ultimately enable our algorithm to obtain superior results. The whole training procedure is described in Algorithm 1. In our experiments, we empirically set $\lambda_1 = 3$, $\lambda_2 = 1$, $\lambda_3 = 10^{-3}$ and $\lambda_4 = 10^{-3}$.
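As an illustration, here is a sketch of the reparameterization step below Eq. (5) and of assembling the combined objective of Eq. (7); the function names are ours, and in practice each network descends only its own terms (see Algorithm 1).

```python
import torch

def sample_latent(mu, eps):
    # z = mu + r (element-wise *) exp(eps), with r ~ N(0, I)
    r = torch.randn_like(mu)
    return mu + r * torch.exp(eps)

def kl_loss(mu, eps):
    # Eq. (5): 1/2 * (mu^T mu + sum(exp(eps) - eps - 1))
    return 0.5 * ((mu * mu).sum() + (torch.exp(eps) - eps - 1.0).sum())

def total_loss(L_D, L_C, L_KL, L_G, L_GD, L_GC,
               lam1=3.0, lam2=1.0, lam3=1e-3, lam4=1e-3):
    # Eq. (7) with the empirically chosen weights from Section 3.4.
    return L_D + L_C + lam1 * L_KL + lam2 * L_G + lam3 * L_GD + lam4 * L_GC
```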
4. Analysis of Toy Example

In this section, we demonstrate the benefits of the mean feature matching based GAN with a toy example. We assume that we have a real data distribution which is a "ring", as shown in Figure 4(a). The center of the ring is set to (100, 100), such that it is far from the generated distribution at the beginning. We compare the traditional GAN, WGAN, and the mean feature matching based GAN introduced in Section 3.1 in learning the real data distribution. The three compared models share the same settings. Generator G is an MLP with 3 hidden layers of 32, 64, and 64 units, respectively. Discriminator D is also an MLP with 3 hidden layers of 32, 64, and 64 units, respectively. We use RMSProp with a fixed learning rate of 0.00005 for all methods. We trained each model for 2M iterations, until they all converged. The generated samples of each model at different iterations are plotted in Figure 4.

Figure 4. Results on a toy example for different generative models. The blue dots are the real points, the red dots are the generated points. a) The real data distribution, which is like a "ring". b) The points generated by the traditional GAN, WGAN, and the mean feature matching GAN at different iterations.

From the results we can observe that: 1) For the traditional GAN (first row in Figure 4(b)), the generated samples only lie in a limited area of the real data distribution, which is known as the mode collapse problem. This problem persists throughout the training process. 2) WGAN (second row in Figure 4(b)) cannot learn the real data distribution at early iterations; we think this problem is caused by the weight clamping trick, which weakens D's ability to distinguish between real and fake samples. We also tried varying the clamp values to accelerate the training process, and found that if the value is too small, it causes a gradient vanishing problem, while if it is too large, the network diverges. 3) The third row shows the results of the proposed feature matching based GAN. It correctly learns the real data distribution the fastest.

5. Experiments

In this section, we use experiments to validate the effectiveness of the proposed method. We evaluate our model on three datasets: the FaceScrub [21], 102 Category Flower [24], and CUB-200 [43] datasets. These datasets contain three completely different kinds of objects: human faces, flowers, and birds, respectively. The sizes of input and synthesized images are 128x128 for all experiments.

For the FaceScrub dataset, we first detect the facial region with the JDA face detector [3], and then locate five facial landmarks (two eyes, nose tip and two mouth corners) with SDM [45]. After that, we use a similarity transformation based on the facial landmarks to align the faces to a canonical position. Finally, we crop a 128x128 face region centered around the nose tip. For the 102 Category Flower dataset, we tightly crop a rectangular region based on the ground-truth mask which contains the flower, and then resize it to 128x128. For the CUB-200 dataset, we just use the original images from the dataset.

Figure 5. Comparison of randomly generated samples from different methods on the FaceScrub [21], 102 Category Flower [24] and CUB-200 [43] datasets. a) 9 random real images from one category. b) Results from CVAE, which are blurry and do not preserve the category identity. c) Results from the traditional CGAN, which lose diversity and structure information. d) Results from our mean feature matching CGAN, showing diverse results but also a loss of structure information. e) Results from our CVAE-GAN, which are realistic, diverse, and category-preserving.

In our experiments, the encoder network E is a GoogleNet [36]; the category information and the image are merged at the last FC layer of the E network. The G network consists of 2 fully-connected layers, followed by 6 deconv layers with 2-by-2 upsampling. The convolution layers have 256, 256, 128, 92, 64 and 3 channels with filter sizes of 3x3, 3x3, 5x5, 5x5, 5x5, 5x5. For the D network, we use the same network as DCGAN [29]. For the C network, we use an Alexnet [13] structure with the input changed to 128x128. We fix the latent vector dimension to 256 and find this configuration sufficient for generating images. A batch normalization layer [11] is also applied after each convolution layer. The model is implemented using the deep learning toolbox Torch.
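A possible PyTorch reading of the generator described above is sketched below. The 2 FC layers, the 6 upsampling deconv stages, and the channel/filter sizes follow the text; the FC widths, the 512x2x2 reshape, and the activation choices are our assumptions, not stated in the paper.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=256, n_classes=1000):
        super().__init__()
        # Two FC layers map latent vector + one-hot class to a 512x2x2 map
        # (the hidden width 1024 and the 512x2x2 shape are assumptions).
        self.fc = nn.Sequential(
            nn.Linear(z_dim + n_classes, 1024), nn.ReLU(),
            nn.Linear(1024, 512 * 2 * 2), nn.ReLU(),
        )
        chans = [512, 256, 256, 128, 92, 64, 3]   # channel counts from the text
        ksize = [3, 3, 5, 5, 5, 5]                # filter sizes from the text
        layers = []
        for i in range(6):
            # 2x2 upsampling per stage: 2 -> 4 -> ... -> 128 pixels.
            layers += [nn.Upsample(scale_factor=2),
                       nn.Conv2d(chans[i], chans[i + 1], ksize[i],
                                 padding=ksize[i] // 2)]
            if i < 5:
                layers += [nn.BatchNorm2d(chans[i + 1]), nn.ReLU()]
        layers += [nn.Tanh()]
        self.deconv = nn.Sequential(*layers)

    def forward(self, z, c_onehot):
        h = self.fc(torch.cat([z, c_onehot], dim=1))
        return self.deconv(h.view(-1, 512, 2, 2))  # -> 3 x 128 x 128
```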
5.1 Visualization comparison with other models

In this experiment, we compare the proposed mean feature matching based CGAN introduced in Section 3.2 (FM-CGAN) and the CVAE-GAN model with other generative models for image synthesis of fine-grained images. In order to compare each method fairly, we use the same network structure and the same training data for all methods. All networks are trained from scratch. In the testing stage, the network architectures are the same: all three methods only use network G to generate images. Therefore, although our approach has more parameters in the training stage, we believe this comparison is fair.

We conduct experiments on three datasets: the FaceScrub, 102 Category Flower and CUB-200 datasets. We perform category-conditioned image generation for all methods. For each dataset, all methods are trained with all the data in that dataset. In the test stage, we first randomly choose a category c, and then randomly generate samples of that category by sampling the latent vector z ~ N(0, I). For evaluation, we visualize the samples generated by all methods.

The comparison results are presented in Figure 5. All images are randomly selected without any personal bias. We observe that images generated by CVAE are often blurry. For the traditional CGAN, the variation within a category is very small because of mode collapse. For FM-CGAN, we observe clear images with well preserved identities, but some images lose the structure of the object, such as the shape of the face. On the other hand, images generated by the proposed CVAE-GAN model look realistic and clear, and are non-trivially different from each other, especially in view-point and background color. Our model is also able to keep the identity information. This shows the strength of the proposed CVAE-GAN method.

Algorithm 1. The training pipeline of the proposed CVAE-GAN algorithm.
Require: m, the batch size; n, the class number; θ_E, θ_G, θ_D, θ_C, the initial parameters of networks E, G, D, C; λ_1 = 3, λ_2 = 1, λ_3 = 10^-3, λ_4 = 10^-3.
1: while θ_G has not converged do
2:   Sample {x_r, c_r} ~ P_r, a batch from the real data;
3:   L_C <- -log(P(c_r|x_r))
4:   z <- E(x_r, c_r)
5:   L_KL <- KL(q(z|x_r, c_r) || P_z)
6:   x_f <- G(z, c_r)
7:   Sample z_p ~ P_z, a batch of random noise; sample c_p, a batch of random classes;
8:   x_p <- G(z_p, c_p)
9:   L_D <- -log(D(x_r)) - log(1 - D(x_f)) - log(1 - D(x_p))
10:  Calculate the feature centers f_D(x_r) of the real samples and f_D(x_p) of the generated samples using the moving average method;
11:  L_GD <- 1/2 ||f_D(x_r) - f_D(x_p)||_2^2
12:  Calculate each class c_i feature center f_C^{c_i}(x_r) for x_r and f_C^{c_i}(x_p) for x_p using the moving average method;
13:  L_GC <- 1/2 Σ_{c_i} ||f_C^{c_i}(x_r) - f_C^{c_i}(x_p)||_2^2
14:  L_G <- 1/2 (||x_r - x_f||_2^2 + ||f_D(x_r) - f_D(x_f)||_2^2 + ||f_C(x_r) - f_C(x_f)||_2^2)
15:  θ_C <- θ_C - ∇_{θ_C}(L_C)
16:  θ_D <- θ_D - ∇_{θ_D}(L_D)
17:  θ_G <- θ_G - ∇_{θ_G}(λ_2 L_G + λ_3 L_GD + λ_4 L_GC)
18:  θ_E <- θ_E - ∇_{θ_E}(λ_1 L_KL + λ_2 L_G)
19: end while
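The asymmetric update scheme of Algorithm 1 (lines 15-18) could be wired up as in the following sketch; `build_losses` is a hypothetical closure that recomputes the forward pass of the current minibatch and returns the loss terms of Eqs. (1)-(6).

```python
def train_step(opt_E, opt_G, opt_D, opt_C, build_losses,
               lam1=3.0, lam2=1.0, lam3=1e-3, lam4=1e-3):
    # Each network is updated only with the loss terms assigned to it in
    # Algorithm 1; recomputing the losses before each update keeps this
    # sketch simple (the paper computes them once per iteration).
    L = build_losses()
    opt_C.zero_grad(); L["C"].backward(); opt_C.step()   # line 15
    L = build_losses()
    opt_D.zero_grad(); L["D"].backward(); opt_D.step()   # line 16
    L = build_losses()
    opt_G.zero_grad()
    (lam2 * L["G"] + lam3 * L["GD"] + lam4 * L["GC"]).backward()
    opt_G.step()                                         # line 17
    L = build_losses()
    opt_E.zero_grad()
    (lam1 * L["KL"] + lam2 * L["G"]).backward()
    opt_E.step()                                         # line 18
```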
5.2 Quantitative Comparison

Evaluating the quality of a synthesized image is challenging due to the variety of probabilistic criteria [38]. We attempt to measure the generative models on three criteria: discriminability, diversity and realism. We use face images for this experiment. First, we randomly generate 53k samples (100 for each class) from the CVAE, CGAN, FM-CGAN, and CVAE-GAN models for evaluation.

To measure discriminability, we use a face classification network pre-trained on the real data; here we use GoogleNet [36]. With this trained model, we evaluate the top-1 accuracy of the generated samples from each method. The results are shown in Table 1. Our model achieves the best top-1 accuracy, by a big gap to the other generative models. This demonstrates the effectiveness of the proposed method.

Following the method in [33], we use the Inception Score to evaluate the realism and diversity of generated samples. We train a classification model on the CASIA [48] dataset and adopt $\exp(\mathbb{E}_x KL(p(y|x) \,\|\, p(y)))$ as the metric to measure the realism and diversity of the generative models, where p(y|x) represents the posterior probability of each class given a generated sample. Images that contain meaningful objects should have a conditional label distribution p(y|x) with low entropy. Moreover, if the model generates diverse images, the marginal $p(y) = \int p(y|G(z))\,dz$ should have high entropy. A larger score means the generator can produce more realistic and diverse images. The results are shown in Table 1. Our proposed CVAE-GAN and FM-CGAN achieve better scores than the other models, and these scores are also very close to those of the real data.

5.3 Attributes Morphing

In this part, we validate that the attributes of the generated images change continuously with the latent vector. We call this phenomenon attribute morphing. We again test our model on the FaceScrub, CUB-200 and 102 Category Flower datasets. We first select a pair of images x_1 and x_2 in the same category, and then extract their latent vectors z_1 and z_2 using the encoder network E. Finally, we obtain a series of latent vectors z by linear interpolation, i.e., $z = \alpha z_1 + (1 - \alpha) z_2$, $\alpha \in [0, 1]$. Figure 6 shows the results of attribute morphing. In each row, the attribute, such as pose, emotion, color, or flower number, gradually changes from left to right.

Figure 6. Results of attribute morphing.

5.4 Image Inpainting

In this part, we show that our model can also be applied to image inpainting. We first randomly corrupt a 50x50 patch of an original 128x128 image x (Figure 7b), and then feed it to the E network to obtain a latent vector z. We then synthesize an image x' by G(z, c), where c is the class label, and update the image by the following equation:

$$\hat{x} = M \odot x' + (1 - M) \odot x, \qquad (8)$$

where M is the binary mask for the corrupted patch and $\odot$ denotes the element-wise product, so $(1 - M) \odot x$ is the uncorrupted area of the original image. The inpainting results are shown in Figure 7(c). We should emphasize that all input images were downloaded from websites, and none of them belongs to the training data. We can iteratively feed the resulting images into the model to obtain better results, as shown in Figure 7(d,e).

Figure 7. Results of image inpainting using our proposed model. a) Original images. b) Masked images. c)-e) CVAE-GAN-1, CVAE-GAN-5, and CVAE-GAN-10, where CVAE-GAN-1 ~ 10 shows the results of iterations 1 ~ 10.
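A sketch of the iterative inpainting update around Eq. (8); `E` and `G` stand for the trained encoder and generator, and the default loop count mirrors the CVAE-GAN-1 ~ 10 results in Figure 7.

```python
import torch

def inpaint(x, mask, c, E, G, iters=10):
    # x: corrupted 128x128 image; mask: 1 on the corrupted 50x50 patch;
    # c: class label of the image.
    for _ in range(iters):
        z = E(x, c)                          # latent vector of current image
        x_syn = G(z, c)                      # full synthesized image
        x = mask * x_syn + (1.0 - mask) * x  # Eq. (8): keep uncorrupted pixels
    return x
```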
5.5 Comparing Different Combinations of Losses

In our model, we propose using pairwise feature matching at the image pixel level and at the feature level in the classification network C and the discriminative network D to update the network G. To understand the effects of each loss component, we separate $L_G + L_{GD} + L_{GC}$ into three parts: $L_G(img) + L_G(D) + L_G(C)$, where $L_G(img)$ is the $\ell_2$ distance at the pixel level of the image, $L_G(D)$ is the $\ell_2$ distance at the feature level in the discriminative network D, and $L_G(C)$ is the $\ell_2$ distance at the feature level in the classification network C. We repeat the training of the CVAE-GAN model with the same settings but with different combinations of the losses $L_G(img)$, $L_G(D)$, and $L_G(C)$, and compare the quality of the reconstructed samples. As shown in Figure 8, we find that removing the adversarial loss $L_G(D)$ causes the model to generate blurry images; removing the pixel level reconstruction loss $L_G(img)$ causes images to lose details; and removing the feature level loss $L_G(C)$ in the classification network C makes the generated samples lose category information. With all the losses combined, our model produces the best results.

Figure 8. Visualization comparison between different generators G, each trained with a different combination of losses.

5.6 CVAE-GAN for Data Augmentation

We further show that the images synthesized from our model can be used as data augmentation for training a better face recognition model. We use the FaceScrub dataset as training data, and test on the LFW [16] dataset. We experiment with two data augmentation strategies: 1) generating more images for existing identities in the training dataset; 2) generating new identities by mixing different identities. For 1), we randomly generate about 200 images per person, totaling 100k images. For 2), we create 5k new identities by randomly mixing the labels of three different existing identities, and generate 100 images for each new identity. For both strategies, the generated images are combined with the FaceScrub dataset to train a face recognition model. In the testing stage, we directly use the cosine similarity of features to measure the similarity between two faces.

Table 2. Results of face data augmentation.
  Method                            Training Data   Accuracy
  no data augmentation              80K             91.87%
  existing identities augmentation  80K + 100K      92.77%
  5k new identities augmentation    80K + 500K      92.98%

In Table 2, we compare the face verification accuracy on the LFW dataset with and without additional synthesized faces. With the data augmentation of new identities, we achieve about 1.0% improvement in accuracy compared with no augmentation. This demonstrates that our generative network has a certain extrapolation ability.

6. Conclusion

In this paper, we propose a new CVAE-GAN model for fine-grained category image generation. The superior performance on three different datasets demonstrates the ability to generate various kinds of objects. The proposed method can support a wide variety of applications, including image generation, attribute morphing, image inpainting, and data augmentation for training better face recognition models. Our future work will explore how to generate samples of an unknown category, such as face images of a person that does not exist in the training dataset.

References

[1] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. NIPS 2016 Workshop on Adversarial Training, 2017.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
[3] D. Chen, S. Ren, Y. Wei, X. Cao, and J. Sun. Joint cascade face detection and alignment. In European Conference on Computer Vision, pages 109-122. Springer, 2014.
[4] E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pages 1486-1494, 2015.
[5] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In Advances in Neural Information Processing Systems, pages 658-666, 2016.
[6] A. Dosovitskiy, J. Springenberg, M. Tatarchenko, and T. Brox. Learning to generate chairs, tables and cars with convolutional networks. 2016.
[7] J. Gauthier. Conditional generative adversarial nets for convolutional face generation. Class project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester, 2014.
[8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
[9] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[10] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis, volume 46. John Wiley & Sons, 2004.
[11] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[12] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
[14] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, volume 1, page 2, 2011.
[15] A. B. L. Larsen, S. K. Sønderby, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
[16] E. Learned-Miller, G. B. Huang, A. RoyChowdhury, H. Li, and G. Hua. Labeled faces in the wild: A survey. In Advances in Face Detection and Facial Image Analysis, pages 189-248. Springer, 2016.
[17] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
[18] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[19] V. Mnih, G. E. Hinton, et al. Generating more realistic images using gated MRF's. In Advances in Neural Information Processing Systems, pages 2002-2010, 2010.
[20] Y. Mroueh, T. Sercu, and V. Goel. McGan: Mean and covariance feature matching GAN. arXiv preprint arXiv:1702.08398, 2017.
[21] H.-W. Ng and S. Winkler. A data-driven approach to cleaning large face datasets. In 2014 IEEE International Conference on Image Processing (ICIP), pages 343-347. IEEE, 2014.
[22] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Advances in Neural Information Processing Systems, pages 3387-3395, 2016.
[23] A. Nguyen, J. Yosinski, Y. Bengio, A. Dosovitskiy, and J. Clune. Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005, 2016.
[24] M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In Sixth Indian Conference on Computer Vision, Graphics & Image Processing (ICVGIP'08), pages 722-729. IEEE, 2008.
[25] A. Odena, C. Olah, and J. Shlens. Conditional image synthesis with auxiliary classifier GANs. arXiv preprint arXiv:1610.09585, 2016.
[26] A. v. d. Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu. Conditional image generation with PixelCNN decoders. arXiv preprint arXiv:1606.05328, 2016.
[27] H. Permuter, J. Francos, and I. H. Jermyn. Gaussian mixture models of texture and colour for image database retrieval. In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'03), volume 3, pages III-569. IEEE, 2003.
[28] G.-J. Qi. Loss-sensitive generative adversarial networks on Lipschitz densities. arXiv preprint arXiv:1701.06264, 2017.
[29] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[30] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396, 2016.
[31] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[32] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In AISTATS, volume 1, page 3, 2009.
[33] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. arXiv preprint arXiv:1606.03498, 2016.
[34] K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems, pages 3483-3491, 2015.
[35] T. Starner and A. Pentland. Real-time American Sign Language recognition from video using hidden Markov models. In Motion-Based Recognition, pages 227-243. Springer, 1997.
[36] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1-9, 2015.
[37] L. Theis, R. Hosseini, and M. Bethge. Mixtures of conditional Gaussian scale mixtures applied to multiscale image representations. PloS ONE, 7(7):e39857, 2012.
[38] L. Theis, A. v. d. Oord, and M. Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
[39] Z. Tu. Learning generative models via discriminative approaches. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8. IEEE, 2007.
[40] M. A. Turk and A. P. Pentland. Face recognition using eigenfaces. In Proceedings CVPR'91, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 586-591. IEEE, 1991.
[41] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
[42] J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from static images using variational autoencoders. In European Conference on Computer Vision, pages 835-851. Springer, 2016.
[43] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSD Birds 200. Technical Report CNS-TR-2010-001, California Institute of Technology, 2010.
[44] J. Xie, Y. Lu, S.-C. Zhu, and Y. N. Wu. A theory of generative ConvNet. arXiv preprint arXiv:1602.03264, 2016.
[45] X. Xiong and F. De la Torre. Supervised descent method and its applications to face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 532-539, 2013.
[46] L. Xu and M. I. Jordan. On convergence properties of the EM algorithm for Gaussian mixtures. Neural Computation, 8(1):129-151, 1996.
[47] X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2Image: Conditional image generation from visual attributes. arXiv preprint arXiv:1512.00570, 2015.
[48] D. Yi, Z. Lei, S. Liao, and S. Z. Li. Learning face representation from scratch. arXiv preprint arXiv:1411.7923, 2014.
[]
[ "Attention Is All You Need for Chinese Word Segmentation", "Attention Is All You Need for Chinese Word Segmentation", "Attention Is All You Need for Chinese Word Segmentation", "Attention Is All You Need for Chinese Word Segmentation" ]
[ "Sufeng Duan \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina\n\nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n\n", "Hai Zhao [email protected] \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina\n\nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n\n", "Sufeng Duan \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina\n\nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n\n", "Hai Zhao [email protected] \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina\n\nMoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n\n" ]
[ "Department of Computer Science and Engineering\nShanghai Jiao Tong University\n", "Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina", "MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n", "Department of Computer Science and Engineering\nShanghai Jiao Tong University\n", "Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina", "MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n", "Department of Computer Science and Engineering\nShanghai Jiao Tong University\n", "Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina", "MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n", "Department of Computer Science and Engineering\nShanghai Jiao Tong University\n", "Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering\nShanghai Jiao Tong University\nShanghaiChina", "MoE Key Lab of Artificial Intelligence\nAI Institute\nShanghai Jiao Tong University\n" ]
[]
This paper presents a fast and accurate Chinese word segmentation (CWS) model that uses only a unigram feature and a greedy decoding algorithm. Our model uses only the attention mechanism for network block building. In detail, we adopt a Transformer-based encoder empowered by the self-attention mechanism as the backbone for input representation. We then extend the Transformer encoder with our proposed Gaussian-masked directional multi-head attention, a variant of scaled dot-product attention. Finally, a bi-affinal attention scorer makes segmentation decisions in linear time. Our model is evaluated on the SIGHAN Bakeoff benchmark datasets. The experimental results show that, with the highest segmentation speed, the proposed attention-only model achieves new state-of-the-art or comparable performance against strong baselines in terms of the closed test setting.
10.18653/v1/2020.emnlp-main.317
[ "https://arxiv.org/pdf/1910.14537v1.pdf" ]
207,758,164
1910.14537
d4f68b2c033a79fc02f30d8cffb6cbc532cdbd51
Attention Is All You Need for Chinese Word Segmentation

Sufeng Duan and Hai Zhao
Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, Shanghai, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University

Introduction

Chinese word segmentation (CWS) is a task in Chinese natural language processing that delimits word boundaries. CWS is a basic and essential task for Chinese, which is written without explicit word delimiters, unlike alphabetical languages such as English. (Xue, 2003) treats CWS as a sequence labeling task with character position tags, an approach followed by (Lafferty et al., 2001; Peng et al., 2004; Zhao et al., 2006). Traditional CWS models depend heavily on the design of features, which affects the performance of the model. To minimize the effort spent on feature engineering, a line of CWS models (Zheng et al., 2013; Pei et al., 2014; Chen et al., 2015a,b; Xu and Sun, 2016; Cai and Zhao, 2016; Liu et al., 2016; Cai et al., 2017) has been developed following the neural network architecture for sequence labeling tasks (Collobert et al., 2011). Neural CWS models have strong feature representation ability: employing unigram and bigram character embeddings as input, they achieve good performance.

The CWS task is often modeled as a graph model based on a scoring model; that is, it is composed of two parts: an encoder, which generates the representation of characters from the input sequence, and a decoder, which performs segmentation according to the encoder's scoring. Table 1 summarizes typical CWS models according to their decoding ways, for both traditional and neural models. Markov models such as (Ng and Low, 2004) and (Zheng et al., 2013) depend on the maximum entropy model or the maximum entropy Markov model, both with a Viterbi decoder. Besides, conditional random fields (CRF) or semi-CRF for sequence labeling have been used in both traditional and neural models, though with different representations (Peng et al., 2004; Andrew, 2006; Liu et al., 2016; Wang and Xu, 2017; Ma et al., 2018).
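As a concrete illustration of the character position tagging formulation mentioned at the start of the introduction, the short Python sketch below maps a segmented sentence to the standard 4-tag (B/M/E/S) scheme commonly used for this style of labeling; the example sentence and the helper name are illustrative, not taken from the paper.

```python
def words_to_bmes(words):
    """Map a gold segmentation to character position tags:
    S = single-character word, B = begin, M = middle, E = end."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

# "Shanghai / Jiao Tong / University" segmented into three two-character words
print(words_to_bmes(["上海", "交通", "大学"]))
# -> ['B', 'E', 'B', 'E', 'B', 'E']
```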
Generally speaking, the major difference between traditional and neural network models lies in the way they represent input sentences. Recent work on neural CWS, which focuses on the benchmark datasets of the SIGHAN Bakeoff (Emerson, 2005), can roughly be put into the following three categories.

Encoder. Practice in various natural language processing tasks has shown that effective representation is essential to performance improvement. Thus, for better CWS, it is crucial to encode the input character, word, or sentence into an effective representation. Table 2 summarizes the regular feature sets of typical CWS models, including ours. The building blocks that encoders use include the recurrent neural network (RNN), the convolutional neural network (CNN), and the long short-term memory network (LSTM).

Table 1: The classification of Chinese word segmentation models.

  Model class             | Traditional models                                                     | Neural models                                                                                     | Decoder
  Markov model            | (Ng and Low, 2004), (Low et al., 2005)                                 | MMTNN: (Pei et al., 2014), (Zheng et al., 2013), LSTM: (Chen et al., 2015b)                       | Viterbi
  Sequence labeling model | CRF: (Peng et al., 2004), semi-CRF: (Andrew, 2006), (Sun et al., 2009) | BiLSTM+semi-CRF: (Liu et al., 2016), CNN+CRF: (Wang and Xu, 2017), BiLSTM+CRF: (Ma et al., 2018)  | Viterbi
  General graph model     | (Zhang and Clark, 2007)                                                | LSTM+GCNN: (Cai and Zhao, 2016), LSTM+GCNN: (Cai et al., 2017), (Wang et al., 2019)               | beam search

Table 2: Feature windows of different models. i (j) is the index of the current character (word).

  character based | Ours                                   | characters: c_0, c_1, ..., c_i, c_{i+1}, ..., c_n   | words: -
                  | (Zheng et al., 2013), ...              | characters: c_{i-2}, c_{i-1}, c_i, c_{i+1}, c_{i+2} | words: -
                  | (Chen et al., 2015b)                   | characters: c_0, c_1, ..., c_i, c_{i+1}, c_{i+2}    | words: -
  word based      | (Zhang and Clark, 2007), ...           | characters: c in w_{j-1}, w_j, w_{j+1}              | words: w_{j-1}, w_j, w_{j+1}
                  | (Cai and Zhao, 2016; Cai et al., 2017) | characters: c_0, c_1, ..., c_i                      | words: w_0, w_1, ..., w_j

Graph model. As CWS is a kind of structured learning task, the graph model determines which type of decoder should be adopted for segmentation; it may also limit the capability of defining features. As shown in Table 2, not all graph models can support word features. Thus, recent work has focused on finding more general or flexible graph models that let the model learn the representation of segmentation more effectively, as in (Cai and Zhao, 2016; Cai et al., 2017).

External data and pre-trained embedding. Both the encoder and the graph model explore ways to get better performance by improving the strength of the model itself. Using external resources such as pre-trained embeddings or language representations is an alternative for the same purpose (Yang et al., 2017). SIGHAN Bakeoff defines two types of evaluation settings: the closed test limits all data used for learning to the given training set, while the open test does not impose this limitation (Emerson, 2005). In this work, we focus on the closed test setting and look for a better model design for further CWS performance improvement.

As shown in Table 1, different decoders use particular decoding algorithms that match the respective CWS models. Markov models and CRF-based models often use Viterbi decoders with polynomial time complexity. In a general graph model, the search space may be too large for the model to search exhaustively, which forces graph models to adopt an approximate beam search strategy. The beam search algorithm has a low-order polynomial time complexity. In particular, when the beam width b = 1, beam search reduces to the greedy algorithm, with a better time complexity of O(Mn) against the general beam search time complexity O(Mnb^2), where n is the number of units in one sentence and M is a constant representing the model complexity; a minimal sketch of the greedy case is given below.
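To make the complexity comparison concrete, here is a minimal Python sketch of the b = 1 greedy case: each of the n - 1 gaps is decided independently from a precomputed score, so decoding is a single O(n) pass. The function and variable names are hypothetical, and thresholding at zero assumes a logit-style boundary score rather than anything specified by the paper.

```python
def greedy_segment(chars, gap_scores):
    """Greedy (beam width b = 1) decoding over per-gap boundary scores.

    chars: list of n characters.
    gap_scores: list of n - 1 scores; gap_scores[i] is the model's score
        that a word boundary lies between chars[i] and chars[i + 1].
    Each gap is decided independently, so decoding is one O(n) pass.
    """
    words, start = [], 0
    for i, score in enumerate(gap_scores):
        if score > 0:  # assume a logit-style score: positive means "split"
            words.append("".join(chars[start:i + 1]))
            start = i + 1
    words.append("".join(chars[start:]))
    return words

# Illustrative scores for a 6-character sentence (5 gaps):
print(greedy_segment(list("上海交通大学"), [-1.0, 2.5, -0.3, 1.7, -0.8]))
# -> ['上海', '交通', '大学']
```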
The greedy decoding algorithm brings the fastest decoding speed, but it is hard to guarantee the precision of decoding when the encoder is not strong enough. In this paper, we therefore focus on a more effective encoder design that is capable of offering fast and accurate Chinese word segmentation with only a unigram feature and greedy decoding. Our proposed encoder consists only of attention mechanisms as building blocks and nothing else.

Motivated by the Transformer (Vaswani et al., 2017) and its strength in capturing long-range dependencies of input sentences, we use a self-attention network to generate the representation of the input, which lets the model encode a sentence at once without feeding the input iteratively. Considering the weakness of the Transformer in modeling relative and absolute position information directly (Shaw et al., 2018), and the importance of localness, position, and directional information for CWS, we further improve the architecture of the standard multi-head self-attention of the Transformer with a directional Gaussian mask, obtaining a variant called Gaussian-masked directional multi-head attention. Based on this newly improved attention mechanism, we expand the encoder of the Transformer to capture different directional information. With our powerful encoder, our model uses only simple unigram features to generate the representation of sentences.

For the decoder, which directly performs the segmentation, we use the bi-affinal attention scorer, which has been used in dependency parsing (Dozat and Manning, 2017) and semantic role labeling (Cai et al., 2018), to implement greedy decoding for finding the boundaries of words. In our proposed model, greedy decoding ensures fast segmentation, while the powerful encoder design ensures good segmentation performance even when working with the greedy decoder. Our model is strictly evaluated on benchmark datasets from the SIGHAN Bakeoff shared task on CWS in terms of the closed test setting, and the experimental results show that our proposed model achieves a new state-of-the-art.

The technical contributions of this paper can be summarized as follows.

• We propose a CWS model built only from attention structures. The encoder and the decoder are both based on attention.

• With a powerful enough encoder, we show for the first time that unigram (character) features can yield strong performance, instead of the diverse n-gram (character and word) features used in most previous work.

• To capture the representation of localness information and directional information, we propose a variant of directional multi-head self-attention to further enhance the state-of-the-art Transformer encoder.

Models

The CWS task is often modeled as a graph model based on an encoder-based scoring model. The encoder maps the input to two sequences of vectors, v^b = (v^b_1, ..., v^b_n) and v^f = (v^f_1, ..., v^f_n), as the representation of the sentence. With v^b and v^f, the bi-affinal scorer calculates the probability of each segmentation gap and predicts the word boundaries of the input. Similar to the Transformer, the encoder is an attention network with stacked self-attention and point-wise, fully connected layers, while our encoder includes three independent directional encoders.

Encoder Stacks

In the Transformer, the encoder is composed of a stack of N identical layers, and each layer has one multi-head self-attention sublayer and one position-wise fully connected feed-forward sublayer. A residual connection is applied around each of the two sublayers, followed by layer normalization (Vaswani et al., 2017).
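For reference, the encoder layer just described (multi-head self-attention plus a position-wise feed-forward sublayer, each wrapped in a residual connection and layer normalization) can be sketched in PyTorch roughly as follows. This is a generic post-LN Transformer layer under assumed dimensions, not the authors' released code.

```python
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One Transformer encoder layer: a self-attention sublayer and a
    position-wise feed-forward sublayer, each with residual + layer norm."""
    def __init__(self, d_model=256, n_heads=8, d_ff=1024, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x, mask=None):
        # x: (seq_len, batch, d_model); mask: optional attention mask
        a, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.norm1(x + self.drop(a))            # residual around attention
        x = self.norm2(x + self.drop(self.ff(x)))   # residual around FFN
        return x
```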
This architecture provides the Transformer with a good ability to generate representations of sentences. With the variant of multi-head self-attention, we design a Gaussian-masked directional encoder to capture representations of different directions, improving the ability to capture localness information and position information, given the importance of adjacent characters.

One unidirectional encoder can capture information of one particular direction. For the CWS task, a gap between characters, which may host a word boundary, divides a sequence into two parts, one part in front of the gap and one part behind it. The forward encoder and the backward encoder are used to capture the information of the two directions corresponding to the two parts divided by the gap. A central encoder runs in parallel with the forward and backward encoders to capture the information of the entire sentence. The central encoder is a special directional encoder for the forward and backward information of a sentence: it fuses the information and enables the encoder to capture global information. The encoder outputs one piece of forward information and one piece of backward information for each position. The representation of the sentence generated by the central encoder is added to this information directly:

  v^b = (v^b_1, ..., v^b_n) = r^b + r^c = (r^b_1 + r^c_1, ..., r^b_n + r^c_n),
  v^f = (v^f_1, ..., v^f_n) = r^f + r^c = (r^f_1 + r^c_1, ..., r^f_n + r^c_n),    (1)

where v^b = (v^b_1, ..., v^b_n) is the backward information, v^f = (v^f_1, ..., v^f_n) is the forward information, r^b = (r^b_1, ..., r^b_n) is the output of the backward encoder, r^c = (r^c_1, ..., r^c_n) is the output of the central encoder, and r^f = (r^f_1, ..., r^f_n) is the output of the forward encoder.

Gaussian-Masked Directional Multi-Head Attention

Like scaled dot-product attention (Vaswani et al., 2017), Gaussian-masked directional attention can be described as a function that maps queries and key-value pairs to a representation of the input, where queries, keys, and values are all vectors. Standard scaled dot-product attention is calculated by dotting the query Q with all keys K, dividing the result by sqrt(d_k), where d_k is the dimension of the keys, and applying a softmax function to generate the attention weights:

  Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.    (2)

Different from scaled dot-product attention, Gaussian-masked directional attention is expected to pay attention to the adjacent characters of each position, and it casts the localness relationship between characters as a fixed Gaussian weight for the attention. We assume that the Gaussian weight relies only on the distance between characters. First we introduce the Gaussian weight matrix G, which represents the localness relationship between every two characters:

  G = (g_{ij})_{n x n},    (3)

  g_{ij} = Phi(dis_{ij}) = sqrt(2 / (sigma^2 pi)) * integral_{-infinity}^{-dis_{ij}} exp(-x^2 / (2 sigma^2)) dx,    (4)

where g_{ij} is the Gaussian weight between characters i and j, dis_{ij} is the distance between characters i and j, Phi is the (scaled) cumulative distribution function of the Gaussian, and sigma is the standard deviation of the Gaussian function, a hyperparameter in our method. Equation (4) ensures that the Gaussian weight equals 1 when dis_{ij} is 0. The larger the distance between characters, the smaller the weight, which makes one character affect its adjacent characters more strongly than the other characters.
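Equation (4) is twice the CDF of a zero-mean Gaussian with standard deviation sigma evaluated at -dis_{ij}, which simplifies to erfc(dis_{ij} / (sigma * sqrt(2))). A small NumPy sketch of the resulting weight matrix G from Equation (3) follows; sigma = 2 matches the hyperparameter setting reported in the experiments, while everything else is illustrative.

```python
import math
import numpy as np

def gaussian_weight_matrix(n, sigma=2.0):
    """Gaussian localness weights of Eqs. (3)-(4): g_ij depends only on the
    distance |i - j|, equals 1 on the diagonal, and decays with distance.
    Eq. (4) equals 2 * CDF_{N(0, sigma^2)}(-|i - j|)
            = erfc(|i - j| / (sigma * sqrt(2)))."""
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :])          # pairwise distances
    erfc = np.vectorize(math.erfc)
    return erfc(dist / (sigma * math.sqrt(2.0)))

G = gaussian_weight_matrix(5)
print(np.round(G, 3))  # 1.0 on the diagonal, decaying off-diagonal weights
```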
To combine the Gaussian weight with self-attention, we take the Hadamard product of the Gaussian weight matrix G and the score matrix produced by Q K^T:

  AG(Q, K, V) = softmax((Q K^T * G) / sqrt(d_k)) V,    (5)

where AG is the Gaussian-masked attention and * denotes the elementwise (Hadamard) product. It ensures that the relationship between two characters at a long distance is weaker than that between adjacent characters. Scaled dot-product attention models the relationship between two characters without regard to their distance in the sequence. For the CWS task, the weight between adjacent characters should be more important, while it is hard for self-attention to achieve this effect explicitly because self-attention cannot obtain the order of the sentence directly. Gaussian-masked attention adjusts the weight between a character and its adjacent characters to a larger value, which reflects the effect of adjacent characters.

For the forward and backward encoders, the self-attention sublayer needs a triangular mask so that the self-attention focuses on different directional weights:

  g^f_{ij} = g_{ij} if pos_j <= pos_i, and -infinity otherwise,
  g^b_{ij} = g_{ij} if pos_i <= pos_j, and -infinity otherwise,    (6)

where pos_i is the position of character c_i. The corresponding mask patterns for the forward and backward encoders are the lower-triangular and upper-triangular matrices of ones, respectively.

As in (Vaswani et al., 2017), we use multi-head attention to capture information from different dimensions and positions, as shown in Figure 3(a), and obtain Gaussian-masked directional multi-head attention. With the multi-head attention architecture, the representation of the input is captured by

  MH(Q, K, V) = Concat(head_1, ..., head_h) W^m,
  head_i = AG(Q W^q_i, K W^k_i, V W^v_i),    (7)

where MH is the Gaussian-masked multi-head attention, W^q_i, W^k_i, W^v_i in R^{d_k x d_h} are the parameter matrices used to generate the heads, d_k is the dimension of the model, and d_h is the dimension of one head.

Bi-affinal Attention Scorer

Regarding word boundaries as gaps between adjacent characters converts the character labeling task into a gap labeling task. Different from the character labeling task, the gap labeling task requires information from two adjacent characters, and the relationship between the adjacent characters can be represented as the type of the gap. This characteristic of word boundaries makes bi-affine attention an appropriate scorer for the CWS task.

The bi-affinal attention scorer is the component that we use to label the gaps. Bi-affinal attention is developed from bilinear attention, which has been used in dependency parsing (Dozat and Manning, 2017) and SRL (Cai et al., 2018). The distribution of labels in a labeling task is often uneven, so the output layer often includes a fixed bias term for the prior probability of the different labels (Cai et al., 2018). Bi-affine attention uses bias terms to alleviate the burden of the fixed bias term and to model the prior probability, which distinguishes it from bilinear attention. The distribution of the gap labels is uneven, similar to other labeling tasks, which makes the bi-affine scorer a good fit. The bi-affinal attention scorer labels the target depending both on the information of each independent unit and on the joint information of the two units. In bi-affinal attention, the score s_{ij} of characters c_i and c_j (i < j) is calculated by

  s_{ij} = BiaffinalScorer(v^f_i, v^b_j) = (v^f_i)^T W v^b_j + U (v^f_i ⊕ v^b_j) + b,    (8)

where v^f_i is the forward information of c_i and v^b_j is the backward information of c_j.
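A minimal PyTorch sketch of the bi-affinal scorer in Equation (8) is given below: a bilinear term (v^f_i)^T W v^b_j per label plus a linear term U over the concatenation, with the bias b folded into the linear layer. Shapes and names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BiaffineGapScorer(nn.Module):
    """Scores the gap between two adjacent characters as in Eq. (8):
    a per-label bilinear term plus an affine term over the concatenation."""
    def __init__(self, d_f, d_b, n_labels=2):
        super().__init__()
        self.W = nn.Parameter(torch.zeros(n_labels, d_f, d_b))
        self.U = nn.Linear(d_f + d_b, n_labels, bias=True)  # supplies b
        nn.init.xavier_uniform_(self.W)

    def forward(self, v_f, v_b):
        # v_f: (batch, d_f) forward info of the character before the gap
        # v_b: (batch, d_b) backward info of the character after the gap
        bilinear = torch.einsum("bi,lij,bj->bl", v_f, self.W, v_b)
        return bilinear + self.U(torch.cat([v_f, v_b], dim=-1))
```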
In Equation (8), W, U, and b are all parameters that can be updated in training; W is a tensor of shape (d_i x N x d_j) and U is an (N x (d_i + d_j)) matrix, where d_i is the dimension of the vector v^f_i and N is the number of labels. In our model, the biaffine scorer uses the forward information of the character in front of the gap and the backward information of the character behind the gap to distinguish the positions of the characters. Figure 4 shows an example of labeling a gap. Using the biaffine scorer ensures that the boundaries of words are determined by adjacent characters carrying different directional information. The score vector of a gap is formed by the probability of it being a word boundary. Further, the model generates all boundaries with an activation function, in a greedy decoding way.

Experiments

Experimental Settings

Data. We train and evaluate our model on datasets from SIGHAN Bakeoff 2005 (Emerson, 2005), which has four datasets: PKU, MSR, AS, and CITYU. Table 3 shows the statistics of the training data. We use the F-score to evaluate CWS models. To train the model with pre-trained embeddings on AS and CITYU, we use OpenCC (footnote 1) to convert the data from traditional Chinese to simplified Chinese.

Pre-trained embedding. We use only the unigram feature, so we trained only character embeddings. Our pre-trained embeddings are trained on the Chinese Wikipedia corpus with the word2vec (Mikolov et al., 2013) toolkit. The corpus used for the pre-trained embeddings is entirely converted to simplified Chinese and is not segmented. In the closed test, we use randomly initialized embeddings.

Hyperparameters. For the different datasets, we use two kinds of hyperparameters, presented in Table 4: one setting for the small corpora (PKU and CITYU) and one for the normal corpora (MSR and AS). We set the standard deviation of the Gaussian function in Equation (4) to 2. Each training batch contains sentences with at most 4096 tokens.

Optimizer. To train our model, we use the Adam (Kingma and Ba, 2015) optimizer with beta_1 = 0.9, beta_2 = 0.98, and epsilon = 10^-9. The learning rate schedule is the same as in (Vaswani et al., 2017):

  lr = d^{-0.5} * min(step^{-0.5}, step * warmup_step^{-1.5}),    (9)

where d is the dimension of the embeddings, step is the training step number, and warmup_step is the number of warmup steps. While the step number is smaller than warmup_step, the learning rate increases linearly, and afterwards it decreases.

Hardware and implementation. We trained our models on a single CPU (Intel i7-5960X) with an Nvidia 1080 Ti GPU. We implement our model in Python with PyTorch 1.0.

Results

Tables 5 and 6 report the performance of recent models and ours in terms of the closed test setting. Without the assistance of the unsupervised segmentation features used in (Wang et al., 2019), our model outperforms all the other models on MSR and AS except (Ma et al., 2018), and it gets comparable performance on PKU and CITYU. Note that all the other models in this comparison adopt various n-gram features, while only our model takes unigram ones. With the unsupervised segmentation features introduced by (Wang et al., 2019), our model gets higher results. In particular, the results on MSR and AS achieve a new state-of-the-art, and those on CITYU and PKU approach the previous state-of-the-art. The unsupervised segmentation features are derived from the given training dataset, so using them does not violate the rules of the closed test of SIGHAN Bakeoff.

Table 7 compares our model and recent neural models in terms of the open test setting, in which any external resources, especially pre-trained embeddings or language models, can be used. On MSR and AS, our model gets comparable results, while our results on CITYU and PKU are not remarkable. However, it is well known that it is hard to compare models under the open test setting, especially with pre-trained embeddings: not all models use the same method and data to pretrain.
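As a concrete reading of the learning-rate schedule in Equation (9) above, the following self-contained sketch reproduces its warmup-then-decay behavior; the schedule follows (Vaswani et al., 2017), while the d and warmup_step values below are placeholders rather than the actual Table 4 settings.

```python
def noam_lr(step, d=256, warmup_step=4000):
    """Learning rate of Eq. (9): linear warmup for warmup_step steps,
    then decay proportional to step ** -0.5."""
    step = max(step, 1)  # avoid 0 ** -0.5 at the very first step
    return d ** -0.5 * min(step ** -0.5, step * warmup_step ** -1.5)

for s in (100, 4000, 20000):
    print(s, round(noam_lr(s), 6))  # rises until step 4000, then decays
```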
Though a pre-trained embedding or language model can improve the performance, the improvement itself may come from multiple sources: a pre-trained embedding often succeeds in improving the performance, but this does not prove that the model itself is better. Compared with other LSTM models, our model performs better on AS and MSR than on CITYU and PKU. Considering the scale of the different corpora, we believe that the size of the corpus affects our model: the larger the corpus, the better the model performs, while on a small corpus the model tends to overfit.

Tables 5 and 6 also show the decoding time on the different datasets. Our model finishes the segmentation with the least decoding time on all four datasets, thanks to the architecture of the model, which takes only the attention mechanism as its basic block.

Related Work

Chinese Word Segmentation

CWS is a task in Chinese natural language processing that delimits word boundaries. (Xue, 2003) for the first time formulates CWS as a sequence labeling task. (Zhao et al., 2006) show that different character tag sets can have an essential impact on CWS. (Peng et al., 2004) use CRFs as a model for CWS, achieving a new state-of-the-art. Work on statistical CWS has built the basis for neural CWS.

Neural word segmentation has been widely adopted to minimize the effort in feature engineering, which was important in statistical CWS. (Zheng et al., 2013) introduce a neural model with sliding-window based sequence labeling. (Chen et al., 2015a) propose a gated recursive neural network (GRNN) for CWS to incorporate complicated combinations of contextual character and n-gram features. (Chen et al., 2015b) use an LSTM to learn long-distance information. (Cai and Zhao, 2016) propose a neural framework that eliminates context windows and utilizes the complete segmentation history. (Lyu et al., 2016) explore a joint model that performs segmentation, POS tagging, and chunking simultaneously. (Chen et al., 2017a) propose a feature-enriched neural model for joint CWS and part-of-speech tagging. (Zhang et al., 2017) present a joint model to enhance the segmentation of Chinese microtext by performing CWS and informal word detection simultaneously. (Wang and Xu, 2017) propose a character-based convolutional neural model to capture n-gram features automatically and an effective approach to incorporate word embeddings. (Cai et al., 2017) improve the model of (Cai and Zhao, 2016) and propose a greedy neural word segmenter with balanced word and character embedding inputs. (Zhao et al., 2018) propose a novel neural network model to incorporate unlabeled and partially-labeled data. (Zhang et al., 2018) propose two methods that extend the Bi-LSTM to incorporate dictionaries into neural networks for CWS. (Gong et al., 2019) propose Switch-LSTMs to segment words, providing a more flexible solution for multi-criteria CWS that makes it easy to transfer the learned knowledge to new criteria.

Transformer

The Transformer (Vaswani et al., 2017) is an attention-based neural machine translation model. The Transformer is one kind of self-attention network (SAN), as proposed in (Lin et al., 2017). The encoder of the Transformer consists of one self-attention layer and a position-wise feed-forward layer. The decoder of the Transformer contains one self-attention layer, one encoder-decoder attention layer, and one position-wise feed-forward layer. The Transformer uses residual connections around the sublayers, each followed by a layer normalization layer. Scaled dot-product attention is the key component of the Transformer.
The input of the attention contains the queries, keys, and values of the input sequences; the attention weights are generated from the queries and keys as in Equation (2). The structure of scaled dot-product attention allows the self-attention layer to generate the representation of a sentence at once, covering the information of the whole sentence, which is different from an RNN that processes the characters of a sentence one by one. Standard self-attention is similar to Gaussian-masked directional attention, except that it has neither the directional mask nor the Gaussian mask. (Vaswani et al., 2017) also propose multi-head attention, which better generates the representation of a sentence by dividing the queries, keys, and values into different heads and gathering information from different subspaces.

Conclusion

In this paper, we propose a Chinese word segmentation model based only on the attention mechanism. Our model uses self-attention from the Transformer encoder to take the sequence input and a bi-affine attention scorer to predict the labels of the gaps. To improve the ability of the self-attention based encoder to capture localness and directional information, we propose a variant of self-attention called Gaussian-masked directional multi-head attention to replace standard self-attention. We also extend the Transformer encoder to capture directional features. Our model uses only unigram features instead of the multiple n-gram features of previous work. Our model is evaluated on the standard benchmark dataset, SIGHAN Bakeoff 2005, which shows that our model not only performs segmentation faster than any previous model but also gives new higher or comparable segmentation performance against previous state-of-the-art models.

Figure 1: The architecture of our model. The model for the CWS task is composed of an encoder to represent the input and a decoder, based on the encoder, to perform the actual segmentation. The model feeds the sentence into the encoder: the embedding layer captures the vector sequence e = (e_1, ..., e_n) of the input character sequence c = (c_1, ..., c_n), and the encoder maps e to the two vector sequences v^b and v^f defined in Equation (1).

Figure 2: The structure of the Gaussian-masked directional encoder.

Figure 3: The illustration of Gaussian-masked directional multi-head attention and Gaussian-masked directional attention.

Figure 4: An example of the bi-affinal scorer labeling a gap. The bi-affinal attention scorer uses only the forward information of the preceding character and the backward information of the following character to label the gap.

Table 3: The statistics of the SIGHAN Bakeoff 2005 datasets.

Table 4: Hyperparameters.

Table 5: Results on PKU and MSR compared with previous models in the closed test. Asterisks indicate the results of the model with unsupervised labels from (Wang et al., 2019).
Table 6: Results on AS and CITYU compared with previous models in the closed test. Asterisks indicate the results of the model with unsupervised labels from (Wang et al., 2019).

  Models               | AS: F1  Training (hours)  Test (sec.) | CITYU: F1  Training (hours)  Test (sec.)
  (Cai et al., 2017)   | 95.2    -                 -           | 95.4       -                 -
  (Ma et al., 2018)    | 95.5    -                 -           | 95.7       -                 -
  (Wang et al., 2019)* | 95.6*   -                 -           | 95.9*      -                 -
  Our results          | 95.5    63                9           | 95.4       17                1.5
  Our results*         | 95.7*   69                9           | 95.7*      15                1.5

Table 7: F1 scores of our results on the four datasets in the open test, compared with previous models.

1 https://github.com/BYVoid/OpenCC

References

Galen Andrew. 2006. A hybrid Markov/semi-Markov conditional random field for sequence segmentation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 465-472, Sydney, Australia. Association for Computational Linguistics.

Deng Cai and Hai Zhao. 2016. Neural word segmentation learning for Chinese. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 409-420, Berlin, Germany. Association for Computational Linguistics.

Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang. 2017. Fast and accurate neural word segmentation for Chinese. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 608-615, Vancouver, Canada. Association for Computational Linguistics.

Jiaxun Cai, Shexia He, Zuchao Li, and Hai Zhao. 2018. A full end-to-end semantic role labeler, syntactic-agnostic over syntactic-aware? In Proceedings of the 27th International Conference on Computational Linguistics, pages 2753-2765, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Xinchi Chen, Xipeng Qiu, and Xuanjing Huang. 2017a. A feature-enriched neural model for joint chinese word segmentation and part-of-speech tagging. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 3960-3966.

Xinchi Chen, Xipeng Qiu, Chenxi Zhu, and Xuanjing Huang. 2015a. Gated recursive neural network for Chinese word segmentation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1744-1753, Beijing, China. Association for Computational Linguistics.

Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015b. Long short-term memory neural networks for Chinese word segmentation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1197-1206, Lisbon, Portugal. Association for Computational Linguistics.

Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017b. Adversarial multi-criteria learning for Chinese word segmentation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1193-1203, Vancouver, Canada. Association for Computational Linguistics.

Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017c. DAG-based long short-term memory for neural word segmentation. CoRR, abs/1707.00248.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493-2537.

Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.

Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.

Jingjing Gong, Xinchi Chen, Tao Gui, and Xipeng Qiu. 2019. Switch-LSTMs for multi-criteria chinese word segmentation. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6457-6464.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), Williams College, Williamstown, MA, USA, June 28 - July 1, 2001, pages 282-289.

Zhouhan Lin, Minwei Feng, Cícero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.

Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. Exploring segment representations for neural segmentation models. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 2880-2886.

Jin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to Chinese word segmentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.

Chen Lyu, Yue Zhang, and Donghong Ji. 2016. Joint word segmentation, pos-tagging and syntactic chunking. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 3007-3014.

Ji Ma, Kuzman Ganchev, and David Weiss. 2018. State-of-the-art Chinese word segmentation with bi-LSTMs. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4902-4908, Brussels, Belgium. Association for Computational Linguistics.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.

Hwee Tou Ng and Jin Kiat Low. 2004. Chinese part-of-speech tagging: One-at-a-time or all-at-once? word-based or character-based? In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 277-284, Barcelona, Spain. Association for Computational Linguistics.

Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Max-margin tensor neural network for Chinese word segmentation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 293-303, Baltimore, Maryland. Association for Computational Linguistics.

Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 562-568, Geneva, Switzerland. COLING.

Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics.

Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshimasa Tsuruoka, and Jun'ichi Tsujii. 2009. A discriminative latent variable Chinese segmenter with hybrid word/character information. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 56-64, Boulder, Colorado. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998-6008.

Chunqi Wang and Bo Xu. 2017. Convolutional neural network with word embeddings for Chinese word segmentation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 163-172, Taipei, Taiwan. Asian Federation of Natural Language Processing.

Xiaobin Wang, Deng Cai, Linlin Li, Guangwei Xu, Hai Zhao, and Luo Si. 2019. Unsupervised learning helps supervised neural word segmentation. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7200-7207.

Jingjing Xu and Xu Sun. 2016. Dependency-based gated recursive neural network for Chinese word segmentation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 567-572, Berlin, Germany. Association for Computational Linguistics.

Nianwen Xue. 2003. Chinese word segmentation as character tagging. In International Journal of Computational Linguistics & Chinese Language Processing, Volume 8, Number 1, February 2003: Special Issue on Word Formation and Chinese Language Processing, pages 29-48.

Jie Yang, Yue Zhang, and Fei Dong. 2017. Neural word segmentation with rich pretraining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 839-849, Vancouver, Canada. Association for Computational Linguistics.

Meishan Zhang, Guohong Fu, and Nan Yu. 2017. Segmenting chinese microtext: Joint informal-word detection and segmentation with neural networks. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4228-4234.

Qi Zhang, Xiaoyu Liu, and Jinlan Fu. 2018. Neural networks incorporating dictionaries for chinese word segmentation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5682-5689.

Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 840-847, Prague, Czech Republic. Association for Computational Linguistics.

Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2006. Effective tag set selection in Chinese word segmentation via conditional random field modeling. In Proceedings of the 20th Pacific Asia Conference on Language, Information and Computation, pages 87-94, Huazhong Normal University, Wuhan, China. Tsinghua University Press.

Lujun Zhao, Qi Zhang, Peng Wang, and Xiaoyu Liu. 2018. Neural networks incorporating unlabeled and partially-labeled data for cross-domain chinese word segmentation. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4602-4608.

Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for Chinese word segmentation and POS tagging. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 647-657, Seattle, Washington, USA. Association for Computational Linguistics.

Hao Zhou, Zhenting Yu, Yue Zhang, Shujian Huang, Xinyu Dai, and Jiajun Chen. 2017. Word-context character embeddings for Chinese word segmentation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 760-766, Copenhagen, Denmark. Association for Computational Linguistics.
[ "https://github.com/BYVoid/OpenCC" ]
[ "SPEAKER INDEPENDENCE OF NEURAL VOCODERS AND THEIR EFFECT ON PARAMETRIC RESYNTHESIS SPEECH ENHANCEMENT", "SPEAKER INDEPENDENCE OF NEURAL VOCODERS AND THEIR EFFECT ON PARAMETRIC RESYNTHESIS SPEECH ENHANCEMENT" ]
[ "Soumi Maiti [email protected] \nThe Graduate Center\nBrooklyn College Computer and Information Science Brooklyn\nCUNY Computer Science New York\nNY, NYUSA, USA\n", "Michael I Mandel \nThe Graduate Center\nBrooklyn College Computer and Information Science Brooklyn\nCUNY Computer Science New York\nNY, NYUSA, USA\n" ]
[ "The Graduate Center\nBrooklyn College Computer and Information Science Brooklyn\nCUNY Computer Science New York\nNY, NYUSA, USA", "The Graduate Center\nBrooklyn College Computer and Information Science Brooklyn\nCUNY Computer Science New York\nNY, NYUSA, USA" ]
[]
Traditional speech enhancement systems produce speech with compromised quality. Here we propose to use the high quality speech generation capability of neural vocoders for better quality speech enhancement. We term this parametric resynthesis (PR). In previous work, we showed that PR systems generate high quality speech for a single speaker using two neural vocoders, WaveNet and WaveGlow. Both of these vocoders are traditionally speaker dependent. Here we first show that, when trained on data from enough speakers, these vocoders can generate speech from unseen speakers, both male and female, with quality similar to that of speakers seen in training. Next, using these two vocoders and a third, LPCNet, we evaluate the noise reduction quality of PR on unseen speakers and show that objective signal and overall quality is higher than that of the state-of-the-art speech enhancement systems Wave-U-Net, Wavenet-denoise, and SEGAN. Moreover, in subjective quality, multiple-speaker PR outperforms the oracle Wiener mask.
10.1109/icassp40776.2020.9053296
[ "https://arxiv.org/pdf/1911.06266v1.pdf" ]
208,006,340
1911.06266
03db0b294c3b170836a585b9584e5d9a26ddb822
SPEAKER INDEPENDENCE OF NEURAL VOCODERS AND THEIR EFFECT ON PARAMETRIC RESYNTHESIS SPEECH ENHANCEMENT

14 Nov 2019

Soumi Maiti ([email protected]), Computer Science, The Graduate Center, CUNY, New York, NY, USA
Michael I Mandel, Computer and Information Science, Brooklyn College, CUNY, Brooklyn, NY, USA

Index Terms: speech enhancement, neural vocoders, analysis-by-synthesis, enhancement-by-synthesis

INTRODUCTION

Traditional speech enhancement systems modify a noisy mixture to reduce the amount of noise it contains, but in doing so they introduce distortion in the speech. The distortion increases when there is more noise in the mixture, leading to poor quality speech [1]. In contrast, speech synthesis systems generate high quality speech from only textual information. These text-to-speech (TTS) systems are complex, as they need to generate a realistic acoustic representation without a reference audio signal. In this work, we propose to combine these two methods, i.e., to use speech synthesis techniques for speech enhancement. This is an easier task than TTS, since we have a reference noisy audio signal from which we can extract the desired prosody instead of having to invent it. By predicting the "acoustic features" of the clean speech from the noisy speech in the speech enhancement system, we can generate high quality noise-free resyntheses.

Parametric resynthesis (PR) systems [2, 3] predict clean acoustic parameters from noisy speech and synthesize speech from these predicted parameters using a speech synthesizer or vocoder. Current speech synthesizers are trained to generate high quality speech for a single speaker. In previous work we showed that a single-speaker PR system can synthesize very high quality clean speech at 22 kHz [2] and performs better than the corresponding TTS system [3]. Hence, a critical question is whether these systems can be generalized to unknown speakers. The main contribution of the current work is to show that when trained on a large number of speakers, neural vocoders can successfully generalize to unseen speakers. Furthermore, we show that PR systems using these neural vocoders can also generalize to unseen speakers in the presence of noise. In this work, we test the speaker dependence of neural vocoders and their effect on the enhancement quality of PR.
We show that when trained on 56 speakers, WaveGlow [4], WaveNet [5], and LPCNet [6] are able to generalize to unseen speakers. We compare the noise reduction quality of PR with three state-of-the-art speech enhancement models and show that PR-LPCNet outperforms every other system, including an oracle Wiener mask-based system. In terms of objective metrics, the proposed PR-WaveGlow performs better in objective signal and overall quality.

Related work

Traditional speech enhancement systems generally predict a time-frequency mask to reduce noise in the magnitude spectrum domain, for example [7, 8]. Recent works perform speech enhancement in the time domain directly, which has the additional advantage of reconstructing the phase of the signal. A modified WaveNet was proposed for speech denoising [9], using non-causal convolutions on noisy speech and predicting both the clean speech and the noise signal. Another approach is to progressively downsample the noisy audio to a bottleneck feature and then upsample with skip connections to the corresponding downsampled features to enhance speech. SEGAN [10] uses this approach in a GAN setting and Wave-U-Net [11, 12] uses it in the U-Net setting. The aim of these approaches is to remove noise from the audio at different scales. Compared to these systems, we do not focus on modelling noise but only on modelling speech. We evaluate our approach against three of these systems [9, 10, 11]. These papers publish results on the same dataset we use and each provides several enhanced files, which we utilize in our listening tests.

SYSTEM OVERVIEW

Our PR models have two parts. The first is a prediction model that estimates the clean acoustic features from noisy audio. The second is a vocoder that synthesizes "clean" speech from the predicted "clean" acoustic parameters. The aim of the prediction model is to reduce noise, while the vocoder synthesizes high quality audio.

Prediction model

The prediction model is trained with parallel clean and noisy speech. It takes a noisy mel-spectrogram Y as input and is trained to predict clean acoustic features X. The predicted clean acoustic features vary based on the vocoder used. In this work we use WaveGlow, WaveNet, LPCNet, and WORLD [13] as vocoders. For WaveGlow and WaveNet, we predict clean mel-spectrograms. For LPCNet, we predict 18-dimensional Bark-scale frequency cepstral coefficients (BFCC) and two pitch parameters: period and correlation. For WORLD, we predict the spectral envelope, aperiodicity, and pitch. For WORLD and LPCNet, we also predict the ∆ and ∆∆ of these acoustic features for smoother outputs. The prediction model is trained to minimize the mean squared error (MSE) of the acoustic features,

L = ‖X − X̂‖²,    (1)

where X̂ are the predicted and X are the clean acoustic features. The Adam optimizer [14] is used for training. At test time, given a noisy mel-spectrogram, clean acoustic parameters are predicted. For LPCNet and WORLD, we use the maximum likelihood parameter generation (MLPG) [15] algorithm to refine our estimate of the clean acoustic features from the predicted acoustic features, ∆, and ∆∆.
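For concreteness, a minimal PyTorch sketch of such a prediction model is given below. It uses the 3-layer bidirectional LSTM with 800 units described in the experiments later in this paper; the 80-band feature dimension, the sequence lengths, and the training-loop details are illustrative assumptions, not values fixed by the paper.

import torch
import torch.nn as nn

class PredictionModel(nn.Module):
    """Bi-LSTM mapping noisy acoustic features to clean ones (Eq. 1)."""
    def __init__(self, n_feats=80, hidden=800, layers=3):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, n_feats)  # 2x: both directions

    def forward(self, noisy):                # (batch, frames, n_feats)
        h, _ = self.lstm(noisy)
        return self.proj(h)                  # predicted clean features

model = PredictionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

# One training step on a parallel (noisy, clean) feature minibatch.
noisy = torch.randn(4, 200, 80)              # placeholder data
clean = torch.randn(4, 200, 80)
loss = mse(model(noisy), clean)              # L = ||X - X_hat||^2
optimizer.zero_grad()
loss.backward()
optimizer.step()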
Vocoders

The second part of PR resynthesizes speech from the predicted acoustic parameters X̂ using a vocoder. The vocoders are trained on clean speech samples x and clean acoustic features X. During synthesis, we use the predicted acoustic parameters X̂ to generate the predicted clean speech x̂. In the rest of this section we describe the vocoders: three neural ones, WaveGlow, WaveNet, and LPCNet, and one non-neural, WORLD.

WaveGlow: WaveGlow [4] is a Glow-based network [16] for synthesizing speech. WaveGlow learns a sequence of invertible transformations of audio samples x to a Gaussian distribution, conditioned on the mel-spectrogram X. For inference, WaveGlow samples a latent variable z from the learned Gaussian distribution and applies the inverse transformations conditioned on X to reconstruct the speech sample x̂. The model is trained to maximize the log-likelihood of the clean speech,

ln p(x | X) = ln p(z) + ln |det(dz/dx)|,    (2)

where ln p(z) is the log-likelihood of the spherical zero-mean Gaussian with variance σ². During training, σ = 1 is used. We use the officially published WaveGlow implementation¹ with the original setup, i.e., 12 coupling layers, each consisting of 8 layers of dilated convolution with 512 residual and 256 skip connections. We refer to the PR system with WaveGlow as its vocoder as PR-WaveGlow.

LPCNet: LPCNet is a variation of WaveRNN [17] that simplifies the vocal tract response using a linear prediction p_t = Σ_{k=1}^{M} a_k x_{t−k} from previous time-step samples. The LPC coefficients a_k are computed from the 18-band BFCC. The network predicts the LPC residual e_t at time t; the sample x_t is then generated by adding e_t and p_t. A frame conditioning feature f is generated from the 20 input features (18-band BFCC and 2 pitch parameters) via two convolutional and two fully connected layers. The probability p(e_t) is predicted from x_{t−1}, e_{t−1}, p_t, and f via two GRUs [18] (A and B) combined with a dual FC layer followed by a softmax. The weight matrix of the larger GRU (GRU-A) is forced to be sparse for faster synthesis. The model is trained on the categorical cross-entropy between p(e_t) and the predicted probability of the excitation p̂(e_t). Speech samples are 8-bit mu-law quantized. We use the officially published LPCNet implementation² with 640 units in GRU-A and 16 units in GRU-B. We refer to the PR system with LPCNet as its vocoder as PR-LPCNet.

WaveNet: WaveNet [5] is an autoregressive speech waveform generation model built with dilated causal convolutional layers. The generation of one speech sample x_t at time step t is conditioned on all previous samples (x_1, x_2, ..., x_{t−1}). We use the Nvidia implementation³, which is the Deep Voice [19] variant of WaveNet, for faster synthesis. Speech samples are mu-law quantized to 8 bits. The normalized log mel-spectrogram is used in local conditioning. WaveNet is trained on the cross-entropy between the quantized sample x_t^µ and the predicted quantized sample x̂_t^µ. We used a smaller WaveNet model that is able to synthesize speech with moderate quality, and thereby tested the PR model's dependence on the synthesis quality. We used 20 layers with 64 residual, 128 skip connections, and 256 gate channels with a maximum dilation of 128. This model can synthesize clean speech with an average predicted mean opinion score (MOS) of 3.25 for a single speaker [19]. The PR system with WaveNet as its vocoder is referred to as PR-WaveNet.

WORLD: Lastly, we use the non-neural vocoder WORLD, which synthesizes speech from three acoustic parameters: spectral envelope, aperiodicity, and F0. We use WORLD with the Merlin toolkit⁴. WORLD is a source-filter model that takes the previously mentioned parameters and synthesizes speech. We also use spectral enhancement to modify the predicted parameters, as is standard in Merlin [20].
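Two of the low-level operations mentioned above can be stated directly. The following NumPy sketch shows the linear prediction p_t = Σ_{k=1}^{M} a_k x_{t−k} that LPCNet subtracts from the signal, and the 8-bit mu-law quantization applied to samples in WaveNet and LPCNet; the LPC coefficients here are placeholders rather than coefficients derived from the BFCC as in LPCNet itself.

import numpy as np

def lpc_predict(x, a):
    """Linear prediction p_t = sum_k a_k * x_{t-k} from past samples."""
    M = len(a)
    p = np.zeros_like(x)
    for t in range(M, len(x)):
        p[t] = np.dot(a, x[t - M:t][::-1])   # a_1*x_{t-1} + ... + a_M*x_{t-M}
    return p

def mulaw_quantize(x, mu=255):
    """8-bit mu-law companding of samples in [-1, 1] to integers 0..255."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((y + 1) / 2 * mu + 0.5).astype(np.int64)

x = 0.5 * np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # toy signal
a = np.array([1.2, -0.5])                    # placeholder LPC coefficients
residual = x - lpc_predict(x, a)             # LPCNet models this excitation
targets = mulaw_quantize(x)                  # WaveNet/LPCNet sample targets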
Table 2 (caption). Speech enhancement objective metrics on the full 824-file test set: higher is better. Top system uses oracle clean speech information; bottom section compares to published comparison system results.

EXPERIMENTS

Dataset

We use the publicly available noisy VCTK dataset [21] for our experiments. The dataset contains 56 speakers for training: 28 male and 28 female speakers from the US and Scotland. The test set contains two unseen voices, one male and one female. Further, there is another available training set, consisting of 14 male and 14 female speakers from England, which we use to test generalization to more speakers. The noisy training set contains ten types of noise: two are artificially created, and the other eight are chosen from DEMAND [22]. The two artificially created noises are speech-shaped noise and babble noise. The eight from DEMAND are noise from a kitchen, meeting room, car, metro, subway car, cafeteria, restaurant, and subway station. The noisy training files are available at four SNR levels: 15, 10, 5, and 0 dB. The noisy test set contains five other noises from DEMAND: living room, office, public square, open cafeteria, and bus. The test files have higher SNRs: 17.5, 12.5, 7.5, and 2.5 dB. All files are downsampled to 16 kHz for comparison with other systems. There are 23,075 training audio files and 824 testing audio files.

Exp 1: Speaker independence of neural vocoders

First, we test whether WaveGlow and WaveNet can generalize to unseen speakers on clean speech. Using the data described above, we train both of these models with a large number of speakers (56) and test them on 6 unseen speakers. Next, we compare their performance to LPCNet, which has previously been shown to generalize to unseen speakers. In this test, each neural vocoder synthesizes speech from the original clean acoustic parameters. Following the three baseline papers [9, 10, 11], we measure synthesis quality with objective enhancement quality metrics [23] consisting of three composite scores: CSIG, CBAK, and COVL. These three measures are on a scale from 1 to 5, with higher being better. CSIG provides an estimate of the signal quality, CBAK provides an estimate of the background noise reduction, and COVL provides an estimate of the overall quality.

LPCNet is trained for 120 epochs with a batch size of 48, where each sequence has 15 frames. WaveGlow is trained for 500 epochs with a batch size of 4 utterances. WaveNet is trained for 200 epochs with a batch size of 4 utterances. For WaveNet and WaveGlow we use GPU synthesis, while for LPCNet CPU synthesis is used, as it is faster.⁵ WaveGlow and WaveNet synthesize from clean mel-spectrograms with a window length of 64 ms and a hop size of 16 ms. LPCNet acoustic features use a window size of 20 ms and a hop size of 10 ms.

⁵ We also found that the GPU synthesis code was incomplete as of commit 3a7ef33.

We report the synthesis quality for three unseen male and three unseen female speakers, and compare them with unseen utterances from one known male speaker. For each speaker, the average quality is calculated over 10 files. Table 1 shows the composite quality results along with the objective intelligibility score from STOI [24]. We observe that WaveGlow has the best quality scores on all the measures. The female speaker scores are close to those of the known speaker, while the unseen male speaker scores are a little lower. We note that these values are not as high as for single-speaker WaveGlow, which can synthesize speech very close to the ground truth. We also note that the LPCNet scores are lower than those of WaveGlow but better than those of WaveNet. Between LPCNet and WaveNet, we do not observe a significant difference in synthesis quality for male and female voices. Although WaveNet has lower scores, it is consistent across known and unknown speakers. Thus, we can say that WaveNet generalizes to unseen speakers.
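At the 16 kHz rate used for comparison with the other systems, the 64 ms window and 16 ms hop above correspond to 1024- and 256-sample frames. A librosa-based sketch of this conditioning feature extraction follows; the 80 mel bands and the log compression are our assumptions rather than settings stated in this paper.

import numpy as np
import librosa

def mel_features(wav_path, sr=16000):
    """Log mel-spectrogram with 64 ms windows and a 16 ms hop at 16 kHz."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr,
        n_fft=1024,        # 64 ms at 16 kHz
        hop_length=256,    # 16 ms at 16 kHz
        n_mels=80)         # assumed band count
    return np.log(np.clip(mel, 1e-5, None)).T   # (frames, n_mels)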
Exp 2: Speaker independence of parametric resynthesis

Next, we test the generalizability of the PR system across different SNRs and unseen voices. We use the test set of 824 files with 4 different SNRs. The prediction model is a 3-layer bi-directional LSTM with 800 units that is trained with a learning rate of 0.001. For WORLD, the filter size is 1024 and the hop length is 5 ms. We compare the PR models with a mask-based oracle, the oracle Wiener mask (OWM), which has clean information available during test.

Table 2 reports the objective enhancement quality metrics and STOI. We observe that the OWM performs best, and that PR-WaveGlow performs better than Wave-U-Net and SEGAN on CSIG and COVL. PR-WaveGlow's CBAK score is lower, which is expected, since this score is not very high even when we synthesize clean speech (as shown in Table 1). Among the PR models, PR-WaveGlow scores best and PR-WaveNet performs worst in CSIG. The merely moderate synthesis quality of the WaveNet model hurts the performance of its PR system. The PR-WORLD and PR-LPCNet scores are lower as well; we observe that both of these models sound much better than the objective scores would suggest. We believe that, as both of these models predict F0, even a slight error in the F0 prediction affects the objective scores adversely. To test this, we ran PR-LPCNet using the noisy F0 instead of the prediction, and the quality scores increased; in informal listening, however, the subjective quality with noisy F0 is similar to or worse than with the predicted F0. Hence we can say that the objective enhancement metrics are not a very good measure of quality for PR-LPCNet and PR-WORLD.

We also test the objective quality of the PR models and the OWM against different SNRs and noise types. The results are shown in Figure 1. We observe that with decreasing SNR, the CBAK quality of the PR models stays the same, while for the OWM the CBAK score decreases rapidly. This shows that the noise has a smaller effect on background quality than in a mask-based system, i.e., the background quality is more related to the presence of synthesis artifacts than to recorded background noise.
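The oracle Wiener mask used as the upper bound above can be sketched as follows: given the true clean and noise signals, the mask |S|² / (|S|² + |N|²) is applied to the noisy STFT and the result is inverted back to a waveform. The STFT parameters here are illustrative, not taken from the paper.

import numpy as np
import librosa

def oracle_wiener(clean, noise, n_fft=1024, hop=256):
    """Oracle Wiener mask: requires the true clean and noise signals."""
    S = librosa.stft(clean, n_fft=n_fft, hop_length=hop)
    N = librosa.stft(noise, n_fft=n_fft, hop_length=hop)
    mask = np.abs(S) ** 2 / (np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-12)
    Y = librosa.stft(clean + noise, n_fft=n_fft, hop_length=hop)
    return librosa.istft(mask * Y, hop_length=hop)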
Listening tests

Next, we test the subjective quality of the PR systems with a listening test. For the listening test, we choose 12 of the 824 test files, with four files from each of the 2.5, 7.5, and 12.5 dB SNRs. We observed the 17.5 dB files to have very little noise, and all systems perform well on them. In the listening test, we also compare with the OWM and the three comparison models. For these comparison systems, we included the publicly available output files in our listening tests, selecting five files from each: Wave-U-Net has 3 from 12.5 dB and 2 from 2.5 dB; Wavenet-denoise and SEGAN have 2 common files from 2.5 dB, 2 more files each selected from 7.5 dB, and 1 from 12.5 dB. For Wave-U-Net, no 7.5 dB files were publicly available. The listening test follows the Multiple Stimuli with Hidden Reference and Anchor (MUSHRA) paradigm [25]. Subjects were presented with 8-10 anonymized and randomized versions of each file to facilitate direct comparison: 4 PR systems (PR-WaveNet, PR-WaveGlow, PR-LPCNet, PR-WORLD), 4 comparison speech enhancement systems (OWM, Wave-U-Net, Wavenet-denoise, and SEGAN), and the clean and noisy signals. Subjects were also provided reference clean and noisy versions of each file.⁶ Five subjects took part in the listening test. They were told to rate the speech quality, noise-suppression quality, and overall quality of the speech from 0-100, with 100 being the best. We observe the intelligibility of all of the files to be very high, so instead of conducting an intelligibility listening test, we ask subjects to rate the subjective intelligibility as a score from 0-100.

⁶ All files are available at http://mr-pc.org/work/icassp20/

Figure 3 shows the results of the quality listening test. PR-LPCNet performs best in all three quality scores, followed by PR-WaveGlow and PR-WORLD. The next best model is the oracle Wiener mask, followed by Wave-U-Net. Table 3 shows the subjective intelligibility ratings, where PR-LPCNet has the highest subjective intelligibility, followed by the OWM, PR-WaveGlow, and PR-WORLD. It also reports the objective quality metrics on the 12 files selected for the listening test, for comparison with Table 2 on the full test set. We observe that while PR-LPCNet and PR-WORLD have very similar objective metrics (both quality and intelligibility), they have very different subjective metrics, with PR-LPCNet being rated much higher.

Tolerance to error

Finally, we measure the tolerance of the PR models to inaccuracy in the prediction LSTM, using the two best performing vocoders, WaveGlow and LPCNet. For this test, we randomly select 30 noisy test files. We perturb the predicted features X̂ as X̂_e = X̂ + εN, where ε = MSE × e%. The random noise N is generated from a Gaussian distribution with the same mean and variance at each frequency as X. We then synthesize with the vocoder from X̂_e. For WaveGlow, X is the mel-spectrogram; for LPCNet, X is the 20 features. We repeat the LPCNet test adding noise to all features and to only the 18 BFCC features (not adding noise to F0). Figure 2 shows the objective metrics for these files. We observe that for WaveGlow, e = 0-10% does not affect the synthesis quality very much, while e > 10% decreases performance incrementally. For LPCNet, we observe that errors in the BFCC are tolerated better than errors in F0.
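A NumPy sketch of this perturbation: noise with the per-frequency mean and variance of the features is scaled by ε = MSE × e% and added to the predictions before synthesis. The feature shape, and estimating the statistics from the predicted rather than the clean features, are our assumptions.

import numpy as np

def perturb_features(X_hat, mse, e_percent, seed=0):
    """X_e = X_hat + eps * N with eps = MSE * e%, where N is Gaussian with
    the same per-frequency mean and variance as the features."""
    rng = np.random.default_rng(seed)
    mu = X_hat.mean(axis=0)                  # per-frequency mean
    sd = X_hat.std(axis=0)                   # per-frequency std dev
    N = rng.normal(mu, sd, size=X_hat.shape)
    eps = mse * e_percent / 100.0
    return X_hat + eps * N

X_hat = np.random.randn(200, 80)             # predicted mel frames (assumed shape)
X_e = perturb_features(X_hat, mse=0.1, e_percent=10)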
CONCLUSION

We show that the neural vocoders WaveGlow, WaveNet, and LPCNet can be used for speaker-independent speech synthesis when trained on 56 speakers. We also show that, using these three vocoders, the parametric resynthesis model is able to generalize to new noises and new speakers across different SNRs. We find that PR-LPCNet outperforms the oracle Wiener mask-based system in subjective quality.

Fig. 1. Overall objective quality of PR systems and OWM broken down by noise type (824 test files).

Fig. 2. Objective metrics as error is artificially added to the predictions of the acoustic features, higher is better. Error is measured as a proportion of the standard deviation of the vocoders' acoustic features over time.

Fig. 3. Subjective quality: higher is better. Error bars show twice the standard error.

Table 1. Speaker generalization of neural vocoders. Objective quality metrics for synthesis from true acoustic features, higher is better. Sorted by SIG.

Model     #spk  SIG       BAK       OVL       STOI
Seen
WaveGlow  1     4.7±0.12  2.9±0.10  3.9±0.16  0.94±0.01
LPCNet    1     3.8±0.16  2.2±0.12  2.9±0.21  0.91±0.02
WaveNet   1     3.3±0.15  2.1±0.06  2.5±0.13  0.81±0.03
Unseen - Male
WaveGlow  3     4.4±0.03  2.8±0.01  3.7±0.02  0.94±0.01
LPCNet    3     4.0±0.14  2.4±0.10  3.2±0.16  0.90±0.04
WaveNet   3     3.2±0.08  2.1±0.07  2.5±0.10  0.83±0.01
Unseen - Female
WaveGlow  3     4.7±0.04  2.9±0.03  3.9±0.05  0.95±0.01
LPCNet    3     3.9±0.15  2.3±0.12  3.0±0.20  0.90±0.04
WaveNet   3     3.3±0.10  2.0±0.06  2.5±0.10  0.80±0.01

Table 3. Speech enhancement objective metrics and subjective intelligibility on the 12 listening test files.

Model          SIG  BAK  OVL  STOI  Subj. Intel.
Oracle Wiener  4.3  3.8  3.9  0.98  0.91
PR-WaveGlow    3.7  2.4  3.0  0.91  0.90
PR-World       3.0  1.9  2.2  0.86  0.90
PR-LPCNet      3.0  1.8  2.2  0.85  0.92
PR-WaveNet     2.9  2.0  2.2  0.83  0.74

¹ https://github.com/NVIDIA/waveglow
² https://github.com/mozilla/LPCNet
³ https://github.com/NVIDIA/nv-wavenet
⁴ https://github.com/CSTR-Edinburgh/merlin

REFERENCES

[1] Jingdong Chen, Jacob Benesty, Yiteng Huang, and Simon Doclo, "New insights into the noise reduction Wiener filter," IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 4, pp. 1218-1234, 2006.
[2] Soumi Maiti and Michael I Mandel, "Parametric resynthesis with neural vocoders," in IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2019, to appear.
[3] Soumi Maiti and Michael I Mandel, "Speech denoising by parametric resynthesis," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2019, pp. 6995-6999.
[4] Ryan Prenger, Rafael Valle, and Bryan Catanzaro, "WaveGlow: A flow-based generative network for speech synthesis," arXiv preprint arXiv:1811.00002, 2018.
[5] Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W Senior, and Koray Kavukcuoglu, "WaveNet: A generative model for raw audio," in Proc. ISCA SSW, Sept. 2016, p. 125.
[6] Jean-Marc Valin and Jan Skoglund, "LPCNet: Improving neural speech synthesis through linear prediction," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2019, pp. 5891-5895.
[7] Yuxuan Wang, Arun Narayanan, and DeLiang Wang, "On training targets for supervised speech separation," IEEE Transactions on Audio, Speech, and Language Processing, vol. 22, no. 12, pp. 1849-1858, 2014.
[8] Hakan Erdogan, John R. Hershey, Shinji Watanabe, and Jonathan Le Roux, "Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2015.
[9] Dario Rethage, Jordi Pons, and Xavier Serra, "A WaveNet for speech denoising," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2018, pp. 5069-5073.
[10] Santiago Pascual, Antonio Bonafonte, and Joan Serrà, "SEGAN: Speech enhancement generative adversarial network," arXiv preprint arXiv:1703.09452, 2017.
[11] Craig Macartney and Tillman Weyde, "Improved speech enhancement with the Wave-U-Net," arXiv preprint arXiv:1811.11307, 2018.
[12] Daniel Stoller, Sebastian Ewert, and Simon Dixon, "Wave-U-Net: A multi-scale neural network for end-to-end audio source separation," arXiv preprint arXiv:1806.03185, 2018.
[13] Masanori Morise, Fumiya Yokomori, and Kenji Ozawa, "WORLD: A vocoder-based high-quality speech synthesis system for real-time applications," IEICE Transactions on Information and Systems, vol. 99, no. 7, pp. 1877-1884, Jul. 2016.
[14] Diederik P. Kingma and Jimmy Ba, "Adam: A method for stochastic optimization," arXiv:1412.6980 [cs], Dec. 2014.
[15] Keiichi Tokuda, Takayoshi Yoshimura, Takashi Masuko, Takao Kobayashi, and Tadashi Kitamura, "Speech parameter generation algorithms for HMM-based speech synthesis," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2000, vol. 3, pp. 1315-1318.
[16] Diederik P Kingma and Prafulla Dhariwal, "Glow: Generative flow with invertible 1x1 convolutions," arXiv preprint arXiv:1807.03039, 2018.
[17] Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aaron van den Oord, Sander Dieleman, and Koray Kavukcuoglu, "Efficient neural audio synthesis," arXiv preprint arXiv:1802.08435, 2018.
[18] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv preprint arXiv:1412.3555, 2014.
[19] Sercan Ö. Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Ng, Jonathan Raiman, et al., "Deep Voice: Real-time neural text-to-speech," in Proceedings of the International Conference on Machine Learning, 2017, pp. 195-204.
[20] Zhizheng Wu, Oliver Watts, and Simon King, "Merlin: An open source neural network speech synthesis system," in Proc. SSW, 2016.
[21] Cassia Valentini-Botinhao et al., "Noisy speech database for training speech enhancement algorithms and TTS models," University of Edinburgh, School of Informatics, Centre for Speech Technology Research (CSTR), 2017.
[22] Joachim Thiemann, Nobutaka Ito, and Emmanuel Vincent, "The diverse environments multi-channel acoustic noise database (DEMAND): A database of multichannel environmental noise recordings," in Proceedings of Meetings on Acoustics ICA2013, 2013, vol. 19, p. 035081.
[23] Yi Hu and Philipos C Loizou, "Evaluation of objective measures for speech enhancement," in Proceedings of Interspeech, 2006.
[24] Cees H Taal, Richard C Hendriks, Richard Heusdens, and Jesper Jensen, "A short-time objective intelligibility measure for time-frequency weighted noisy speech," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2010, pp. 4214-4217.
[25] "Method for the subjective assessment of intermediate quality level of audio systems," Tech. Rep. BS.1534-3, International Telecommunication Union Radiocommunication Standardization Sector (ITU-R), 2015.
[ "https://github.com/NVIDIA/waveglow", "https://github.com/mozilla/LPCNet", "https://github.com/NVIDIA/nv-wavenet", "https://github.com/CSTR-Edinburgh/merlin" ]
[ "Enhancing Relation Extraction Using Syntactic Indicators and Sentential Contexts", "Enhancing Relation Extraction Using Syntactic Indicators and Sentential Contexts" ]
[ "Qiongxing Tao [email protected] ", "Cn ", "Xiangfeng Luo ", "Luoxf@shu Edu Cn ", "Hao Wang [email protected] ", "\nSchool of Computer Engineering and Science\nSchool of Computer Engineering and Science\nShanghai University Shanghai\nChina\n", "\nSchool of Computer Engineering and Science\nShanghai University Shanghai\nChina\n", "\nShanghai University Shanghai\nChina\n" ]
[ "School of Computer Engineering and Science\nSchool of Computer Engineering and Science\nShanghai University Shanghai\nChina", "School of Computer Engineering and Science\nShanghai University Shanghai\nChina", "Shanghai University Shanghai\nChina" ]
[]
State-of-the-art methods for relation extraction consider the sentential context by modeling the entire sentence. However, syntactic indicators, i.e., certain phrases or words such as prepositions, are more informative than other words and may be beneficial for identifying semantic relations. Other approaches using fixed text triggers capture such information but ignore lexical diversity. To leverage both syntactic indicators and sentential contexts, we propose an indicator-aware approach for relation extraction. Firstly, we extract syntactic indicators under the guidance of syntactic knowledge. Then we construct a neural network to incorporate both the syntactic indicators and the entire sentences into better relation representations. In this way, the proposed model alleviates the impact of noisy information from entire sentences and breaks the limit of text triggers. Experiments on the SemEval-2010 Task 8 benchmark dataset show that our model significantly outperforms the state-of-the-art methods.
10.1109/ictai.2019.00227
[ "https://arxiv.org/pdf/1912.01858v1.pdf" ]
208,617,509
1912.01858
a95293a950c6006f4703a6ddcf6aa22ae3777689
Enhancing Relation Extraction Using Syntactic Indicators and Sentential Contexts

Qiongxing Tao ([email protected]), Xiangfeng Luo ([email protected]), Hao Wang ([email protected])
School of Computer Engineering and Science, Shanghai University, Shanghai, China

Index Terms: relation extraction, syntactic indicators, sentential context

I. INTRODUCTION

Relation extraction is the task of assigning a semantic relation to the target entity pair in a given sentence. Accurately extracting semantic relations from unstructured texts is important for many natural language applications, such as information extraction [1] [2], question answering [3] [4], and the construction of semantic networks [5] [6]. Recent approaches for relation extraction primarily concentrate on deep neural networks [7]-[12]. Commonly, these models encode the entire sentence to capture the contextual information for relation representation, based on the assumption that each word in a sentence helps classify relations. A majority of these methods use entity information to improve the performance of relation extraction, such as entity position [7] [8], entity hypernyms [7], and latent entity typing [13]. They all assume that the information related to the target entities is more important. However, these models have two disadvantages: first, some words in a sentence that are irrelevant to the relation act as noise for classification; second, entity information is very limited in predicting relation types, and the contributions from other words are prone to be ignored. Besides, a few approaches rely on particular lexical constraints [14] and relation triggers [15] that explicitly indicate the occurrence of relations in sentences. However, these methods are not suited to cases where no relation trigger is found in the sentence.

In this paper, we revisit the problem from another perspective. As shown in Fig. 1, the phrase "moved into" is the key to identifying the relation type Entity-Destination(e1,e2). On the contrary, it is insufficient to recognize relation types from the linguistic features of the entity "boss" and the entity "office", let alone from a non-existent explicit relation trigger. Intuitively, words like "of" and "from" are informative for relation extraction. Hereafter we call this kind of word or phrase a syntactic indicator.
Syntactic indicators carry rich information for identifying the semantic relations between target entities. Moreover, the words "My", "new", "yesterday" in the first sentence are ubiquitous but not useful for relation identification; we can obtain better performance by reducing their impact. Therefore, we propose an indicator-aware neural model that conditions on both the syntactic indicator and the sentential context for better performance on relation extraction. This is achieved by a two-phase process. Firstly, under the guidance of syntactic knowledge, we extract syntactic indicators by removing unrelated words through entity disambiguation, principal component extraction, and unrelated entity removal. Then, we feed both the entire sentences and the syntactic indicators into a contextual encoder based on the pre-trained BERT (Bidirectional Encoder Representations from Transformers) [16] to encode the semantic relation representations. The syntactic indicator is treated as the principal constraint on the contextual representation. In this way, the proposed model takes advantage of the relevant information and reduces the impact of noisy words. Our main contributions are listed as follows:

• We define syntactic indicators that help to distinguish relation types and extract syntactic indicators under the guidance of syntactic knowledge, which is conducive to capturing the important information and reducing noisy information that is irrelevant to relation extraction.

• We propose an indicator-aware neural model using the pre-extracted indicators to improve relation extraction, which makes use of the key information by imposing constraints on contextual representations for better prediction.

• The proposed model obtains an F1-score of 90.36% on the benchmark dataset, outperforming the state-of-the-art methods. Further ablation experiments demonstrate that incorporating syntactic indicators into contextual representations significantly improves the performance of relation extraction.

II. RELATED WORK

Conventional non-neural models for relation extraction include feature-based models [17] [18] and kernel-based models [19] [20]. These methods invariably suffer from error propagation due to their high dependence on the manual feature extraction process; moreover, they may omit useful information for relation extraction. Therefore, the performance of these methods is very limited. Recently, a variety of works on relation extraction focus on deep neural networks. These methods mitigate the problem of error propagation and show promising results. On the one hand, Zeng et al. [7] propose a deep convolutional neural network (CNN) to address this task. They utilize sentence-level features and lexical-level features, including the entities, the left and right tokens of the entities, and the WordNet hypernyms of the entities. Santos et al. [8] propose the Ranking CNN (CR-CNN) model using a new rank loss to reduce the impact of artificial classes. They also demonstrate that the words between the target nominals are almost as useful as position embeddings. Inspired by their work, we extract syntactic indicators from the text between the two entities. Shen and Huang [12] propose an attention-based convolutional neural network (Attention-CNN), which employs a word-level attention mechanism to get the critical information for relation representation. These methods have limitations in learning sequence structures because of the shortcomings of convolutional neural networks.
On the other hand, RNN-based models show outstanding performance in learning the linguistic structure of text. Zhang and Wang [21] propose a bidirectional recurrent neural network (Bi-RNN) to learn the long-term dependency between two entities; however, it suffers from the vanishing gradient problem of RNNs. Soon after, Zhang et al. [9] apply the bidirectional LSTM network (Bi-LSTM) and utilize word position and external features to improve the performance of relation extraction, including POS tags, named entity information, and dependency parses. In [10], Zhou et al. apply attention mechanisms in bidirectional LSTM networks (Attention Bi-LSTM). Xiao and Liu [11] separate each sentence into three context subsequences according to the locations of the two target entities and use a hierarchical recurrent neural network with two Attention Bi-LSTM networks (Hier Attention Bi-LSTM) to get a better result. Most recently, Lee et al. [13] propose a model incorporating entity-aware attention mechanisms with latent entity typing (LET) and obtain state-of-the-art performance.

The approaches mentioned above encode the entire sentence to capture the contextual features, at the cost of ignoring other important features in the sentence. Although a number of methods utilize various kinds of entity information, including entity position, entity semantics, latent entity typing, and entity hypernyms, and such information has an irreplaceable impact on identifying relations, it is too limited to fully capture the distinctive features. There are also some works that concentrate on relation triggers, phrases that explicitly express the occurrence of a relationship in the given text. Björne et al. [15] propose relation triggers and determine their arguments to reduce the complexity of the task. The Open IE system ReVerb [14] also uses special phrases to identify different relation types via lexical constraints. Nevertheless, many texts contain no explicit relation trigger, and semantic relations cannot be extracted from such sentences with these methods. Unlike these methods, our approach makes use of syntactic indicators, which vary with the different expressions of semantic relations rather than matching fixed phrase templates.

Pre-trained language models have shown great success on many NLP tasks [22] [23]. In particular, BERT proposed by Devlin et al. [16] has had a significant impact; it learns deep bidirectional representations by jointly conditioning on both left and right context in the training procedure. It has been applied to multiple NLP tasks and obtains new state-of-the-art results on eleven tasks, such as text classification, sequence labeling, and question answering. In recent research, Wu and He [24] propose an R-BERT model, which employs the pre-trained BERT language model and reaches the top of the leaderboard in relation extraction.

Related works on relation extraction can be mainly grouped into two categories, supervised methods [7] [11] [25] and distantly supervised methods [26]-[28]. They differ in whether the data contains a large number of noisy labels. Supervised methods without noisy labels achieve more reliable results and play a dominant role in relation classification. In this paper, we focus on supervised relation extraction.

III. OUR MODEL

In this section, we first give an overview of the proposed indicator-aware neural model. After that, we present each module in detail.
A. Model Architecture

The overall architecture of the proposed model is shown in Fig. 2. Given a sentence, we first extract the corresponding syntactic indicator under the guidance of syntactic knowledge (the process is detailed in the following paragraphs). Subsequently, the entire sentence and the indicator sequence are concatenated after WordPiece tokenization [29]. Then, we feed the aggregate token sequence into a BERT-based contextual encoder to learn a deep bidirectional representation for each token. The final representations of the aggregate sequence, the two entities, and the syntactic indicator are acquired with different operations in the later network layers. At last, these vector representations are concatenated to produce a final prediction distribution.

B. Definition of Syntactic Indicators

Definition: The syntactic indicator is certain words or phrases in a sentence, providing essential information to identify the semantic relation between target entities.

Different from text triggers, syntactic indicators are rich in manifestation rather than matching fixed phrase templates. Each sentence produces an exclusive syntactic indicator. It may consist of any verbs, prepositions, pronouns or phrases, depending on the current language expression. As shown in Fig. 3, "caused by" is the syntactic indicator in the first instance. Accordingly, we can affirm that the relation Cause-Effect(e2,e1) exists between the two target entities e1 = "shock" and e2 = "attack". Similarly, in the other two instances we can recognize the relation Content-Container(e1,e2) and the relation Instrument-Agency(e2,e1) based on "are enclosed in" and "using", respectively.

C. Syntactic Indicator Extraction

We extract syntactic indicators from the text between the two target entities by removing irrelevant words. Fortunately, the target subsequence is accessible from a sentence via the entity markers. We then acquire the syntactic indicators under the guidance of syntactic knowledge, which can be characterized as follows:

a) Entity Disambiguation: Nouns that surround a conjunction word "and" or "or", and compound nouns that consist of no less than two nouns, are disambiguated by removing the restrictive and supplementary words. As shown in Fig. 3, "shock and anger" is transformed to "shock", while "plastic case" and "propagation method" are transformed to "case" and "method"; we remove the highlighted parts marked with a subscript 1. Each instance in the labeled data contains only one relationship (for example, the relation in the first example of Fig. 3 is about "shock" and "attack"), so the nouns in the target entities naturally remain.

b) Principal Component Extraction: Remove adjectives, adverbs and other modifiers from the text to obtain the principal components expressing the primary semantic relation. In Fig. 3, the highlighted parts marked with a subscript 2 are removed from the subsequences, such as [the, surprise], [a, clear, hard], and [first, the, infeasible, the, constraint].

c) Unrelated Entity Removal: Remove any other named entity, and its corresponding actions, except the two target entities, to obtain an indicator sequence shaped like "shock caused by attack", "coins are enclosed in case" and "analyzer using method", as shown in Fig. 3. In the third instance, the irrelevant entity "paths" and its corresponding action "identifies" are removed.

Finally, we acquire an exclusive indicator sequence from a given sentence, which is deemed to be free of irrelevant words. The syntactic indicator is included between the two target entities.
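As an illustration of how rule b) could be realized with an off-the-shelf POS tagger, the sketch below drops modifier tags from the text between the entities. This is only a sketch of the idea, not the authors' exact procedure; the spaCy model and the set of dropped tags are our assumptions.

import spacy

nlp = spacy.load("en_core_web_sm")           # assumed tagging model

DROP = {"ADJ", "ADV", "DET", "NUM"}          # assumed modifier tag set

def principal_components(between_text):
    """Keep words that carry the relation (verbs, prepositions, nouns),
    dropping modifiers from the text between the two entities."""
    doc = nlp(between_text)
    return " ".join(t.text for t in doc if t.pos_ not in DROP)

print(principal_components("coins are enclosed in a clear hard plastic case"))
# e.g. "coins are enclosed in plastic case"; rule a) would then reduce
# the compound noun "plastic case" to "case".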
D. BERT-based Contextual Encoder

The pre-trained BERT language representation model [16] is a multi-layer bidirectional Transformer encoder [30], designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. The input of BERT can be a single sentence or a pair of sentences. A special token [CLS] is always the first token of each sequence. Sentence pairs are separated with a token [SEP] and packed together into a single sequence. BERT is the first fine-tuning based representation model for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. Because of the ubiquitous use of BERT recently, we omit an exhaustive description of its architecture.

a) BERT Module: Given the sentence S, we insert four markers e11, e12, e21 and e22 at the beginning and end of the two target entities (e1, e2), which helps capture the entity locations. Since the corresponding indicator sequence S* always starts with entity e1 and ends with e2, we insert "#" behind e1 and "$" before e2 to mark the syntactic indicator. To fine-tune BERT, we feed both sequences into the WordPiece tokenizer and then concatenate the obtained subtokens into a single token sequence T. Following the original implementation of BERT, we add a token [CLS] at the beginning of the token sequence and separate the two sequences with a token [SEP]. Then, we feed T into BERT to produce the contextual representation of each token.

b) Aggregate Sequence Representation: The final hidden state sequence H output from the BERT module corresponds to the task-oriented embedding of each token. Let H_0 be the hidden state of the first special token [CLS]; we add an activation operation and a fully connected layer to obtain a vector H'_0 as the representation of the aggregate sequence:

H'_0 = W_0 tanh(H_0) + b_0.    (1)

c) Entity Representations: The hidden states H_m, H_n, H_p and H_q are the vector representations of the four entity markers e11, e12, e21 and e22. For the target entities, the vectors between H_m and H_n represent entity e1, and the vectors between H_p and H_q represent entity e2. We apply an average operation to get a single vector representation, followed by a tanh activation and a fully connected layer. In this step, the two entities share the same parameters W_e and b_e. The final representations of the two target entities are respectively

H_e1 = W_e tanh( (1 / (n − m + 1)) Σ_{t=m}^{n} H_t ) + b_e,
H_e2 = W_e tanh( (1 / (q − p + 1)) Σ_{t=p}^{q} H_t ) + b_e.    (2)

d) Syntactic Indicator Representation: H_{i+1} to H_{i+j} are the hidden state vectors corresponding to the indicator sequence S*. We also apply an average operation followed by a tanh activation and a fully connected layer to obtain the final representation

z = W_z tanh( (1 / j) Σ_{t=1}^{j} H_{i+t} ) + b_z,    (3)

where H_{i+t} is the t-th vector representation in S*.

For fine-tuning, we concatenate H'_0, H_e1, H_e2, and z, then consecutively add two fully connected layers with weights W_1, W_2 and biases b_1, b_2. Finally, we obtain the relation representation vector r used for classifying relations:

r = W_2 [ W_1 [ concat(H'_0, H_e1, H_e2, z) ] + b_1 ] + b_2.    (4)
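A compact PyTorch sketch of Eqs. (1)-(4) follows. It assumes the BERT hidden states H have already been computed (for example with a pre-trained BERT encoder) and that the entity spans m..n and p..q and the indicator span are known token index ranges; the hidden size and batch handling are illustrative.

import torch
import torch.nn as nn

class RelationHead(nn.Module):
    """Pools BERT hidden states into the relation representation r, Eqs. (1)-(4)."""
    def __init__(self, d_h=768, n_rel=19):
        super().__init__()
        self.w0 = nn.Linear(d_h, d_h)        # Eq. (1), [CLS] vector
        self.we = nn.Linear(d_h, d_h)        # Eq. (2), shared by both entities
        self.wz = nn.Linear(d_h, d_h)        # Eq. (3), indicator span
        self.w1 = nn.Linear(4 * d_h, d_h)    # Eq. (4)
        self.w2 = nn.Linear(d_h, d_h)
        self.cls = nn.Linear(d_h, n_rel)     # Eq. (5); softmax applied in the loss

    def pool(self, H, lo, hi, layer):        # average tokens lo..hi, tanh, FC
        return layer(torch.tanh(H[:, lo:hi + 1].mean(dim=1)))

    def forward(self, H, e1, e2, ind):       # H: (batch, seq, d_h); spans (lo, hi)
        h0 = self.w0(torch.tanh(H[:, 0]))    # [CLS] representation
        h_e1 = self.pool(H, *e1, self.we)
        h_e2 = self.pool(H, *e2, self.we)
        z = self.pool(H, *ind, self.wz)
        r = self.w2(self.w1(torch.cat([h0, h_e1, h_e2, z], dim=-1)))
        return self.cls(r)                   # logits over the 19 relation types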
E. Relation Classifier

Given an instance x with entire sentence S and indicator sequence S*, we obtain the relation representation r from the relation encoder. For classification, we apply a fully connected softmax layer to produce a probability distribution p(y|x, θ) over all predefined relation types:

p(y|x, θ) = softmax(W* r + b*),    (5)

where y ∈ Y is the target relation type and θ refers to all learnable parameters in the network, including W* ∈ R^{|Y|×d_h} and b* ∈ R^{|Y|}, where |Y| is the number of relation types.

F. Training Procedure

For the purpose of making a clear distinction between different relation categories and reducing the influence of noise, we design our loss function based on the commonly used cross-entropy, following the rank loss function proposed by Santos et al. [8]. The total loss L on a batch of size k can be expressed as

L = − Σ_{i=1}^{k} log p(y+ | x, θ) − β Σ_{i=1}^{k} log(1 − p(y− | x, θ)) + λ ‖θ‖²₂,    (6)

where the first term on the right side decreases as the probability p(y+|x, θ) increases, and the second term, weighted by a hyper-parameter β, decreases as the probability p(y−|x, θ) decreases. For each instance, y+ ∈ Y is the correct relation label, while y− ∈ Y is the negative category chosen with the highest probability among all incorrect relation types in each training round:

y− = argmax_{y ∈ Y, y ≠ y+} p(y|x, θ).    (7)

In relation extraction, an artificial class Other is used to refer to a relation between target entities that does not belong to any of the natural classes. The class Other is too noisy to have common representative characteristics, since it consists of many different categories of relations. For this reason, we calculate the loss on each relation class except Other to reduce the impact of noise, reflected in the loss function as y+ ≠ Other and y− ≠ Other. To alleviate overfitting, we add a dropout layer before the fully connected softmax layer during training and constrain the L2 regularization with a coefficient λ, as in the third term on the right side.
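A sketch of the objective in Eqs. (5)-(7): per batch, the log-probability of the gold class is rewarded, the most-confident wrong class is penalized with weight β, and either term is skipped when the class in question is Other. Here `logits` are the unnormalized scores W* r + b* from Eq. (5) and `params` the model parameters; the exact masking details are our reading of the description, and β and λ take the values from Table I.

import torch
import torch.nn.functional as F

OTHER = 0                                    # assumed index of the class Other
BETA, LAM = 5.0, 5e-3                        # values from Table I

def ranking_loss(logits, y_true, params):
    """Eq. (6): reward the gold class, penalize the most confident wrong
    class (Eq. (7)), skipping either term when that class is Other."""
    p = F.softmax(logits, dim=-1)            # Eq. (5)
    masked = p.scatter(1, y_true.unsqueeze(1), -1.0)   # hide the gold class
    y_neg = masked.argmax(dim=-1)            # Eq. (7)
    p_pos = p.gather(1, y_true.unsqueeze(1)).squeeze(1)
    p_neg = p.gather(1, y_neg.unsqueeze(1)).squeeze(1)
    pos = torch.where(y_true != OTHER, -torch.log(p_pos + 1e-12),
                      torch.zeros_like(p_pos))
    neg = torch.where(y_neg != OTHER, -torch.log(1 - p_neg + 1e-12),
                      torch.zeros_like(p_neg))
    l2 = sum((w ** 2).sum() for w in params)
    return pos.sum() + BETA * neg.sum() + LAM * l2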
IV. EXPERIMENTS

A. Dataset and Evaluation Metric

To evaluate the performance of our model, we conduct experiments on the SemEval-2010 Task 8 dataset [31], the published benchmark for relation extraction. The dataset contains 10,717 annotated instances, including 8,000 instances for training and 2,717 instances for testing. All instances are annotated with 9 directed relation types and an artificial class Other. The nine directed relations are Cause-Effect, Instrument-Agency, Product-Producer, Content-Container, Entity-Origin, Entity-Destination, Component-Whole, Member-Collection, and Message-Topic. Taking direction into consideration, the total number of relation types is 19. We adopt the macro-averaged F1-score over the nine actual relations (excluding Other) to evaluate the model, which is the official evaluation metric for SemEval-2010 Task 8.

B. Experimental Settings

For the pre-trained BERT model, we use the uncased model to integrate our approach. The hyper-parameters we set in the proposed model are shown in Table I. Furthermore, the parameters of the pre-trained BERT model are initialized according to the original [16].

C. Result

The results of various neural models are shown in Table II. We achieve a strong empirical result with the proposed approach. Table II shows that our model obtains an F1-score of 90.36%, outperforming the state-of-the-art models substantially. The best results of the CNN-based and RNN-based models range from 84% to 86%, while the recent R-BERT model proposed by Wu and He [24] obtains the best F1-score of 89.25%, an approximately 4-point gap over previous methods. It is noteworthy that the proposed relation extraction model, introducing syntactic indicators, achieves a further performance improvement on this task.

D. Analysis

To demonstrate that introducing syntactic indicators indeed affects relation extraction, we create two more settings for comparison and further build another neural model without the BERT structure for more forceful evidence. The experimental results shown in Table III provide ample proof that incorporating syntactic indicators indeed improves the performance of relation extraction.

a) Experiments on the BERT-based Model:
• Two additional experiments use only one of the two sequences; the experimental results are listed in lines two through four of Table III. The experiment using only the entire sentence as input produces an F1-score of 89.30%, which is 1.06% lower than the proposed approach. Although an indicator sequence is composed of just a few words, the experiment using only the indicator sequence produces an F1-score of 86.79%. It can be said that indicator sequences contain enough information for classifying relations but are likely to provide incomplete information. The proposed BERT-based model leverages both the syntactic indicator and the sentential context for relation extraction, which can be considered to maintain a balance between reducing noise and capturing complete features.

b) Experiments on the Non-BERT Model:
• We construct a model without the BERT structure for further confirmation, which consists of a CNN module to capture the indicative features from indicator sequences and a Bi-LSTM module to capture the contextual information from entire sentences. The experimental results obtained with this non-BERT model are listed in lines six through nine of Table III. The model obtains an F1-score of 85.9% by combining the information from the two modules, which outperforms the best CNN-based and RNN-based models. Even compared with the approaches using high-level lexical features such as WordNet, DPT, DEP, NLP tags or NER tags, it still has the best result. Likewise, we separately feed one of the sequences into the model; correspondingly, the entire sentence is encoded using the Bi-LSTM module, while the syntactic indicator is encoded using the CNN module. Unsurprisingly, both F1-scores are decent but lower, which further proves the validity of constraining relation representations with syntactic indicators.
• We further capture the features of the entire sentence twice, using the CNN module and the Bi-LSTM module respectively, and then combine them to make a final prediction; the result becomes worse instead. This proves that noisy information unrelated to entity relations exists in the sentence, and that excessive use of irrelevant features as relational features degrades the performance of relation extraction. Therefore, it is necessary to impose constraints on semantic relation representations to avoid the impact of noisy information.

c) Contributions of Syntactic Indicators:
• Table IV shows the contributions of syntactic indicators to precision, recall and F1-score for each relation category (performed on the BERT-based model). The proposed model incorporating syntactic indicators increases the F1-score on each category; the precisions on all categories except Entity-Destination are increased, and the recalls on most categories are improved or remain the same. In particular, the precisions on Instrument-Agency, Member-Collection, Message-Topic and Product-Producer increase by 3.13, 3.06, 2.61 and 5.95 percentage points, respectively. The effects of syntactic indicators are more prominently reflected in these categories because instances containing such relation types often have more noisy words in the text between the target entities.

V. CONCLUSIONS

In this paper, we propose syntactic indicators that are insensitive to lexical word forms and a novel indicator-aware neural model leveraging both syntactic indicators and sentential contexts for relation extraction. The proposed approach, performed on the BERT-based model, achieves an F1-score of 90.36% on SemEval-2010 Task 8, outperforming the state-of-the-art methods. The implementation with the non-BERT model also achieves the best result among CNN-based and RNN-based models. Thanks to the incorporation of syntactic indicators, capturing more determinative features for classifying relations while reducing the impact of noise, our approach effectively improves the performance of relation extraction. In the future, we expect to extend syntactic indicators to more complex multi-relation extraction and distantly supervised relation extraction. Furthermore, we will research how to utilize deep neural networks to automatically locate the indicator in sentences, rather than extracting indicators under the guidance of syntactic knowledge.
V. CONCLUSIONS

In this paper, we propose syntactic indicators that are insensitive to lexical word forms, and a novel indicator-aware neural model leveraging both syntactic indicators and sentential contexts to fulfill relation extraction. The proposed approach performed on the BERT-based model achieves an F1-score of 90.36% on SemEval-2010 Task 8, outperforming the state-of-the-art methods. The implementation with the non-BERT model also achieves the best result among CNN-based and RNN-based models. Thanks to the incorporation of syntactic indicators, which capture more determinative features for classifying relations while reducing the impact of noise, our approach effectively improves the performance of relation extraction. In the future, we expect to apply syntactic indicators to more complex multi-relation extraction and distantly supervised relation extraction. Furthermore, we will research how to utilize deep neural networks to automatically locate the indicator in sentences, rather than extracting indicators under the guidance of syntactic knowledge.

Fig. 1. The decisive influence of syntactic indicators in identifying relations. The right part shows the syntactic indicator and the correct relation between target entities for each instance.

Fig. 2. The overall architecture of the proposed model.

Fig. 3. Syntactic indicator extraction. The blue highlights with a subscript 1 are removed following the first rule, Entity Disambiguation. The orange highlights with a subscript 2 and the green highlights with a subscript 3 are removed following Principal Component Extraction and Unrelated Entities Removal, respectively. For example, the highlighted parts marked with a subscript 2 are removed from subsequences, such as [the, surprise], [a, clear, hard], and [first, the, infeasible, the, constraint].

TABLE I. HYPER-PARAMETERS

Description | Value
Max Sequence Length after Tokenization | 128
Batch Size for Training | 16
Initial Learning Rate for Adam | 2 × 10^-5
Number of Training Epochs | 5.0
Dropout Rate | 0.1
L2 Regularization Coefficient | 5 × 10^-3
Hyper-parameter β in Loss Function | 5.0

TABLE II. PERFORMANCE COMPARISON ON EXTRACTING RELATIONS

Model | F1
CNN (Zeng et al., 2014) [7] | 78.9
  + WN | 82.7
CR-CNN (Santos et al., 2015) [8] | 84.1
Attention CNN (Shen and Huang, 2016) [12] | 84.3
  + POS, WN, WAN | 85.9
Bi-LSTM (Zhang et al., 2015) [21] | 82.7
  + POS, NER, DEP, WN | 84.3
Attention Bi-LSTM (Zhou et al., 2016) [10] | 84.0
Hier Attention Bi-LSTM (Xiao and Liu, 2016) [11] | 84.3
Attention Bi-LSTM (Lee et al., 2019) [13] | 84.7
  + LET | 85.2
R-BERT (Wu et al., 2019) [24] | 89.25
Indicator-aware BERT (Ours) | 90.36

TABLE III. EXPERIMENTAL RESULTS BASED ON DIFFERENT INPUTS AND MODELS

Model | Input | F1
BERT-based | Entire Sentence + Indicator Sequence | 90.36
BERT-based | Entire Sentence | 89.30
BERT-based | Indicator Sequence | 86.79
Non-BERT | LSTM: Entire Sentence, CNN: Indicator Sequence | 85.9
Non-BERT | LSTM: Entire Sentence | 84.4
Non-BERT | CNN: Indicator Sequence | 82.5
Non-BERT | LSTM: Entire Sentence, CNN: Entire Sentence | 84.0

TABLE IV. CONTRIBUTIONS OF SYNTACTIC INDICATORS ON PRECISION, RECALL AND F1-SCORE FOR EACH RELATION CATEGORY (IS: Indicator Sequence)

Relation | Precision (- / +IS) | Recall (- / +IS) | F1-score (- / +IS)
Cause-Effect | 93.27 / 94.48 | 92.99 / 93.90 | 93.13 / 94.19
Component-Whole | 86.52 / 88.46 | 88.46 / 88.46 | 87.48 / 88.46
Content-Container | 89.05 / 90.77 | 93.23 / 92.19 | 91.09 / 91.47
Entity-Destination | 93.84 / 93.33 | 93.84 / 95.89 | 93.84 / 94.59
Entity-Origin | 89.66 / 90.77 | 90.70 / 91.47 | 90.17 / 91.12
Instrument-Agency | 85.92 / 89.05 | 78.21 / 78.21 | 81.88 / 83.28
Member-Collection | 84.49 / 87.55 | 88.84 / 87.55 | 86.61 / 87.55
Message-Topic | 87.54 / 90.15 | 96.93 / 94.64 | 92.00 / 92.34
Product-Producer | 84.84 / 90.79 | 89.61 / 89.61 | 87.16 / 90.20
ACKNOWLEDGMENT

The research reported in this paper was supported in part by the National Natural Science Foundation of China under grant No. 91746203.

REFERENCES

[1] A. Fader, S. Soderland, and O. Etzioni, "Identifying relations for open information extraction," in Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2011, pp. 1535-1545.
[2] F. Wu and D. S. Weld, "Open information extraction using Wikipedia," in Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2010, pp. 118-127.
[3] R. S. J. J. Y. Fan, T. H. C. T.-S. Chua, and M.-Y. Kan, "Using syntactic and semantic relation analysis in question answering," in Proceedings of the 14th Text REtrieval Conference (TREC), vol. Special Publication 500-266. National Institute of Standards and Technology, 2005.
[4] W.-t. Yih, X. He, and C. Meek, "Semantic parsing for single-relation question answering," in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics, 2014, pp. 643-648.
Fellbaum, "Semantic networks of english," Cogni- tion, vol. 41, no. 1-3, pp. 197-229, 1991. A multilingual database with lexical semantic networks. P Vossen, Kluwer Academic Publishers. doi10DordrechtP. Vossen, "A multilingual database with lexical semantic networks," Dordrecht: Kluwer Academic Publishers. doi, vol. 10, pp. 978-94, 1998. Relation classification via convolutional deep neural network. D Zeng, K Liu, S Lai, G Zhou, J Zhao, Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. COLING 2014, the 25th International Conference on Computational Linguistics: Technical PapersAssociation for Computational LinguisticsD. Zeng, K. Liu, S. Lai, G. Zhou, and J. Zhao, "Relation classification via convolutional deep neural network," in Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers. Association for Computational Linguistics, 2014, pp. 2335-2344. Classifying relations by ranking with convolutional neural networks. C Santos, B Xiang, B Zhou, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingAssociation for Computational Linguistics1Long Papers)C. dos Santos, B. Xiang, and B. Zhou, "Classifying relations by ranking with convolutional neural networks," in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, 2015, pp. 626-634. Bidirectional long shortterm memory networks for relation classification. S Zhang, D Zheng, X Hu, M Yang, Proceedings of the 29th Pacific Asia conference on language, information and computation. the 29th Pacific Asia conference on language, information and computationAssociation for Computational LinguisticsS. Zhang, D. Zheng, X. Hu, and M. Yang, "Bidirectional long short- term memory networks for relation classification," in Proceedings of the 29th Pacific Asia conference on language, information and computation. Association for Computational Linguistics, 2015, pp. 73-78. Attentionbased bidirectional long short-term memory networks for relation classification. P Zhou, W Shi, J Tian, Z Qi, B Li, H Hao, B Xu, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational Linguistics2Short Papers)P. Zhou, W. Shi, J. Tian, Z. Qi, B. Li, H. Hao, and B. Xu, "Attention- based bidirectional long short-term memory networks for relation classi- fication," in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), vol. 2. Asso- ciation for Computational Linguistics, 2016, pp. 207-212. Semantic relation classification via hierarchical recurrent neural network with attention. M Xiao, C Liu, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersAssociation for Computational LinguisticsM. Xiao and C. 
Liu, "Semantic relation classification via hierarchical recurrent neural network with attention," in Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. Association for Computational Linguistics, 2016, pp. 1254-1263. Attention-based convolutional neural network for semantic relation extraction. X Huang, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersAssociation for Computational LinguisticsX. Huang et al., "Attention-based convolutional neural network for semantic relation extraction," in Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. Association for Computational Linguistics, 2016, pp. 2526- 2536. Semantic relation classification via bidirectional lstm networks with entity-aware attention using latent entity typing. J Lee, S Seo, Y S Choi, Symmetry. 116785J. Lee, S. Seo, and Y. S. Choi, "Semantic relation classification via bidirectional lstm networks with entity-aware attention using latent entity typing," Symmetry, vol. 11, no. 6, p. 785, 2019. Open question answering over curated and extracted knowledge bases. A Fader, L Zettlemoyer, O Etzioni, Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. the 20th ACM SIGKDD international conference on Knowledge discovery and data miningACMA. Fader, L. Zettlemoyer, and O. Etzioni, "Open question answering over curated and extracted knowledge bases," in Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2014, pp. 1156-1165. Extracting contextualized complex biological events with rich graph-based feature sets. J Björne, J Heimonen, F Ginter, A Airola, T Pahikkala, T Salakoski, Computational Intelligence. 274J. Björne, J. Heimonen, F. Ginter, A. Airola, T. Pahikkala, and T. Salakoski, "Extracting contextualized complex biological events with rich graph-based feature sets," Computational Intelligence, vol. 27, no. 4, pp. 541-557, 2011. Bert: Pre-training of deep bidirectional transformers for language understanding. J Devlin, M.-W Chang, K Lee, K Toutanova, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics1J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, 2019, pp. 4171-4186. Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. N Kambhatla, Proceedings of the ACL 2004 on Interactive poster and demonstration sessions. the ACL 2004 on Interactive poster and demonstration sessionsAssociation for Computational Linguistics22N. Kambhatla, "Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations," in Proceedings of the ACL 2004 on Interactive poster and demonstration sessions. 
[18] F. M. Suchanek, G. Ifrim, and G. Weikum, "Combining linguistic and statistical analysis to extract relations from web documents," in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2006, pp. 712-717.
[19] L. Qian, G. Zhou, F. Kong, Q. Zhu, and P. Qian, "Exploiting constituent dependencies for tree kernel-based semantic relation extraction," in Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1. Association for Computational Linguistics, 2008, pp. 697-704.
[20] R. J. Mooney and R. C. Bunescu, "Subsequence kernels for relation extraction," in Advances in Neural Information Processing Systems, 2006, pp. 171-178.
[21] D. Zhang and D. Wang, "Relation classification via recurrent neural network," arXiv preprint arXiv:1508.01006, 2015.
[22] A. M. Dai and Q. V. Le, "Semi-supervised sequence learning," in Advances in Neural Information Processing Systems, 2015, pp. 3079-3087.
[23] J. Howard and S. Ruder, "Universal language model fine-tuning for text classification," in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2018, pp. 328-339.
[24] S. Wu and Y. He, "Enriching pre-trained language model with entity information for relation classification," arXiv preprint arXiv:1905.08284, 2019.
[25] R. Socher, B. Huval, C. D. Manning, and A. Y. Ng, "Semantic compositionality through recursive matrix-vector spaces," in Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, 2012, pp. 1201-1211.
[26] M. Mintz, S. Bills, R. Snow, and D. Jurafsky, "Distant supervision for relation extraction without labeled data," in Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics, 2009, pp. 1003-1011.
[27] D. Zeng, K. Liu, Y. Chen, and J. Zhao, "Distant supervision for relation extraction via piecewise convolutional neural networks," in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. The Association for Computational Linguistics, 2015, pp. 1753-1762.
[28] B. Min, R. Grishman, L. Wan, C. Wang, and D. Gondek, "Distant supervision for relation extraction with an incomplete knowledge base," in Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. The Association for Computational Linguistics, 2013, pp. 777-782.
[29] R. Sennrich, B. Haddow, and A. Birch, "Neural machine translation of rare words with subword units," in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, 2016, pp. 1715-1725.
[30] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems, 2017, pp. 5998-6008.
[31] I. Hendrickx, S. N. Kim, Z. Kozareva, P. Nakov, D. Ó Séaghdha, S. Padó, M. Pennacchiotti, L. Romano, and S. Szpakowicz, "SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals," in Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions. Association for Computational Linguistics, 2009, pp. 94-99.
[ "GA-Net: Guided Aggregation Net for End-to-end Stereo Matching", "GA-Net: Guided Aggregation Net for End-to-end Stereo Matching" ]
[ "Feihu Zhang \nUniversity of Oxford\n\n", "Victor Prisacariu \nUniversity of Oxford\n\n", "Ruigang Yang \nBaidu Research\nBaidu Inc\n\n", "Philip H S Torr \nUniversity of Oxford\n\n" ]
[ "University of Oxford\n", "University of Oxford\n", "Baidu Research\nBaidu Inc\n", "University of Oxford\n" ]
In the stereo matching task, matching cost aggregation is crucial in both traditional methods and deep neural network models in order to accurately estimate disparities. We propose two novel neural net layers, aimed at capturing local and whole-image cost dependencies respectively. The first is a semi-global aggregation layer, which is a differentiable approximation of semi-global matching; the second is the local guided aggregation layer, which follows a traditional cost-filtering strategy to refine thin structures. These two layers can be used to replace the widely used 3D convolutional layer, which is computationally costly and memory-consuming as it has cubic computational/memory complexity. In the experiments, we show that nets with a two-layer guided aggregation block easily outperform the state-of-the-art GC-Net, which has nineteen 3D convolutional layers. We also train a deep guided aggregation network (GA-Net), which achieves better accuracy than state-of-the-art methods on both the Scene Flow dataset and the KITTI benchmarks. Code will be available at
10.1109/cvpr.2019.00027
[ "https://arxiv.org/pdf/1904.06587v1.pdf" ]
119,304,432
1904.06587
3a4545fc08c6719776e041543f5721491f43524f
GA-Net: Guided Aggregation Net for End-to-end Stereo Matching

Feihu Zhang (University of Oxford), Victor Prisacariu (University of Oxford), Ruigang Yang (Baidu Research, Baidu Inc), Philip H. S. Torr (University of Oxford)

* Part of the work was done when working in Baidu Research.

Introduction

Stereo reconstruction is a major research topic in computer vision, robotics and autonomous driving. It aims to estimate 3D geometry by computing disparities between matching pixels in a stereo image pair. It is challenging due to a variety of real-world problems, such as occlusions, large textureless areas (e.g. sky, walls etc.), reflective surfaces (e.g. windows), thin structures and repetitive textures. Traditionally, stereo reconstruction is decomposed into three important steps: feature extraction (for matching cost computation), matching cost aggregation and disparity prediction [15, 47, 42, 11]. Feature-based matching is often ambiguous, with wrong matches having a lower cost than the correct ones, due to occlusions, smoothness, reflections, noise etc. Therefore, cost aggregation is a key step needed to obtain accurate disparity estimations in challenging regions.

[Figure 1 panels: (a) input image, (b) GC-Net [13], (c) our GA-Net-2, (d) ground truth.]

Deep neural networks have been used for matching cost computation in, e.g., [30, 33], with (i) cost aggregation based on traditional approaches, such as cost filtering [10] and semi-global matching (SGM) [9], and (ii) disparity computation as a separate step. Such methods considerably improve over traditional pixel matching, but still struggle to produce accurate disparity results in textureless, reflective and occluded regions. End-to-end approaches that link matching with disparity estimation were developed in e.g. DispNet [15], but it was not until GC-Net [13] that cost aggregation, through the use of 3D convolutions, was incorporated in the training pipeline. The more recent work of [3], PSMNet, further improves accuracy by implementing the stacked hourglass backbone [17] and considerably increasing the number of 3D convolutional layers for cost aggregation. The large memory and computation cost incurred by using 3D convolutions is reduced by down-sampling and up-sampling frequently, but this leads to a loss of precision in the disparity map.
Among these approaches, traditional semi-global matching (SGM) [9] and cost filtering [10] are robust and efficient cost aggregation methods which have been widely used in many industrial products. But they are not differentiable and cannot easily be trained in an end-to-end manner. In this work, we propose two novel cost aggregation layers for end-to-end stereo reconstruction to replace the use of 3D convolutions. Our solution considerably increases accuracy, while decreasing both memory and computation costs. First, we introduce a semi-global guided aggregation layer (SGA), which implements a differentiable approximation of semi-global matching (SGM) [9] and aggregates the matching cost in different directions over the whole image. This enables accurate estimations in occluded regions or large textureless/reflective regions. Second, we introduce a local guided aggregation layer (LGA) to cope with thin structures and object edges, in order to recover the loss of details caused by down-sampling and up-sampling layers. As illustrated in Fig. 1, a cost aggregation block with only two GA layers and two 3D convolutional layers easily outperforms the state-of-the-art GC-Net [13], which has nineteen 3D convolutional layers. More importantly, one GA layer has only 1/100 of the computational complexity in terms of FLOPs (floating-point operations) of a 3D convolution. This allows us to build a real-time GA-Net model, which achieves better accuracy compared with other existing real-time algorithms and runs at a speed of 15~20 fps. We further increase the accuracy by improving the network architectures used for feature extraction and matching cost aggregation. The full model, which we call "GA-Net", achieves state-of-the-art accuracy on both the Scene Flow dataset [15] and the KITTI benchmarks [7, 16].

Related Work

Feature-based matching cost is often ambiguous, as wrong matches can easily have a lower cost than correct ones, due to occlusions, smoothness, reflections, noise etc. To deal with this, many cost aggregation approaches have been developed to refine the cost volume and achieve better estimations. This section briefly introduces related work on the application of deep neural networks in stereo reconstruction, with a focus on existing matching cost aggregation strategies, and briefly reviews approaches for traditional local and semi-global cost aggregation.

Deep Neural Networks for Stereo Matching

Deep neural networks were used to compute patch-wise similarity scores in [4, 6, 29, 33], with traditional cost aggregation and disparity computation/refinement methods [9, 10] used to get the final disparity maps. These approaches achieved state-of-the-art accuracy but, limited by the traditional matching cost aggregation step, often produced wrong predictions in occluded regions, large textureless/reflective regions and around object edges. Some other methods looked to improve the performance of traditional cost aggregation, with, e.g., SGM-Nets [23] predicting the penalty parameters for SGM [9] using a neural net, whereas Schönberger et al. [22] learned to fuse proposals by optimization in stereo matching and Yang et al. proposed to aggregate costs using a minimum spanning tree [28]. Recently, end-to-end deep neural network models have become popular. Mayer et al. created a large synthetic dataset to train end-to-end deep neural networks for disparity estimation (e.g. DispNet) [15].
Pang et al. [19] built a two-stage convolutional neural network to first estimate and then refine the disparity maps. Tulyakov et al. proposed end-to-end deep stereo models for practical applications [26]. GC-Net [13] incorporated feature extraction, matching cost aggregation and disparity estimation into a single end-to-end deep neural model to get state-of-the-art accuracy on several benchmarks. PSMNet [3] used pyramid feature extraction and a stacked hourglass block [18] with twenty-five 3D convolutional layers to further improve the accuracy.

Cost Aggregation

Traditional stereo matching algorithms [1, 9, 27] added an additional constraint to enforce smoothness by penalizing changes of neighboring disparities. This can be both local and (semi-)global, as described below.

Local Cost Aggregation

The cost volume C is formed of the matching costs at each pixel location for each candidate disparity value d. It has a size of H × W × D_max (with H: image height, W: image width, D_max: maximum disparity) and can be sliced into D_max slices, one for each candidate disparity d. An efficient cost aggregation method is the local cost filter framework [10, 31], where each slice of the cost volume C(d) is filtered independently by a guided image filter [8, 25, 31]. The filtering for pixel location p = (x, y) at disparity d is a weighted average over all neighbors q ∈ N_p in the same slice C(d):

$C^A(p,d) = \sum_{q \in N_p} \omega(p,q) \cdot C(q,d)$   (1)

where C(q, d) is the matching cost at location q for candidate disparity d, and $C^A(p,d)$ represents the aggregated matching cost. Different image filters [8, 25, 31] can be used to produce the guided filter weights ω. Since these methods only aggregate the cost in a local region N_p, they can run at fast speeds and reach real-time performance.
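The slice-wise filtering of Eq. (1) can be sketched as follows. This is a minimal illustration that substitutes a uniform box filter for the guided-filter weights ω(p, q); a real guided filter would compute the weights from a guidance image.

```python
import torch
import torch.nn.functional as F

def local_cost_aggregation(cost, kernel=9):
    """Sketch of Eq. (1): filter each disparity slice C(d) of the cost
    volume independently. A uniform box filter stands in for the
    guided-filter weights omega(p, q). cost: [D, H, W]."""
    weight = torch.full((1, 1, kernel, kernel), 1.0 / kernel**2)
    slices = cost.unsqueeze(1)                        # [D, 1, H, W]: D slices
    out = F.conv2d(slices, weight, padding=kernel // 2)
    return out.squeeze(1)                             # aggregated cost C^A
```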
Semi-Global Matching

When enforcing (semi-)global aggregation, the matching cost and the smoothness constraints are formulated into one energy function E(D) [9], with D the disparity map of the input image. The problem of stereo matching can now be formulated as finding the best disparity map D* that minimizes the energy E(D):

$E(D) = \sum_{p} \Big( C_p(D_p) + \sum_{q \in N_p} P_1 \cdot \delta(|D_p - D_q| = 1) + \sum_{q \in N_p} P_2 \cdot \delta(|D_p - D_q| > 1) \Big)$   (2)

The first term $\sum_p C_p(D_p)$ is the sum of matching costs at all pixel locations p for disparity map D. The second term adds a constant penalty P_1 for locations q in the neighborhood of p that have small disparity discontinuities in disparity map D ($|D_p - D_q| = 1$). The last term adds a larger constant penalty P_2 for all larger disparity changes ($|D_p - D_q| > 1$). Hirschmuller proposed to aggregate matching costs in 1D from sixteen directions to get an approximate solution with O(KN) time complexity, which is well known as semi-global matching (SGM) [9]. The cost $C^A_r(p,d)$ of a location p at disparity d aggregates along a path over the whole image in direction r, and is defined recursively as:

$C^A_r(p,d) = C(p,d) + \min\{\, C^A_r(p-r,d),\; C^A_r(p-r,d-1)+P_1,\; C^A_r(p-r,d+1)+P_1,\; \min_i C^A_r(p-r,i)+P_2 \,\}$   (3)

where r is a unit direction vector. The same aggregation steps were used in MC-CNN [23, 30], and similar iterative steps were employed in [1, 2, 14]. In the following section, we detail our much more efficient guided aggregation (GA) strategies, which include a semi-global aggregation (SGA) layer and a local guided aggregation (LGA) layer. Both GA layers can be implemented with backpropagation in end-to-end models to replace the inefficient 3D convolutions and obtain higher accuracy.

Guided Aggregation Net

In this section, we describe our proposed guided aggregation network (GA-Net), including the guided aggregation (GA) layers and the improved network architecture.

Guided Aggregation Layers

State-of-the-art end-to-end stereo matching neural nets such as [3, 13] build a 4D matching cost volume (with size H × W × D_max × F, H: height, W: width, D_max: max disparity, F: feature size) by concatenating features between the stereo views, computed at different disparity values. This is next refined by a cost aggregation stage, and finally used for disparity estimation. Different from these approaches, and inspired by semi-global and local matching cost aggregation methods [9, 10], we propose our semi-global guided aggregation (SGA) and local guided aggregation (LGA) layers, as outlined below.

Semi-Global Aggregation

Traditional SGM [9] aggregates the matching cost iteratively in different directions (Eq. (3)). There are several difficulties in using such a method in end-to-end trainable deep neural network models. First, SGM has many user-defined parameters (P_1, P_2), which are not straightforward to tune; all of these parameters become unstable factors during neural network training. Second, the cost aggregations and penalties in SGM are fixed for all pixels, regions and images, without adaptation to different conditions. Third, the hard minimum selection leads to a lot of fronto-parallel surfaces in depth estimations. We design a new semi-global cost aggregation step which supports backpropagation. This is more effective than traditional SGM and can be used repeatedly in a deep neural network model to boost the cost aggregation effects. The proposed aggregation step is:

$C^A_r(p,d) = C(p,d) + \mathrm{sum}\{\, w_1(p,r) \cdot C^A_r(p-r,d),\; w_2(p,r) \cdot C^A_r(p-r,d-1),\; w_3(p,r) \cdot C^A_r(p-r,d+1),\; w_4(p,r) \cdot \max_i C^A_r(p-r,i) \,\}$   (4)

This differs from SGM in three ways. First, we make the user-defined parameters learnable and add them as penalty coefficients/weights of the matching cost terms. These weights are therefore adaptive and more flexible at different locations for different situations. Second, we replace the first/external minimum selection in Eq. (3) with a weighted sum, without any loss in accuracy. This change was proven effective in [24], where convolutions with strides were used to replace the max-pooling layers to get an all-convolutional network without loss of accuracy. Third, the internal/second minimum selection is changed to a maximum. This is because the learning target in our models is to maximize the probabilities at the ground truth depths instead of minimizing the matching costs. Since $\max_i C^A_r(p-r,i)$ in Eq. (4) can be shared by $C^A_r(p,d)$ for all d at a location, we do not replace it with another weighted summation, in order to reduce the computational complexity.

For both Eq. (3) and Eq. (4), the values of $C^A_r(p,d)$ increase along the path, which may lead to very large values. We normalize the weights of the terms to avoid such a problem. This leads to our new semi-global aggregation:

$C^A_r(p,d) = \mathrm{sum}\{\, w_0(p,r) \cdot C(p,d),\; w_1(p,r) \cdot C^A_r(p-r,d),\; w_2(p,r) \cdot C^A_r(p-r,d-1),\; w_3(p,r) \cdot C^A_r(p-r,d+1),\; w_4(p,r) \cdot \max_i C^A_r(p-r,i) \,\} \quad \text{s.t.} \quad \sum_{i=0,1,2,3,4} w_i(p,r) = 1$   (5)

C(p, d) is the cost volume (with a size of H × W × D_max × F). As in traditional SGM [9], the cost volume can be sliced into D_max slices along the third dimension, one for each candidate disparity d, and each of these slices repeats the aggregation operation of Eq. (5) with the shared weight matrices ($w_{0...4}$). All the weights $w_{0...4}$ can be obtained from a guidance subnet (as shown in Fig. 2). Different from the original SGM, which aggregates in sixteen directions, the proposed aggregations are done in only four directions (left, right, up and down) along each row or column over the whole image, in order to improve efficiency, namely r ∈ {(0, 1), (0, −1), (1, 0), (−1, 0)}. The final aggregated output $C^A(p)$ is obtained by selecting the maximum over the four directions:

$C^A(p,d) = \max_r C^A_r(p,d)$   (6)

The last maximum selection keeps the best message from only one direction. This guarantees that the aggregation effects are not blurred by the other directions. The backpropagation for w and C(p, d) in the SGA layer can be done inversely to Eq. (5) (details are available in Appendix A). Our SGA layer can be repeated several times in the neural network model to obtain better cost aggregation effects (as illustrated in Fig. 2).
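To make the recursion of Eqs. (5)-(6) concrete, here is a minimal single-direction sketch. Tensor layout, boundary handling and the loop over columns are assumptions; a practical implementation runs all four directions with parallel kernels as described above, and then takes the element-wise maximum over directions per Eq. (6).

```python
import torch

def sga_forward_right(cost, w):
    """Sketch of the SGA recursion of Eq. (5) for the left-to-right
    direction r = (0, 1). cost: [D, H, W] cost volume; w: [5, H, W]
    normalized guidance weights (w0..w4). Edge handling is an assumption."""
    D, H, W = cost.shape
    agg = torch.empty_like(cost)
    agg[:, :, 0] = cost[:, :, 0]             # first column has no predecessor
    for x in range(1, W):
        prev = agg[:, :, x - 1]              # C^A_r(p - r, .), shape [D, H]
        dm1 = torch.roll(prev, 1, dims=0)    # C^A_r(p - r, d - 1)
        dm1[0] = prev[0]                     # copy at the disparity edge
        dp1 = torch.roll(prev, -1, dims=0)   # C^A_r(p - r, d + 1)
        dp1[-1] = prev[-1]
        agg[:, :, x] = (w[0, :, x] * cost[:, :, x]
                        + w[1, :, x] * prev
                        + w[2, :, x] * dm1
                        + w[3, :, x] * dp1
                        + w[4, :, x] * prev.max(dim=0).values)
    return agg  # the full layer takes max over the four directional outputs
```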
Local Aggregation

We now introduce the local guided aggregation (LGA) layer, which aims to refine thin structures and object edges. Down-sampling and up-sampling are widely used in stereo matching models, which blurs thin structures and object edges. The LGA layer learns several guided filters to refine the matching cost and aid in the recovery of thin structure information. The local aggregation follows the cost filter definition [10] (Eq. (1)) and can be written as:

$C^A(p,d) = \mathrm{sum}\Big\{\, \sum_{q \in N_p} \omega_0(p,q) \cdot C(q,d),\; \sum_{q \in N_p} \omega_1(p,q) \cdot C(q,d-1),\; \sum_{q \in N_p} \omega_2(p,q) \cdot C(q,d+1) \,\Big\} \quad \text{s.t.} \quad \sum_{q \in N_p} \big(\omega_0(p,q) + \omega_1(p,q) + \omega_2(p,q)\big) = 1$   (7)

Different slices (D_max in total) of the cost volume share the same filtering/aggregation weights in LGA. This is the same as the original cost filter framework [10] and the SGA (Eq. (5)) in this paper. However, different from the traditional cost filter [10], which uses a K × K filter kernel to filter the cost volume in a K × K local/neighborhood region N_p, the proposed LGA layer has three K × K filters (ω_0, ω_1 and ω_2) at each pixel location p, for disparities d, d − 1 and d + 1 respectively. Namely, it aggregates with a K × K × 3 weight matrix in a K × K local region at each pixel location p. The setting of the weight matrix is also similar to [11], but weights and filters are shared during the aggregation as designed in [10].

Efficient Implementation

We use several 2D convolutional layers to build a fast guidance subnet (as illustrated in Fig. 2). The implementation is similar to [32]. It uses the reference image as input and outputs the aggregation weights w (Eq. (5)). For a 4D cost volume C of size H × W × D × F (H: height, W: width, D: max disparity, F: feature size), the output of the guidance subnet is split, reshaped and normalized into four H × W × K × F (K = 5) weight matrices for the four directions' aggregation using Eq. (5). Note that aggregations for different disparities corresponding to a slice d share the same aggregation weights. Similarly, the LGA layer needs to learn an H × W × 3K² × F (K = 5) weight matrix and aggregates using Eq. (7). Even though the SGA layer involves an iterative aggregation across the width or the height, the forward and backward passes can be computed in parallel due to the independence between elements in different feature channels or rows/columns. For example, when aggregating in the left direction, the elements in different channels or rows are independent and can be computed simultaneously. The elements of the LGA layer can also be computed in parallel by simply decomposing it into element-wise matrix multiplications and summations. In order to increase the receptive field of the LGA layer, we repeat the computation of Eq. (7) twice with the same weight matrix, which is similar to [5].
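A minimal sketch of the LGA step of Eq. (7) follows. The per-pixel filter tensor layout and the use of `unfold` to gather K × K patches are assumptions, and the weights are assumed pre-normalized as required by the constraint in Eq. (7).

```python
import torch
import torch.nn.functional as F

def lga_forward(cost, w, k=5):
    """Sketch of Eq. (7). cost: [D, H, W]; w: [3, k*k, H, W] per-pixel
    filters (omega_0, omega_1, omega_2), assumed normalized so the
    3*k*k weights at each pixel sum to one."""
    D, H, W = cost.shape
    # gather k x k patches around each pixel: [D, k*k, H, W]
    patches = F.unfold(cost.unsqueeze(1), k, padding=k // 2).view(D, k * k, H, W)
    same = (w[0] * patches).sum(dim=1)           # sum_q omega_0(p,q) C(q,d)
    prev = (w[1] * patches).sum(dim=1)           # filtered slice, used at d-1
    nxt  = (w[2] * patches).sum(dim=1)           # filtered slice, used at d+1
    out = same.clone()
    out[1:]  += torch.roll(prev, 1, dims=0)[1:]  # adds the C(q, d-1) term
    out[:-1] += torch.roll(nxt, -1, dims=0)[:-1] # adds the C(q, d+1) term
    return out
```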
Network Architecture

As illustrated in Fig. 2, the GA-Net consists of four parts: the feature extraction block, the cost aggregation for the 4D cost volume, the guidance subnet that produces the cost aggregation weights, and the disparity regression. For feature extraction, we use a stacked hourglass network which is densely connected by concatenations between different layers. The feature extraction block is shared by both the left and right views. The extracted features of the left and right images are then used to form a 4D cost volume. Several SGA layers are used for the cost aggregation, and LGA layers can be applied before and after the softmax layer of the disparity regression. This refines the thin structures and compensates for the accuracy loss caused by the down-sampling of the cost volume. The weight matrices (in Eq. (5) and Eq. (7)) are generated by an extra guidance subnet which uses the reference view (e.g. the left image) as input. The guidance subnet consists of several fast 2D convolutional layers, and its outputs are reshaped and normalized into the required weight matrices for the GA layers.¹

Loss Function

We adopt the smooth L1 loss function to train our models. Smooth L1 is robust at disparity discontinuities and has low sensitivity to outliers or noise compared with the L2 loss. The loss function for training our models is defined as:

$L(d, \hat{d}) = \frac{1}{N} \sum_{n=1}^{N} l(|d_n - \hat{d}_n|), \qquad l(x) = \begin{cases} x - 0.5, & x \geq 1 \\ x^2/2, & x < 1 \end{cases}$   (8)

where $|d - \hat{d}|$ measures the absolute error of the disparity predictions and N is the number of valid pixels with ground truth used for training. For the disparity estimation, we employ the disparity regression proposed in [13]:

$\hat{d} = \sum_{d=0}^{D_{max}} d \times \sigma(-C^A(d))$   (9)

The disparity prediction $\hat{d}$ is the sum of each disparity candidate weighted by its probability. The probability of each disparity d is calculated from the aggregated cost via the softmax operation σ(·). The disparity regression has been shown to be more robust than classification-based methods and can achieve sub-pixel accuracy (a short sketch of Eqs. (8)-(9) follows after the training details below).

Experiments

In this section, we evaluate our GA-Nets with different settings using the Scene Flow [15] and KITTI [7, 16] datasets. We implement our architectures using PyTorch or Caffe [12] (the latter only for the real-time models). All models are optimized with Adam (β_1 = 0.9, β_2 = 0.999). We train with a batch size of 16 on eight GPUs, using 240 × 576 random crops from the input images. The maximum disparity is set to 192. Before training, we normalize each channel of the image by subtracting its mean and dividing by its standard deviation. We train the model on the Scene Flow dataset for 10 epochs with a constant learning rate of 0.001. For the KITTI datasets, we fine-tune the models pretrained on the Scene Flow dataset for a further 640 epochs. The learning rate for fine-tuning begins at 0.001 for the first 300 epochs and decreases to 0.0001 for the remaining epochs.
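The following is a minimal sketch of the disparity regression of Eq. (9) and the smooth L1 loss of Eq. (8); it assumes the aggregated cost volume has one channel per candidate disparity, and notes that PyTorch's built-in smooth L1 (with its default beta of 1) matches l(x) exactly.

```python
import torch
import torch.nn.functional as F

def disparity_regression(cost_agg, max_disp=192):
    """Sketch of Eq. (9): soft-argmin over the aggregated cost volume.
    cost_agg: [B, D, H, W] with D == max_disp; the probability of each
    disparity is sigma(-C^A(d)), i.e. softmax over the disparity axis."""
    prob = F.softmax(-cost_agg, dim=1)
    disp = torch.arange(max_disp, device=cost_agg.device,
                        dtype=cost_agg.dtype).view(1, -1, 1, 1)
    return (prob * disp).sum(dim=1)          # expected (sub-pixel) disparity

def disparity_loss(pred, target, mask):
    """Sketch of Eq. (8): smooth L1 with beta = 1 is exactly l(x);
    mask selects the N pixels that have valid ground truth."""
    return F.smooth_l1_loss(pred[mask], target[mask])
```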
Ablation Study

We evaluate the performance of GA-Nets with different settings, including different architectures and different numbers (0-4) of GA layers. As listed in Table 1, the guided aggregation models significantly outperform the baseline setting, which only has 3D convolutional layers for cost aggregation. The new architectures for feature extraction and cost aggregation improve the accuracy by 0.14% on the KITTI dataset and 0.9% on the Scene Flow dataset. Finally, the best setting of GA-Net, with three SGA layers and one LGA layer, achieves the best 3-pixel threshold error rate of 2.71% on the KITTI 2015 validation set. It also achieves the best average EPE of 0.84 pixels and the best 1-pixel threshold error rate of 9.9% on the Scene Flow test set.

Effects of Guided Aggregations

In this section, we compare the guided aggregation strategies with other matching cost aggregation methods. We also analyze the effects of the GA layers by observing the post-softmax probabilities output by different models. Firstly, our proposed GA-Nets are compared with the cost aggregation architectures in GC-Net (with nineteen 3D convolutions) and PSMNet (with twenty-five 3D convolutions). We fix the feature extraction architecture as proposed above. As shown in Table 2, GA-Nets have fewer parameters, run at a faster speed and achieve better accuracy. E.g., with only two GA layers and two 3D convolutions, our GA-Net-2 outperforms GC-Net by 0.29 pixels in average EPE. Also, the GA-Net-7, with three GA layers and seven 3D convolutions, outperforms the current best PSMNet [3], which has twenty-five 3D convolutional layers. We also study the effects of the GA layers by comparing with the same architectures without GA steps. These baseline models "GA-Nets*" have the same network architectures and all other settings, except that no GA layer is implemented. As shown in Fig. 3, for all these models, GA layers significantly improve the models' accuracy (by 0.5-1.0 pixels in average EPE). For example, GA-Net-2, with two 3D convolutions and two GA layers, produces a lower EPE (1.51) compared with GA-Net*-11 (1.54), which utilizes eleven 3D convolutions. This implies that two GA layers are more effective than nine 3D convolutional layers. Finally, in order to observe and analyze the effects of GA layers, in Fig. 4 we plot the post-softmax probabilities with respect to a range of candidate disparities. These probabilities are directly used for disparity estimation via Eq. (9) and reflect the effectiveness of the cost aggregation strategies. The data samples are all selected from challenging regions, such as a large textureless region (sky), a reflective region (window of a car) and pixels around object edges. Three different models are compared. The first model (first row of Fig. 4) only has 3D convolutions (without any GA layers), the second model (second row of Fig. 4) has SGA layers, and the last model (last row of Fig. 4) has both SGA layers and an LGA layer. As illustrated in Fig. 4(a), for large textureless regions there is a lot of noise, since there are no distinctive features in these regions for correct matching. The SGA layers successfully suppress this noise in the probabilities by aggregating surrounding matching information. The LGA layer further concentrates the probability peak on the ground truth value and thereby refines the matching results. Similarly, in the sample from the reflective region (Fig. 4(b)), the SGA and LGA layers correct the wrong matches and concentrate the peak on the correct disparity value. For the samples around object edges (Fig. 4(c)), there are usually two peaks in the probability distribution, influenced by the background and the foreground respectively.
The SGA and LGA use spatial aggregation along with an appropriate maximum selection to cut down the aggregation of wrong matching information from the background, and therefore suppress the false probability peak that appears at the background's disparity value.

Comparisons with SGMs and 3D Convolutions

The SGA layer is a differentiable approximation of SGM [9]. But it produces far better results compared with both the original SGM with handcrafted features and MC-CNN [30] with CNN-based features (as shown in Table 5). This is because 1) SGA does not have any user-defined parameters: they are all learned in an end-to-end fashion; and 2) the aggregation of SGA is fully guided and controlled by the weight matrices. The guidance subnet learns effective geometrical and contextual knowledge to control the directions, scopes and strengths of the cost aggregations. Moreover, compared with the original SGM, most of the fronto-parallel approximations in large textureless regions are avoided (an example is shown in Fig. 5). This might benefit from: 1) the use of the soft weighted sum in Eq. (5) (instead of the hard min/max selection in Eq. (3)); and 2) the regression loss of Eq. (9), which helps achieve sub-pixel accuracy. Our SGA layer is also more efficient and effective than the 3D convolutional layer. This is because the 3D convolutional layer can only aggregate in a local region restricted by the kernel size. As a result, a series of 3D convolutions along with encoder and decoder architectures is indispensable in order to achieve good results. As a comparison, our SGA layer aggregates semi-globally in a single layer, which is more efficient. Another advantage of SGA is that the aggregation's direction, scope and strength are fully guided by variable weights according to the different geometrical and contextual information at different locations. E.g., SGA behaves totally differently in occlusions and in large smooth regions. The 3D convolutional layer, in contrast, has fixed weights and always performs the same for all locations in the whole image.

Complexity and Real-time Models

The computational complexity of one 3D convolutional layer is O(K³CN), where N is the number of elements of the output blob, K is the size of the convolutional kernel and C is the channel number of the input blob. As a comparison, the complexity of SGA is O(4KN) or O(8KN) for four- or eight-direction aggregations. In GC-Net [13] and PSMNet [3], K = 3 and C = 32, 64 or 128, while in our GA-Nets K = 5 (for the SGA layer). Therefore, the computational complexity in terms of floating-point operations (FLOPs) of the proposed SGA step is less than 1/100 of that of one 3D convolutional layer (a back-of-the-envelope estimate is given at the end of this subsection). The SGA layers are much faster and more effective than 3D convolutions. This allows us to build an accurate real-time model. We implement a Caffe [12] version of the GA-Net-1 (with only one 3D convolutional layer and without LGA layers). The model is further simplified by using 4× down-sampling and up-sampling for the cost volume. The real-time model runs at a speed of 15~20 fps for 300×1000 images on a TESLA P40 GPU. We also compare the accuracy of the results with state-of-the-art real-time models. As shown in Table 3, the real-time GA-Net far outperforms other existing real-time stereo matching models.
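As a back-of-the-envelope check of the complexity claim, the following snippet compares the per-output-element cost of the two operations, under the assumption that one multiply-add counts as two FLOPs (the N output elements cancel in the ratio).

```python
# Rough per-output-element FLOPs comparison; counts are assumptions,
# intended only to illustrate the O(K^3 C N) vs. O(4 K N) gap above.
def conv3d_flops(K=3, C=32):
    return 2 * K**3 * C          # one 3D convolution output element

def sga_flops(K=5, directions=4):
    return 2 * directions * K    # one SGA output element, four directions

for C in (32, 64, 128):
    print(f"C={C}: 3D conv / SGA ~ {conv3d_flops(C=C) / sga_flops():.0f}x")
# -> roughly 43x, 86x and 173x: on the order of the paper's claim that
#    SGA is about two orders of magnitude cheaper for the larger widths.
```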
Evaluations on Benchmarks

For the benchmark evaluations, we use the GA-Net-15 with full settings. We compare our GA-Net with the state-of-the-art deep neural network models on the Scene Flow dataset and the KITTI benchmarks.

Scene Flow Dataset

The Scene Flow synthetic dataset [15] contains 35,454 training and 4,370 testing images. We use the "final" set for training and testing. GA-Nets are compared with other state-of-the-art DNN models by evaluating the average end point error (EPE) and the 1-pixel threshold error rate on the test set. The results are presented in Table 2. We find that our GA-Net outperforms the state of the art on both evaluation metrics by a noteworthy margin (a 2.2% improvement in error rate and a 0.25 pixel improvement in EPE compared with the current best PSMNet [3]).

KITTI 2012 and 2015 Datasets

After training on the Scene Flow dataset, we fine-tune the GA-Net-15 on the KITTI 2015 and KITTI 2012 datasets respectively. The models are then evaluated on the test sets. According to the online leader board, as shown in Table 4 and Table 5, our GA-Net has fewer of the inefficient 3D convolutions but achieves better accuracy. It surpasses the current best PSMNet in all evaluation metrics. Examples are shown in Fig. 6. The GA-Nets can effectively aggregate the correct matching information into the challenging large textureless or reflective regions to get precise estimations. They also preserve object structures very well.

Conclusion

In this paper, we developed much more efficient and effective guided matching cost aggregation (GA) strategies, including the semi-global aggregation (SGA) and the local guided aggregation (LGA) layers, for end-to-end stereo matching. The GA layers significantly improve the accuracy of the disparity estimation in challenging regions, such as occlusions, large textureless/reflective regions and thin structures. The GA layers can be used to replace computationally costly 3D convolutions and get better accuracy.
Figure 1: Performance illustrations. (a) A challenging input image. (b) Result of the state-of-the-art method GC-Net [13], which has nineteen 3D convolutional layers for matching cost aggregation. (c) Result of our GA-Net-2, which only uses two proposed GA layers and two 3D convolutional layers. It aggregates the matching information into the large textureless region and is an order of magnitude faster than GC-Net. (d) Ground truth.

Figure 2: (a) Architecture overview. The left and right images are fed to a weight-sharing feature extraction pipeline. It consists of a stacked hourglass CNN and is connected by concatenations. The extracted left and right image features are then used to form a 4D cost volume, which is fed into a cost aggregation block for regularization, refinement and disparity regression. The guidance subnet (green) generates the weight matrices for the guided cost aggregations (SGA and LGA). (b) SGA layers semi-globally aggregate the cost volume in four directions. (c) The LGA layer is used before the disparity regression and locally refines the 4D cost volume several times.

Figure 3: Illustration of the effects of guided aggregations. GA-Nets are compared with the same architectures without GA layers. Evaluations are on the Scene Flow dataset using average EPE (plot axes: number of 3D convolutions vs. end point error; curves: 3D-convolution-only baselines, GA-Nets, GC-Net, PSMNet).

Figure 4: Post-softmax probability distributions with respect to disparity values. Red lines illustrate the ground truth disparities. Samples are selected from three challenging regions: (a) the large smooth region (sky), (b) the reflective region from one car window and (c) one region around the object edges. The first row shows the probability distributions without GA layers. The second row shows the effects of semi-global aggregation (SGA) layers, and the last row is the refined probabilities with one extra local guided aggregation (LGA) layer.

Figure 5: Comparisons with traditional SGM. (a) Input view, (b) large textureless region, (c) result of traditional SGM, (d) result of our GA-Net-15. More results and comparisons of GA-Net-15 and SGM are available at

Figure 6: Results visualization and comparisons. First row: input image. Second row: results of GC-Net [13]. Third row: results of PSMNet [3]. Last row: results of our GA-Net. Significant improvements are pointed out by blue arrows. The guided aggregations can effectively aggregate the disparity information into the large textureless regions (e.g. the cars and the windows) and give precise estimations. They also aggregate the object knowledge and preserve the depth structure very well (last column).

Table 1: Evaluations of GA-Nets with different settings. Average end point error (EPE) and threshold error rates are used for the evaluations.

Stacked Block | Densely Concatenate | SGA Layers | LGA Layer | Scene Flow EPE | Scene Flow Error Rate (%) | KITTI 2015 Error Rate (%)
- | - | - | - | 1.26 | 13.4 | 3.39
√ | - | - | - | 1.19 | 13.0 | 3.31
√ | √ | - | - | 1.14 | 12.5 | 3.25
√ | √ | +1 | - | 1.05 | 11.7 | 3.09
√ | √ | +2 | - | 0.97 | 11.0 | 2.96
√ | √ | +3 | - | 0.90 | 10.5 | 2.85
√ | √ | +4 | - | 0.89 | 10.4 | 2.83
√ | √ | +3 | √ | 0.84 | 9.9 | 2.71
Table 2: Comparisons of different cost aggregation methods. Average end point error (EPE) and 1-pixel threshold error rate are used for evaluations on the Scene Flow dataset.

Models     3D convs  Params  Time (s)  EPE error  Error rate (%)
GC-Net     19        2.9M    4.4       1.80       15.6
PSMNet     25        3.5M    2.1       1.09       12.1
GA-Net-1   1         0.5M    0.17      1.82       16.5
GA-Net-2   2         0.7M    0.35      1.51       15.0
GA-Net-3   3         0.8M    0.42      1.36       13.9
GA-Net-7   7         1.3M    0.62      1.07       11.9
GA-Net-11  11        1.8M    0.95      0.95       10.8
GA-Net-15  15        2.3M    1.5       0.84       9.9

Table 3: Comparisons with existing real-time algorithms.

Methods       End point error  Error rate  Speed (fps)
Our GA-Net    0.7 px           3.21%       15 (GPU)
DispNet [15]  1.0 px           4.65%       22 (GPU)
Toast [20]    1.4 px           7.42%       25 (CPU)

Table 4: Evaluation results on the KITTI 2012 benchmark.

Models       Error rate (2 px)  Error rate (3 px)  Reflective regions  Avg-All (end point)
Our GA-Net   2.18%              1.36%              7.87%               0.5 px
PSMNet [3]   2.44%              1.49%              8.36%               0.6 px
GC-Net [13]  2.71%              1.77%              10.80%              0.7 px
MC-CNN [30]  3.90%              2.43%              17.09%              0.9 px

Table 5: Evaluation results on the KITTI 2015 benchmark.

Models          NonOcc: Foreground  NonOcc: Avg-All  All: Foreground  All: Avg-All
Our GA-Net-15   3.39%               1.84%            3.91%            2.03%
PSMNet [3]      4.31%               2.14%            4.62%            2.32%
GC-Net [13]     5.58%               2.61%            6.16%            2.87%
SGM-Nets [23]   7.43%               3.09%            8.64%            3.66%
MC-CNN [30]     7.64%               3.33%            8.88%            3.89%
SGM [9]         11.68%              5.62%            13.00%           6.38%

The parameter settings of "GA-Net-15" used in our experiments are detailed in Appendix B.

Appendix B. Details of the Architecture

Table 6 presents the details of the GA-Net-15 which is used in the experiments to produce state-of-the-art accuracy on the Scene Flow dataset [15] and the KITTI benchmarks [7, 16]. It has three SGA layers, two LGA layers and fifteen 3D convolutional layers for cost aggregation.

Table 6: Layer specification of GA-Net-15 (output sizes given as height x width x disparity x channels).

Feature extraction: layers (4)-(19) repeat with output 1/3H x 1/3W x 32, with concatenation connections (9,12), (7,14), (5,16), (3,18), (17,20), (15,22), (13,24), (11,26).
Guidance subnet: split, reshape, normalize — 4 x 1/3H x 1/3W x 5 x 32; (9)-(11) from (6); repeat (6)-(8) from (1); (12) 3x3 conv — H x W x 16; (13) 3x3 conv (no BN and ReLU) — H x W x 75; (14) split, reshape, normalize — H x W x 75; (15)-(17) from (12).
Cost aggregation:
[5]  3x3x3 3D conv, stride 2 — 1/6H x 1/6W x 48 x 48
[6]  3x3x3 3D conv, stride 2 — 1/12H x 1/12W x 48 x 64
[7]  3x3x3 3D deconv, stride 2 — 1/6H x 1/6W x 48 x 48
[8]  3x3x3 3D conv — 1/6H x 1/6W x 48 x 48
[9]  3x3x3 3D deconv, stride 2 — 1/3H x 1/3W x 48 x 32
[10] 3x3x3 3D conv — 1/3H x 1/3W x 48 x 32
SGA layer (weight matrices from (8)) — 1/3H x 1/3W x 48 x 32
Output: 3x3x3 3D-to-2D conv, upsampling — H x W x 193; softmax, regression, loss weight 0.6 — H x W x 1
[11] 3x3x3 3D conv, stride 2 — 1/6H x 1/6W x 48 x 48
[12] 3x3x3 3D conv — 1/6H x 1/6W x 48 x 48
[13] 3x3x3 3D conv, stride 2 — 1/12H x 1/12W x 48 x 64
[14] 3x3x3 3D deconv, stride 2 — 1/6H x 1/6W x 48 x 48
[15] 3x3x3 3D conv — 1/6H x 1/6W x 48 x 48
[16] 3x3x3 3D deconv, stride 2 — 1/3H x 1/3W x 48 x 32
[17] 3x3x3 3D conv — 1/3H x 1/3W x 48 x 32
[18] SGA layer (weight matrices from (11)) — 1/3H x 1/3W x 48 x 32
Final output: 3x3x3 3D-to-2D conv, upsampling — H x W x 193; LGA, softmax, LGA (weights from (14), (17)) — H x W x 193; regression, loss weight 1.0 — H x W x 1
Concatenation connections: (4,6), (3,8), (7,11), (5,13), (12,14), (10,16).

References

F. Besse, C. Rother, A. Fitzgibbon, and J. Kautz. PMBP: PatchMatch belief propagation for correspondence field estimation. International Journal of Computer Vision, 110(1):2-13, Oct 2014.

M. Bleyer, C. Rhemann, and C. Rother. PatchMatch stereo - stereo matching with slanted support windows. In British Machine Vision Conference (BMVC), pages 1-11, 2011.

J.-R. Chang and Y.-S. Chen. Pyramid stereo matching network.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5410-5418, 2018.

Z. Chen, X. Sun, L. Wang, Y. Yu, and C. Huang. A deep visual correspondence embedding model for stereo matching costs. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 972-980, 2015.

X. Cheng, P. Wang, and R. Yang. Depth estimation via affinity learned with convolutional spatial propagation network. In Proceedings of the European Conference on Computer Vision (ECCV), pages 103-119, 2018.

J. Flynn, I. Neulander, J. Philbin, and N. Snavely. DeepStereo: Learning to predict new views from the world's imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5515-5524, 2016.

A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3354-3361. IEEE, 2012.

K. He, J. Sun, and X. Tang. Guided image filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, (6):1397-1409, 2013.

H. Hirschmuller. Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):328-341, 2008.

A. Hosni, C. Rhemann, M. Bleyer, C. Rother, and M. Gelautz. Fast cost-volume filtering for visual correspondence and beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(2):504-511, 2013.

X. Jia, B. De Brabandere, T. Tuytelaars, and L. V. Gool. Dynamic filter networks. In Advances in Neural Information Processing Systems (NIPS), pages 667-675, 2016.

Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of
the ACM International Conference on Multimedia, pages 675-678. ACM, 2014.

A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry. End-to-end learning of geometry and context for deep stereo regression. CoRR, vol. abs/1703.04309, 2017.

S. Liu, S. De Mello, J. Gu, G. Zhong, M.-H. Yang, and J. Kautz. Learning affinity via spatial propagation networks. In Advances in Neural Information Processing Systems (NIPS), pages 1520-1530, 2017.

N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4040-4048, 2016.

M. Menze and A. Geiger. Object scene flow for autonomous vehicles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3061-3070, 2015.

A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In Proceedings of the European Conference on Computer Vision, pages 483-499. Springer, 2016.

A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 483-499. Springer, 2016.

J. Pang, W. Sun, J. S. Ren, C. Yang, and Q. Yan. Cascade residual learning: A two-stage convolutional neural network.

D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47(1-3):7-42, 2002.

J. L. Schönberger, S. N. Sinha, and M. Pollefeys. Learning to fuse proposals from multiple scanline optimizations in semi-global matching.
In Proceedings of the European Conference on Computer Vision (ECCV), pages 739-755, 2018.

A. Seki and M. Pollefeys. SGM-Nets: Semi-global matching with neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6640-6649, 2017.

J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.

C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 839-846. IEEE, 1998.

S. Tulyakov, A. Ivanov, and F. Fleuret. Practical deep stereo (PDS): Toward applications-friendly deep stereo matching. arXiv preprint arXiv:1806.01677, 2018.

S. Xu, F. Zhang, X. He, X. Shen, and X. Zhang. PM-PM: PatchMatch with Potts model for object segmentation and stereo matching. IEEE Transactions on Image Processing, 24(7):2182-2196, July 2015.

Q. Yang. A non-local cost aggregation method for stereo matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1402-1409. IEEE, 2012.

S. Zagoruyko and N. Komodakis. Learning to compare image patches via convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4353-4361, 2015.

J. Zbontar and Y. LeCun. Computing the stereo matching cost with a convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1592-1599, 2015.
F. Zhang, L. Dai, S. Xiang, and X. Zhang. Segment graph based image filtering: fast structure-preserving smoothing. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 361-369, 2015.

F. Zhang and B. W. Wah. Supplementary meta-learning: Towards a dynamic model for deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 4344-4353, 2017.

F. Zhang and B. W. Wah. Fundamental principles on learning new features for effective dense matching. IEEE Transactions on Image Processing, 27(2):822-836, 2018.
Finite dimensional integrable Hamiltonian systems associated with DSI equation by Bargmann constraints. Zixiang Zhou, Institute of Mathematics, Fudan University, Shanghai 200433, China, E-mail: zxzhou@guomai.sh.cn. Wen-Xiu Ma, Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong, China, E-mail: mawx@cityu.edu.hk. Abstract: The Davey-Stewartson I equation is a typical integrable equation in 2+1 dimensions. Its Lax system, being essentially in 1+1 dimensional form, has been found through nonlinearization from 2+1 dimensions to 1+1 dimensions. In the present paper, this essentially 1+1 dimensional Lax system is further nonlinearized into 1+0 dimensional Hamiltonian systems by taking the Bargmann constraints. It is shown that the resulting 1+0 dimensional Hamiltonian systems are completely integrable in the Liouville sense by finding a full set of integrals of motion and proving their functional independence.
10.1143/jpsj.70.1241
[ "https://export.arxiv.org/pdf/nlin/0103025v1.pdf" ]
15,198,983
nlin/0103025
7d76e76f132cc2ff2e4c975d094ec53ab4bd81ad
17 Mar 2001

Finite dimensional integrable Hamiltonian systems associated with DSI equation by Bargmann constraints

Zixiang Zhou
Institute of Mathematics, Fudan University, Shanghai 200433, China
E-mail: zxzhou@guomai.sh.cn

Wen-Xiu Ma
Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong, China
E-mail: mawx@cityu.edu.hk

Introduction

The Davey-Stewartson I (DSI) equation is a famous 2+1 dimensional integrable equation which describes the motion of water waves [1]. This equation has localized soliton solutions and has been studied in various ways, such as inverse scattering [2,3], binary Darboux transformation [4], nonlinearization to 1+1 dimensional problems [5,6], etc.

For 1+1 dimensional integrable systems, the nonlinearization procedure, both mono-nonlinearization [7] and binary nonlinearization [8], reduces them to finite dimensional (1+0 dimensional) integrable Hamiltonian systems [9,10,11,12]. Therefore, it transforms a partial differential equation into a system of ordinary differential equations. This greatly simplifies the procedure of getting solutions, at least numerical solutions. Some important explicit solutions, especially periodic or quasi-periodic solutions, have been obtained in this way. The nonlinear constraint method has also been applied to some 2+1 dimensional equations like the KP, MKP, N-wave equations, etc. [13,14,15].

For the DSI equation, we have already found its new Lax system (1) by nonlinearization, in which all the derivatives are separated. Each pair of equations in this system is 1+1 dimensional. Hence the derived Lax system is regarded as essentially 1+1 dimensional because we can use 1+1 dimensional methods to solve it. It is possible to nonlinearize this essentially 1+1 dimensional system again to get finite dimensional Hamiltonian systems. In the present paper, we show that there are Bargmann constraints which reduce the DSI equation to finite dimensional Hamiltonian systems. These Hamiltonian systems have a full set of integrals of motion, and these integrals of motion are functionally independent in a dense open subset of the phase space. Therefore, these Hamiltonian systems are completely integrable in the Liouville sense.
Nonlinearization

We consider the following Lax system [5]:

$$\Phi_x = W^x\Phi,\qquad \Phi_y = W^y\Phi,\qquad \Phi_t = W^t\Phi, \tag{1}$$

with

$$W^x=\begin{pmatrix} i\lambda & 0 & if\\ 0 & -i\lambda & ig\\ i\bar f & i\bar g & 0\end{pmatrix},\qquad
W^y=\begin{pmatrix} i\lambda & u & if\\ -\bar u & -i\lambda & -ig\\ i\bar f & -i\bar g & 0\end{pmatrix},$$

$$W^t=\begin{pmatrix}
-2i\lambda^2+i|u|^2+iv_1 & 2\lambda u+iu_y & -2i\lambda f-2f_y\\
-2\lambda\bar u+i\bar u_y & 2i\lambda^2-i|u|^2-iv_2 & -2i\lambda g-2g_y\\
-2i\lambda\bar f+2\bar f_y & -2i\lambda\bar g+2\bar g_y & -2i\lambda(|f|^2-|g|^2)
\end{pmatrix}.$$

Here u, f and g are complex functions, and v1 and v2 are real functions. Its integrability conditions Φ_xy = Φ_yx, Φ_xt = Φ_tx and Φ_yt = Φ_ty consist of the following three parts.

(1) The DSI equation:

$$iu_t = u_{xx}+u_{yy}+2|u|^2u+2(v_1+v_2)u,\qquad v_{1,x}-v_{1,y}=v_{2,x}+v_{2,y}=(|u|^2)_x. \tag{2}$$

(2) The standard Lax pair of the DSI equation:

$$F_y=\begin{pmatrix}1&0\\0&-1\end{pmatrix}F_x+\begin{pmatrix}0&u\\ \bar u&0\end{pmatrix}F,\qquad
F_t=2i\begin{pmatrix}1&0\\0&-1\end{pmatrix}F_{xx}+2i\begin{pmatrix}0&u\\ \bar u&0\end{pmatrix}F_x
+i\begin{pmatrix}|u|^2+2v_1 & u_x+u_y\\ \bar u_x+\bar u_y & -|u|^2-2v_2\end{pmatrix}F, \tag{3}$$

where F = (f, g)^T.

(3) The nonlinear constraint:

$$F\bar F^{\,T}=\frac12\begin{pmatrix}v_1 & u_x\\ \bar u_x & v_2\end{pmatrix}. \tag{4}$$

Hence any solution of (2)-(4) gives a solution of the DSI equation.

Notice that if Φ is a vector solution of (1) for real λ, then Ψ = iΦ̄ is a solution of the adjoint equations

$$\Psi_x=-(W^x)^T\Psi,\qquad \Psi_y=-(W^y)^T\Psi,\qquad \Psi_t=-(W^t)^T\Psi, \tag{5}$$

where each entry of Φ̄ is the complex conjugate of the corresponding entry of Φ.

In order to obtain the finite dimensional Hamiltonian systems, we first nonlinearize the y-equation of (1) in the following way. Consider the pair Φ_y = W^yΦ and Φ_t = W^tΦ. Let w = (u, ū, if, i f̄, −ig, −i ḡ) contain all the variables in W^y [10]. Then the recursion relations of this AKNS system can be expressed in Lenard form

$$JG_l = KG_{l-1}\qquad (l=1,2,\dots), \tag{6}$$

where (J, K) is the Lenard pair (J is a non-degenerate constant matrix) which was given by [10] (here α1, α2, α3, β1, β2, β3 in [10] are α1 = i, α2 = −i, α3 = 0, β1 = −2i, β2 = 2i, β3 = 0) and {G0, G1, G2, ...} is the Lenard sequence. The first element of this Lenard sequence is given by [10] as

$$G_0 = 2(\bar u,\, u,\, i\bar f,\, if,\, -i\bar g,\, -ig), \tag{7}$$

which is a kernel of K. On the other hand, the variation of the spectral parameter can be computed by the general formula [17]

$$\frac{\delta\lambda}{\delta w}=C_0\,\mathrm{tr}\!\left(\Phi\Psi^T\frac{\partial W^y}{\partial w}\right)
=C_0\,(i\psi_1\phi_2,\; i\psi_2\phi_1,\; i\psi_1\phi_3,\; i\psi_3\phi_1,\; i\psi_2\phi_3,\; i\psi_3\phi_2), \tag{8}$$

where C0 is a constant.

Now let λ1, ..., λN be N distinct non-zero real numbers, and let (φ_{1α}, φ_{2α}, φ_{3α})^T be the corresponding solution of the Lax system (1) for λ = λ_α. Let Λ = diag(λ1, ..., λN) and Φ_j = (φ_{j1}, ..., φ_{jN})^T. By the general formulation of nonlinearization, we impose the nonlinear constraint

$$G_0 = 2\sum_{\alpha=1}^N \frac{\delta\lambda_\alpha}{\delta w}, \tag{9}$$

which gives the relations

$$\langle\bar\Phi_2,\Phi_1\rangle=-iu,\qquad \langle\bar\Phi_3,\Phi_1\rangle=f,\qquad \langle\bar\Phi_3,\Phi_2\rangle=g, \tag{10}$$

where ⟨V1, V2⟩ = V1^T V2 for any two vectors V1 and V2. These are the Bargmann constraints between (u, f, g) and (Φ1, Φ2, Φ3).

Remark 1. If we consider the pair Φ_y = W^yΦ and Φ_x = W^xΦ, G0 is different. In that case, we cannot obtain Bargmann constraints, but only Neumann constraints [16].

Let

$$L(\lambda)=\begin{pmatrix}1&&\\&-1&\\&&0\end{pmatrix}
+\sum_{\alpha=1}^N\frac{1}{\lambda-\lambda_\alpha}
\begin{pmatrix}\phi_{1\alpha}\bar\phi_{1\alpha}&\phi_{1\alpha}\bar\phi_{2\alpha}&\phi_{1\alpha}\bar\phi_{3\alpha}\\
\phi_{2\alpha}\bar\phi_{1\alpha}&\phi_{2\alpha}\bar\phi_{2\alpha}&\phi_{2\alpha}\bar\phi_{3\alpha}\\
\phi_{3\alpha}\bar\phi_{1\alpha}&\phi_{3\alpha}\bar\phi_{2\alpha}&\phi_{3\alpha}\bar\phi_{3\alpha}\end{pmatrix}. \tag{11}$$

Lemma 1. The Lax equations

$$L_x=[W^x,L],\qquad L_y=[W^y,L],\qquad L_t=[W^t,L] \tag{12}$$

hold if and only if (10) holds.

Proof. Let F_α = (φ_{1α}, φ_{2α}, φ_{3α})^T; then

$$L(\lambda)=C+\sum_{\alpha=1}^N\frac{1}{\lambda-\lambda_\alpha}F_\alpha\bar F_\alpha^{\,T}. \tag{13}$$

Since F_{α,y} = W^y(λ_α)F_α and (W^y(λ_α))† = −W^y(λ_α), we have

$$L_y(\lambda)=\sum_\alpha\frac{1}{\lambda-\lambda_\alpha}\,[W^y(\lambda_\alpha),F_\alpha\bar F_\alpha^{\,T}]
=[W^y(\lambda),L(\lambda)-C]-\sum_\alpha\frac{1}{\lambda-\lambda_\alpha}\,[W^y(\lambda)-W^y(\lambda_\alpha),F_\alpha\bar F_\alpha^{\,T}]
=[W^y(\lambda),L(\lambda)-C]-i\Big[C,\sum_\alpha F_\alpha\bar F_\alpha^{\,T}\Big]. \tag{14}$$

Hence L_y(λ) = [W^y(λ), L(λ)] if and only if

$$i\Big[C,\sum_\alpha F_\alpha\bar F_\alpha^{\,T}\Big]=[C,W^y(\lambda)]. \tag{15}$$

Written in components, it becomes (10). If (10) holds, the other two equations in (12) are obtained similarly. The lemma is proved.
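The second equality in (14) rests on a simple identity that is worth spelling out. Assuming the reconstruction of W^y above (only its diagonal depends on λ) and C = diag(1, −1, 0) from (11), a one-line computation gives

```latex
W^{y}(\lambda)-W^{y}(\mu)
=\begin{pmatrix} i\lambda-i\mu & 0 & 0\\ 0 & -i\lambda+i\mu & 0\\ 0 & 0 & 0\end{pmatrix}
= i(\lambda-\mu)\,\mathrm{diag}(1,-1,0)
= i(\lambda-\mu)\,C ,
```

so each term of the sum in (14) contributes a commutator with C with the prefactor (λ − λ_α) cancelling the pole, independently of λ_α; summing over α yields the commutator i[C, Σ_α F_α F̄_α^T].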
From the above constraints, we have

$$\begin{aligned}
f_x&=-i\langle\bar\Phi_3,\Lambda\Phi_1\rangle+if(\langle\bar\Phi_3,\Phi_3\rangle-\langle\bar\Phi_1,\Phi_1\rangle)-\bar u g,\\
g_x&=-i\langle\bar\Phi_3,\Lambda\Phi_2\rangle-ig(\langle\bar\Phi_3,\Phi_3\rangle-\langle\bar\Phi_2,\Phi_2\rangle)-uf,\\
f_y&=-i\langle\bar\Phi_3,\Lambda\Phi_1\rangle+if(\langle\bar\Phi_3,\Phi_3\rangle-\langle\bar\Phi_1,\Phi_1\rangle),\\
g_y&=-i\langle\bar\Phi_3,\Lambda\Phi_2\rangle+ig(\langle\bar\Phi_3,\Phi_3\rangle-\langle\bar\Phi_2,\Phi_2\rangle),\\
u_x&=2f\bar g,\\
u_y&=2\langle\bar\Phi_2,\Lambda\Phi_1\rangle+\langle\bar\Phi_2,\Phi_1\rangle(\langle\bar\Phi_1,\Phi_1\rangle-\langle\bar\Phi_2,\Phi_2\rangle),\\
v_1&=2\langle\bar\Phi_1,\Phi_3\rangle\langle\bar\Phi_3,\Phi_1\rangle,\qquad
v_2=2\langle\bar\Phi_2,\Phi_3\rangle\langle\bar\Phi_3,\Phi_2\rangle.
\end{aligned} \tag{16}$$

With the constraints (10), the system (1) is changed to a system of ordinary differential equations

$$\begin{aligned}
\phi_{1\alpha,x}&=i\lambda_\alpha\phi_{1\alpha}+if\phi_{3\alpha}, & \phi_{2\alpha,x}&=-i\lambda_\alpha\phi_{2\alpha}+ig\phi_{3\alpha}, & \phi_{3\alpha,x}&=i\bar f\phi_{1\alpha}+i\bar g\phi_{2\alpha},\\
\phi_{1\alpha,y}&=i\lambda_\alpha\phi_{1\alpha}+u\phi_{2\alpha}+if\phi_{3\alpha}, & \phi_{2\alpha,y}&=-\bar u\phi_{1\alpha}-i\lambda_\alpha\phi_{2\alpha}-ig\phi_{3\alpha}, & \phi_{3\alpha,y}&=i\bar f\phi_{1\alpha}-i\bar g\phi_{2\alpha},\\
\phi_{1\alpha,t}&=(-2i\lambda_\alpha^2+i|u|^2+iv_1)\phi_{1\alpha}+(2\lambda_\alpha u+iu_y)\phi_{2\alpha}+(-2i\lambda_\alpha f-2f_y)\phi_{3\alpha},\\
\phi_{2\alpha,t}&=(-2\lambda_\alpha\bar u+i\bar u_y)\phi_{1\alpha}+(2i\lambda_\alpha^2-i|u|^2-iv_2)\phi_{2\alpha}+(-2i\lambda_\alpha g-2g_y)\phi_{3\alpha},\\
\phi_{3\alpha,t}&=(-2i\lambda_\alpha\bar f+2\bar f_y)\phi_{1\alpha}+(-2i\lambda_\alpha\bar g+2\bar g_y)\phi_{2\alpha}-2i\lambda_\alpha(|f|^2-|g|^2)\phi_{3\alpha},
\end{aligned} \tag{17}$$

where u, u_y, v1, v2, f, g, f_y, g_y are given by (10) and (16). Re φ_{jα} and Im φ_{jα} (j = 1, 2, 3; α = 1, ..., N) form a system of coordinates of the phase space R^{6N}. For simplicity, we use the complex coordinates φ_{jα} and φ̄_{jα} instead of Re φ_{jα} and Im φ_{jα}. R^{6N} has the standard symplectic form

$$\omega=2\sum_{j=1}^3\sum_{\alpha=1}^N d\,\mathrm{Im}(\phi_{j\alpha})\wedge d\,\mathrm{Re}(\phi_{j\alpha})
=-i\sum_{j=1}^3\sum_{\alpha=1}^N d\phi_{j\alpha}\wedge d\bar\phi_{j\alpha}. \tag{18}$$

The corresponding Poisson bracket of two functions μ and ν is

$$\{\mu,\nu\}=-i\sum_{j=1}^3\sum_{\alpha=1}^N\left(\frac{\partial\mu}{\partial\phi_{j\alpha}}\frac{\partial\nu}{\partial\bar\phi_{j\alpha}}-\frac{\partial\mu}{\partial\bar\phi_{j\alpha}}\frac{\partial\nu}{\partial\phi_{j\alpha}}\right). \tag{19}$$

By direct computation, we have

Lemma 2. (17) is equivalent to three Hamiltonian equations

$$i\phi_{j\alpha,x}=\frac{\partial H_x}{\partial\bar\phi_{j\alpha}},\qquad i\bar\phi_{j\alpha,x}=-\frac{\partial H_x}{\partial\phi_{j\alpha}},\qquad\text{and similarly for }y\text{ and }t, \tag{20}$$

where

$$\begin{aligned}
H_x&=-\langle\bar\Phi_1,\Lambda\Phi_1\rangle-\langle\bar\Phi_2,\Lambda\Phi_2\rangle-|\langle\bar\Phi_1,\Phi_3\rangle|^2+|\langle\bar\Phi_2,\Phi_3\rangle|^2,\\
H_y&=-\langle\bar\Phi_1,\Lambda\Phi_1\rangle+\langle\bar\Phi_2,\Lambda\Phi_2\rangle-|\langle\bar\Phi_1,\Phi_3\rangle|^2-|\langle\bar\Phi_2,\Phi_3\rangle|^2-|\langle\bar\Phi_1,\Phi_2\rangle|^2,\\
H_t&=-2\langle\bar\Phi_1,\Lambda^2\Phi_1\rangle-2\langle\bar\Phi_2,\Lambda^2\Phi_2\rangle
+4\,\mathrm{Re}(\langle\bar\Phi_1,\Lambda\Phi_3\rangle\langle\bar\Phi_3,\Phi_1\rangle)
+4\,\mathrm{Re}(\langle\bar\Phi_2,\Lambda\Phi_3\rangle\langle\bar\Phi_3,\Phi_2\rangle)
+4\,\mathrm{Re}(\langle\bar\Phi_1,\Lambda\Phi_2\rangle\langle\bar\Phi_2,\Phi_1\rangle)\\
&\quad+2(\langle\bar\Phi_3,\Phi_3\rangle-\langle\bar\Phi_1,\Phi_1\rangle)|\langle\bar\Phi_1,\Phi_3\rangle|^2
-2(\langle\bar\Phi_3,\Phi_3\rangle-\langle\bar\Phi_2,\Phi_2\rangle)|\langle\bar\Phi_2,\Phi_3\rangle|^2
-(\langle\bar\Phi_1,\Phi_1\rangle-\langle\bar\Phi_2,\Phi_2\rangle)|\langle\bar\Phi_1,\Phi_2\rangle|^2.
\end{aligned} \tag{21}$$

Lemma 3. For any two complex numbers λ, μ and any positive integers k, l, {tr L^k(λ), tr L^l(μ)} = 0.

Proof. We have

$$\frac{\partial L(\lambda)}{\partial\bar\phi_{j\alpha}}=\frac{1}{\lambda-\lambda_\alpha}F_\alpha e_j^T,\qquad
\frac{\partial L(\lambda)}{\partial\phi_{j\alpha}}=\frac{1}{\lambda-\lambda_\alpha}e_j\bar F_\alpha^{\,T}. \tag{23}$$

Since

$$\mathrm{tr}\!\left(\frac{\partial L^k(\lambda)}{\partial\phi_{j\alpha}}\right)
=\mathrm{tr}\sum_{r=1}^k L^{r-1}(\lambda)\frac{\partial L(\lambda)}{\partial\phi_{j\alpha}}L^{k-r}(\lambda)
=k\,\mathrm{tr}\!\left(L^{k-1}(\lambda)\frac{\partial L(\lambda)}{\partial\phi_{j\alpha}}\right),\qquad
\mathrm{tr}\!\left(\frac{\partial L^k(\lambda)}{\partial\bar\phi_{j\alpha}}\right)
=k\,\mathrm{tr}\!\left(L^{k-1}(\lambda)\frac{\partial L(\lambda)}{\partial\bar\phi_{j\alpha}}\right), \tag{24}$$

a direct computation with (19) and (23) gives

$$\frac{i}{kl}\{\mathrm{tr}\,L^k(\lambda),\mathrm{tr}\,L^l(\mu)\}
=\sum_{a,b,\alpha}\frac{1}{(\lambda-\lambda_\alpha)(\mu-\lambda_\alpha)}\,
[L^{k-1}(\lambda),L^{l-1}(\mu)]_{ab}\,(F_\alpha\bar F_\alpha^{\,T})_{ba}, \tag{25}$$

which vanishes after decomposing 1/((λ−λ_α)(μ−λ_α)) by partial fractions and using the trace identities. From this lemma, we can construct involutive integrals of motion from tr L^k (k = 1, 2, ...).

For any complex number λ, let

$$\det(\nu-L(\lambda))=\nu^3-p_1(\lambda)\nu^2+p_2(\lambda)\nu-p_3(\lambda); \tag{26}$$

then p_k(λ) is the sum of all the determinants of the principal submatrices of L(λ) of order k. Suppose the eigenvalues of L(λ) are ν1(λ), ν2(λ), ν3(λ); then

$$p_k(\lambda)=\sum_{1\le j_1<\cdots<j_k\le 3}\nu_{j_1}(\lambda)\cdots\nu_{j_k}(\lambda). \tag{27}$$

Since p_k(λ) can be expressed as a polynomial of tr L^l(λ) (l = 1, 2, ...), {p_j(λ), p_k(μ)} = 0 for any two complex numbers λ and μ and two integers j, k ≥ 0. Expand p_k(λ) as a Laurent series of λ:

$$p_k(\lambda)=\sum_{m=-1}^{\infty}E^{(k)}_m\lambda^{-m-1}, \tag{28}$$

which is convergent when |λ| > max_{1≤α≤N} |λ_α|. Then

$$\begin{aligned}
E^{(1)}_m&=\langle\bar\Phi_1,\Lambda^m\Phi_1\rangle+\langle\bar\Phi_2,\Lambda^m\Phi_2\rangle+\langle\bar\Phi_3,\Lambda^m\Phi_3\rangle,\\
E^{(2)}_m&=-\langle\bar\Phi_1,\Lambda^m\Phi_1\rangle+\langle\bar\Phi_2,\Lambda^m\Phi_2\rangle
+\sum_{1\le i<j\le 3}\sum_{l=1}^m\Big(\langle\bar\Phi_i,\Lambda^{l-1}\Phi_i\rangle\langle\bar\Phi_j,\Lambda^{m-l}\Phi_j\rangle
-\langle\bar\Phi_i,\Lambda^{l-1}\Phi_j\rangle\langle\bar\Phi_j,\Lambda^{m-l}\Phi_i\rangle\Big),\\
E^{(3)}_m&=-\langle\bar\Phi_3,\Lambda^m\Phi_3\rangle
-\sum_{l=1}^m\Big(\langle\bar\Phi_1,\Lambda^{l-1}\Phi_1\rangle\langle\bar\Phi_3,\Lambda^{m-l}\Phi_3\rangle
-\langle\bar\Phi_1,\Lambda^{l-1}\Phi_3\rangle\langle\bar\Phi_3,\Lambda^{m-l}\Phi_1\rangle\Big)\\
&\quad+\sum_{l=1}^m\Big(\langle\bar\Phi_2,\Lambda^{l-1}\Phi_2\rangle\langle\bar\Phi_3,\Lambda^{m-l}\Phi_3\rangle
-\langle\bar\Phi_2,\Lambda^{l-1}\Phi_3\rangle\langle\bar\Phi_3,\Lambda^{m-l}\Phi_2\rangle\Big)
+\sum_{\substack{i+j+k=m-2\\ i,j,k\ge 0}}\det
\begin{pmatrix}
\langle\bar\Phi_1,\Lambda^i\Phi_1\rangle&\langle\bar\Phi_1,\Lambda^i\Phi_2\rangle&\langle\bar\Phi_1,\Lambda^i\Phi_3\rangle\\
\langle\bar\Phi_2,\Lambda^j\Phi_1\rangle&\langle\bar\Phi_2,\Lambda^j\Phi_2\rangle&\langle\bar\Phi_2,\Lambda^j\Phi_3\rangle\\
\langle\bar\Phi_3,\Lambda^k\Phi_1\rangle&\langle\bar\Phi_3,\Lambda^k\Phi_2\rangle&\langle\bar\Phi_3,\Lambda^k\Phi_3\rangle
\end{pmatrix}.
\end{aligned} \tag{29}$$

The sums are zero if the lower bound is greater than the upper bound. These E^{(k)}_m's are in involution. Let

$$\Gamma_1=\langle\bar\Phi_1,\Phi_1\rangle=\tfrac12(E^{(1)}_0-E^{(2)}_0+E^{(3)}_0),\qquad
\Gamma_2=\langle\bar\Phi_2,\Phi_2\rangle=\tfrac12(E^{(1)}_0+E^{(2)}_0+E^{(3)}_0),\qquad
\Gamma_3=\langle\bar\Phi_3,\Phi_3\rangle=-E^{(3)}_0. \tag{30}$$
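As a sanity check on the involution claim (and on the sign reconstruction in (21)), the Poisson bracket (19) of H_x and H_y can be evaluated numerically. The sketch below uses random illustrative data and approximates the Wirtinger derivatives by central differences; for real-valued F and G, (19) reduces (up to the overall sign convention) to {F, G} = 2 Im Σ (∂F/∂φ) conj(∂G/∂φ).

```python
# Numerical check that H_x and H_y of (21) Poisson-commute under (19).
# Signs follow the reconstruction above; lam and phi are random test data.
import numpy as np

rng = np.random.default_rng(1)
N = 4
lam = rng.uniform(0.5, 2.0, N)          # distinct nonzero real lambda_alpha

def ip(phi, a, b, m=0):
    """<conj(Phi_a), Lambda^m Phi_b> with Lambda = diag(lam)."""
    return np.sum(np.conj(phi[a]) * lam**m * phi[b])

def H_x(phi):
    return (-ip(phi, 0, 0, 1) - ip(phi, 1, 1, 1)
            - abs(ip(phi, 0, 2))**2 + abs(ip(phi, 1, 2))**2).real

def H_y(phi):
    return (-ip(phi, 0, 0, 1) + ip(phi, 1, 1, 1) - abs(ip(phi, 0, 2))**2
            - abs(ip(phi, 1, 2))**2 - abs(ip(phi, 0, 1))**2).real

def wirtinger_grad(F, phi, h=1e-6):
    """dF/dphi = (dF/dRe - i dF/dIm)/2 via central differences (F real)."""
    g = np.zeros_like(phi)
    for idx in np.ndindex(phi.shape):
        for step, coef in ((h, 0.5), (1j * h, -0.5j)):
            e = np.zeros_like(phi)
            e[idx] = step
            g[idx] += coef * (F(phi + e) - F(phi - e)) / (2 * h)
    return g

def bracket(F, G, phi):
    """{F, G} = 2 Im sum dF/dphi * conj(dG/dphi) for real-valued F, G."""
    gF, gG = wirtinger_grad(F, phi), wirtinger_grad(G, phi)
    return 2.0 * np.imag(np.sum(gF * np.conj(gG)))

phi = rng.standard_normal((3, N)) + 1j * rng.standard_normal((3, N))
print(abs(bracket(H_x, H_y, phi)))      # ~0 up to finite-difference error
```

The printed value is of the order of the finite-difference error, consistent with {H_x, H_y} = 0 in Lemma 4 below; H_t is omitted from the test only because its Λ² and cross terms make the reconstruction of individual signs less certain.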
Then

$$H_x=-E^{(1)}_1-E^{(3)}_1-(\Gamma_1-\Gamma_2)\Gamma_3,\qquad
H_y=E^{(2)}_1-\Gamma_1\Gamma_2-\Gamma_1\Gamma_3-\Gamma_2\Gamma_3,\qquad
H_t=2E^{(2)}_2+2(\Gamma_1+\Gamma_2)E^{(1)}_1+(\Gamma_1+\Gamma_2-2\Gamma_3)H_x+(\Gamma_1-\Gamma_2)H_y, \tag{31}$$

which have all been expressed as polynomials of the E^{(k)}_m's. Hence the following lemma holds.

Lemma 4.

$$\{H_x,H_y\}=\{H_x,H_t\}=\{H_y,H_t\}=0, \tag{32}$$
$$\{H_x,E^{(k)}_m\}=\{H_y,E^{(k)}_m\}=\{H_t,E^{(k)}_m\}=0, \tag{33}$$

and {E^{(k)}_m, E^{(l)}_n} = 0.

Lemma 5. E^{(k)}_m (k = 1, 2, 3; m = 0, ..., N−1) are functionally independent in a dense open subset of R^{6N}.

Proof. Let P0 in R^{6N} be given by φ_{jα} = ε (j = 1, 2, 3; α = 1, ..., N), where ε is a small real constant. Then, at P0,

$$\frac{\partial E^{(k)}_m}{\partial\phi_{j\alpha}}=\varepsilon\,\lambda_\alpha^m\,b_{kj}+O(\varepsilon^3), \tag{35}$$

where

$$(b_{kj})=\begin{pmatrix}1&1&1\\-1&1&0\\0&0&-1\end{pmatrix}. \tag{36}$$

The Jacobian determinant

$$J\equiv\frac{\partial(E^{(1)}_0,\dots,E^{(1)}_{N-1},E^{(2)}_0,\dots,E^{(2)}_{N-1},E^{(3)}_0,\dots,E^{(3)}_{N-1})}
{\partial(\phi_{11},\dots,\phi_{1N},\phi_{21},\dots,\phi_{2N},\phi_{31},\dots,\phi_{3N})}
=(-2)^N\varepsilon^{3N}\Big(\prod_{1\le\alpha<\beta\le N}(\lambda_\alpha-\lambda_\beta)\Big)^3+O(\varepsilon^{3N+2}), \tag{37}$$

so J ≠ 0 at P0 if ε ≠ 0 is small enough. Since J is a real analytic function of (φ_{11}, ..., φ_{3N}, φ̄_{11}, ..., φ̄_{3N}), J ≠ 0 in a dense open subset of R^{6N}. Therefore, the Jacobian determinant of the E^{(k)}_m's with respect to (Re(φ_{11}), ..., Re(φ_{3N}), Im(φ_{11}), ..., Im(φ_{3N})) is of full rank 3N. The lemma is proved.

In summary, from Lemmas 2-5, we have

Theorem 1. The Hamiltonian systems given by the Hamiltonians (21) are involutive and completely integrable in the Liouville sense. The integrals of motion are E^{(k)}_m (k = 1, 2, 3; m = 0, ..., N−1) given by (29), which are in involution and functionally independent in a dense open subset of the phase space R^{6N}. Moreover, each solution of these Hamiltonian systems gives a solution of the DSI equation.

Acknowledgements

This work was supported by the Chinese Major State Basic Research Project "Nonlinear Science", the City University of Hong Kong (Grant No. 7001041), the Research Grants Council of Hong Kong (Grant No. 9040395, 9040466), the Doctoral Program Foundation and the Key Project for Young Teachers of the Ministry of Education of China. The first author (Z. X. Zhou) is grateful to the Department of Mathematics of the City University of Hong Kong for the hospitality.

References

[1] A. Davey and K. Stewartson, Proc. Roy. Soc. London A 338, 101 (1974).
403409Y .T .W u and X .G .G eng,J.M ath.Phys.40,3409 (1999). . Y B Zeng, R L Li N, J Ath, Phys. 395964Y .B .Zeng and R .L.Li n,J.M ath.Phys.39,5964 (1998). . R G Zhou, J , Phys. 382535R .G .Zhou,J.M ath.Phys.38,2535 (1997). . C W Ao, Y T , X G Eng, J Ath, Phys. 403948C .W .C ao,Y .T .W u and X .G .G eng,J.M ath.Phys.40,3948 (1999). . X G Eng, C W , Phys.Lett.A. 261289X .G .G eng and C .W .C ao,Phys.Lett.A 261,289 (1999). . W X , Z X Zhou, Prepri ntW .X .M a and Z.X .Zhou,Prepri nt (2000). Z X Zhou, W X , R G Zhou, Prepri nt. Z.X .Zhou,W .X .M a and R .G .Zhou,Prepri nt (2000). . W X , B Ner, W , Physi ca A. 233331W .X .M a,B .Fuchstei ner and W .O evel ,Physi ca A 233,331 (1996).
[ "Procedia Computer Science A Cut-and-solve Algorithm for Virtual Machine Consolidation Problem", "Procedia Computer Science A Cut-and-solve Algorithm for Virtual Machine Consolidation Problem" ]
[ "Jiang-Yao Luo \nSchool of Science\nBeijing University of Posts and Telecommunications\n100876BeijingChina\n", "Liang Chen \nSchool of Mathematical Sciences\nUniversity of Chinese Academy of Sciences\nBeijingChina\n", "Wei-Kun Chen \nSchool of Mathematics and Statistics/Beijing Key Laboratory on MCAACI\nBeijing Institute of Technology\n100081BeijingChina\n", "Jian-Hua Yuan \nSchool of Science\nBeijing University of Posts and Telecommunications\n100876BeijingChina\n", "Yu-Hong Dai \nSchool of Mathematical Sciences\nUniversity of Chinese Academy of Sciences\nBeijingChina\n" ]
[ "School of Science\nBeijing University of Posts and Telecommunications\n100876BeijingChina", "School of Mathematical Sciences\nUniversity of Chinese Academy of Sciences\nBeijingChina", "School of Mathematics and Statistics/Beijing Key Laboratory on MCAACI\nBeijing Institute of Technology\n100081BeijingChina", "School of Science\nBeijing University of Posts and Telecommunications\n100876BeijingChina", "School of Mathematical Sciences\nUniversity of Chinese Academy of Sciences\nBeijingChina" ]
[ "Procedia Computer Science" ]
The virtual machine consolidation problem (VMCP) attempts to determine which servers to activate, how to allocate virtual machines (VMs) to the activated servers, and how to migrate VMs among servers such that the sum of activation, allocation, and migration costs is minimized subject to the resource constraints of the servers and other practical constraints. In this paper, we first propose a new mixed integer linear programming (MILP) formulation for the VMCP. We show that compared with existing formulations, the proposed formulation is much more compact in terms of smaller numbers of variables or constraints, which makes it suitable for solving large-scale problems. We then develop a cut-and-solve (C&S) algorithm, a tree search algorithm, to efficiently solve the VMCP to optimality. The proposed C&S algorithm is based on a novel relaxation of the VMCP that provides a stronger lower bound than the natural continuous relaxation of the VMCP, making a smaller search tree. By extensive computational experiments, we show that (i) the proposed formulation significantly outperforms existing formulations in terms of solution efficiency; and (ii) compared with standard MILP solvers, the proposed C&S algorithm is much more efficient.
10.2139/ssrn.4335867
[ "https://export.arxiv.org/pdf/2212.12341v1.pdf" ]
255,096,659
2212.12341
b2ff3ff9f2dd01fd8ef2e3826eed9c5d7e984348
A Cut-and-solve Algorithm for Virtual Machine Consolidation Problem

Jiang-Yao Luo (School of Science, Beijing University of Posts and Telecommunications, 100876 Beijing, China), Liang Chen (School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, China), Wei-Kun Chen (School of Mathematics and Statistics / Beijing Key Laboratory on MCAACI, Beijing Institute of Technology, 100081 Beijing, China), Jian-Hua Yuan (School of Science, Beijing University of Posts and Telecommunications, 100876 Beijing, China), Yu-Hong Dai (School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, China)

Procedia Computer Science 00 (2022)

Keywords: Cut-and-solve; Cutting plane; Exact algorithm; Mixed integer linear programming; Virtual machine consolidation

1. Introduction

Nowadays, cloud computing provides great flexibility and availability of computing resources to customers and has become more and more popular in many industries, including the manufacturing industry [1], conference management systems [2], and E-commerce [3]. The reason behind this success is that cloud service providers can provide customers with reliable, inexpensive, customized, and elastically priced computing resources without requiring customers to host them at a dedicated place. In particular, various applications requested by different customers can be instantiated inside virtual machines (VMs) and flexibly deployed to any server in any data center of the cloud [4]. However, as VMs change dynamically and run over a shared common cloud infrastructure, it is crucial to (re)allocate cloud resources (by migrating existing VMs among different servers and/or mapping new VMs into appropriate servers) to meet diverse application requirements while minimizing the operational costs of the service providers. The above problem is called the virtual machine consolidation problem (VMCP) in the literature, which determines the activation state of all servers and the (re)allocation of all VMs in such a way that the predefined cost function (server activation, VM allocation, and migration costs) is minimized subject to resource constraints of the servers and other practical constraints.
The VMCP is strongly NP-hard, as it includes the bin packing problem (BPP) [5]. Therefore, there is no polynomial-time algorithm to solve the VMCP to optimality unless P = NP. As a result, most existing works investigated heuristic algorithms for solving the VMCP. In particular, the first-fit decreasing and best-fit decreasing algorithms, first investigated for BPPs, are used to solve VMCPs; see [5,6]. Reference [5] first formulated the VMCP as a mixed integer linear programming (MILP) problem and then proposed a linear programming (LP) relaxation based heuristic algorithm. This heuristic algorithm solves the LP relaxation of the MILP problem, fixes the variables taking integral values, and tries to find a solution by solving the reduced MILP problem. Reference [7] proposed a heuristic algorithm based on the convex optimization method and dynamic programming. Several metaheuristic algorithms were also developed to solve the VMCP, including genetic algorithms [8-10], simulated annealing [11], colony optimization [12,13], and evolution algorithms [14]. However, the above heuristic algorithms cannot guarantee to find an optimal solution for the VMCP. Indeed, as found in [5,7,15], the solutions found by the heuristic algorithms can be 6% to 49% away from the optimal solutions. Therefore, determining an optimal solution for the VMCP is highly needed. Usually, the VMCP can be formulated as an MILP problem, which allows us to leverage state-of-the-art MILP solvers such as Gurobi [16] and SCIP [17] to solve it to optimality. In particular, in the formulations in [5,6,8,11,18-23], the authors used binary variables to denote whether a given VM is mapped into a given server, and presented the constraints and the objective function based on these binary variables. One weakness of these formulations is that the problem size grows linearly with the number of VMs. When the number of VMs is large, these formulations are difficult to solve with standard MILP solvers. In practice, the requested loads of many VMs are identical, meaning that the number of VM types is relatively small even when the number of VMs is huge [24]. Reference [15] took this observation into account and proposed a formulation using a family of integer variables, which represent the number of VMs of a given type on a given server. As a result, the problem size of this formulation grows linearly with the number of VM types but not with the number of VMs. However, in order to model the migration process, the authors used a family of 3-index integer variables indicating the number of VMs of a given type migrated from one server to another. Due to this family of variables, the problem size grows quadratically with the number of servers, making it unrealistic to solve this formulation with standard MILP solvers within a reasonable time limit, especially when the number of servers is large. To summarize, the existing formulations for the VMCP suffer from a large problem size when the number of VMs or the number of servers is large. This fact makes it difficult to (i) employ a standard MILP solver to solve the VMCP within a reasonable time limit, and (ii) develop an efficient customized exact algorithm for the VMCP (as such algorithms are usually based on a formulation with a small problem size). The motivation of this work is to fill this research gap.
In particular,
1) We present new MILP formulations for VMCPs, which minimize the summation of server activation, VM allocation, and migration costs subject to resource constraints and other practical constraints. The proposed new formulation is much more compact than the existing formulations in [5] and [15] in terms of the smaller number of variables or constraints.
2) We develop a cut-and-solve algorithm (called C&S) to solve VMCPs to optimality based on the new formulations. The proposed C&S algorithm is based on a novel relaxation of the VMCP that provides a stronger lower bound than the natural continuous relaxation of the VMCP, making a smaller search tree.

Extensive computational results demonstrate that (i) the proposed formulation significantly outperforms existing formulations in terms of solution efficiency; and (ii) compared with standard MILP solvers, the proposed C&S algorithm is much more efficient.

The paper is organized as follows. Section 2 introduces the novel MILP formulations for the VMCP and compares them with the formulations in [5] and [15]. Section 3 describes the proposed C&S algorithm to solve VMCPs. Section 4 presents the computational results. Finally, Section 5 draws some concluding remarks.

2. Virtual machine consolidation problems

The VMCP attempts to determine which servers to be activated, how to allocate VMs to the activated servers, and how to migrate VMs among servers such that the sum of server activation, VM allocation, and migration costs is minimized subject to the resource constraints of the servers and other practical constraints. In this section, we first present an MILP formulation for a basic version of the VMCP (in which only the resource constraints of the servers are considered). Then, we present a variant of the VMCP that considers other practical constraints. Finally, we show the advantage of the proposed formulation by comparing it with those in Speitkamp and Bichler [5] and Mazumdar and Pranzo [15].

2.1. The basic virtual machine consolidation problem

Parameters:
u_{i,r} — units of resource r requested by a VM of type i
s_{k,r} — units of resource r provided by server k
c^{alloc}_{i,k} — cost of allocating a VM of type i to server k
c^{run}_k — activation cost of server k
c^{mig}_{i,k} — cost of migrating a VM of type i to server k
d_i — number of VMs of type i that need to be allocated
n_{i,k} — number of VMs of type i that are currently allocated to server k
Ł — maximum number of allowed migrations
m_k — maximum number of VMs allocated to server k
d^{new}_i — number of new incoming VMs of type i that need to be allocated

Variables:
x_{i,k} — integer variable representing the number of VMs of type i allocated to server k
y_k — binary variable indicating whether or not server k is activated
z_{i,k} — integer variable representing the number of VMs of type i migrated to server k
x^{new}_{i,k} — integer variable representing the number of new incoming VMs of type i allocated to server k

Let K, I, and R denote the set of the servers, the set of types of VMs that need to be allocated to the servers, and the set of resources (e.g., CPU, RAM, and bandwidth [5]) of the servers, respectively. Each server k can provide s_{k,r} units of resource r and each VM of type i requests u_{i,r} units of resource r. Before the VM consolidation, there are n_{i,k} VMs of type i currently allocated to server k. For notational purposes, we denote d_i = Σ_{k∈K} n_{i,k} for all i ∈ I.
We introduce integer variable x_{i,k} to represent the number of VMs of type i allocated to server k (after the VM consolidation), binary variable y_k to indicate whether or not server k is activated, and integer variable z_{i,k} to represent the number of VMs of type i migrated to server k. Then the mathematical formulation of the basic VMCP can be written as:

min Σ_{k∈K} c^{run}_k y_k + Σ_{i∈I} Σ_{k∈K} c^{alloc}_{i,k} x_{i,k} + Σ_{i∈I} Σ_{k∈K} c^{mig}_{i,k} z_{i,k}   (1a)
s.t. Σ_{i∈I} u_{i,r} x_{i,k} ≤ s_{k,r} y_k, ∀ k ∈ K, ∀ r ∈ R,   (1b)
Σ_{k∈K} x_{i,k} = d_i, ∀ i ∈ I,   (1c)
(x_{i,k} − n_{i,k})_+ = z_{i,k}, ∀ i ∈ I, ∀ k ∈ K,   (1d)
x_{i,k}, z_{i,k} ∈ Z_+, x_{i,k} ≤ v_{i,k}, ∀ i ∈ I, ∀ k ∈ K,   (1e)
y_k ∈ {0, 1}, ∀ k ∈ K.   (1f)

Constraint (1b) ensures that the total workload of VMs allocated to each server does not exceed any of its resource capacities. Constraint (1c) enforces that all VMs of each type have to be allocated to servers. Constraint (1d) relates variables x_{i,k} and z_{i,k}. More specifically, it enforces that if the number of VMs of type i allocated to server k after the VM consolidation, x_{i,k}, is larger than that before the VM consolidation, n_{i,k}, then the number of VMs of type i migrated to server k, z_{i,k}, must be equal to x_{i,k} − n_{i,k}; otherwise, it is equal to zero. Finally, constraints (1e) and (1f) enforce x_{i,k}, z_{i,k}, and y_k to be integer/binary variables and impose trivial upper bounds {v_{i,k}} on the variables x_{i,k}, where

v_{i,k} = min{ Σ_{k'∈K} n_{i,k'},  min_{r∈R} ⌊ s_{k,r} / u_{i,r} ⌋ }, ∀ i ∈ I, ∀ k ∈ K.

The objective function (1a) to be minimized is the sum of the activation cost of the servers, the cost of allocating all VMs to servers, and the cost of migrating VMs among servers. Here c^{run}_k ≥ 0, c^{alloc}_{i,k} ≥ 0, and c^{mig}_{i,k} ≥ 0 denote the activation cost of server k, the cost of allocating a VM of type i to server k, and the cost of migrating a VM of type i to server k, respectively. In practice, c^{run}_k and c^{alloc}_{i,k} reflect the power consumption of activating server k and allocating a VM of type i to server k, respectively [15]. As demonstrated in [15], the VM migration process creates non-negligible energy overhead on the source and destination servers (see also [25,26]). Therefore, we follow Mazumdar and Pranzo [15] to consider the energy cost as the migration cost and assume c^{mig}_{i,k} = c^{alloc}_{i,k} for all i ∈ I and k ∈ K. Notice that for a VM of type i ∈ I that is previously hosted at server k ∈ K, if it is still run on server k after the VM consolidation, it will only incur the allocation cost at the source server; and if it is migrated to a destination server k', it will incur the allocation costs at the source and destination servers k and k' (as c^{mig}_{i,k} = c^{alloc}_{i,k} for all i ∈ I and k ∈ K).

Problem (1) is an MILP problem since the nonlinear constraint (1d) can be equivalently linearized. Indeed, by c^{mig}_{i,k} ≥ 0, constraint (1d) can be equivalently presented as the following linear constraint:

x_{i,k} − n_{i,k} ≤ z_{i,k}, ∀ i ∈ I, ∀ k ∈ K.   (1d')

Note that the linearity of all variables in problem (1) is vital, which enables us to leverage an efficient MILP solver such as CPLEX [27] to solve the problem to global optimality.

2.2. Extensions of the virtual machine consolidation problem

The VMCP attempts to (re)allocate VMs to servers subject to resource constraints. In practice, however, a VM manager should also deal with other practical requirements.
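To make formulation (1) concrete, the sketch below builds it (with the linearized constraint (1d')) in Python using the open-source PuLP modeling library. The tiny instance data are invented for illustration; only the model structure mirrors (1a)-(1f).

```python
# A minimal sketch of formulation (1) with constraint (1d'), using PuLP.
# The instance data below are illustrative, not from the paper.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpInteger, LpBinary, value

K = [0, 1]                                # servers
I = [0, 1]                                # VM types
R = [0]                                   # resources (e.g., CPU only)
u = {(0, 0): 2, (1, 0): 3}                # u[i, r]: load of a type-i VM on resource r
s = {(0, 0): 8, (1, 0): 6}                # s[k, r]: capacity of server k for resource r
n = {(0, 0): 2, (0, 1): 1, (1, 0): 0, (1, 1): 1}  # n[i, k]: current allocation
d = {i: sum(n[i, k] for k in K) for i in I}
c_run = {0: 10.0, 1: 12.0}
c_alloc = {(i, k): 1.0 for i in I for k in K}
c_mig = dict(c_alloc)                     # the paper assumes c_mig = c_alloc
v = {(i, k): min(d[i], min(s[k, r] // u[i, r] for r in R)) for i in I for k in K}

prob = LpProblem("VMCP", LpMinimize)
x = LpVariable.dicts("x", (I, K), lowBound=0, cat=LpInteger)
z = LpVariable.dicts("z", (I, K), lowBound=0, cat=LpInteger)
y = LpVariable.dicts("y", K, cat=LpBinary)

# (1a): activation + allocation + migration costs
prob += (lpSum(c_run[k] * y[k] for k in K)
         + lpSum(c_alloc[i, k] * x[i][k] for i in I for k in K)
         + lpSum(c_mig[i, k] * z[i][k] for i in I for k in K))
for k in K:
    for r in R:        # (1b): resource capacities
        prob += lpSum(u[i, r] * x[i][k] for i in I) <= s[k, r] * y[k]
for i in I:            # (1c): every VM is allocated
    prob += lpSum(x[i][k] for k in K) == d[i]
for i in I:
    for k in K:        # (1d'): linearized migration count; (1e): upper bounds
        prob += x[i][k] - n[i, k] <= z[i][k]
        prob += x[i][k] <= v[i, k]

prob.solve()
print("objective =", value(prob.objective))
```

The side constraints of Section 2.2 below would be added to `prob` in exactly the same way (e.g., a single `lpSum(z[i][k] ...) <= L` row for the migration limit).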
In this subsection, we introduce four side constraints derived from practical applications in the literature: new incoming VMs [15], a restriction on the maximum number of VM migrations [5,15,28], a restriction on the maximum number of VMs on servers [15,29], and restrictions on allocating a VM type to a server [29,30]. All these constraints can be incorporated into the VMCP.

• New incoming VMs. The cloud data center needs to embed new incoming VMs into the servers [15]. We denote the number of new incoming VMs of type i, i ∈ I, as d^{new}_i. To deal with new incoming VMs, we introduce integer variable x^{new}_{i,k} to denote the number of new incoming VMs of type i allocated to server k. To ensure all new incoming VMs are allocated to servers, we need the constraints

Σ_{k∈K} x^{new}_{i,k} = d^{new}_i, ∀ i ∈ I.   (2)

In addition, the term Σ_{i∈I} Σ_{k∈K} c^{alloc}_{i,k} x^{new}_{i,k} must be included in the objective function of problem (1) to reflect the cost of allocating all new VMs to the servers. Moreover, as embedding new incoming VMs into a server also leads to resource consumption, the capacity constraint (1b) must be changed into

Σ_{i∈I} u_{i,r} x_{i,k} + Σ_{i∈I} u_{i,r} x^{new}_{i,k} ≤ s_{k,r} y_k, ∀ k ∈ K, ∀ r ∈ R.   (3)

• Maximum number of VM migrations. To limit administrative costs, the cloud data center manager requires that the number of migrations of VMs cannot exceed a predefined number Ł [5,15,28], which can be enforced by

Σ_{i∈I} Σ_{k∈K} z_{i,k} ≤ Ł.   (4)

• Maximum number of VMs on servers. In practice, the cloud data center manager may spend a lot of time in the event of a server failure if too many VMs are allocated to the server [15,29]. Consequently, it is reasonable to impose a threshold m_k on the maximum number of VMs that are allocated to server k ∈ K:

Σ_{i∈I} x_{i,k} ≤ m_k, ∀ k ∈ K.   (5)

If new incoming VMs are also required to be embedded in the servers, then constraint (5) should be rewritten as

Σ_{i∈I} x_{i,k} + Σ_{i∈I} x^{new}_{i,k} ≤ m_k, ∀ k ∈ K.   (6)

• Allocation restriction constraints. A subset of servers may exhibit some properties, such as kernel version, clock speed, and the presence of an external IP address. It is impossible to allocate VMs with specific attribute requirements to a server that does not provide these attributes [29,30]. This can be enforced by the constraint

x_{i,k} = 0, ∀ i ∈ I, ∀ k ∈ K(i),   (7)

where K(i) ⊆ K denotes the set of servers that cannot process VMs of type i. Similarly, if new incoming VMs are also required to be embedded in the servers, then the constraint

x^{new}_{i,k} = 0, ∀ i ∈ I, ∀ k ∈ K(i),   (8)

needs to be included in problem (1).

2.3. Comparison with the formulations in [5] and [15]

To formulate the VMCP or its extensions, other MILP formulations in the literature can be used. In this subsection, we briefly review the two MILP formulations in [5] and [15] and show the advantages of the proposed formulation over these two existing formulations. For easy presentation and fair comparison, we change the objective functions and constraints of the problems in [5] and [15] to be the same as those of the basic VMCP in Section 2.1.¹

¹ We remark that (i) the problem in [5] attempts to allocate new incoming VMs to the servers such that the activation cost is minimized subject to the resource constraint; and (ii) the problem in [15] is an extension of the basic VMCP (1) in which the new incoming VMs (constraints (2)-(3)) and the limitation on the number of VM migrations (constraint (4)) are considered.

First, we compare the proposed formulation (1) with the formulation in [5]. Different from our proposed formulation, where a 2-index integer variable x_{i,k} is used to represent the number of VMs of type i allocated to server k, the formulation of [5] uses a 3-index binary variable x_{i,v,k} to represent whether or not VM v of type i is allocated to server k after VM consolidation.
Similarly, n_{i,v,k} and z_{i,v,k} are used to represent whether VM v of type i is allocated to server k before VM consolidation and whether or not VM v of type i is migrated to server k, respectively. The formulation in [5] can be written as

min Σ_{k∈K} c^{run}_k y_k + Σ_{i∈I} Σ_{v∈D(i)} Σ_{k∈K} c^{alloc}_{i,k} x_{i,v,k} + Σ_{i∈I} Σ_{v∈D(i)} Σ_{k∈K} c^{mig}_{i,k} z_{i,v,k}   (9a)
s.t. Σ_{i∈I} Σ_{v∈D(i)} u_{i,r} x_{i,v,k} ≤ s_{k,r} y_k, ∀ k ∈ K, ∀ r ∈ R,   (9b)
Σ_{k∈K} x_{i,v,k} = 1, ∀ i ∈ I, ∀ v ∈ D(i),   (9c)
x_{i,v,k} − n_{i,v,k} ≤ z_{i,v,k}, ∀ i ∈ I, ∀ v ∈ D(i), ∀ k ∈ K,   (9d)
x_{i,v,k}, z_{i,v,k} ∈ {0, 1}, ∀ i ∈ I, ∀ v ∈ D(i), ∀ k ∈ K,   (9e)
y_k ∈ {0, 1}, ∀ k ∈ K,   (9f)

where D(i) = {1, ..., d_i}. Though formulations (1) and (9) are equivalent in terms of returning the same optimal solution, the problem size of the proposed formulation (1) is much smaller than that of (9). Indeed, the numbers of variables and constraints in (1) are O(|K||I|) and O(|K|(|I| + |R|)), respectively, while those in (9) are O(|K| Σ_{i∈I} d_i) and O(|K|(Σ_{i∈I} d_i + |R|)), respectively. In practice, the requested loads of many VMs, such as CPU cores and normalized memory, are identical [24], implying that |I| ≪ Σ_{i∈I} d_i.

Next, we compare the proposed formulation (1) with the formulation in [15]. Different from the proposed formulation (1), where a 2-index integer variable z_{i,k} is used to represent the number of VMs of type i migrated to server k, the formulation in [15] uses a 3-index integer variable z_{i,j,k} to indicate the number of VMs of type i migrated from server j to server k. The mathematical formulation in [15] can be presented as

min Σ_{k∈K} c^{run}_k y_k + Σ_{i∈I} Σ_{k∈K} c^{alloc}_{i,k} n_{i,k} + Σ_{i∈I} Σ_{j∈K} Σ_{k∈K} c^{mig}_{i,k} z_{i,j,k}   (10a)
s.t. n_{i,k} + Σ_{j∈K} (z_{i,j,k} − z_{i,k,j}) ≥ 0, ∀ i ∈ I, ∀ k ∈ K,   (10b)
Σ_{i∈I} u_{i,r} ( n_{i,k} + Σ_{j∈K} (z_{i,j,k} − z_{i,k,j}) ) ≤ s_{k,r} y_k, ∀ k ∈ K, ∀ r ∈ R,   (10c)
z_{i,j,k} ∈ {0, ..., n_{i,k}}, ∀ i ∈ I, ∀ j ∈ K, ∀ k ∈ K,   (10d)
y_k ∈ {0, 1}, ∀ k ∈ K.   (10e)

Though both numbers of constraints in (10) and (1) are O(|K|(|I| + |R|)), the number of variables in (10) is much larger than that in (1) (O(|K|²|I|) versus O(|K||I|)).

Based on the above discussion, we can conclude that the proposed formulation (1) for the VMCP is much more compact than the two existing formulations (9) and (10) in [5] and [15]. Therefore, formulation (1) can be much easier to solve than formulations (9) and (10) by standard MILP solvers (e.g., Gurobi and CPLEX), as demonstrated in Section 4.2. In addition, as shown in [5, Theorem 1], the basic VMCP is strongly NP-hard even for the case c^{mig}_{i,k} = 0 for all i ∈ I and k ∈ K, meaning that customized (exact or heuristic) algorithms for efficiently solving the VMCP are needed in practice, especially when the problem's dimension is large. We remark that the compact formulation (1) is an important step towards developing an efficient customized algorithm for solving the VMCP (e.g., it leads to a compact LP relaxation, which is the basis of many efficient customized LP relaxation based algorithms). In the next section, we shall develop an efficient customized exact algorithm based on formulation (1) for solving the VMCP.
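The size gap is easy to quantify. The following short calculation, with invented instance sizes, counts the dominant variable terms of the three formulations.

```python
# Illustrative count of the dominant number of variables in formulations
# (1), (9), and (10); the instance sizes are made up for illustration.
K, I, R = 100, 20, 3           # servers, VM types, resources
d_total = 5000                 # total number of VMs, sum_i d_i (|I| << d_total)

vars_f1 = 2 * K * I + K        # x[i,k], z[i,k], y[k]
vars_f9 = 2 * K * d_total + K  # x[i,v,k], z[i,v,k], y[k]
vars_f10 = K * K * I + K       # z[i,j,k], y[k]
print(f"(1):  {vars_f1:>9,} variables")   # ~4e3
print(f"(9):  {vars_f9:>9,} variables")   # ~1e6
print(f"(10): {vars_f10:>9,} variables")  # ~2e5
```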
3. The cut-and-solve algorithm

In this section, we develop a cut-and-solve (C&S) algorithm to efficiently obtain an optimal solution of the VMCP. Specifically, in Section 3.1, we demonstrate how to apply the C&S procedure [31], a tree search procedure, to solve the VMCP. Then, in Section 3.2, we propose a new relaxation for the VMCP and develop a cutting plane approach to solve this new relaxation. The newly proposed relaxation provides a stronger lower bound than the natural continuous relaxation of the VMCP, which effectively reduces the search tree size, making a more efficient C&S algorithm. For simplicity of presentation, we only present the algorithm for the basic VMCP in Section 2.1, as the proposed algorithm can easily be adapted to solve the extensions of the VMCP in Section 2.2.

3.1. Cut-and-solve

The C&S procedure, first proposed by Climer and Zhang [31], has been applied to solve well-known structured mixed binary programming problems, e.g., the traveling salesman problem [31], the facility location problem [32-34], and the multicommodity uncapacitated fixed-charge network design problem [35]. For these problems, C&S has been demonstrated to be much more efficient than generic MILP solvers. In this subsection, we demonstrate how to apply the C&S procedure to solve VMCPs.

The C&S procedure is essentially a branch-and-bound algorithm in which a search tree (see Fig. 1) is constructed during the search. More specifically, C&S denotes the original problem (1) as DP_0 and the best objective value of all feasible solutions found so far as UB_min (the corresponding feasible solution is denoted as (x^min, y^min, z^min)). At the q-th (q ∈ Z_+) level of the C&S search tree (see Fig. 1), we first solve the LP relaxation of DP_q, and denote its solution by (x*, y*, z*) and the corresponding objective value by LB_q.

i) If (x*, y*, z*) is an integer vector, then (x*, y*, z*) is an optimal solution of problem DP_q and hence the search procedure can be terminated;
ii) If LB_q ≥ UB_min, then (x^min, y^min, z^min) must be an optimal solution of problem (1) and hence the search procedure can also be terminated.

If neither (i) nor (ii) is satisfied, we then decompose DP_q into two subproblems SP_{q+1} and DP_{q+1}, which are defined as DP_q with the so-called piercing cuts [31]

Σ_{k∈S} y_k ≤ ϕ,   (11)

and

Σ_{k∈S} y_k ≥ ϕ + 1,   (12)

respectively. Here ϕ is a nonnegative integer and S is a subset of K. A common selection used in [31-34] is to set ϕ = 0, making the right subproblem DP_{q+1} a dense problem and the left subproblem SP_{q+1} a sparse problem (in terms of a small solution space). Indeed, for SP_{q+1}, such a selection forces (i) y_k = 0 for all k ∈ S and (ii) x_{i,k} = 0 and z_{i,k} = 0 for all i ∈ I and k ∈ S (implied by constraints (1b) and (1d')). Due to the small solution space, the sparse problem SP_{q+1} can be solved by standard MILP solvers within a reasonable time limit, especially when |S| is large. Moreover, as long as the sparse problem SP_{q+1} has a feasible solution, it provides an upper bound UB_{q+1} for problem (1). If UB_{q+1} < UB_min, UB_min is updated to UB_{q+1}. The above procedure is repeated until case i) or ii) is satisfied. The details are summarized in Algorithm 1.

We now discuss the selection of S in (11) and (12), which defines the subproblems SP_{q+1} and DP_{q+1}. A straightforward strategy is to choose the set S as

S_1 := { k ∈ K : y*_k = 0 },   (13)

where y*, as stated, is the optimal solution of DP_q's relaxation. The rationale behind this strategy lies in the fact that an optimal solution to DP_q usually has a number of components that are identical to those of an optimal solution to DP_q's relaxation. Consequently, it is more likely to find a good feasible solution (in terms of a small objective value) by solving the sparse problem SP_{q+1}. However, our preliminary experiments showed that due to the (general) degeneracy of the LP relaxation of DP_q, such a simple strategy cannot improve the lower bound (returned by solving the LP relaxation of DP_{q+1}) fast enough, leading to a large search tree. For this reason, we use a more sophisticated strategy, suggested by Climer and Zhang [31], to determine S, which is detailed as follows. From basic LP theory, the reduced cost r*_k of a variable y_k is a lower bound on the increase of the objective value if the value of this variable is changed by one unit. These reduced costs can be obtained by solving the LP relaxation of DP_q. Moreover,
The rationale behind this strategy lies in the fact that an optimal solution to $\mathrm{DP}_q$ usually has a number of components that are identical to those of an optimal solution to $\mathrm{DP}_q$'s relaxation. Consequently, it is more likely to find a good feasible solution (in terms of a small objective value) by solving the sparse problem $\mathrm{SP}_{q+1}$. However, our preliminary experiments showed that, due to the (general) degeneracy of the LP relaxation of $\mathrm{DP}_q$, such a simple strategy cannot improve the lower bound (returned by solving the LP relaxation of $\mathrm{DP}_{q+1}$) fast enough, leading to a large search tree. For this reason, we use a more sophisticated strategy, suggested by Climer and Zhang [31], to determine $S$, detailed as follows. From basic LP theory, the reduced cost $r^*_k$ of a variable $y_k$ is a lower bound on the increase of the objective value if the value of this variable is changed by one unit. These reduced costs can be obtained when solving the LP relaxation of $\mathrm{DP}_q$. Moreover, 1) if $y^*_k = 0$, then $r^*_k \ge 0$; 2) if $y^*_k = 1$, then $r^*_k \le 0$; 3) if $0 < y^*_k < 1$, then $r^*_k = 0$. For more details, we refer to [?, Chapter 5]. Using the reduced costs, we set $S$ as

$$S_2 := \{ k\in K : r^*_k \ge \varepsilon \}, \tag{14}$$

where $\varepsilon > 0$ controls the problem size of $\mathrm{SP}_{q+1}$ and the size of C&S's search tree. Indeed, it follows that $S_2 \subseteq S_1$, leading to a relatively large sparse problem $\mathrm{SP}_{q+1}$, as compared with that defined by $S_1$. The larger $\varepsilon$ is, the larger the solution space of $\mathrm{SP}_{q+1}$ is. However, this also enables a much better lower bound returned by solving $\mathrm{DP}_{q+1}$'s relaxation (indeed, the difference of the objective values of $\mathrm{DP}_q$'s and $\mathrm{DP}_{q+1}$'s relaxations is at least $\min\{r^*_k : k\in S_2\} \ge \varepsilon > 0$), yielding a smaller C&S search tree. In our implementation, we set $\varepsilon = 10^{-4}$.

Algorithm 1: C&S algorithm
1. Initialize $\mathrm{UB}_{\min} \leftarrow +\infty$ and $q \leftarrow 0$;
2. while true do
3. Solve the LP relaxation of problem $\mathrm{DP}_q$ to obtain the solution $(x^*, y^*, z^*)$ and the objective value $\mathrm{LB}_q$;
4. if $\mathrm{LB}_q \ge \mathrm{UB}_{\min}$ then
5. Stop and return the optimal solution $(x^{\min}, y^{\min}, z^{\min})$;
6. if $(x^*, y^*, z^*)$ is an integer vector then
7. Stop and return the optimal solution $(x^*, y^*, z^*)$;
8. Use the piercing cuts (11) and (12) to decompose $\mathrm{DP}_q$ into two subproblems $\mathrm{SP}_{q+1}$ and $\mathrm{DP}_{q+1}$;
9. Solve $\mathrm{SP}_{q+1}$ to optimality by an MILP solver and denote its solution and objective value by $(x', y', z')$ and $\mathrm{UB}_{q+1}$;
10. if $\mathrm{UB}_{q+1} < \mathrm{UB}_{\min}$ then
11. Update $(x^{\min}, y^{\min}, z^{\min}) \leftarrow (x', y', z')$ and $\mathrm{UB}_{\min} \leftarrow \mathrm{UB}_{q+1}$;
12. Set $q \leftarrow q + 1$;

An improved relaxation problem and the cutting plane approach

The LP relaxation of problem (1), obtained by ignoring the integrality requirement on all decision variables, is as follows:

$$\min \ \sum_{i\in I}\sum_{k\in K} c^{\mathrm{alloc}}_{i,k}\, n_{i,k} + \sum_{k\in K} c^{\mathrm{run}}_k y_k + \sum_{i\in I}\sum_{k\in K} c^{\mathrm{mig}}_{i,k}\, z_{i,k} \tag{15a}$$
$$\text{s.t.} \ \text{(1b), (1c), (1d')}, \tag{15b}$$
$$x_{i,k},\, z_{i,k} \in \mathbb{R}_+,\ x_{i,k} \le v_{i,k}, \quad \forall\, i\in I,\ \forall\, k\in K, \tag{15c}$$
$$y_k \in [0,1], \quad \forall\, k\in K. \tag{15d}$$

However, the feasible region of the LP relaxation (15) is enlarged compared with that of the original problem (1). As a result, this relaxation usually provides a weak lower bound, leading to a large C&S search tree; see Section 4.3 further ahead. To overcome this weakness, we present a new relaxation that has a much more compact feasible region and hence provides a much stronger lower bound than relaxation (15). Then we provide a cutting plane approach to solve this newly proposed relaxation.
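As a concrete illustration of the reduced-cost selection rule (14) above, the following sketch extracts $S_2$ from an LP solution. It assumes the LP solver exposes the reduced costs of the $y$ variables (as simplex-based solvers do); the names are illustrative only.

```python
# Sketch of the piercing-cut support selection S_2 in (14): keep the servers
# whose y_k reduced cost is at least a tolerance eps.

def select_piercing_set(reduced_costs, eps=1e-4):
    """reduced_costs[k] is the reduced cost r*_k of y_k in DP_q's LP relaxation."""
    return {k for k, r in enumerate(reduced_costs) if r >= eps}

# Example: servers 0 and 3 are forced off in the sparse problem SP_{q+1}.
print(select_piercing_set([0.7, 0.0, -0.2, 0.05, 0.0]))  # -> {0, 3}
```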
An improved relaxation problem

To proceed, we observe that problem (1) requires

$$(x_{\cdot,k}, y_k) \in X(r,k) := \Big\{ (x_{\cdot,k}, y_k) \in \mathbb{Z}^{|I|}_+ \times \{0,1\} : \sum_{i\in I} u_{i,r}\, x_{i,k} \le s_{k,r}\, y_k,\ x_{i,k} \le v_{i,k},\ \forall\, i\in I \Big\}, \tag{16}$$

for all $k\in K$ and $r\in R$. However, in problem (15), this requirement is relaxed to

$$(x_{\cdot,k}, y_k) \in X_L(r,k) := \Big\{ (x_{\cdot,k}, y_k) \in \mathbb{R}^{|I|}_+ \times [0,1] : \sum_{i\in I} u_{i,r}\, x_{i,k} \le s_{k,r}\, y_k,\ x_{i,k} \le v_{i,k},\ \forall\, i\in I \Big\}, \tag{17}$$

yielding a much larger feasible region for $(x_{\cdot,k}, y_k)$. Notice that, by (16) and $X(r,k) \subseteq \mathrm{conv}(X(r,k))$,

$$(x_{\cdot,k}, y_k) \in \mathrm{conv}(X(r,k)) \tag{18}$$

must hold for every feasible solution of problem (1). Therefore, our first refinement of relaxation (15) is to replace (17) with (18). As $\mathrm{conv}(X(r,k)) \subseteq X_L(r,k)$, such a refinement can possibly yield a smaller feasible region for $(x_{\cdot,k}, y_k)$ when relaxing the integrality requirement on the variables $(x_{\cdot,k}, y_k)$.

Next, we pose more restrictions on the vector $y$. In particular, for $r\in R$, adding all the constraints in (1b) over $k\in K$ and using constraint (1c), we obtain

$$\sum_{k\in K} s_{k,r}\, y_k \ge \sum_{i\in I} d_i\, u_{i,r}. \tag{19}$$

This constraint requires that the total resources of the activated servers be larger than or equal to the total required resources of all VMs. We remark that problem (1) requires

$$y \in Y(r) := \big\{ y \in \{0,1\}^{|K|} : \text{(19)} \big\}, \quad \forall\, r\in R, \tag{20}$$

while in relaxation (15), $y \in \{0,1\}^{|K|}$ is relaxed to $y \in [0,1]^{|K|}$, and as a result, only

$$y \in Y_L(r) := \big\{ y \in [0,1]^{|K|} : \text{(19)} \big\}, \quad \forall\, r\in R, \tag{21}$$

follows. Our second refinement of relaxation (15) is to enforce

$$y \in \mathrm{conv}(Y(r)), \quad \forall\, r\in R. \tag{22}$$

Similarly, as $\mathrm{conv}(Y(r)) \subseteq Y_L(r)$, enforcing (22) in relaxation (15) can possibly yield a smaller feasible region for the vector $y$ when relaxing the integrality requirement on $y$. The improved relaxation then reads

$$\min \ \text{(15a)} \quad \text{s.t.} \ \text{(1c), (1d'), (15c), (15d)}, \tag{23a-b}$$
$$(x_{\cdot,k}, y_k) \in \mathrm{conv}(X(r,k)), \quad \forall\, k\in K,\ \forall\, r\in R, \tag{23c}$$
$$y \in \mathrm{conv}(Y(r)), \quad \forall\, r\in R. \tag{23d}$$

As discussed, (23c) and (23d) yield a smaller feasible region for the decision variables in relaxation (23), as compared with that of relaxation (15). As a result, relaxation (23) can provide a tighter lower bound and hence a smaller C&S search tree; see Section 4.3 further ahead.

3.2.2. The cutting plane approach to solve (23)

$\mathrm{conv}(X(r,k))$ and $\mathrm{conv}(Y(r))$ are polytopes that can be described by a finite number of inequalities, called facet-defining inequalities; see, e.g., [36, Proposition 8.1]. However, it is not practical to solve problem (23) by enumerating all inequalities of $\mathrm{conv}(X(r,k))$ and $\mathrm{conv}(Y(r))$, for the following two reasons. First, it is computationally expensive to find all inequalities required to describe $\mathrm{conv}(X(r,k))$ and $\mathrm{conv}(Y(r))$. Second, the numbers of such inequalities are potentially huge (usually exponential), making problem (23) hard to solve in this explicit form. For this reason, we use a cutting plane approach to solve problem (23), as used, e.g., in [37] in the context of the generalized assignment problem. This approach is detailed as follows. First, we solve relaxation (15) to obtain its solution $(x^*, y^*)$. Then, we solve the separation problem; that is, we either (i) find a set of inequalities that are valid for $\mathrm{conv}(X(r,k))$, $k\in K$ and $r\in R$, and $\mathrm{conv}(Y(r))$, $r\in R$, but cut off the point $(x^*, y^*)$ (called violated inequalities), or (ii) prove that $(x^*_{\cdot,k}, y^*_k) \in \mathrm{conv}(X(r,k))$ for all $k\in K$ and $r\in R$, and $y^* \in \mathrm{conv}(Y(r))$ for all $r\in R$. In case (i), we add the violated inequalities to relaxation (15) and solve it again. In case (ii), $(x^*, y^*)$ must be an optimal solution of problem (23).
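As a small aside, the aggregated constraint (19) underlying $Y(r)$ is easy to check directly. The sketch below does so for a single hypothetical resource; all numbers are made up.

```python
# Sketch of the aggregate cover constraint (19): the total capacity of the
# activated servers must reach the total demand of all VMs. Toy data only.

def satisfies_19(y, s_r, d, u_r):
    """y[k] in {0,1}, s_r[k] = s_{k,r}, d[i] = d_i, u_r[i] = u_{i,r}."""
    capacity = sum(s_r[k] * y[k] for k in range(len(y)))
    demand = sum(d[i] * u_r[i] for i in range(len(d)))
    return capacity >= demand

s_r = [4, 8, 16]           # CPU capacities of three servers
d, u_r = [3, 2], [1, 4]    # three 1-core VMs and two 4-core VMs -> demand 11
print(satisfies_19([1, 0, 1], s_r, d, u_r))  # 20 >= 11 -> True
print(satisfies_19([1, 1, 0], s_r, d, u_r))  # 12 >= 11 -> True
print(satisfies_19([1, 0, 0], s_r, d, u_r))  # 4 < 11  -> False
```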
The above procedure is applied iteratively until case (ii) holds. In the following, we demonstrate how to solve the separation problem in detail.

• Integer knapsack set

To solve the separation problem over $\mathrm{conv}(X(r,k))$ or $\mathrm{conv}(Y(r))$, it suffices to consider the separation problem over $\mathrm{conv}(X)$, where $X$ is the generic integer knapsack set

$$X = \Big\{ x \in \mathbb{Z}^{|N|}_+ : \sum_{i\in N} a_i x_i \le b,\ x_i \le v_i,\ \forall\, i\in N \Big\},$$

with $a_i \ge 0$, $a_i v_i \le b$ for all $i\in N$, and $b \ge 0$. Indeed, by replacing the variable $y_k$ with $y'_k = 1 - y_k$ for all $k\in K$ in $Y(r)$, we obtain the so-called binary knapsack set

$$Y'(r) = \Big\{ y' \in \{0,1\}^{|K|} : \sum_{k\in K} s_{k,r}\, y'_k \le \sum_{k\in K} s_{k,r} - \sum_{i\in I} d_i u_{i,r} \Big\}, \quad \forall\, r\in R,$$

which is a special case of the integer knapsack set, with $v_i = 1$, $i\in N$, in $X$. We remark that the inequality $\sum_{k\in K} \alpha_{k,r}\, y'_k \le \beta_r$ is valid for $\mathrm{conv}(Y'(r))$ if and only if $\sum_{k\in K} \alpha_{k,r} (1 - y_k) \le \beta_r$ is valid for $\mathrm{conv}(Y(r))$. The set $X(r,k)$ is of the form

$$X_y = \Big\{ (x, y) \in \mathbb{Z}^{|N|}_+ \times \{0,1\} : \sum_{i\in N} a_i x_i \le b y,\ x_i \le v_i,\ \forall\, i\in N \Big\}.$$

The following proposition shows that all nontrivial facet-defining inequalities of $\mathrm{conv}(X_y)$ can be derived from facet-defining inequalities of $\mathrm{conv}(X)$.

Proposition 1. (i) All facet-defining inequalities of $\mathrm{conv}(X)$, except $x_i \ge 0$, $i\in N$, are of the form $\sum_{i\in N} \pi_i x_i \le \pi_0$ with $\pi_i \ge 0$, $i\in N$, and $\pi_0 > 0$; and (ii) all facet-defining inequalities of $\mathrm{conv}(X_y)$, except $x_i \ge 0$, $i\in N$, and $y \le 1$, have the form $\sum_{i\in N} \pi_i x_i \le \pi_0 y$, where $\sum_{i\in N} \pi_i x_i \le \pi_0$ is a facet-defining inequality of $\mathrm{conv}(X)$ differing from the inequalities $x_i \ge 0$, $i\in N$.

Proof. The proof is relegated to the appendix.

Let $(x^*, y^*) \in \mathbb{R}^{|N|}_+ \times [0,1]$. If $y^* = 0$, then $x^* = 0$ and thus $(x^*, y^*) \in \mathrm{conv}(X_y)$. Otherwise, by Proposition 1, it follows that $(x^*, y^*) \in \mathrm{conv}(X_y)$ if and only if $\bar{x} \in \mathrm{conv}(X)$, where $\bar{x}_i = x^*_i / y^*$ for all $i\in N$. Based on the above discussion, we shall concentrate on the separation problem over $\mathrm{conv}(X)$ in the following.

• Exact separation for the integer knapsack polytope $\mathrm{conv}(X)$

Next, we solve the separation problem for the polytope $\mathrm{conv}(X)$; that is, we either construct a hyperplane strictly separating the point $\bar{x}$ from $\mathrm{conv}(X)$, i.e.,

$$\pi^\top x \le \pi_0, \quad \forall\, x \in \mathrm{conv}(X) \ \text{(or equivalently, } \forall\, x\in X\text{)}, \tag{24}$$
and
$$\bar{x}^\top \pi > \pi_0, \tag{25}$$

or prove that none exists, i.e., $\bar{x} \in \mathrm{conv}(X)$. This separation problem can be reduced to the following LP problem:

$$\omega(\bar{x}) = \max_{(\pi, \pi_0)\in\mathbb{R}^{|N|+1}} \big\{ \bar{x}^\top \pi - \pi_0 : \pi^\top x \le \pi_0,\ \forall\, x\in X \big\}. \tag{26}$$

If $\omega(\bar{x}) \le 0$, we must have $\bar{x} \in \mathrm{conv}(X)$; otherwise, $\pi^\top x \le \pi_0$ is a valid inequality violated by $\bar{x}$. By Proposition 1 (i), we can, without loss of generality, add $\pi_i \ge 0$ for all $i\in N$ and $\pi_0 > 0$ to problem (26). Moreover, we can further normalize $\pi_0$ to $1$ (as $\pi_0 > 0$) and obtain the following equivalent problem:

$$\omega(\bar{x}) = \max_{\pi\in\mathbb{R}^{|N|}_+} \big\{ \bar{x}^\top \pi : \pi^\top x \le 1,\ \forall\, x\in X \big\}. \tag{27}$$

In particular, letting $\pi^*$ be an optimal solution of (27), if $\bar{x}^\top \pi^* \le 1$, we have proved $\bar{x} \in \mathrm{conv}(X)$; otherwise, we have found the inequality $\pi^{*\top} x \le 1$ violated by $\bar{x}$. One weakness of problem (27) is its large problem size. Indeed, the number of constraints $\pi^\top x \le 1$ may be exponential, as the number of points in $X$ may be exponential. Consequently, from a computational perspective, it is not practical to solve the separation problem with all constraints expressed explicitly. For this reason, we use the row generation method, an iterative approach that starts with a subset of the constraints and dynamically adds further constraints when violations occur [38].
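The following self-contained sketch illustrates one such row generation loop (the initialization, partial LP, and oracle it uses are formalized as (28)-(30) in the next paragraph). The partial LP is solved with scipy's `linprog`, and the pricing oracle is a plain pseudo-polynomial bounded knapsack dynamic program; the paper relies on the faster algorithm of [39], so this DP is only meant to show the mechanics, and all data are toy values.

```python
# Sketch of row generation for the separation problem of conv(X).

import numpy as np
from scipy.optimize import linprog

def knapsack_oracle(pi, a, v, b):
    """max pi^T h s.t. sum a[i]*h[i] <= b, 0 <= h[i] <= v[i], h integer."""
    best = np.zeros(b + 1)
    take = [[0] * len(a) for _ in range(b + 1)]
    for i, (p, w, cap) in enumerate(zip(pi, a, v)):
        for _ in range(cap):                 # each pass adds one copy of item i
            for c in range(b, w - 1, -1):
                if best[c - w] + p > best[c]:
                    best[c] = best[c - w] + p
                    take[c] = take[c - w].copy()
                    take[c][i] += 1
    c_star = int(np.argmax(best))
    return np.array(take[c_star]), best[c_star]

def separate(xbar, a, v, b, max_iter=50):
    """Return a violated inequality pi (pi^T x <= 1) or None if xbar in conv(X)."""
    U = [v[i] * np.eye(len(a))[i] for i in range(len(a))]   # initial subset, cf. (28)
    for _ in range(max_iter):
        res = linprog(-np.asarray(xbar), A_ub=np.array(U), b_ub=np.ones(len(U)),
                      bounds=[(0, None)] * len(a))          # partial LP, cf. (29)
        pi = res.x
        if np.dot(xbar, pi) <= 1 + 1e-9:
            return None                                     # xbar lies in conv(X)
        h, val = knapsack_oracle(pi, a, v, b)               # pricing oracle, cf. (30)
        if val <= 1 + 1e-9:
            return pi                                       # pi is valid and violated
        U.append(h)                                         # add the row and repeat

# Toy integer knapsack set: 2 x1 + 3 x2 <= 6, x1 <= 3, x2 <= 2.
print(separate(xbar=[2.5, 0.9], a=[2, 3], v=[3, 2], b=6))   # -> pi = [1/3, 1/2]
```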
More specifically, we first choose an initial subset

$$U = \{ v_i e_i : i\in N \} \subseteq X, \tag{28}$$

where $e_i$ is the $i$-th $|N|$-dimensional unit vector, and solve the partial separation problem defined by this subset $U$:

$$\omega'(\bar{x}) = \max_{\pi\in\mathbb{R}^{|N|}_+} \big\{ \bar{x}^\top \pi : \pi^\top x \le 1,\ \forall\, x\in U \big\}. \tag{29}$$

Let $\pi^*$ be an optimal solution of (29). If $\omega'(\bar{x}) \le 1$, we have proved $\bar{x} \in \mathrm{conv}(X)$ (as $U \subseteq X$); otherwise, we check whether $\pi^{*\top} x \le 1$ holds for all $x\in X$ by solving the following bounded knapsack problem:

$$h^* \in \arg\max_{h} \big\{ \pi^{*\top} h : h\in X \big\}. \tag{30}$$

(i) If $\pi^{*\top} h^* \le 1$, then $\pi^{*\top} x \le 1$ holds for all $x\in X$; (ii) otherwise, we add $h^*$ to $U$ and solve problem (29) again. The above procedure is applied iteratively until case (i) holds. The row generation method is summarized in Algorithm 2.

Algorithm 2: The row generation method to solve the separation problem of $\mathrm{conv}(X)$.
Input: The set $X$ and a solution $\bar{x}$.
Output: A violated inequality $\pi^{*\top} x \le 1$ separating $\bar{x}$ from $\mathrm{conv}(X)$, or the conclusion that $\bar{x} \in \mathrm{conv}(X)$.
1. Choose an initial subset $U$ as in (28);
2. Solve the partial separation problem (29) to obtain its solution $\pi^*$;
3. If $\bar{x}^\top \pi^* \le 1$, conclude $\bar{x} \in \mathrm{conv}(X)$ and stop; otherwise, solve the bounded knapsack problem (30) to obtain the solution $h^*$;
4. If $\pi^{*\top} h^* > 1$, set $U \leftarrow U \cup \{h^*\}$ and go to step 2; otherwise, stop and return the violated inequality $\pi^{*\top} x \le 1$.

We remark that problem (30) is an integer knapsack problem, which is NP-hard in general. However, it can be solved by the dynamic programming algorithm in [39], which runs in pseudo-polynomial time but is quite efficient in practice. In addition, problem (30) may have multiple optimal solutions. We follow [40] and choose an optimal solution $h^*$ with large entries. This strategy provides a much stronger inequality $\pi^\top h^* \le 1$ for problem (29), which effectively decreases the number of iterations in Algorithm 2. For more details, we refer to [40].

• Efficient implementation

In each iteration of Algorithm 2, we need to solve the LP problem (29), which is still time-consuming, especially when the dimension of $X$ is large. Below, we introduce two simple techniques to reduce the dimension of $X$. First, we can aggregate multiple variables with the same coefficient into a single variable. Specifically, suppose that $a_i$, $i\in N'$, are equal for some $N' \subseteq N$. Then, replacing $\sum_{i\in N'} x_i$ by a new variable $\delta$ (with $\delta \le \sum_{i\in N'} v_i$) in $X$, we obtain a new set $X'$. After constructing a valid inequality for $X'$, we substitute $\delta = \sum_{i\in N'} x_i$ into this inequality and obtain a valid inequality for $X$. In our experience, this simple technique effectively reduces the CPU time spent by Algorithm 2, especially when most coefficients in $X$ are equal. The second technique to reduce the dimension of $X$ comes from Vasilyev et al. [38] and consists of two steps. In the first step, the separation problem over a projected polytope $\mathrm{conv}(X(\bar{x}))$ is solved to obtain a valid inequality (violated by $\bar{x}$)

$$\sum_{i\in N_R} \pi_i x_i \le 1, \tag{31}$$

where

$$X(\bar{x}) = \Big\{ x \in \mathbb{Z}^{|N_R|}_+ : \sum_{i\in N_R} a_i x_i \le \bar{b},\ x_i \le v_i,\ \forall\, i\in N_R \Big\}, \tag{32}$$

$N_L = \{ i\in N : \bar{x}_i = 0 \}$, $N_U = \{ i\in N : \bar{x}_i = v_i \}$, $N_R = N \setminus (N_L \cup N_U)$, and $\bar{b} = b - \sum_{i\in N_U} a_i v_i$. It can be expected that the separation problem over $\mathrm{conv}(X(\bar{x}))$ is easier to solve than that over $\mathrm{conv}(X)$, especially when the number of fixed variables $|N_L| + |N_U|$ is large. In the second step, we derive a valid inequality

$$\sum_{i\in N_R} \pi_i x_i + \sum_{i\in N_L} \pi_i x_i + \sum_{i\in N_U} \pi_i x_i \le 1 + \sum_{i\in N_U} \pi_i v_i \tag{33}$$

for $\mathrm{conv}(X)$ using sequential lifting; see, e.g., [38][41][42][43][44] for a detailed discussion of sequential lifting.

Numerical results

In this section, we present simulation results to illustrate the effectiveness and efficiency of the proposed formulation (1) and the proposed C&S algorithm for solving VMCPs.
More specifically, we first perform numerical experiments comparing the performance of standard MILP solvers on the proposed formulation (1) and on the two existing formulations in [5] and [15]. Then, we present simulation results demonstrating the efficiency of the proposed C&S algorithm for solving VMCPs over standard MILP solvers. Finally, we evaluate the performance of the proposed C&S algorithm under different problem parameters. The proposed C&S was implemented in C++ linked with the IBM ILOG CPLEX optimizer 20.1.0 [27]. The time limit and relative gap tolerance were set to 7200 seconds and 0%, respectively, in all experiments. The cutting plane approach was stopped if the optimal value of the LP relaxation of the VMCP improved by less than 0.05% between two adjacent calls. Unless otherwise specified, all other CPLEX parameters were kept at their default values. All experiments were performed on a cluster of Intel(R) Xeon(R) Gold 6140 @ 2.30GHz computers with 192 GB RAM, running Linux (in 64-bit mode).

Testsets

We tested all algorithms on problem instances with 5 VM types and 10 server types with different features (CPU, RAM, and bandwidth resources and power consumption), as studied in [15]. The VM types and server types are in line with industry standards and are described in Tables 2 and 3, respectively. The basic VMCP (1) instances are constructed using the same procedure as in [15]. Specifically, each instance has an equal number of servers of each type. The number of servers $|K|$ is selected from $\{250, 500, 750, 1000\}$. For each server $k$, we iteratively assign a uniform random number $n_{i,k}$ (satisfying $n_{i,k} \in \{0, \dots, \lfloor s_{k,r}/u_{i,r} \rfloor\}$) of VMs of type $i$ until the maximum usage of the available resources (CPU, RAM, and bandwidth), i.e., the load $\sigma_k$ defined by

$$\sigma_k = \max\bigg\{ \frac{\sum_{i\in I} u_{i,r}\, n_{i,k}}{s_{k,r}} : r\in R \bigg\}, \tag{34}$$

exceeds a predefined value $\alpha$. In general, the larger $\alpha$ is, the more VMs are constructed. We choose $\alpha \in \{20\%, 40\%\}$. In our tests, we attempt to minimize the total power consumption of the servers. As shown in [8,15,45], the power consumption of a server can be represented by the following linear model:

$$P_k = P_{\mathrm{idle},k} + (P_{\max,k} - P_{\mathrm{idle},k})\, U_k, \tag{35}$$

where $P_{\mathrm{idle},k}$ is the idle power consumption (at the idle state) of server $k$, $P_{\max,k}$ is the maximum power consumption (at the peak state) of server $k$, and $U_k \in [0,1]$ is the CPU utilization of server $k$. As such, (i) the activation cost $c^{\mathrm{run}}_k$ is set to the idle power consumption of server $k$, which is equal to 60% of the maximum power consumption, as assumed in [15]; and (ii) the allocation cost $c^{\mathrm{alloc}}_{i,k}$ is set to $(P_{\max,k} - P_{\mathrm{idle},k})\, u_{i,\mathrm{CPU}} / s_{k,\mathrm{CPU}}$. As illustrated in Section 2, $c^{\mathrm{mig}}_{i,k}$ is set to $c^{\mathrm{alloc}}_{i,k}$ in the experiments. The extended VMCP (i.e., problem (1) with constraints (2)-(8)) instances are constructed based on the basic VMCP instances described above, with the parameters for constraints (2)-(8) chosen as follows. For constraints (2) and (3), the parameter $d^{\mathrm{new}}_i$ is obtained by marking existing VMs as new incoming VMs with a probability $\beta$, chosen in $\{35\%, 45\%\}$. For constraint (4), the parameter $L$ is set to $\eta \sum_{i\in I}\sum_{k\in K} n_{i,k}$, where $\eta$ is chosen in $\{30\%, 40\%\}$. For constraints (5) or (6), the maximum number of VMs on each server $k$, $m_k$, is set to $\lambda \max_{i\in I} \min_{r\in R} \lfloor s_{k,r}/u_{i,r} \rfloor$, where $\lambda \in \{85\%, 90\%\}$.
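The following sketch works out the cost construction around the linear power model (35) on one server/VM pair taken from Tables 2-3; the helper name is illustrative only.

```python
# Sketch of the cost construction around the linear power model (35):
# c_run_k = P_idle_k = 0.6 * P_max_k, and c_alloc_{i,k} spreads the dynamic
# power (P_max - P_idle) over CPU shares.

def costs(p_max, s_cpu, u_cpu):
    p_idle = 0.6 * p_max                 # idle power, as assumed in [15]
    c_run = p_idle
    c_alloc = (p_max - p_idle) * u_cpu / s_cpu
    return c_run, c_alloc

# Server 1 (4 CPUs, 180 W peak, Table 3) hosting a VM of type 2 (2 CPUs, Table 2):
print(costs(p_max=180.0, s_cpu=4, u_cpu=2))   # -> (108.0, 36.0)
```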
Notice that $\min_{r\in R} \lfloor s_{k,r}/u_{i,r} \rfloor$ is the maximum number of VMs of type $i$ that can be allocated to a given server $k$, and hence $\max_{i\in I} \min_{r\in R} \lfloor s_{k,r}/u_{i,r} \rfloor$ is the maximum number of VMs of a single type that can be allocated to a given server $k$. For constraints (7) and (8), we randomly add each element $k\in K$ to the subset $K(i)$, $i\in I$, with a probability $\theta$, chosen in $\{5\%, 10\%\}$. For each fixed $|K| \in \{250, 500, 750, 1000\}$ and $\alpha \in \{20\%, 40\%\}$, 50 basic VMCP instances are randomly generated, leading to a testbed of 400 basic VMCP instances overall. In addition, for each fixed $|K| \in \{250, 500, 750, 1000\}$, $\alpha \in \{20\%, 40\%\}$, $\beta \in \{35\%, 45\%\}$, $\eta \in \{30\%, 40\%\}$, $\lambda \in \{85\%, 90\%\}$, and $\theta \in \{5\%, 10\%\}$, 10 extended VMCP instances are randomly generated, leading to a testbed of 1280 extended VMCP instances overall.

Efficiency of the proposed formulation

In this subsection, we present computational results illustrating the efficiency of the proposed formulation (1) over those in [5] and [15] (i.e., formulations (9) and (10)). Table 4 summarizes the computational results of the three formulations solved by CPLEX. We report the number of instances that can be solved to optimality within the given time limit (#S), the average CPU time (T), and the average numbers of constraints and variables (#CONS and #VARS, respectively). As expected, (i) the numbers of variables and constraints in the proposed formulation (1) are much smaller than those in [5]; and (ii) the number of variables in the proposed formulation (1) is also much smaller than that in [15], while the numbers of constraints in the two formulations are roughly equal. Consequently, it can be clearly seen that formulation (1) is much more efficient to solve than those in [5] and [15]. More specifically, using the proposed formulation (1), 325 instances (among 400) can be solved to optimality. In sharp contrast, using the formulations in [5] and [15], only 115 and 120 instances can be solved to optimality, respectively. Indeed, for large-scale cases (e.g., $|K| = 750, 1000$), only a few instances can be solved to optimality using the two existing formulations in [5] and [15]. Moreover, as observed in Table 4, compared with the formulations in [5] and [15], the CPU time taken by solving the proposed formulation (1) is much smaller (19.5 seconds versus 4227.8 seconds and 3449.4 seconds). From these computational results, we can conclude that formulation (1) significantly outperforms the formulations in [5] and [15] in terms of solution efficiency.

Efficiency of the proposed C&S algorithm

In this subsection, we compare the performance of the proposed C&S algorithm with the approach using the MILP solver CPLEX (called CPX). In addition, to assess the advantage of embedding the proposed relaxation (23) into the C&S algorithm, we compare C&S with the C&S' algorithm, in which the LP relaxation (15) is used instead, to solve VMCPs. Figs. 2-3 plot the performance profiles of the three settings CPX, C&S, and C&S'. Each point with coordinates (a, b) on a line indicates that for b% of the instances, the CPU time is less than or equal to a seconds. From Figs. 2-3, CPX can solve more basic VMCP instances within 10 seconds and more extended VMCP instances within 5 seconds, respectively. This shows that CPX performs a bit better than C&S on easy instances.
However, on the hard instances, C&S significantly outperforms CPX, especially for basic VMCPs. In particular, C&S can solve 97% of the basic VMCP instances to optimality, while CPX can solve only 81% of them. In addition, from the two figures, we can conclude that the performance of C&S is much better than that of C&S' for both basic and extended VMCPs. This indicates that the proposed compact relaxation (23) has a significantly positive impact on the performance of the C&S approach. To gain more insight into the computational efficiency of C&S over C&S', we compare the numbers of levels of the cut-and-solve search trees returned by C&S and C&S'. The results for the basic and extended VMCP instances are summarized in Figs. 4 and 5, respectively. From the two figures, we can conclude that the number of levels returned by C&S is much smaller than that returned by C&S', especially for basic VMCP instances. More specifically, more than 97% of the basic VMCP instances can be solved by C&S within 5 levels, while only about 70% of the basic VMCP instances can be solved by C&S' within 20 levels. This shows the advantage of the proposed relaxation (23), i.e., it can effectively reduce the C&S search tree size. From the above computational results, we can conclude that (i) the proposed C&S algorithm is much more effective than standard MILP solvers, especially on hard instances; and (ii) the proposed relaxation (23) can effectively reduce the size of the C&S search tree, which plays a crucial role in the efficiency of the proposed C&S algorithm.

Performance comparison of the proposed C&S algorithm

To gain more insight into the performance of the proposed C&S algorithm, we compare the performance of C&S on instances with different numbers of servers $|K|$ and different loads $\alpha$ (the higher the load $\alpha$, the larger the number of VMs). Figs. 6 and 7 plot performance profiles of the CPU time, grouped by the number of servers $|K|$, for the basic and extended VMCPs, respectively. As expected, the CPU time of C&S generally increases with the number of servers $|K|$ for both basic and extended VMCPs. This is reasonable, as the problem size and the search space grow with the number of servers. Nevertheless, even for the largest case ($|K| = 1000$), C&S can still solve 95% of the basic VMCP instances and 87% of the extended VMCP instances to optimality, which shows the scalability of the proposed C&S with an increasing number of servers. Next, we compare the results for basic and extended VMCPs with loads $\alpha = 20\%$ and $\alpha = 40\%$. The results for basic and extended VMCPs are summarized in Figs. 8 and 9, respectively. We observe that the CPU time does not increase with the increasing value of $\alpha$ for basic VMCPs. Even for extended VMCPs, the CPU time of solving instances with load $\alpha = 40\%$ is only slightly larger than that of solving instances with load $\alpha = 20\%$. However, the same behavior cannot be observed in the computational results returned by CPX (as illustrated in Table 4, the CPU time of using CPX to solve the basic VMCPs with $\alpha = 20\%$ is smaller than that of the basic VMCPs with $\alpha = 40\%$). This shows another advantage of the proposed C&S, i.e., a higher load $\alpha$ does not lead to a larger CPU time for solving VMCPs.

Conclusion and remarks

In this paper, we have proposed new problem formulations for the VMCP, which minimize the sum of server activation, VM allocation, and migration costs subject to the resource constraints of the servers and other practical constraints.
Compared with the existing formulations in Speitkamp and Bichler [5] and Mazumdar and Pranzo [15], which suffer from large problem sizes due to 3-index variables, the proposed formulation uses 2-index variables, yielding a much smaller problem size. We have developed a cut-and-solve algorithm to solve the new formulations of VMCPs to optimality. The proposed algorithm is based on a newly proposed relaxation which, compared with the natural LP relaxation, is much stronger in terms of the relaxation bound it provides, making the algorithm suitable for solving large-scale VMCPs. Extensive computational results demonstrate that (i) the proposed formulation significantly outperforms existing formulations in terms of solution efficiency; and (ii) compared with standard MILP solvers, the proposed C&S algorithm is much more efficient.

Proof of Proposition 1. As $a_i > 0$ and $b > 0$, $X$ is an independent system and thus every facet-defining inequality of $\mathrm{conv}(X)$, except $x_i \ge 0$, is of the form $\sum_{i\in N} \pi_i x_i \le \pi_0$ with $\pi_i \ge 0$ ($i\in N$) and $\pi_0 > 0$; see [46, Page 237]. In addition, $X_y$ can be transformed into

$$X'_y = \Big\{ (x, y') \in \mathbb{Z}^{|N|}_+ \times \{0,1\} : \sum_{i\in N} a_i x_i + b y' \le b,\ x_i \le v_i,\ \forall\, i\in N \Big\}$$

by replacing the variable $y$ with $1 - y' \in \{0,1\}$. Similarly, $X'_y$ is an independent system, and thus every facet-defining inequality, except $x_i \ge 0$ and $y' \ge 0$, is of the form $\sum_{i\in N} \pi_i x_i + \pi_0 y' \le \alpha$ with $\pi_i \ge 0$ ($i\in N$), $\pi_0 \ge 0$, and $\alpha > 0$. This implies that all facet-defining inequalities of $\mathrm{conv}(X_y)$, except $x_i \ge 0$ and $y \le 1$, are of the form $\sum_{i\in N} \pi_i x_i \le \pi_0 y + \alpha - \pi_0$, where $\pi_i \ge 0$ ($i\in N$), $\pi_0 \ge 0$, and $\alpha > 0$. Since $(0, 0) \in X_y$, we have $\alpha - \pi_0 \ge 0$. If $\alpha - \pi_0 > 0$, the inequality $\sum_{i\in N} \pi_i x_i \le \pi_0 y + \alpha - \pi_0$ can be strengthened to $\sum_{i\in N} \pi_i x_i \le \alpha y$ and thus cannot be facet-defining for $\mathrm{conv}(X_y)$. Consequently, we must have $\pi_0 = \alpha > 0$. Next, we complete the proof by showing that $\sum_{i\in N} \pi_i x_i \le \pi_0 y$ with $\pi_i \ge 0$ ($i\in N$) and $\pi_0 > 0$ (differing from $y \ge 0$) is facet-defining for $\mathrm{conv}(X_y)$ if and only if $\sum_{i\in N} \pi_i x_i \le \pi_0$ with $\pi_i \ge 0$ ($i\in N$) and $\pi_0 > 0$ is facet-defining for $\mathrm{conv}(X)$.

Suppose that $\sum_{i\in N} \pi_i x_i \le \pi_0 y$ with $\pi_i \ge 0$ ($i\in N$) and $\pi_0 > 0$ (differing from $y \ge 0$) is facet-defining for $\mathrm{conv}(X_y)$. Then (i) $\sum_{i\in N} \pi_i x_i \le \pi_0$ is valid for $X$ (as $(x, 1) \in X_y$ if and only if $x\in X$); and (ii) there must exist $|N| + 1$ affinely independent points $(x^\ell, y^\ell)$, $\ell = 1, \dots, |N| + 1$, in

$$F_y = \Big\{ (x, y) \in \mathrm{conv}(X_y) : \sum_{i\in N} \pi_i x_i = \pi_0 y \Big\}.$$

As $(0, 0)$ is the only point in $F_y$ satisfying $y = 0$ and $\sum_{i\in N} \pi_i x_i \le \pi_0 y$ differs from $y \ge 0$, these $|N| + 1$ points must be $(0, 0)$ and $(x^\ell, 1)$, $\ell = 1, \dots, |N|$. Apparently, the points $x^\ell$, $\ell = 1, \dots, |N|$, must be affinely independent and contained in

$$F = \Big\{ x \in \mathrm{conv}(X) : \sum_{i\in N} \pi_i x_i = \pi_0 \Big\}.$$

Therefore, $\sum_{i\in N} \pi_i x_i \le \pi_0$ is facet-defining for $\mathrm{conv}(X)$. Now suppose that $\sum_{i\in N} \pi_i x_i \le \pi_0$ with $\pi_i \ge 0$ ($i\in N$) and $\pi_0 > 0$ is facet-defining for $\mathrm{conv}(X)$. Then (i) $\sum_{i\in N} \pi_i x_i \le \pi_0 y$ is valid for $X_y$ (as $(x, 1) \in X_y$ if and only if $x\in X$, and $\sum_{i\in N} \pi_i x_i \le \pi_0 y$ holds at $(0, 0)$); and (ii) there must exist $|N|$ affinely independent points $x^\ell$, $\ell = 1, \dots, |N|$, in $F$. Apparently, $(0, 0)$ and $(x^\ell, 1)$, $\ell = 1, \dots, |N|$, are affinely independent points of $F_y$, which shows that $\sum_{i\in N} \pi_i x_i \le \pi_0 y$ is facet-defining for $\mathrm{conv}(X_y)$.

Figure 1. C&S search tree.
Figure 2. Comparison of the CPU time between CPX, C&S, and C&S' on basic VMCPs.
Figure 3. Comparison of the CPU time between CPX, C&S, and C&S' on extended VMCPs.
Figure 4. Comparison of the number of levels of the search tree between C&S and C&S' on basic VMCPs.
Figure 5. Comparison of the number of levels of the search tree between C&S and C&S' on extended VMCPs.
Figure 6. Comparison of the CPU time for C&S on basic VMCPs with different numbers of servers |K|.
Figure 7. Comparison of the CPU time for C&S on extended VMCPs with different numbers of servers |K|.
Figure 8. Comparison of the CPU time for C&S on basic VMCPs with different loads α.
Figure 9. Comparison of the CPU time for C&S on extended VMCPs with different loads α.

Table 1. Summary of parameters and variables.

Table 2. The five VM types.

Type | CPU | RAM (GB) | Bandwidth (Mbps)
VM 1 | 1 | 1 | 10
VM 2 | 2 | 4 | 100
VM 3 | 4 | 8 | 300
VM 4 | 6 | 12 | 1000
VM 5 | 8 | 16 | 1200

Table 3. The ten server types.

Type | CPU | RAM (GB) | Bandwidth (Mbps) | Maximum power consumption (W)
Server 1 | 4 | 8 | 1000 | 180
Server 2 | 8 | 16 | 1000 | 200
Server 3 | 10 | 16 | 2000 | 250
Server 4 | 12 | 32 | 2000 | 250
Server 5 | 14 | 32 | 2000 | 280
Server 6 | 14 | 32 | 2000 | 300
Server 7 | 16 | 32 | 4000 | 300
Server 8 | 16 | 64 | 4000 | 350
Server 9 | 18 | 64 | 4000 | 380
Server 10 | 18 | 64 | 4000 | 410

Table 4. Comparison results of the proposed formulation (1) with the existing formulations in [5] and [15]. For each formulation, the columns give #S, T, #CONS, #VARS. In the table, "-" means that the average CPU time reaches the time limit.

(|K|, α) | #VM | Formulation (1) | Formulation in Speitkamp and Bichler [5] | Formulation in Mazumdar and Pranzo [15]
(250, 20%) | 959 | 50, 1.5, 2005, 2750 | 46, 905.8, 241459, 479750 | 45, 574.3, 2000, 312750
(250, 40%) | 1163 | 49, 2.9, 2005, 2750 | 37, 1664.7, 292663, 581750 | 37, 870.7, 2000, 312750
(500, 20%) | 1924 | 50, 3.9, 4005, 5500 | 20, 5179.7, 965424, 1924500 | 20, 3782.5, 4000, 1250500
(500, 40%) | 2310 | 45, 12.5, 4005, 5500 | 8, 5266.9, 1158810, 2310500 | 13, 4302.6, 4000, 1250500
(750, 20%) | 2855 | 41, 38.4, 6005, 8250 | 1, 6996.1, 2146355, 4283250 | 1, 6962.0, 6000, 2813250
(750, 40%) | 3493 | 31, 108.7, 6005, 8250 | 1, 6998.3, 2625493, 5240250 | 3, 6825.0, 6000, 2813250
(1000, 20%) | 3832 | 36, 67.7, 8005, 11000 | 0, -, 3838832, 7665000 | 0, -, 8000, 5001000
(1000, 40%) | 4655 | 23, 316.2, 8005, 11000 | 2, 7038.0, 4662655, 9311000 | 1, 7198.1, 8000, 5001000
All | | 325, 19.5 | 115, 4227.8 | 120, 3449.4

Acknowledgement. This work was partially supported by the Chinese NSF grants (Nos. 1210011180, 12171052, 11971073, 11871115, 12021001, 11991021, and 12201620), and by Alibaba Group through the Alibaba Innovative Research Program.

References

X. Xu, From cloud computing to cloud manufacturing, Robot. Comput. Integr. Manuf. 28 (1) (2012) 75-86.
M. D. Ryan, Cloud computing privacy concerns on our doorstep, Commun. ACM 54 (1) (2011) 36-38.
D. Wang, Influences of cloud computing on e-commerce businesses and industry, J. Softw. Eng. Appl. 6 (6) (2013) 313-318.
P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, A. Warfield, Xen and the art of virtualization, Proc. ACM Symp. Operat. Syst. Principles 37 (5) (2003) 164-177.
B. Speitkamp, M. Bichler, A mathematical programming approach for server consolidation problems in virtualized data centers, IEEE Trans. Serv. Comput. 3 (4) (2010) 266-278.
A. Beloglazov, J. Abawajy, R. Buyya, Energy-aware resource allocation heuristics for efficient management of data centers for cloud computing, Future Gener. Comput. Syst. 28 (5) (2012) 755-768.
H. Goudarzi, M. Ghasemazar, M. Pedram, SLA-based optimization of power and migration cost in cloud computing, in: 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), 2012, pp. 172-179.
Q. Wu, F. Ishikawa, Q. Zhu, Y. Xia, Energy and migration cost-aware dynamic virtual machine consolidation in heterogeneous cloud datacenters, IEEE Trans. Serv. Comput. 12 (4) (2019) 550-563.
N. K. Sharma, G. R. M. Reddy, Multi-objective energy efficient virtual machines allocation at the cloud data center, IEEE Trans. Serv. Comput. 12 (1) (2019) 158-171.
L. He, D. Zou, Z. Zhang, C. Chen, H. Jin, S. A. Jarvis, Developing resource consolidation frameworks for moldable virtual machines in clouds, Future Gener. Comput. Syst. 32 (2014) 69-81.
A. Marotta, S. Avallone, A simulated annealing based approach for power efficient virtual machines consolidation, in: IEEE 8th International Conference on Cloud Computing, IEEE, 2015, pp. 445-452.
F. Farahnakian, A. Ashraf, T. Pahikkala, P. Liljeberg, J. Plosila, I. Porres, H. Tenhunen, Using ant colony system to consolidate VMs for green cloud computing, IEEE Trans. Serv. Comput. 8 (2) (2015) 187-198.
J. Jiang, Y. Feng, J. Zhao, K. Li, DataABC: A fast ABC based energy-efficient live VM consolidation policy with data-intensive energy evaluation model, Future Gener. Comput. Syst. 74 (2017) 132-141.
Z. Li, X. Yu, L. Yu, S. Guo, V. Chang, Energy-efficient and quality-aware VM consolidation method, Future Gener. Comput. Syst. 102 (2020) 789-809.
S. Mazumdar, M.
Pranzo, Power efficient server consolidation for cloud data center, Future Gener. Comput. Syst. 70 (2017) 4-16.
GUROBI, GUROBI Optimizer Reference Manual (2022). URL https://www.gurobi.com/documentation/10.0/refman/index.html
T. Achterberg, SCIP: solving constraint integer programs, Math. Program. Comput. 1 (1) (2009) 1-41.
A. Beloglazov, R. Buyya, Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers, Concurrency Computat.: Pract. Exper. 24 (13) (2011) 1397-1420.
T. C. Ferreto, M. A. Netto, R. N. Calheiros, C. A. De Rose, Server consolidation with migration control for virtualized data centers, Future Gener. Comput. Syst. 27 (8) (2011) 1027-1034.
Y. Laili, F. Tao, F. Wang, L. Zhang, T. Lin, An iterative budget algorithm for dynamic virtual machine consolidation under cloud computing environment, IEEE Trans. Serv. Comput. 14 (1) (2021) 30-43.
A. Wolke, B. Tsend-Ayush, C. Pfeiffer, M. Bichler, More than bin packing: Dynamic resource allocation strategies in cloud data centers, Inf. Syst. 52 (2015) 83-95.
Z. Á. Mann, Allocation of virtual machines in cloud data centers-a survey of problem models and optimization algorithms, ACM Comput. Surv. 48 (1) (2015) 1-34.
L. Helali, M. N. Omri, A survey of data center consolidation in cloud computing systems, Comput. Sci. Rev. 39 (2021) 100366.
Alibaba, Cluster data (2022). URL https://github.com/alibaba/clusterdata
W. Dargie, Estimation of the cost of VM migration, in: 23rd International Conference on Computer Communication and Networks (ICCCN), IEEE, 2014, pp. 1-8.
K. Rybina, W. Dargie, A. Strunk, A. Schill, Investigation into the energy cost of live migration of virtual machines, in: Sustainable Internet and ICT for Sustainability (SustainIT), IEEE, 2013, pp. 1-8.
CPLEX, User's Manual for CPLEX (2022). URL https://www.ibm.com/docs/en/icos/20.1.0?topic=cplex-users-manual
M.
Bichler, T. Setzer, B. Speitkamp, Capacity planning for virtualized servers, in: Workshop on Information Technologies and Systems (WITS), 2006.
J. Anselmi, E. Amaldi, P. Cremonesi, Service consolidation with end-to-end response time constraints, in: 34th Euromicro Conference Software Engineering and Advanced Applications, IEEE, 2008, pp. 345-352.
K. Dhyani, S. Gualandi, P. Cremonesi, A constraint programming approach for the service consolidation problem, in: International Conference on Integration of Artificial Intelligence (AI) and Operations Research (OR) Techniques in Constraint Programming, Springer, 2010, pp. 97-101.
S. Climer, W. Zhang, Cut-and-solve: An iterative search strategy for combinatorial optimization problems, Artif. Intell. 170 (8-9) (2006) 714-738.
Z. Yang, F. Chu, H. Chen, A cut-and-solve based algorithm for the single-source capacitated facility location problem, Eur. J. Oper. Res. 221 (3) (2012) 521-532.
Z. Yang, H. Chen, F. Chu, N. Wang, An effective hybrid approach to the two-stage capacitated facility location problem, Eur. J. Oper. Res. 275 (2) (2019) 467-480.
S. L. Gadegaard, A. Klose, L. R. Nielsen, An improved cut-and-solve algorithm for the single-source capacitated facility location problem, EURO J. Comput. Optim. 6 (1) (2018) 1-27.
C. A. Zetina, I. Contreras, J.-F. Cordeau, Exact algorithms based on Benders decomposition for multicommodity uncapacitated fixed-charge network design, Comput. Oper. Res. 111 (2019) 311-324.
L. A. Wolsey, Integer Programming, John Wiley and Sons, 2020.
P. Avella, M. Boccia, I. Vasilyev, A computational study of exact knapsack separation for the generalized assignment problem, Comput. Optim. Appl. 45 (2010) 543-555.
I. Vasilyev, M. Boccia, S. Hanafi, An implementation of exact knapsack separation, J. Glob. Optim. 66 (2016) 127-150.
D. Pisinger, A minimal algorithm for the bounded knapsack problem, INFORMS J. Comput. 12 (1) (2000) 75-82.
L. Chen, W.-K. Chen, M.-M. Yang, Y.-H. Dai, An exact separation algorithm for unsplittable flow capacitated network design arc-set polyhedron, J. Glob. Optim. 81 (2021) 659-689.
K. Kaparis, A. N. Letchford, Separation algorithms for 0-1 knapsack polytopes, Math. Program. 124 (2010) 69-91.
Z. Gu, G. L. Nemhauser, M. W. Savelsbergh, Lifted cover inequalities for 0-1 integer programs: Computation, INFORMS J. Comput. 10 (4) (1998) 427-437.
Z. Gu, G. L. Nemhauser, M. W. Savelsbergh, Sequence independent lifting in mixed integer programming, J. Comb. Optim. 4 (2000) 109-129.
E. Zemel, Easily computable facets of the knapsack polytope, Math. Oper. Res. 14 (4) (1989) 760-764.
Y. Gao, H. Guan, Z. Qi, Y. Hou, L. Liu, A multi-objective ant colony system algorithm for virtual machine placement in cloud computing, J. Comput. Syst. Sci. 79 (8) (2013) 1230-1242.
G. L. Nemhauser, L. A. Wolsey, Integer and combinatorial optimization, John Wiley and Sons, 1988.
[ "https://github.com/alibaba/clusterdata" ]
[ "Dual and Generalized Dual Cones in Banach Spaces", "Dual and Generalized Dual Cones in Banach Spaces" ]
[ "Akhtar A Khan ", "Dezhou Kong ", "Jinlu Li " ]
[]
[]
This paper proposes and analyzes the notion of dual cones associated with the metric projection and generalized projection in Banach spaces. We show that the dual cones, related to the metric projection and generalized metric projection, lose many important properties in transitioning from Hilbert spaces to Banach spaces. We also propose and analyze the notions of faces and visions in Banach spaces and relate them to the metric projection and generalized projection. We provide many illustrative examples to give insight into the given results.

Let $X$ be a real Banach space with norm $\|\cdot\|_X$, let $X^*$ be the topological dual of $X$ with norm $\|\cdot\|_{X^*}$, and let $\langle\cdot,\cdot\rangle$ be the duality pairing between $X^*$ and $X$. We denote the null elements of $X$ and $X^*$ by $\theta$ and $\theta^*$, respectively. Moreover, the closed convex hull of a set $M \subseteq X$ is denoted by $\mathrm{co}(M)$. Given a Banach space $X$ and $r > 0$, we also use the closed ball, the open ball, and the sphere of radius $r$ centered at $\theta$. For details on the notions recalled in this section, see [37]. Given a uniformly convex and uniformly smooth Banach space $X$ with dual space $X^*$, the normalized duality map $J : X \to X^*$ is the single-valued mapping defined by

$$\langle J(x), x \rangle = \|x\|^2_X = \|J(x)\|^2_{X^*}, \quad \text{for all } x\in X.$$

In a uniformly convex and uniformly smooth Banach space $X$, the normalized duality map $J : X \to X^*$ is one-to-one, onto, continuous, and homogeneous. Furthermore, the normalized duality mapping $J^* : X^* \to X$ is the inverse of $J$; that is, $J^*J = I_X$ and $JJ^* = I_{X^*}$, where $I_X$ and $I_{X^*}$ are the identity maps on $X$ and $X^*$. On the other hand, in a general Banach space $X$ with dual $X^*$, the normalized duality mapping $J : X \to 2^{X^*}$ is a set-valued mapping with nonempty values. In particular, if $X^*$ is strictly convex, then $J : X \to X^*$ is a single-valued mapping. See [37]. The following example will be used repeatedly in this work.

Example 2.1. Let $X = \mathbb{R}^3$ be equipped with the 3-norm $\|\cdot\|_3$ defined, for any $z = (z_1, z_2, z_3) \in X$, by

$$\|z\|_3 = \big( |z_1|^3 + |z_2|^3 + |z_3|^3 \big)^{1/3}.$$

Then $(X, \|\cdot\|_3)$ is a uniformly convex and uniformly smooth Banach space (and is not a Hilbert space). The dual space of $(X, \|\cdot\|_3)$ is $(X^*, \|\cdot\|_{3/2})$, so that for any $\psi = (\psi_1, \psi_2, \psi_3)$, we have

$$\|\psi\|_{3/2} = \big( |\psi_1|^{3/2} + |\psi_2|^{3/2} + |\psi_3|^{3/2} \big)^{2/3}.$$

We shall now recall useful notions of projections in Banach spaces.

Definition 2.2. Let $X$ be a uniformly convex and uniformly smooth Banach space, let $X^*$ be the dual of $X$, and let $C$ be a nonempty, closed, and convex subset of $X$.
null
[ "https://export.arxiv.org/pdf/2303.00071v2.pdf" ]
257,254,975
2303.00071
ca42483c18d318d276021ddf5105736fe6353c52
Dual and Generalized Dual Cones in Banach Spaces

Akhtar A. Khan · Dezhou Kong · Jinlu Li

7 Mar 2023

Keywords: Generalized projection · metric projection · dual cones · faces and visions in Banach spaces

Mathematics Subject Classification (2010): 41A10 · 41A50 · 47A05 · 58C06

Abstract. This paper proposes and analyzes the notion of dual cones associated with the metric projection and generalized projection in Banach spaces. We show that the dual cones, related to the metric projection and generalized metric projection, lose many important properties in transitioning from Hilbert spaces to Banach spaces. We also propose and analyze the notions of faces and visions in Banach spaces and relate them to the metric projection and generalized projection. We provide many illustrative examples to give insight into the given results.

1. Introduction

Dual cones, induced by the metric projections, have a simple structure and valuable properties in the setting of Hilbert spaces. The derivations of such properties heavily exploit the underlying Hilbertian structure. The Hilbertian structure also equips the metric projection with attractive features; see Zarantonello [1, Lemma 1.5]. However, during the last three decades, many important studies of the metric projection have been conducted in Banach spaces. This development is partly motivated by the real-world applications of the metric projection in optimization, approximation theory, inverse problems, variational inequalities, image processing, neural networks, machine learning, and others.
For an overview of these details and some of the related developments, see [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32] and the cited references. The primary objective of this research is to propose and analyze the notion of dual cones associated with the metric projection in Banach spaces. We note that the shortcomings of the metric projection in Banach spaces have resulted in important extensions, namely, the generalized projection and the generalized metric projection, which enjoy better properties in a Banach space framework; see [33,34,35,22,36]. We show that the dual cones, related to the metric projection and generalized metric projection, lose many properties in transitioning from Hilbert spaces to Banach spaces. We also propose and analyze the notions of faces and visions in Banach spaces and relate them to the metric projection and generalized projection. Illustrative examples are given. The contents of this paper are organized into five sections. After a brief introduction in Section 1, we recall, in Section 2, various notions of projections and give background material on the normalized duality mapping. Section 3 studies dual cones related to the metric projection, whereas the dual cones related to the generalized projection are studied in Section 4. Section 5 studies faces and visions in Banach spaces.

Within the setting of Definition 2.2, the two projections onto $C$ are as follows:

1. The metric projection $P_C : X \to C$ is a single-valued map that satisfies

$$\|x - P_C x\|_X = \inf_{y\in C} \|x - y\|_X, \quad \text{for any } x\in X. \tag{2}$$

2. The generalized projection $\pi_C : X^* \to C$ is a single-valued map that satisfies

$$V(\psi, \pi_C \psi) = \inf_{y\in C} V(\psi, y), \quad \text{for any } \psi\in X^*, \tag{3}$$

where $V(\psi, y) = \|\psi\|^2_{X^*} - 2\langle \psi, y \rangle + \|y\|^2_X$ is the Lyapunov functional commonly associated with the generalized projection.

The following result collects some of the basic properties of the metric projection defined above.

Proposition 2.3. Let $X$ be a uniformly convex and uniformly smooth Banach space and let $C$ be a nonempty, closed, and convex subset of $X$.

1. The metric projection $P_C : X \to C$ is a continuous map that enjoys the following variational characterization:

$$u = P_C(x) \ \Leftrightarrow \ \langle J_X(x - u), u - z \rangle \ge 0, \ \text{for all } z\in C. \tag{4}$$

2. The generalized projection $\pi_C : X^* \to C$ enjoys the following variational characterization: for any $\psi\in X^*$ and $y\in C$,

$$y = \pi_C(\psi) \ \Leftrightarrow \ \langle \psi - J_X y, y - z \rangle \ge 0, \ \text{for all } z\in C. \tag{5}$$

We will also need the following notions. Given a Banach space $X$, for any $u, v\in X$ with $u \neq v$, we write
(a) $[v, u] = \{ tv + (1-t)u : 0 \le t \le 1 \}$;
(b) $[v, u\lceil = \{ tu + (1-t)v : 0 \le t < \infty \}$;
(c) $\rceil u, v\lceil = \{ tu + (1-t)v : -\infty < t < \infty \}$.

The set $[v, u]$ is a closed segment with end points $u$ and $v$. The set $[v, u\lceil$ is a closed ray in $X$ with end point $v$ and direction $u - v$, which is a closed convex cone with vertex at $v$ and constitutes a special class of cones in $X$. The set $\rceil u, v\lceil$ is a line in $X$ passing through the points $v$ and $u$. We conclude this section by recalling the following result (see [38]):

Theorem 2.4. Let $X$ be a uniformly convex and uniformly smooth Banach space and let $C$ be a nonempty, closed, and convex subset of $X$. For any $y\in C$, let $x\in X\setminus C$ be such that $y = P_C x$. We define the inverse image of $y$ under the metric projection $P_C : X \to C$ by $P_C^{-1}(y) = \{ u\in X : P_C(u) = y \}$. Then $P_C^{-1}(y)$ is a closed cone with vertex at $y$ in $X$. However, $P_C^{-1}(y)$ is not convex, in general.
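As a numeric aid for these preliminaries, the sketch below evaluates the duality map of Example 2.1 and the Lyapunov functional $V$. For the $p$-norm on $\mathbb{R}^n$, the normalized duality map has the standard closed form $J(z) = \|z\|_p^{2-p} (\mathrm{sgn}(z_i)\,|z_i|^{p-1})_i$; here $p = 3$. This is an unofficial illustration, not part of the paper.

```python
# Numeric aids for Example 2.1 and Definition 2.2 in X = R^3 with the 3-norm.

import numpy as np

P = 3.0

def norm_p(z, p=P):
    return np.sum(np.abs(z) ** p) ** (1.0 / p)

def J(z, p=P):
    """Normalized duality map for the p-norm (standard closed form)."""
    z = np.asarray(z, dtype=float)
    return norm_p(z, p) ** (2 - p) * np.sign(z) * np.abs(z) ** (p - 1)

def V(psi, y, p=P):
    """Lyapunov functional V(psi, y) = ||psi||_q^2 - 2<psi, y> + ||y||_p^2."""
    q = p / (p - 1)                      # dual exponent, q = 3/2 here
    return norm_p(psi, q) ** 2 - 2 * np.dot(psi, y) + norm_p(y, p) ** 2

x = np.array([3.0, -2.0, -1.0])
print(J(x) * 36 ** (1 / 3))                           # -> [ 9. -4. -1.]
print(np.isclose(np.dot(J(x), x), norm_p(x) ** 2))    # <J(x), x> = ||x||^2
print(np.isclose(V(J(x), x), 0.0))                    # V(Jy, y) = 0 at y = x
```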
The dual cone has the following properties in Hilbert spaces (see Zarantonello [1]): (1) K ⊥ is a closed and convex cone in H with vertex at v. (2) K ⊥⊥ = coK. (3) If K is a closed and convex cone, then K ⊥ and K are dual cones of each other. (4) If K is a closed, convex and pointed cone, then P K is positive homogeneous and x, P K x = P K x 2 , for all x ∈ H. In this following, we extend the concept of a dual cone from Hilbert spaces to uniformly convex and uniformly smooth Banach spaces and derive their valuable properties. We will show that the properties (3) and (4) given above do not hold, in general, in Banach spaces. Definition 3.1. Let X be a Banach space, let the dual X * of X be strictly convex, and let K be a cone in X with vertex v. We define the dual cone with respect to the metric projection P by K ⊥ P = {x ∈ X : J(x − v), v − z ≥ 0, for all z ∈ K}.(6) The following result shows that K ⊥ P is a cone in X, and K ⊥ P and K have the same vertex v. Theorem 3.2. Let X be a Banach space, let the dual X * of X be strictly convex, and let K be a cone in X with vertex v. Then, the following statements hold: (a) K ⊥ P is a cone with vertex at θ in X. (b) If X is uniformly convex and uniformly smooth, then K ⊥ P is closed. (c) If X is uniformly convex and uniformly smooth and K is closed and convex, then K ⊥ P = P −1 K (v). (d) K ⊥ P is not convex. (e) K ⊆ (K ⊥ P ) ⊥ P . ( f ) K and K ⊥ P are not dual of each other. Proof. (a) For an arbitrary x ∈ K ⊥ P with x = v, and for any t > 0, by the homogeneity property of the normalized duality mapping J, we have J(v + t(x − v) − v), v − z = J(t(x − v)), v − z = t J(x − v), v − z ≥ 0, for all z ∈ K, implying that v + t(x − v) ∈ K ⊥ P , for all t > 0. Thus, K ⊥ P is a cone in X with vertex at v. (b) Under the additional hypothesis on X, J is continuous, which proves that K ⊥ P is closed. (c) By the basic variational principle of P K , for any given x ∈ X, we have P K (x) = v ⇔ J(x − v), v − z ≥ 0, for all z ∈ K.(7) Since (7) coincides with (6), we deduce that K ⊥ P = P −1 K (v). J(x − θ ) = 9 x 3 , −4 x 3 , −1 x 3 = 9 3 √ 36 , −4 3 √ 36 , −1 3 √ 36 . Next, we compute , and hence J(x − θ ), θ − αu =J(y − θ ), θ − αu = 1 3 √ 36 , −9 3 √ 36 , 4 3 √ 36 , −α(−25, −37, −77) = 0, for all αu ∈ K, α ∈ [0, ∞), which proves that y ∈ K ⊥ P . For h = 2 3 x + 1 3 y, we have h − θ = h = 7 3 , − 7 3 , 0 , yielding J(h − θ ) = 7 3 √ 4 6 (1, −1, 0). We now compute J(h − θ ), θ − αu = −14 3 √ 4α < 0, for every αu ∈ K, α ∈ (0, ∞), which proves that h = K ⊥ P . Thus, K ⊥ P is not convex. (e) Since K ⊥ P is a closed cone with vertex at θ , (K ⊥ P ) ⊥ P is a closed cone with vertex at θ . We will use the counterexample from (d). Recall that u = (−25, −37, −77) and K = [θ , u⌈. We showed that x = (3, −2, −1) ∈ K ⊥ P . Then, J(αu − θ ), θ − x = α Ju, −x < 0, which implies that αu / ∈ (K ⊥ P ) ⊥ P , for any αu ∈ K with α ∈ (0, ∞), which prove (e). Finally, ( f ) follows from (e). Proposition 3.3. Let X be a uniformly convex and uniformly smooth Banach space and let K be a closed, convex, and pointed cone in X. Then P K is positive homogeneous. In general, Jw, P K w = P K w 2 X , for w ∈ X.(8) Proof. For any t > 0, since K is a closed, convex and pointed cone in X, for any x ∈ X, we have J(tx − tP K x),tP K x − z = t 2 J(x − P K x), P K x − t −1 z ≥ 0, for all z ∈ K, and by appealing to the basic variational property of P K , this implies that P K (tx) = tP K x. We next construct a counterexample to prove (8). Let x = R 3 be as in Example 2.1. 
Let u = (−25, −37, −77) and K = [θ , u⌈. We take a point w = (−28, −35, −76). Then w − u = (3, −2, −1). By the proof of Theorem 3.2, we have J(w − u), u − αu = (1 − α) 9 3 √ 36 , −4 3 √ 36 , −1 3 √ 36 , (−25, −37, −77) = 0, for any αu ∈ K, α ∈ [0, ∞). By the basic variational principle, we have P K (w) = u. Next we calculate, w X = 3 28 3 + 35 3 + 76 3 < u X = 3 25 3 + 37 3 + 77 3 , which implies that | Jw, P K w | = | Jw, u | ≤ Jw X * u X = w X u X < u 2 X , which verifies (8). The proof is complete. Generalized dual cones with respect to the generalized projection π We now study the generalized dual cone of K for the generalized projection π. We first recall some properties of the inverse image of the vertex of a cone by the generalized projection π C in X * . Given a uniformly convex and uniformly smooth Banach space X with dual space X * and a cone K with vertex at v, we recall that π −1 C (v) = {ψ ∈ X * : π C (ψ) = v}.(9) Theorem 4.1. Let X be a uniformly convex and uniformly smooth Banach space with dual X * and let K be a closed and convex cone in X with vertex at v. Then, (a) π −1 K (v) is a closed and convex cone in X * with vertex at Jv. (b) π −1 π −1 K (v) (Jv) = K. Proof. (a) See Theorem 2.4. (b) For a fixed z ∈ K, we have ψ − Jv, v − z ≥ 0, for all ψ ∈ π −1 K (v), which, taking into account the identity J * = J −1 , implies that Jv − ψ, z − J * (Jv) ≥ 0, for all ψ ∈ π −1 K (v). By the basic variational principle for π π −1 K (v) , we obtain that z ∈ π −1 π −1 K (v) (Jv), for all z ∈ K, proving K ⊆ π −1 π −1 K (v) (Jv).(10) For the converse, for any z ∈ π −1 π −1 K (v) (Jv), we have π π −1 K (v) (z) = Jv. Appealing to the variational principle for π π −1 K (v) once again, we have Jv − ψ, z − J * (Jv) ≥ 0, for all ψ ∈ π −1 K (v), and hence ψ − Jv, v − z ≥ 0, for all ψ ∈ π −1 K (v) . We recall that for ℓ ∈ X * , we have ℓ ∈ π −1 K (v) ⇔ ℓ − Jv, v − x ≥ 0, for all x ∈ K.(11) For the given z ∈ π −1 π −1 K (v) (Jv), let y = z − P K z + v. Then, for any x ∈ K, we define w = x + P K z − v = v + (P K z − v) + (x − v). Since K is a closed and convex cone with vertex v, for all x ∈ K, we have that w ∈ K. By the variational principle for P K , we have J(y − v), v − x = J(z − P K z), v − x = J(z − P K z), P K z − w ≥ 0, for all x ∈ K,(12) which implies that P K y = v, that is, y ∈ P −1 K (v). Now, let ψ = J(y − v) + Jv.(13) Then ψ ∈ X * . By (12), we have ψ − Jv, v − x = J(y − v), v − x ≥ 0, for all x ∈ K.(14) By (11) and (14), we have ψ ∈ π −1 K (v). Since z ∈ π −1 π −1 K (v) (Jv), by the variational principle, we have Jv − γ, z − J * Jv ≥ 0, for all γ ∈ π −1 K (v), that is, γ − Jv, v − z ≥ 0, for all γ ∈ π −1 K (v), which, due to the containment ψ ∈ π −1 K (v), implies that ψ − Jv, v − z ≥ 0. Then, using (13) , we have J(y − v), v − z ≥ 0. Then, 0 ≥ J(y − v), z − v = J(z − P K (z), z − P K (z) + P K z − v) = z − P K z 2 + J(z − P K z), P K z − v which due to J(z − P K z), P K z − v ≥ 0 implies that z − P K z 2 = 0, that is, z = P K z ∈ K. Since z is arbitrary in π −1 π −1 K (v) (Jv), we obtain π −1 π −1 K (v) (Jv) ⊆ K. This, in view of (10), completes the proof. Definition 4.2. Let X be a uniformly convex and uniformly smooth Banach space with dual X * , and let K be a cone in X with vertex at v. We define the generalized dual cone of K in X * with respect to the generalized projection π by K ⊥ π = {ψ ∈ X * : ψ − Jv, v − z ≥ 0, for all z ∈ K}.(15) Theorem 4.3. Let X be a uniformly convex and uniformly smooth Banach space with dual X * , and let K be a cone in X with vertex at v. 
Theorem 4.3. Let X be a uniformly convex and uniformly smooth Banach space with dual X*, and let K be a cone in X with vertex at v. Then the following statements hold:
(a) K^⊥_π = π_K^{-1}(v).
(b) K^⊥_π is a closed and convex cone with vertex at Jv in X*.
(c) K and K^⊥_π are generalized dual of each other: K = (K^⊥_π)^⊥_π.

Proof. (a) By the basic variational principle for π_K, for any ψ ∈ X* and v ∈ K, we have

v = π_K(ψ) ⟺ ⟨ψ − Jv, v − z⟩ ≥ 0, for all z ∈ K,

which, due to the definition of K^⊥_π and (9), at once implies (a).

(b) Since π_K^{-1}(v) is a closed and convex cone with vertex at Jv in X*, (b) follows at once from (a). Finally, (c) follows from (a) and Theorem 4.1.

Corollary 4.4. Let X be a uniformly convex and uniformly smooth Banach space, and let C and K be closed and convex cones in X with a common vertex at v satisfying C ∩ K = {v}. Then,
(a) C ⊆ K ⟺ C^⊥_π ⊇ K^⊥_π.
(b) (C ∩ K)^⊥_π = co(C^⊥_π ∪ K^⊥_π).

Proof. (a) The proof of C ⊆ K ⇒ C^⊥_π ⊇ K^⊥_π is evident. The converse follows from part (c) of Theorem 4.3.

(b) It follows at once that C ∩ K is a closed and convex cone in X with vertex v. By (a), the inclusion C ∩ K ⊆ C implies that (C ∩ K)^⊥_π ⊇ C^⊥_π and the inclusion C ∩ K ⊆ K implies that (C ∩ K)^⊥_π ⊇ K^⊥_π, and hence (C ∩ K)^⊥_π ⊇ C^⊥_π ∪ K^⊥_π. However, since (C ∩ K)^⊥_π is a closed and convex cone with vertex at Jv, it follows that (C ∩ K)^⊥_π ⊇ co(C^⊥_π ∪ K^⊥_π). By (c) of Theorem 4.3 and (a), we have

C ∩ K = ((C ∩ K)^⊥_π)^⊥_π ⊆ (co(C^⊥_π ∪ K^⊥_π))^⊥_π. (16)

On the other hand, from C^⊥_π ⊆ co(C^⊥_π ∪ K^⊥_π) and K^⊥_π ⊆ co(C^⊥_π ∪ K^⊥_π), we have C = (C^⊥_π)^⊥_π ⊇ (co(C^⊥_π ∪ K^⊥_π))^⊥_π and K = (K^⊥_π)^⊥_π ⊇ (co(C^⊥_π ∪ K^⊥_π))^⊥_π. Thus, by (16), we have

C ∩ K = ((C ∩ K)^⊥_π)^⊥_π ⊆ (co(C^⊥_π ∪ K^⊥_π))^⊥_π ⊆ C ∩ K,

which proves the desired identity: since (C ∩ K)^⊥_π and co(C^⊥_π ∪ K^⊥_π) are both closed and convex cones with vertex at Jv, the result follows by using Theorem 4.3.

The following result can be proved in an analogous fashion.

Corollary 4.5. Let X be a uniformly convex and uniformly smooth Banach space, and let {K_λ : λ ∈ Λ} be a set of closed and convex cones with a common vertex at v such that ∩_{λ∈Λ} K_λ = {v}. Then

(∩_{λ∈Λ} K_λ)^⊥_π = co(∪_{λ∈Λ} (K_λ)^⊥_π),

where Λ is an arbitrary given index set.

5 Faces and visions in Banach spaces

5.1 Faces in Banach spaces

Definition 5.1. Let X be a Banach space with dual X* and let C be a nonempty, closed, and convex subset of X. For any ψ ∈ X*, we define the face of ψ on C by

F_C(ψ) = {y ∈ C : ⟨ψ, y⟩ = sup_{x∈C} ⟨ψ, x⟩}.

Remark 5.2. It is evident from the above definition that, for any ψ ∈ X*, the set F_C(ψ) is either empty or a closed and convex subset of C. Moreover, F_C(θ*) = C.

Before proceeding any further, we gather a few examples to illustrate the above notion.

Example 5.3. Let X = R³ be as in Example 2.1. We take u = (25, 37, 77) and let C = [θ, u⟩ = {tu ∈ R³ : t ≥ 0}.
(a) Let ψ = (−9, 4, 1) ∈ (R³)*. Then F_C(ψ) = C.
(b) Let ψ = (ψ₁, ψ₂, ψ₃) ∈ (R³)* with ψ_i ≤ 0, for i = 1, 2, 3, and ψ₁ + ψ₂ + ψ₃ < 0. Then F_C(ψ) = {θ}.
(c) Let ψ = (ψ₁, ψ₂, ψ₃) ∈ (R³)* with ψ_i ≥ 0, for i = 1, 2, 3, and ψ₁ + ψ₂ + ψ₃ > 0. Then F_C(ψ) = ∅.

Proof. (a) For ψ = (−9, 4, 1) ∈ (R³)*, we have

⟨ψ, tu⟩ = t ⟨(−9, 4, 1), (25, 37, 77)⟩ = 0, for any tu ∈ C, t ≥ 0,

which implies that tu ∈ F_C(ψ) for all tu ∈ C. Parts (b) and (c) can be proved analogously.

Example 5.4. Let (S, A, µ) be a measure space with µ(S) ≥ 1. For any p ∈ [1, ∞), let X = L^p(S) be the real Banach space of real functions defined on S with norm ‖·‖_p. For any given M > 0, let

C = {f ∈ L^p(S) : ‖f‖_p ≤ M}.

Then C is a nonempty, closed, and convex subset of L^p(S). For any A ∈ A with 1 ≤ µ(A) < ∞, let 1_A denote the characteristic function of A, which satisfies 1_A ∈ L^q(S) = (L^p(S))*, where p, q are such that 1/p + 1/q = 1. Then F_C(1_A) is a nonempty, closed, and convex subset of C such that

F_C(1_A) = {g ∈ C : ∫_A g(s) dµ(s) = M}. (17)
Proof. For any g ∈ C, if ∫_A g(s) dµ(s) = M, then

⟨1_A, g⟩ = ∫_S 1_A(s) g(s) dµ(s) = ∫_A g(s) dµ(s) = M. (18)

For any f ∈ C, we have

⟨1_A, f⟩ = ∫_S 1_A(s) f(s) dµ(s) = ∫_A f(s) dµ(s) ≤ ‖f‖_p ≤ M. (19)

Thus, (18) and (19) imply that

{g ∈ C : ∫_A g(s) dµ(s) = M} ⊆ F_C(1_A). (20)

For the converse, we define h on S by h(s) = (M/µ(A)) 1_A(s), for all s ∈ S. By 1 ≤ µ(A) < ∞, we have

‖h‖_p = M / µ(A)^{(p−1)/p} ≤ M,

which implies that h ∈ C. By 1_A ∈ (L^p(S))*, we have

⟨1_A, h⟩ = ∫_A 1_A(s) h(s) dµ(s) = ∫_A (M/µ(A)) 1_A(s) dµ(s) = M. (21)

By the above equation, it follows that for any g ∈ F_C(1_A) ⊆ C, we must have

M ≥ ‖g‖_p ≥ ⟨1_A, g⟩ ≥ ⟨1_A, h⟩ = M.

It follows that ⟨1_A, g⟩ = M, that is, ∫_A g(s) dµ(s) = M. This implies that

F_C(1_A) ⊆ {g ∈ C : ∫_A g(s) dµ(s) = M}. (22)

By combining (20) and (22), we get (17). By (21) and (20), we have h ∈ F_C(1_A), which shows that F_C(1_A) ≠ ∅. This proves the claim.

Example 5.5. For any p with 1 ≤ p < ∞, let X = ℓ^p be the real Banach space of real sequences with norm ‖·‖_p. For any given M > 0, let C = {x ∈ ℓ^p : ‖x‖_p ≤ M}. Then C is a nonempty, closed and convex subset of ℓ^p. For any positive integers m and n with n ≥ 1, we define ℓ^n_m ∈ (ℓ^p)* = ℓ^q by

(ℓ^n_m)_i = 1, for i = m, m + 1, . . ., m + n − 1, and 0 otherwise.

Then F_C(ℓ^n_m) is a nonempty, closed, and convex subset of C such that

F_C(ℓ^n_m) = {y = {y_i} ∈ C : Σ_{i=m}^{m+n−1} y_i = M}. (23)

Proof. We only need to show that F_C(ℓ^n_m) is nonempty. For this, we take z = {z_i} ∈ ℓ^p as follows: z_i = M/n, for i = m, m + 1, . . ., m + n − 1, and 0 otherwise. Then it is easy to verify that z ∈ C and z ∈ F_C(ℓ^n_m). The rest of the arguments are similar to the ones used in Example 5.4.

Lemma 5.6. Let X be a reflexive Banach space with dual space X* and let C be a closed, convex and bounded set in X. Then, for each ψ ∈ X*, F_C(ψ) is a nonempty, closed, and convex subset of C.

Proof. Since C is weakly compact, for any ψ ∈ X*, the function ⟨ψ, ·⟩ attains its maximum value on C. That is, there is y ∈ C such that ⟨ψ, y⟩ = max_{x∈C} ⟨ψ, x⟩. This implies that y ∈ F_C(ψ). The set F_C(ψ) is clearly closed and convex.

Theorem 5.7. Let X be a reflexive Banach space with dual X* and let C be a nonempty, closed, and convex set in X. Then
(a) For any u ∈ X, F_C(Ju) = {y ∈ C : y = P_C(u + y)} = {y ∈ C : y = π_C(Ju + Jy)}.
(b) For any ψ ∈ X*, F_C(ψ) = {y ∈ C : y = P_C(J*ψ + y)} = {y ∈ C : y = π_C(ψ + Jy)}.

Proof. (a) For an arbitrary z ∈ C, by the basic variational principle for P_C, we have

z ∈ F_C(Ju) ⟺ 0 ≤ ⟨Ju, z − x⟩, for all x ∈ C
⟺ 0 ≤ ⟨J(u + z − z), z − x⟩, for all x ∈ C
⟺ z = P_C(u + z)
⟺ z ∈ {y ∈ C : y = P_C(u + y)}.

This proves the first equality in (a). To prove the second equality, for any z ∈ C, by the basic variational principle of π_C, we have

z ∈ F_C(Ju) ⟺ 0 ≤ ⟨Ju, z − x⟩, for all x ∈ C
⟺ 0 ≤ ⟨Ju + Jz − Jz, z − x⟩, for all x ∈ C
⟺ z = π_C(Ju + Jz)
⟺ z ∈ {y ∈ C : y = π_C(Ju + Jy)},

which proves the second equality in (a).

(b) For any ψ ∈ X*, we have J*ψ ∈ X; by substituting J*ψ for u ∈ X in (a) and noticing that JJ*ψ = ψ, (b) follows at once.
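In a Hilbert space J is the identity, so Theorem 5.7(a) says that the face F_C(u) consists exactly of the fixed points of y ↦ P_C(u + y). A minimal numerical illustration (our own construction, with C the unit box in Euclidean R², where P_C is coordinatewise clipping):

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Metric projection onto C = [lo, hi]^n under the Euclidean norm."""
    return np.clip(x, lo, hi)

u = np.array([1.0, 0.5])   # linear functional <u, .>; J = identity in Hilbert space
corners = [np.array(c, dtype=float) for c in [(0, 0), (1, 0), (0, 1), (1, 1)]]

best = max(corners, key=lambda c: np.dot(u, c))       # maximizer of <u, .> on the box
print(best, np.allclose(best, proj_box(u + best)))    # [1. 1.] True: y = P_C(u + y)

y = np.array([0.0, 1.0])                              # not a maximizer of <u, .>
print(np.allclose(y, proj_box(u + y)))                # False
```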
The conclusion of Theorem 5.7 can be described in the form of variational inequalities.

Corollary 5.8. Let X be a uniformly convex and uniformly smooth Banach space with dual X* and let C be a nonempty, closed, and convex set in X. Then
(a) For any u ∈ X, a point y ∈ C is a solution of the variational inequality ⟨Ju, y − x⟩ ≥ 0, for all x ∈ C, if and only if y is a solution of one of the following projection equations: y = P_C(u + y) or y = π_C(Ju + Jy).
(b) For any ψ ∈ X*, a point y ∈ C is a solution of the variational inequality ⟨ψ, y − x⟩ ≥ 0, for all x ∈ C, if and only if y is a solution of one of the following projection equations: y = P_C(J*ψ + y) or y = π_C(ψ + Jy).

5.2 Visions in Banach spaces

Definition 5.9. Let X be a Banach space with dual X* and let C ⊂ X be nonempty, closed, and convex.
(a) We define the vision F^{-1}_C(y) in X* of a point y ∈ C with respect to the background C by

F^{-1}_C(y) = {ψ ∈ X* : y ∈ F_C(ψ)} = {ψ ∈ X* : ⟨ψ, y⟩ = sup_{x∈C} ⟨ψ, x⟩}.

(b) We define the vision F^{-2}_C(y) in X of a point y ∈ C with respect to the background C by

F^{-2}_C(y) = {u ∈ X : y ∈ F_C(Ju)} = {u ∈ X : ⟨Ju, y⟩ = sup_{x∈C} ⟨Ju, x⟩}.

Lemma 5.10. Let X be a uniformly convex and uniformly smooth Banach space with dual X*, and let C be a nonempty, closed, and convex subset of X. Then, for any y ∈ C, we have F^{-2}_C(y) = J*(F^{-1}_C(y)) or, equivalently, F^{-1}_C(y) = J(F^{-2}_C(y)).

Proof. Since, in a uniformly convex and uniformly smooth Banach space X, J and J* are both one-to-one and onto mappings such that J*J = I_X and JJ* = I_{X*}, the conclusions are evident.

Proposition 5.11. Let X be a Banach space with dual X* and let C ⊂ X be nonempty, closed, and convex. Then, for any y ∈ C, we have
(a) θ* ∈ F^{-1}_C(y) and F^{-1}_C(y) ≠ ∅.
(b) If {θ*} ⊊ F^{-1}_C(y), then F^{-1}_C(y) is a closed and convex cone with vertex at θ*.

Proof. Since (a) is evident, we only prove (b). For any ψ ∈ F^{-1}_C(y) and t ≥ 0, we have

⟨tψ, y⟩ = t ⟨ψ, y⟩ = t sup_{x∈C} ⟨ψ, x⟩ = sup_{x∈C} ⟨tψ, x⟩,

which implies that tψ ∈ F^{-1}_C(y), and hence F^{-1}_C(y) is a cone with vertex at θ* in X*. For any ψ, φ ∈ F^{-1}_C(y) and any t ∈ [0, 1], by y ∈ C, we have

sup_{x∈C} ⟨tψ + (1 − t)φ, x⟩ ≥ ⟨tψ + (1 − t)φ, y⟩ = t ⟨ψ, y⟩ + (1 − t) ⟨φ, y⟩
= t sup_{x∈C} ⟨ψ, x⟩ + (1 − t) sup_{x∈C} ⟨φ, x⟩ = sup_{x∈C} ⟨tψ, x⟩ + sup_{x∈C} ⟨(1 − t)φ, x⟩ ≥ sup_{x∈C} ⟨tψ + (1 − t)φ, x⟩,

which implies that ⟨tψ + (1 − t)φ, y⟩ = sup_{x∈C} ⟨tψ + (1 − t)φ, x⟩, and hence tψ + (1 − t)φ ∈ F^{-1}_C(y), proving the desired convexity. Finally, we prove that F^{-1}_C(y) is closed in X*. Let {ψ_n} ⊆ F^{-1}_C(y) and ψ ∈ X* be such that ψ_n → ψ in X* as n → ∞. Then, for every x ∈ C,

⟨ψ, y⟩ = lim_{n→∞} ⟨ψ_n, y⟩ ≥ lim_{n→∞} ⟨ψ_n, x⟩ = ⟨ψ, x⟩,

which proves that ψ ∈ F^{-1}_C(y), and hence F^{-1}_C(y) is closed in X*.

Proposition 5.12. Let X be a Banach space with dual space X* and let C be a nonempty, closed, and convex subset of X. Then, for any y ∈ C, we have
(a) θ ∈ F^{-2}_C(y) and F^{-2}_C(y) ≠ ∅.
(b) If F^{-2}_C(y) ⊋ {θ}, then F^{-2}_C(y) is a closed cone with vertex at θ in X. In general, F^{-2}_C(y) is not convex.

Proof. We only prove that F^{-2}_C(y) is, in general, not convex. Let X = R³ be as in Example 2.1. Let y = (25, 37, 77) and define C = [θ, y] = {αy : 0 ≤ α ≤ 1}. We take x = (3, −2, −1) and z = (1, −3, 2). Then ‖x‖₃ = ‖z‖₃ = ∛36. As before, we compute ⟨Jx, y − αy⟩ = 0, for every αy ∈ C, α ∈ [0, 1]. Therefore, x ∈ F^{-2}_C(y). Analogously, we have z ∈ F^{-2}_C(y). We take h = (2/3)x + (1/3)z = (7/3, −7/3, 0). Proceeding as before, we have ⟨Jh, y − αy⟩ < 0, for every αy ∈ C, α ∈ [0, 1), proving that h ∉ F^{-2}_C(y). Thus, F^{-2}_C(y) is not convex, which proves the assertion.

Definition 5.13. Let X be a Banach space with dual X* and let C be a nonempty, closed, and convex set in X. For any y ∈ C, we define:
(a) If F^{-1}_C(y) = {θ*}, then y is called an internal point of C.
(b) If F^{-1}_C(y) ⊋ {θ*}, then y is called a cuticle point of C.
The collection of all internal points of C is denoted by J(C) and the collection of all cuticle points of C is denoted by C(C).

As a direct consequence of Proposition 5.11, we have the following result.

Corollary 5.14. Let X be a Banach space with dual X* and let C be a nonempty, closed, and convex set in X. Then {J(C), C(C)} is a partition of C. More precisely, we have C = J(C) ∪ C(C) and J(C) ∩ C(C) = ∅.
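The non-convexity example in Proposition 5.12 can also be checked numerically. A minimal sketch, reusing the duality mapping of Example 2.1 (note that here C = [θ, y] is a segment, so the supremum in the definition of F^{-2}_C(y) is max(0, ⟨Jw, y⟩)):

```python
import numpy as np

def J(x, p=3):
    # normalized duality mapping on (R^3, ||.||_3), as in Example 2.1
    return np.sign(x) * np.abs(x) ** (p - 1) / np.linalg.norm(x, p) ** (p - 2)

y = np.array([25.0, 37.0, 77.0])   # C = [theta, y] = {a*y : 0 <= a <= 1}
x = np.array([3.0, -2.0, -1.0])
z = np.array([1.0, -3.0, 2.0])
h = (2 * x + z) / 3                # convex combination (7/3, -7/3, 0)

for w in (x, z, h):
    # w lies in F^{-2}_C(y) iff <Jw, y> >= 0, i.e. iff <Jw, y> = max(0, <Jw, y>)
    print(np.dot(J(w), y))         # 0.0 for x and z (members); negative for h (not a member)
```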
Corollary 5.15. Let X be a uniformly convex and uniformly smooth Banach space with dual X* and let C be a nonempty, closed, and convex set in X. For any y ∈ C, we have
(a) y ∈ J(C) if and only if, for ψ ∈ X*, y = π_C(ψ + Jy) implies that ψ = θ*.
(b) y ∈ C(C) if and only if there is ψ ∈ X* with ψ ≠ θ* such that y = π_C(ψ + Jy).

An analogue of the above result can be given by using the metric projection P_C.

Corollary 5.16. Let X be a uniformly convex and uniformly smooth Banach space with dual X* and let C be a nonempty, closed, and convex set in X. For any y ∈ C, we have
(a) y ∈ J(C) if and only if, for ψ ∈ X*, y = P_C(J*ψ + y) implies that ψ = θ*.
(b) y ∈ C(C) if and only if there is ψ ∈ X* with ψ ≠ θ* such that y = P_C(J*ψ + y).

Next we give some examples to demonstrate the concepts of J(C) and C(C).

Corollary 5.17. Let X be a Banach space with dual X* and let C be a proper closed subspace of X. Then:
(a) J(C) = ∅.
(b) C(C) = C.

Proof. Since C is a proper closed subspace of X, by the Hahn-Banach theorem, there is ψ ∈ X* with ‖ψ‖_{X*} = 1 such that ⟨ψ, x⟩ = 0 for all x ∈ C. This implies that ψ ∈ F^{-1}_C(y) for all y ∈ C. Since ψ ≠ θ*, it follows at once that y ∈ C(C), for all y ∈ C. The claim then follows from Corollary 5.14.

In the following result, we use the closed and open balls and the unit sphere; see Section 2.

Proposition 5.18. Let X be a Banach space with dual X*. For r > 0, we have
(a) J(B(r)) = B⁰(r).
(b) C(B(r)) = S(r).
(c) For any y ∈ S(r), F^{-1}_{B(r)}(y) is a closed and convex cone with vertex at θ* and F^{-1}_{B(r)}(y) = ∪_{ψ∈Jy} [θ*, ψ⟩.
(d) If X* is strictly convex, then for any y ∈ S(r), we have F^{-1}_{B(r)}(y) = [θ*, Jy⟩.

Proof. (a) We first prove that θ ∈ J(B(r)). For any ψ ∈ X* with ‖ψ‖_{X*} > 0, there is x ∈ B(r) such that ⟨ψ, x⟩ ≠ 0. Since for x ∈ B(r) we also have −x ∈ B(r), it follows that one of ⟨ψ, x⟩ and ⟨ψ, −x⟩ is positive. By ⟨ψ, θ⟩ = 0, it follows that ψ ∉ F^{-1}_{B(r)}(θ), for any ψ ∈ X* with ‖ψ‖_{X*} ≠ 0. This implies F^{-1}_{B(r)}(θ) = {θ*}, and therefore θ ∈ J(B(r)). For any y ∈ B⁰(r) with 0 < ‖y‖_X < r, the proof of y ∈ J(B(r)) is divided into two cases.

Case 1: ψ ∈ X* with ‖ψ‖_{X*} > 0 satisfying ⟨ψ, y⟩ = 0. Then there is z ∈ B(r), with −z ∈ B(r), such that ⟨ψ, z⟩ ≠ 0. It follows that one of ⟨ψ, z⟩ and ⟨ψ, −z⟩ is positive. Then,

ψ ∉ F^{-1}_{B(r)}(y), for any ψ ∈ X* with ‖ψ‖_{X*} ≠ 0 and ⟨ψ, y⟩ = 0. (24)

Case 2: ψ ∈ X* with ‖ψ‖_{X*} > 0 satisfying ⟨ψ, y⟩ ≠ 0. By 0 < ‖y‖_X < r, there are positive numbers s and t with t > 1 > s > 0 such that t‖y‖_X < r. Then ty, sy ∈ B⁰(r) ⊂ B(r). We have max{⟨ψ, ty⟩, ⟨ψ, sy⟩} > ⟨ψ, y⟩. By ty, sy ∈ B(r), we deduce that

ψ ∉ F^{-1}_{B(r)}(y), for any ψ ∈ X* with ‖ψ‖_{X*} ≠ 0 and ⟨ψ, y⟩ ≠ 0. (25)

Combining (24) and (25), for any y ∈ B⁰(r) with 0 < ‖y‖_X < r, we have ψ ∉ F^{-1}_{B(r)}(y) for any ψ ∈ X* with ‖ψ‖_{X*} ≠ 0, which implies that y ∈ J(B(r)), for any y ∈ B⁰(r) with ‖y‖_X > 0. Combined with the containment θ ∈ J(B(r)), this proves (a).

(b) For any y ∈ S(r) and for any ψ ∈ Jy ⊆ X*, we have ‖ψ‖_{X*} = ‖y‖_X = r and ⟨ψ, y⟩ = r². Then,

⟨ψ, x⟩ ≤ ‖ψ‖_{X*} ‖x‖_X ≤ r² = ⟨ψ, y⟩, for any x ∈ B(r), (26)

which implies ψ ∈ F^{-1}_{B(r)}(y), for any y ∈ S(r). Since ψ ∈ X* with ψ ≠ θ*, we have y ∈ C(B(r)), for any y ∈ S(r). This, taking into account Remark 5.2, implies that C(B(r)) = S(r).

(c) From (26), we have

Jy = {ψ ∈ Jy} ⊆ F^{-1}_{B(r)}(y), for any y ∈ S(r). (27)

Next we show that, for any fixed y ∈ S(r),

[θ*, ψ⟩ ⊆ F^{-1}_{B(r)}(y), for any ψ ∈ Jy. (28)

From (26), for any t ≥ 0, we have

⟨tψ, y⟩ = t ⟨ψ, y⟩ ≥ t ⟨ψ, x⟩ = ⟨tψ, x⟩, for all x ∈ B(r), (29)

which implies that tψ ∈ F^{-1}_{B(r)}(y), for any t ≥ 0, which proves (28). Therefore,

∪_{ψ∈Jy} [θ*, ψ⟩ ⊆ F^{-1}_{B(r)}(y), for any y ∈ S(r). (30)

On the other hand, for y ∈ S(r) and any given ψ ∈ F^{-1}_{B(r)}(y) with ψ ≠ θ*, as in the proof of (28), we can show that

[θ*, ψ⟩ ⊆ F^{-1}_{B(r)}(y), for any ψ ∈ F^{-1}_{B(r)}(y) with ψ ≠ θ*. (31)

So we may assume that ‖ψ‖_{X*} = r. It follows that ‖J*ψ‖_X = r, which implies J*ψ ∈ S(r). By y ∈ S(r), ψ ∈ F^{-1}_{B(r)}(y) and J*ψ ∈ S(r), it follows that

r² ≥ ‖ψ‖_{X*} ‖y‖_X ≥ ⟨ψ, y⟩ ≥ ⟨ψ, J*ψ⟩ = ‖ψ‖²_{X*} = r².
This implies ‖ψ‖_{X*} = ‖y‖_X = r and ⟨ψ, y⟩ = r². Hence ψ ∈ Jy. We have established that ψ ∈ [θ*, ψ⟩ ⊆ ∪_{ψ∈Jy} [θ*, ψ⟩, implying

F^{-1}_{B(r)}(y) ⊆ ∪_{ψ∈Jy} [θ*, ψ⟩, for any y ∈ S(r). (32)

By combining (30) and (32), we complete the proof of (c).

(d) It follows at once from (c) under the additional hypothesis on X.

The following result connects generalized dual cones with the notion of visions.

Theorem 5.19. Let X be a uniformly convex and uniformly smooth Banach space and let K be a closed and convex cone in X with vertex at v. Then,

K^⊥_π = π_K^{-1}(v) = Jv + F^{-1}_K(v). (33)

Proof. By Theorem 4.3, we have K^⊥_π = π_K^{-1}(v). Thus, we only need to prove the second equality in (33). For any ψ ∈ X*, we have

ψ ∈ K^⊥_π ⟺ ⟨ψ − Jv, v − z⟩ ≥ 0, for all z ∈ K ⟺ ⟨ψ − Jv, v⟩ = sup_{z∈K} ⟨ψ − Jv, z⟩ ⟺ ψ − Jv ∈ F^{-1}_K(v) ⟺ ψ ∈ Jv + F^{-1}_K(v),

and the proof is complete.

Remark 5.20. Equation (33) reexamines the following results:
(i) K^⊥_π is a closed and convex cone with vertex at Jv in X* (Theorem 4.3).
(ii) F^{-1}_K(v) is a closed and convex cone with vertex at θ* in X* (Proposition 5.11).
References

[1] Zarantonello, E.H.: Projections on convex sets in Hilbert space and spectral theory. I. Projections on convex sets, pp. 237-341 (1971)
[2] Balashov, M.V., Golubev, M.O.: About the Lipschitz property of the metric projection in the Hilbert space. J. Math. Anal. Appl. 394(2), 545-551 (2012)
[3] Balestro, V., Martini, H., Teixeira, R.: Convex analysis in normed spaces and metric projections onto convex bodies. J. Convex Anal. 28(4), 1223-1248 (2021)
[4] Bauschke, H.H.: The composition of projections onto closed convex sets in Hilbert space is asymptotically regular. Proc. Amer. Math. Soc. 131(1), 141-146 (2003)
[5] Borodin, P.A., Druzhinin, Y.Y., Chesnokova, K.V.: Finite-dimensional subspaces of L_p with Lipschitz metric projection. Mat. Zametki 102(4), 514-525 (2017)
[6] Bounkhel, M.: Generalized projections on closed nonconvex sets in uniformly convex and uniformly smooth Banach spaces. J. Funct. Spaces, Art. ID 478437, 7 pp. (2015)
[7] Brown, A.L.: On lower semi-continuous metric projections onto finite dimensional subspaces of spaces of continuous functions. J. Approx. Theory 166, 85-105 (2013)
[8] Brosowski, B., Deutsch, F.: Some new continuity concepts for metric projections. Bull. Amer. Math. Soc. 78, 974-978 (1972)
[9] Kien, B.T.: On the metric projection onto a family of closed convex sets in a uniformly convex Banach space. Nonlinear Anal. Forum 7(1), 93-102 (2002)
[10] Burusheva, L.S.: An example of a Banach space with non-Lipschitzian metric projection on any straight line. Mat. Zametki 109(2), 196-205 (2021)
[11] Cheney, W., Goldstein, A.A.: Proximity maps for convex sets. Proc. Amer. Math. Soc. 10, 448-450 (1959)
[12] Chidume, C.E., Li, J.L.: Projection methods for approximating fixed points of Lipschitz suppressive operators. PanAmer. Math. J. 15(1), 29-39 (2005)
[13] Dentcheva, D.: On differentiability of metric projections onto moving convex sets. Optimization with data perturbations, II, pp. 283-298 (2001)
[14] Deutsch, F., Lambert, J.M.: On continuity of metric projections. J. Approx. Theory 29(2), 116-131 (1980)
[15] Dutta, S., Shunmugaraj, P., Thota, V.: Uniform strong proximinality and continuity of metric projection. J. Convex Anal. 24(4), 1263-1279 (2017)
[16] Gwinner, J., Jadamba, B., Khan, A.A., Raciti, F.: Uncertainty Quantification in Variational Inequalities. CRC Press (2021)
[17] Indumathi, V.: Semi-continuity properties of metric projections. In: Nonlinear Analysis, Trends Math., pp. 33-59. Birkhäuser/Springer, New Delhi (2014)
[18] Fitzpatrick, S., Phelps, R.R.: Differentiability of the metric projection in Hilbert space. Trans. Amer. Math. Soc. 270(2), 483-501 (1982)
[19] Kong, D., Liu, L., Li, J., Wu, Y.: Isotonicity of the metric projection with respect to the mutually dual orders and complementarity problems. Optimization 71(16), 4855-4877 (2022)
[20] Kroó, A., Pinkus, A.: On stability of the metric projection operator. SIAM J. Math. Anal. 45(2), 639-661 (2013)
[21] Li, J.L.: The metric projection and its applications to solving variational inequalities in Banach spaces. Fixed Point Theory 5(2), 285-298 (2004)
[22] Li, J.L.: On the existence of solutions of variational inequalities in Banach spaces. J. Math. Anal. Appl. 295(1), 115-126 (2004)
[23] Li, J.L., Zhang, C., Ma, X.: On the metric projection operator and its applications to solving variational inequalities in Banach spaces. Numer. Funct. Anal. Optim. 29(3-4), 410-418 (2008)
[24] Nakajo, K.: Strong convergence for the problem of image recovery by the metric projections in Banach spaces. J. Nonlinear Convex Anal. 23(2), 357-376 (2022)
[25] Ošman, E.V.: Čebyšev sets and the continuity of metric projection. Izv. Vysš. Učebn. Zaved. Matematika 1970(9 (100)), 78-82 (1970)
[26] Penot, J.P.: Continuity properties of projection operators. J. Inequal. Appl. (5), 509-521 (2005)
[27] Penot, J.P., Ratsimahalo, R.: Characterizations of metric projections in Banach spaces and applications. Abstr. Appl. Anal. 3(1-2), 85-103 (1998)
[28] Qiu, Y., Wang, Z.: The metric projections onto closed convex cones in a Hilbert space. J. Inst. Math. Jussieu 21(5), 1617-1650 (2022)
[29] Ricceri, B.: More on the metric projection onto a closed convex set in a Hilbert space. In: Contributions in Mathematics and Engineering, pp. 529-534. Springer, Cham (2016)
[30] Shapiro, A.: Differentiability properties of metric projections onto convex sets. J. Optim. Theory Appl. 169(3), 953-964 (2016)
[31] Shang, S., Zhang, J.: Metric projection operator and continuity of the set-valued metric generalized inverse in Banach spaces. J. Funct. Spaces, Art. ID 7151430, 8 pp. (2017)
[32] Zhang, Z., Zhou, Y., Liu, C.: Continuity of generalized metric projections in Banach spaces. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 113(1), 95-102 (2019)
[33] Alber, Y.I.: Generalized projection operators in Banach spaces: properties and applications. In: Functional-Differential Equations, Funct. Differential Equations Israel Sem., vol. 1, pp. 1-21. Coll. Judea Samaria, Ariel (1993)
[34] Alber, Y.I.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, Lecture Notes in Pure and Appl. Math., vol. 178, pp. 15-50. Dekker, New York (1996)
[35] Khan, A.A., Li, J.L., Reich, S.: Generalized projection operators on general Banach spaces. J. Nonlinear Convex Anal. (at press) (2023)
[36] Li, J.L.: The generalized projection operator on reflexive Banach spaces and its applications. J. Math. Anal. Appl. 306(1), 55-71 (2005)
[37] Takahashi, W.: Nonlinear Functional Analysis. Fixed Point Theory and its Applications. Yokohama Publishers, Yokohama (2000)
[38] Khan, A.A., Li, J.L.: Approximating properties of metric and generalized metric projections in uniformly convex and uniformly smooth Banach spaces. Under review, pp. 1-14 (2023)
[]
[ "Valid Information Guidance Network for Compressed Video Quality Enhancement", "Valid Information Guidance Network for Compressed Video Quality Enhancement" ]
[ "Xuan Sun [email protected] \nBOE Technology Group Co., LTD\n\n", "Ziyue Zhang \nBOE Technology Group Co., LTD\n\n", "Guannan Chen \nBOE Technology Group Co., LTD\n\n", "Dan Zhu \nBOE Technology Group Co., LTD\n\n" ]
[ "BOE Technology Group Co., LTD\n", "BOE Technology Group Co., LTD\n", "BOE Technology Group Co., LTD\n", "BOE Technology Group Co., LTD\n" ]
[]
In recent years, deep learning methods have shown great superiority in compressed video quality enhancement tasks. Existing methods generally take the raw video as the ground truth and extract practical information from consecutive frames containing various artifacts. However, they do not fully exploit the valid information of compressed and raw videos to guide the quality enhancement of compressed videos. In this paper, we propose a unique Valid Information Guidance scheme (VIG) to enhance the quality of compressed videos by mining valid information from both compressed videos and raw videos. Specifically, we propose an efficient framework, the Compressed Redundancy Filtering (CRF) network, to balance speed and enhancement. After removing the redundancy by filtering the information, CRF can use the valid information of the compressed video to reconstruct the texture. Furthermore, we propose a progressive Truth Guidance Distillation (TGD) strategy, which does not need additional teacher models or distillation loss functions. By only using the ground truth as input to guide the model to aggregate the correct spatio-temporal correspondence across the raw frames, TGD can significantly improve the enhancement effect without extra training cost. Extensive experiments show that our method achieves state-of-the-art performance in compressed video quality enhancement in terms of accuracy and efficiency. Figure 1: The valid information of the compressed patches and the raw patches (raw vs. compressed patches from a frame of the Kimono sequence).
10.48550/arxiv.2303.00520
[ "https://export.arxiv.org/pdf/2303.00520v1.pdf" ]
257,255,003
2303.00520
f7d3f9cda0857d53db37262c7063a5466d596021
Valid Information Guidance Network for Compressed Video Quality Enhancement

Xuan Sun ([email protected]), Ziyue Zhang, Guannan Chen, Dan Zhu
BOE Technology Group Co., LTD

March 2, 2023

Abstract. In recent years, deep learning methods have shown great superiority in compressed video quality enhancement tasks. Existing methods generally take the raw video as the ground truth and extract practical information from consecutive frames containing various artifacts. However, they do not fully exploit the valid information of compressed and raw videos to guide the quality enhancement of compressed videos. In this paper, we propose a unique Valid Information Guidance scheme (VIG) to enhance the quality of compressed videos by mining valid information from both compressed videos and raw videos. Specifically, we propose an efficient framework, the Compressed Redundancy Filtering (CRF) network, to balance speed and enhancement. After removing the redundancy by filtering the information, CRF can use the valid information of the compressed video to reconstruct the texture. Furthermore, we propose a progressive Truth Guidance Distillation (TGD) strategy, which does not need additional teacher models or distillation loss functions. By only using the ground truth as input to guide the model to aggregate the correct spatio-temporal correspondence across the raw frames, TGD can significantly improve the enhancement effect without extra training cost. Extensive experiments show that our method achieves state-of-the-art performance in compressed video quality enhancement in terms of accuracy and efficiency.

Figure 1: The valid information of the compressed patches and the raw patches (raw vs. compressed patches from a frame of the Kimono sequence).

Introduction

Video data typically undergo a video encoding process for efficient transmission. However, existing block-based coding frameworks [17,22] often use inaccurate quantization and motion compensation techniques that lose high-frequency information and produce many compression artifacts, as shown in Figure 1. The subjective quality of compressed videos is notably reduced, especially at low bit-rates. We calculate the mean value of the pixel variance of each 64×64 patch of the 18 standard test sequences [1] in Table 1. Significantly, the mean-variance of the raw video is larger than that of the compressed video under the Low-Delay (LD) configuration. In other words, the compressed frames contain less valid information than the raw frames. Moreover, low-quality compressed videos severely impact downstream tasks (e.g., classification, detection, segmentation). With the mighty computing power of modern terminals, the quality of the transmitted video can be enhanced at the decoding end without increasing the amount of transmitted data. This greatly boosts the subjective quality of the decoded video under limited transmission bandwidth. Accordingly, it is critical to study compressed video quality enhancement (VQE) tasks. Most current works [25,6,3,29,4,27,10,20,24] focus on designing huge model structures to improve the quality of compressed videos.
However, these works generally follow the design ideas of video enhancement tasks, such as video super-resolution reconstruction, and are not fully designed according to the characteristics of compressed videos. Some works [25,6,3] introduce cost-efficient network architectures to reduce the computational burden and required memory. Although these works achieve a balance between speed and effect, specially designed or complicated architectures may pose challenges for implementation on hardware devices. Knowledge distillation is a model compression method that can assist the training process of a student network with complementary knowledge. It can significantly improve the performance of a model without changing its structure. PISR [9] is a well-known knowledge distillation method for image super-resolution reconstruction. It inherits the general knowledge distillation approach and needs to design a vast teacher model to assist the student model, which significantly increases the training cost. To our best knowledge, there is no knowledge distillation method for compressed video enhancement. In this paper, we propose a novel Valid Information Guidance (VIG) scheme to accomplish the compressed video enhancement task. The main idea of VIG is to fully exploit the valid cues in compressed and raw videos to guide video reconstruction. In particular, we first develop an effective Compressed Redundancy Filtering (CRF) network to capture the most valid context by excluding redundant content. In addition, we devise a counter-intuitive Truth Guidance Distillation (TGD) strategy to enhance the reconstruction progressively by understanding the pixel distribution of the raw video. In summary, our contributions are as follows:

• We propose a novel Valid Information Guidance (VIG) scheme to model the spatio-temporal dependency for compressed video enhancement tasks.
• Comprehensive experiments are exhibited to explore the effect of the valid information flow between raw videos and compressed videos.
• VIG outperforms contemporary methods and demonstrates new state-of-the-art performance on the VQE benchmark dataset.

Related Work

Redundancy Filtering Method

Over the past decade, many studies [3,25,6,29,13] have proved that using inter-frame information has a crucial impact on the results of VQE. From the perspective of information extraction, VQE techniques can be divided into two categories: direct extraction methods and redundancy filtering methods. The direct extraction method aggregates information directly from the original input, as in STDF [3], MFQE2 [6] and RFDA [29]. Correspondingly, the redundancy filtering method compresses information before the primary aggregation. Down-sampling is common in classification, segmentation, and other high-level computer vision tasks [16,30,5,14] to generate high-level representations. Nevertheless, it is rarely used in image or video enhancement tasks because it may result in the loss of many details and affect the reconstruction quality. HUPN [18] proposes a framework that introduces the down-sampling operation into image super-resolution, representing the possibility of compressing information before the primary aggregation in image enhancement tasks. STDF [3] employs a module like Unet [16] to predict the offsets between compressed frames and does not consider the up-down sampling structure as the overall framework.
Note that compared with super-resolution images, the compressed video has more redundant information to filter. In this work, we propose the CRF module to effectively exploit spatio-temporal information by eliminating redundant content several times.

Valid Information Application Strategy

Generative self-supervised learning for computer vision has achieved tremendous progress. Several works [7,23,19,11,12] focus on improving downstream visual tasks by using the valid information in pre-text tasks. In detail, MAE [7] and SimMIM [23] replace a random subset of input tokens with a special MASK symbol, aiming to reconstruct the original image tokens from the corrupted image with Vision Transformers [5,14]. Subsequently, VideoMAE [19] proves that an extremely high masking ratio still yields favorable performance on videos. A2MIM [11] adopts a masked image modelling method on convolutional neural networks (CNNs). Further, MixMIM [12] finds that using the mask symbol causes significant training-finetuning inconsistency and replaces the masked tokens of one image with visible tokens of another image. Notably, all the above methods are unsuitable for VQE because there is an apparent gap between reconstructing a masked region and enhancing a compressed region [26]. CutBlur [26] proposes a data augmentation method that cuts a low-resolution patch and pastes it to the corresponding high-resolution image region. Nevertheless, it only cuts and pastes randomly in the image task and does not fully exploit the guidance of the ground truth in the video task. PISR [9] designs a distillation framework that uses ground-truth high-resolution images as privileged information, which is the work most similar to ours. However, it follows the traditional distillation method and designs an additional teacher network with an imitation loss, which significantly increases the training cost. Inspired by PISR [9], we propose a counter-intuitive knowledge distillation strategy that exploits the raw frames as input with little increase in computing cost.

Methodology

Overall Architecture

We train the model in two stages: pre-training and fine-tuning. Moreover, the overall architecture of VIG can be divided into two parts: the Truth Guidance Distillation (TGD) strategy and the Compressed Redundancy Filtering (CRF) network. We assume that X_i is the input frame and Y_i is the output frame at time i. A total of 2k + 1 frames are used to compute a video clip. The input frames are denoted by X_t = {X_{i−k}, X_{i−k+1}, . . ., X_i, . . ., X_{i+k−1}, X_{i+k}}, and the enhanced frames by Y_t = {Y_{i−k}, Y_{i−k+1}, . . ., Y_i, . . ., Y_{i+k−1}, Y_{i+k}}. The enhanced frames Y_t are generated by

Y_t = V_θ(X_t), (1)

where V_θ(·) represents the whole process of VIG and θ denotes the learnable parameters of VIG. Specifically, taking Y_i as an example, the enhancement process can be formalized as

(Y_i, F_{1,i}) = V_θ({X_i, X_{i+1}, X_{i+2}, X_{i+3}, F_{1,i−1}}), (2)

where F_{1,i−1} represents the hidden features of all enhanced frames generated by the RF module.
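To make the recurrence in Eqs. (1)-(2) concrete, the following is a minimal sketch of the sliding-window propagation, assuming a window of the current frame plus three future frames (as written in Eq. (2)) and a zero-initialized hidden state; `vig_step`, the hidden channel width, and the boundary padding (repeating the last frame) are our assumptions, not the authors' released code.

```python
import torch

def enhance_clip(frames, vig_step, hidden_channels=64):
    """Run VIG over a clip: frames is a list of (C, H, W) tensors.

    vig_step(window, hidden) -> (enhanced_frame, new_hidden) stands for one
    application of V_theta in Eq. (2): it consumes X_i..X_{i+3} plus F_{1,i-1}.
    """
    _, h, w = frames[0].shape
    hidden = torch.zeros(hidden_channels, h, w)   # F_{1,i-1} for the first frame
    enhanced = []
    for i in range(len(frames)):
        # pad the window at the clip boundary by repeating the last frame
        window = [frames[min(i + d, len(frames) - 1)] for d in range(4)]
        y_i, hidden = vig_step(window, hidden)    # (Y_i, F_{1,i}) per Eq. (2)
        enhanced.append(y_i)
    return enhanced
```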
TGD strategy

The goal of TGD is to generate new training samples by cutting and pasting random regions of f^GT into the corresponding f^C. We adopt a patch-based random masking TGD strategy. The input of the pre-training stage is defined as

X_i = M_{r,w} ⊙ f^{GT}_i + (1 − M_{r,w}) ⊙ f^{C}_i, (3)

where M_{r,w} is the selected region of the corresponding f^{GT}_i, r is the ratio of f^{GT}_i in X_i, w is the size of the mask in X_i, and ⊙ denotes element-wise multiplication. Figure 3 shows the progress of TGD. We cut f^{GT}_i into n patches according to w, and randomly replace several patches with the corresponding regions of f^{C}_i on the basis of r. We consider patch sizes of different resolution stages, from 2×2 to 32×32. Since the substituted tokens are chosen randomly in f^{GT}_i, TGD encourages the model to enjoy a regularization effect by learning the local and global relationships among pixels. Both r and w can be changed as needed. The value of r used in this paper is set to 0.9; in other words, the TGD strategy almost entirely uses GT as input. Due to the computational requirements of back-propagation, we only add a tiny ratio of f^{C}_i into GT. Moreover, we set w to 32, 16, 8, and 4 for four periods to progressively learn the pixel relationship from global to local. We train VIG for 5 thousand iterations in every period. Significantly, we only use TGD in the pre-training phase, so the input of the fine-tuning stage can be considered as

X_i = f^{C}_i. (4)

CRF Network

As shown in Figure 2, CRF contains a recurrent forward network. It consists of three parts: a Pre-extract module, a Redundancy Filtering (RF) module, and a Fusion module. The Pre-extract module is targeted at gaining the primitive features of the input frame X_i. For the primitive features, at the i-th timestamp, let P_α(·) denote the feature extract module, R_β(·) denote the RF module, and F_{0,i} represent the output of P_α(·). More specifically, the RF module receives the primitive features from the current and future frames {X_i, . . ., X_{i+k−1}, X_{i+k}} and generates the hidden features, exploiting all the information of the past frames {X_{i−k}, X_{i−k+1}, . . ., X_{i−1}}. Then we take advantage of F_γ(·), the Fusion module, to generate the output Y_i. The operation of CRF can be formalized as

F_{0,i} = P_α(X_i), (5)
F_{1,i} = R_β({F_{0,i}, F_{0,i+1}, F_{0,i+2}, F_{0,i+3}, F_{1,i−1}}), (6)
Y_i = F_γ(F_{1,i}) + X_i, (7)

where α, β and γ are the learnable parameters. To keep our design paradigm concise, P_α(·) and F_γ(·) are each implemented with a single convolution layer.

RF module

Figure 4: RF module. RF consists of three parts: the Temporal-Spatial Downsampling (TSD) block, the Temporal Downsampling (TD) block, and the Spatial Upsampling (SU) block.

As shown in Figure 4, we employ the TSD block D^{TSD}_δ(·) and the TD block D^{TD}_η(·) to filter the spatio-temporal redundancy, and then utilize the SU block U^{SU}_µ(·) to recover the resolution. The downsampling operation of the RF module can be formalized as

F_{TSD1,i} = D^{TSD}_{δ1}({F_{0,i}, F_{0,i+1}, F_{0,i+2}, F_{0,i+3}, F_{1,i−1}}), (8)
F_{TSD2,i} = D^{TSD}_{δ2}(F_{TSD1,i}), (9)
F_{TD,i} = D^{TD}_η(F_{TSD2,i}), (10)

where δ1, δ2, and η are the learnable parameters. The upsampling operation of the RF module can be described as

F_{SU1,i} = U^{SU}_{µ1}(F_{TD,i} + F_{TSD2,i}), (11)
F_{SU2,i} = U^{SU}_{µ2}(F_{TD,i} + F_{TSD2,i}), (12)

where µ1 and µ2 are the learnable parameters.
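The Figure 4 block diagram (summarized later in the text) suggests the compositions TSDB: Conv3d, pixel-unshuffle, two Conv2d; TDB: Conv3d, two Conv2d; SUB: Conv2d, pixel-shuffle, two Conv2d. The sketch below follows that reading; channel widths, kernel sizes, and activations are our assumptions and are not reported by the paper.

```python
import torch
import torch.nn as nn

class TSDBlock(nn.Module):
    """Temporal-Spatial Downsampling: fuse the time axis with Conv3d, then halve resolution."""
    def __init__(self, c_in=64, c_out=64, t=5):
        super().__init__()
        # t = temporal extent; 5 for Eq. (8) (four frame features + one hidden state),
        # 1 when reapplied to a single feature map as in Eq. (9) (x.unsqueeze(2))
        self.temporal = nn.Conv3d(c_in, c_in, kernel_size=(t, 3, 3), padding=(0, 1, 1))
        self.down = nn.PixelUnshuffle(2)                      # (C, H, W) -> (4C, H/2, W/2)
        self.refine = nn.Sequential(
            nn.Conv2d(4 * c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1))

    def forward(self, x):                                     # x: (B, C, T, H, W) with T == t
        y = self.temporal(x).squeeze(2)                       # collapse the temporal axis
        return self.refine(self.down(y))                      # (B, c_out, H/2, W/2)

class SUBlock(nn.Module):
    """Spatial Upsampling: recover resolution with PixelShuffle, per Eqs. (11)-(12)."""
    def __init__(self, c_in=64, c_out=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, 4 * c_out, 3, padding=1),
            nn.PixelShuffle(2),                               # (4C, H, W) -> (C, 2H, 2W)
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1))

    def forward(self, x):
        return self.body(x)
```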
Training Scheme

In both stages, we employ the Charbonnier loss £ [2] to train the model. In the first stage, using TGD, only the Charbonnier loss between the compressed patches and the corresponding raw patches is calculated. In this way, VIG can better learn the flow of valid information in GT. The pre-training loss can be formalized as

£ = √(((1 − M_{r,w}) ⊙ Y_i − (1 − M_{r,w}) ⊙ f^{GT}_i)² + ε), (13)

where ε is set to 10⁻⁶ here. In the second stage, following previous works, we use £ between Y_i and f^{GT}_i to optimize the model:

£ = √((Y_i − f^{GT}_i)² + ε). (14)

Note that only 20 thousand iterations are needed for the pre-training stage, and the fine-tuning stage costs another 100 thousand iterations in this paper.
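A compact sketch of the two training ingredients just described: the patch-wise mask M_{r,w} of Eq. (3) and the masked Charbonnier loss of Eq. (13). Tensor shapes and the assumption that the image sides are divisible by the patch size are ours; the paper only specifies the patch size w and the GT ratio r.

```python
import torch
import torch.nn.functional as F

def tgd_mask(h, w_img, patch=4, r=0.9):
    """Binary mask M_{r,w}: 1 keeps a GT patch, 0 inserts the compressed patch.

    Assumes h and w_img are divisible by the patch size.
    """
    gh, gw = h // patch, w_img // patch
    keep = (torch.rand(1, 1, gh, gw) < r).float()          # fraction r of patches stay GT
    return F.interpolate(keep, scale_factor=patch, mode="nearest")  # (1, 1, h, w_img)

def tgd_input(gt, comp, patch=4, r=0.9):
    """Eq. (3): mix ground-truth and compressed frames patch by patch."""
    m = tgd_mask(gt.shape[-2], gt.shape[-1], patch, r)
    return m * gt + (1 - m) * comp, m

def masked_charbonnier(pred, gt, m, eps=1e-6):
    """Eq. (13): Charbonnier loss evaluated only on the compressed patches."""
    diff = (1 - m) * (pred - gt)
    return torch.sqrt(diff ** 2 + eps).mean()
```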
Experiments

To show the effectiveness and superiority of VIG, we conduct meticulous experiments on the MFQE 2.0 dataset, following [6,3,29]. In addition, we perform extensive ablation studies to analyze the importance of each component of VIG and to understand it comprehensively.

Datasets

The MFQE 2.0 dataset contains 126 videos with a large range of resolutions: SIF (352×240), CIF (352×288), NTSC (720×486), 4CIF (704×576), 240p (416×240), 360p (640×360), 480p (832×480), 720p (1280×720), 1080p (1920×1080), and WQXGA (2560×1600). 106 of these videos are selected for training and the rest for validation. For testing, we adopt the 18 standard test sequences of the Joint Collaborative Team on Video Coding (JCT-VC) [1] as the test set. This test set covers different scene conditions and can better verify the robustness of the different approaches widely used in developing the HEVC standard. All sequences are encoded in the HEVC Low-Delay-P (LDP) configuration, using HM 16.20 with four different Quantization Parameters (QPs), i.e., 22, 27, 32 and 37 [6].

Experimental Settings

VIG is implemented in the PyTorch framework and trained with NVIDIA GeForce RTX 3090 GPUs. We use the Adam optimizer [8] with β1 = 0.9, β2 = 0.99, and use Cosine Annealing [15] to decay the learning rate from 3×10⁻⁴ to 10⁻⁴. We randomly crop image clips from the raw videos and the corresponding compressed videos as training samples. The image size is 128×128, and the batch size is 32. We also adopt flipping and rotation as data augmentation strategies to further expand the dataset. We split each video in the training set into several video clips containing 13 frames. For evaluation, we only report quality enhancement on the Y-channel in YUV/YCbCr space. We use the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) [21] to evaluate the quality of images generated by the VQE methods.

Comparisons with State-of-the-art Methods

Quantitative Evaluation. We compare VIG with the following state-of-the-art VQE models: AR-CNN [4], DnCNN [27], RNAN [28], MFQE1.0 [25], MFQE2.0 [6], MRDN [13], and STDF [3]. The quantitative results are summarized in Table 2, and a speed and performance comparison is provided in Table 3. As shown in Table 2, VIG outperforms the existing state of the art on the MFQE2.0 dataset at all four QPs, which proves the robustness of VIG. Furthermore, VIG remarkably outperforms the other methods on most sequences, demonstrating its superior robustness and generalization capability. In particular, it obtains +10.67%/+0.08 dB higher ∆PSNR than STDF-R3 [3] and +6.41%/+0.05 dB higher ∆PSNR than MRDN [13] at QP 37. It is important to note that our training volume is 1/3 of that of STDF-R3. In addition, we use FLOPs/frame, i.e., FLOPs per output frame, to measure computational complexity. VIG follows a bidirectional propagation method, so it can simultaneously generate multiple frames corresponding to the input. Therefore, the FLOPs/frame of VIG is 56.9 GB, showing great competitiveness. The parameter weight of VIG is 1.74 MB.

Efficiency of VIG. We compare VIG with STDF-R3 [3] in Table 3, which is the fastest VQE model. Compared with STDF-R3, VIG achieves +10.67%/+0.08 dB higher ∆PSNR at a faster speed. Thanks to the design of UD, a great deal of computation is carried out on low-resolution feature maps. In addition, the output of VIG is multiple consecutive frames, so the inference efficiency is good.

Quality fluctuation. To demonstrate that our method can reduce quality fluctuation, we plot the PSNR curves of the FourPeople sequence in Figure 5. As can be seen, our VIG can effectively enhance all the frames and reduce the gap between high-quality and low-quality frames.

Qualitative Evaluation

Qualitative comparisons are shown in Figure 6. Our VIG recovers finer details and sharper edges than AR-CNN [4], RNAN [28], and STDF-R3 [3]. For instance, only VIG successfully recovers clearer texture on the floor, more distinct letters on the chair, and more precise window borders.

Ablation Study

To better understand VIG, we ablate each critical component and evaluate the performance on the MFQE2.0 dataset. In this part, various experiments illustrate the effectiveness of the proposed VIG. First, we validate the necessity of every component. Then, we investigate the impacts of different configurations of TGD.

Importance of components. We conduct several ablation studies, shown in Table 4, to validate the effectiveness of VIG and the necessity of every proposed module. We validate the necessity of the Temporal Fusion (TF) module, the Up-Down sampling (UD) module, and the TGD strategy. TF denotes the use of the 3D-CNN layer. UD represents the module based on the PixelShuffle and PixelUnshuffle operations in Figure 4. For better understanding, we start with the 'Base', which denotes a simplified structure based on several ordinary convolution layers. Benefiting from the UD module, each block only needs one TF module to achieve accurate temporal alignment; it is easier to align on high-level semantic features from which redundancy has been removed. The complete comparisons demonstrate that each proposed module brings a significant performance improvement. Including all essential modules, VIG obtains +33.87%/+0.21 dB higher ∆PSNR than Base. Notably, the performance gains +20.60%/+0.13 dB with UD and +7.79%/+0.06 dB with TGD, which suggests that valid information significantly influences VQE.

Components analysis of VIG

TGD accelerates convergence. The performance of TGD at every stage is investigated in Figure 7. We train the contrast experiments for 120 thousand iterations, including 20 thousand iterations for pre-training and 100 thousand iterations for fine-tuning. It is worth noting that the model trained with TGD performs better than the model trained without it at every 50 thousand iterations. Although the pre-training stage takes up only 16.67% of the total training cost, the model still shows a stable advantage throughout the fine-tuning stage.

Mask strategy analysis of TGD

In this part, we exhibit extensive experiments to comprehensively understand how the valid information of raw videos affects the final performance. We first explore the impact of mask sizes on TGD. Then, we investigate the effect of mask ratios. Finally, we present how the multi-mask strategy affects the final performance.

Mask sizes. Here we try to understand which size brings higher accuracy. As shown in Figure 8, we run experiments varying w in formula (3). Significantly, all the single-size mask strategies increase performance. Using too large a mask size (i.e., 16×16 or 32×32) may divert the network from paying attention to the recovery of details. Moreover, an improperly small size also hurts the final performance, as it may lead the model to ignore non-local features. As a result, 4×4 is the most appropriate size, which can take care of both local features and non-local semantic information.

Figure 9: The impacts of different mask ratios. The results of different mask ratios are studied given that the mask size is set to 4×4. For example, '0.9/0.807' means that r is set to 0.9 in formula (3) and PSNR increases by 0.807 dB. We also train the five contrast experiments for 170 thousand iterations, as in Figure 8.
Mask ratios. We also study how different mask ratios affect the effectiveness. The fine-tuning accuracy under multiple masking ratios is summarized in Figure 9. When the masked patch size of 4 is adopted, different ratios perform stably well over a broad range of masking ratios, from 0.3 to 0.9. We hypothesize that the raw pixel relations are valid enough; thus the network is forced to learn raw long-range connections even when a low ratio is used (e.g., 0.3). Note that the most appropriate ratio is 0.9. In other words, the model performs best when we use the raw video almost exclusively as input, which is counter-intuitive.

Multi mask strategy. As shown in Table 5, we set the mask size to 4 and the ratio to 0.9 as the basic training strategy, which is the best combination among the single-mask strategies in Figure 8. Then we use Cosine Annealing [15] to decay the learning rate from 3×10⁻⁴ to 10⁻⁴ during both the pre-training stage and the fine-tuning stage. Finally, we set the mask size to 32, 16, 8, and 4 when using Cosine Annealing. Experimental results show that the multi-mask training strategy performs best.

Table 5: Impact of Multi Mask. Ablation studies of the mask strategy are conducted to understand TGD better. ✓ means that VIG has the current setting.

    Method   ∆PSNR (dB)   ∆SSIM (×10⁻²)
    (i)      0.81         1.54
    (ii)     0.82         1.58
    VIG      0.83         1.60

Conclusion

In this paper, we propose a novel method to enhance compressed videos, whose main idea is to fully explore the valid clues from both compressed and raw videos. Specifically, CRF can capture the valid context of compressed videos by excluding redundant content. Furthermore, TGD is proposed to help models better understand the raw pixel distribution based on raw videos. Extensive experiments demonstrate that our method achieves superior performance over state-of-the-art methods. The proposed modules can also be easily adapted to existing multi-frame methods and video-related low-level tasks.

Figure 2: An overview of VIG. VIG generally follows a single propagation approach. Specifically, it consists of two parts: the Truth Guidance Distillation (TGD) strategy and the Compressed Redundancy Filtering (CRF) network. For TGD, the red box exploits the corresponding compressed image region, and the black box uses the corresponding areas of GT. For CRF, the Redundancy Filtering (RF) module is crucial to capture valid spatio-temporal information.

Figure 3: TGD strategy. Three red boxes are the compressed patches added into GT; the other boxes represent the raw patches.

Figure 5: PSNR curves of the HEVC baseline, STDF-R3 and our method on the FourPeople sequence at QP=37.

Figure 7: The impact of TGD.

Figure 8: The impacts of different mask sizes. The performances of different mask sizes are investigated on the premise that the ratio is selected as 0.9. For example, '4/0.807' denotes that w is set to 4×4 in formula (3) and PSNR increases by 0.807 dB. For simplicity, we train the contrast experiments for 170 thousand iterations, including 20 thousand iterations for the pre-training stage and 150 thousand iterations for the fine-tuning stage.

Table 1: The mean value of the pixel variance of the 18 standard test sequences [1]. The smaller mean-variance indicates flatter content and fewer textures retained within pixel patches.

    Mode        LD                   RAW
    QP          32     37     42     -
    Mean value  1099   1075   1032   1230
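For reference, the statistic reported in Table 1 (the mean over all 64×64 patches of the per-patch pixel variance, as described in the Introduction) can be reproduced with a few lines. Frame decoding is elided; the function below is a sketch of the statistic itself, assuming 2D Y-channel arrays as input.

```python
import numpy as np

def mean_patch_variance(frames, patch=64):
    """Mean of the per-patch pixel variance over all 64x64 patches (Table 1 statistic)."""
    variances = []
    for f in frames:                          # f: 2D array, Y-channel of one frame
        h, w = f.shape
        for i in range(0, h - h % patch, patch):
            for j in range(0, w - w % patch, patch):
                variances.append(f[i:i + patch, j:j + patch].var())
    return float(np.mean(variances))
```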
(Figure 4 diagram: TSDB = Conv3d, Pixel-unshuffle, Conv2d, Conv2d; TDB = Conv3d, Conv2d, Conv2d; SUB = Conv2d, Pixel-shuffle, Conv2d, Conv2d.)

Table 2: Quantitative results of ∆PSNR (dB) / ∆SSIM (×10⁻²) on test videos at 4 different QPs. The specific values for each sequence in the table are measured at QP 37. We show the average values of each method over the 4 QPs.

Table 3: Speed of VIG. Inference speed comparison between our method and STDF-R3. For a fair comparison, all methods are tested with 480p video on an NVIDIA RTX 3090. The results are reported in frames per second (FPS).

Figure 6: Qualitative comparisons on the MFQE2.0 dataset (panels: Compressed Frame, HEVC, AR-CNN, RNAN, STDF-R3, Ours, GT).

Table 4: Impact of components. Ablation studies of each component (TF, UD, TGD) are conducted to understand VIG better. ✓ means that VIG has the current module.

    Method     ∆PSNR (dB)   ∆SSIM (×10⁻²)
    (i) Base   0.62         1.18
    (ii)       0.66         1.20
    (iii)      0.76         1.45
    CRF        0.77         1.48
    VIG        0.83         1.60

References

[1] F. Bossen et al., "Common test conditions and software reference configurations," JCTVC-L1100, vol. 12, no. 7, 2013.
[2] P. Charbonnier, L. Blanc-Feraud, G. Aubert, and M. Barlaud, "Two deterministic half-quadratic regularization algorithms for computed imaging," in Proceedings of 1st International Conference on Image Processing, vol. 2. IEEE, 1994, pp. 168-172.
[3] J. Deng, L. Wang, S. Pu, and C. Zhuo, "Spatio-temporal deformable convolution for compressed video quality enhancement," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07, 2020, pp. 10 696-10 703.
[4] C. Dong, Y. Deng, C. C. Loy, and X. Tang, "Compression artifacts reduction by a deep convolutional network," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 576-584.
[5] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.
[6] Z. Guan, Q. Xing, M. Xu, R. Yang, T. Liu, and Z. Wang, "MFQE 2.0: A new approach for multi-frame quality enhancement on compressed video," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 3, pp. 949-963, 2019.
[7] K. He, X. Chen, S. Xie, Y. Li, P. Dollár, and R. Girshick, "Masked autoencoders are scalable vision learners," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 16000-16009.
[8] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[9] W. Lee, J. Lee, D. Kim, and B. Ham, "Learning with privileged information for efficient image super-resolution," in European Conference on Computer Vision. Springer, 2020, pp. 465-482.
[10] K. Li, B. Bare, and B. Yan, "An efficient deep convolutional neural networks model for compressed image deblocking," in 2017 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2017, pp. 1320-1325.
[11] S. Li, D. Wu, F. Wu, Z. Zang, K. Wang, L. Shang, B. Sun, H. Li, S. Li et al., "Architecture-agnostic masked image modeling: from ViT back to CNN," arXiv preprint arXiv:2205.13943, 2022.
[12] J. Liu, X. Huang, Y. Liu, and H. Li, "MixMIM: Mixed and masked image modeling for efficient visual representation learning," arXiv preprint arXiv:2205.13137, 2022.
[13] J. Liu, M. Zhou, and M. Xiao, "Deformable convolution dense network for compressed video quality enhancement," in ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 1930-1934.
[14] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin Transformer: Hierarchical vision transformer using shifted windows," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 10012-10022.
[15] I. Loshchilov and F. Hutter, "SGDR: Stochastic gradient descent with warm restarts," arXiv preprint arXiv:1608.03983, 2016.
[16] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234-241.
[17] G. J. Sullivan, J.-R. Ohm, W.-J. Han, and T. Wiegand, "Overview of the high efficiency video coding (HEVC) standard," IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 12, pp. 1649-1668, 2012.
[18] B. Sun, Y. Zhang, S. Jiang, and Y. Fu, "Hybrid pixel-unshuffled network for lightweight image super-resolution," arXiv preprint arXiv:2203.08921, 2022.
[19] Z. Tong, Y. Song, J. Wang, and L. Wang, "VideoMAE: Masked autoencoders are data-efficient learners for self-supervised video pre-training," arXiv preprint arXiv:2203.12602, 2022.
[20] T. Wang, M. Chen, and H. Chao, "A novel deep learning-based method of improving coding efficiency from the decoder-end for HEVC," in 2017 Data Compression Conference (DCC). IEEE, 2017, pp. 410-419.
[21] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
[22] T. Wiegand, G. J. Sullivan, G. Bjontegaard, and A. Luthra, "Overview of the H.264/AVC video coding standard," IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 560-576, 2003.
[23] Z. Xie, Z. Zhang, Y. Cao, Y. Lin, J. Bao, Z. Yao, Q. Dai, and H. Hu, "SimMIM: A simple framework for masked image modeling," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 9653-9663.
[24] R. Yang, M. Xu, T. Liu, Z. Wang, and Z. Guan, "Enhancing quality for HEVC compressed videos," IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 7, pp. 2039-2054, 2018.
[25] R. Yang, M. Xu, Z. Wang, and T. Li, "Multi-frame quality enhancement for compressed video," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6664-6673.
[26] J. Yoo, N. Ahn, and K.-A. Sohn, "Rethinking data augmentation for image super-resolution: A comprehensive analysis and a new strategy," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 8375-8384.
[27] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, "Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising," IEEE Transactions on Image Processing, vol. 26, no. 7, pp. 3142-3155, 2017.
[28] Y. Zhang, K. Li, K. Li, B. Zhong, and Y. Fu, "Residual non-local attention networks for image restoration," arXiv preprint arXiv:1903.10082, 2019.
[29] M. Zhao, Y. Xu, and S. Zhou, "Recursive fusion and deformable spatiotemporal attention for video compression artifact reduction," in Proceedings of the 29th ACM International Conference on Multimedia, 2021, pp. 5646-5654.
[30] Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, and J. Liang, "UNet++: Redesigning skip connections to exploit multiscale features in image segmentation," IEEE Transactions on Medical Imaging, vol. 39, no. 6, pp. 1856-1867, 2019.
Online Guaranteed Reachable Set Approximation for Systems with Changed Dynamics and Control Authority

Index Terms: reachability analysis, guaranteed reachability, safety-critical control, computation and control

Abstract: This work presents a method of efficiently computing inner and outer approximations of forward reachable sets for nonlinear control systems with changed dynamics and diminished control authority, given an a priori computed reachable set for the nominal system. The method functions by shrinking or inflating a precomputed reachable set based on prior knowledge of the system's trajectory deviation growth dynamics, depending on whether an inner approximation or outer approximation is desired. These dynamics determine an upper bound on the minimal deviation between two trajectories emanating from the same point that are generated on the nominal system using nominal control inputs, and by the impaired system based on the diminished set of control inputs, respectively. The dynamics depend on the given Hausdorff distance bound between the nominal set of admissible controls and the possibly unknown impaired space of admissible controls, as well as on a bound on the rate of change between the nominal and off-nominal dynamics. Because of its computational efficiency compared to direct computation of the off-nominal reachable set, this procedure can be applied to on-board fault-tolerant path planning and failure recovery. In addition, the proposed algorithm does not require convexity of the reachable sets, unlike our previous work, thereby making it suitable for general use. We raise a number of implementational considerations for our algorithm, and we present three illustrative examples, namely an application to the heading dynamics of a ship, a lower triangular dynamical system, and a system of coupled linear subsystems.

I. INTRODUCTION

Reachability analysis forms a fundamental part of dynamical system analysis and control theory, providing a means to assess the set of states that a system can reach under admissible control inputs at a certain point in time from a given set of initial states. Inner approximations of reachable sets are often used to attain a guaranteed estimate of the system's capabilities, while outer approximations can be used to verify that the system will not reach an unsafe state. Outer approximations find widespread applications in fault-tolerance analysis and formal verification [1], safe trajectory planning [2], and constrained feedback controller synthesis [3]. Methods for computing outer approximations of the reachable set include polynomial overapproximation [4], direct set propagation [5], and viscosity solutions to Hamilton-Jacobi-Bellman (HJB) equations [6]. Inner approximations of reachable sets have received comparatively less attention than outer approximations [7], but have recently seen use in path-planning problems with collision avoidance [8], as well as in viability kernel computation [9], which can in turn be used for guaranteed trajectory planning [10]. Another application lies in safe set determination, in which one aims to obtain an inner approximation of the maximal robust control invariant set [11].
Methods for determining inner approximations of reachable sets have been based on various principles, including polynomial inner approximation of the nonlinear system dynamics using interval calculus [12], ellipsoid calculus [13], and viscosity solutions to HJB equations [14]. One major drawback of these methods is that they are computationally intensive and often only suitable for systems of low dimension, making them ill-suited for online use.

In this work, we consider the problem of obtaining meaningful approximations of the reachable set of an off-nominal system by leveraging available a priori information on the nominal system dynamics. Here, we consider the reachable set of the nominal system, or an inner/outer approximation thereof, to be known prior to the system's operation. While obtaining reachable sets is often a computationally intensive task, it is often done during the design phase of a system, where computation times are less of a concern [15]. We then consider a change in the dynamics of the system, for example due to partial system failure, which turns the nominal system into the off-nominal system. Our goal is to obtain inner and outer approximations of the off-nominal reachable sets based on the nominal reachable set, in a way that can be applied in real time. In [16], the case in which the system experiences diminished control authority was considered, i.e., its set of admissible control inputs has shrunk with respect to that of the nominal system. In addition, stringent restrictions on the family of systems that could be considered were made in [16], in particular due to the requirement that the reachable sets of the nominal and off-nominal systems be convex. This ultimately limits the applicability of the theory presented there. Here, we do not impose any demands on the convexity of the reachable set, while still presenting an algorithm that can be run in real time. This generalization to nonconvex sets requires a significant shift in the way we reason about the minimum deviation between trajectories of the nominal and off-nominal systems. In this work, we also consider a change in the system dynamics, and we provide methods for obtaining tighter inner and outer approximations of the off-nominal reachable set than the theory of [16] provides. The latter improvement follows from the fact that the theory in [16] expresses the trajectory deviation through the norm of the states, whereas here we consider the deviation separately in each dimension of the state. To obtain inner and outer approximations of the off-nominal reachable set, we assume that an upper bound on the minimal rate of change of the trajectory deviation between the off-nominal system's trajectories and those of the nominal system is known, with both trajectories emanating from the same point. These growth dynamics provide an upper bound on the minimal rate of change between two trajectories emanating from the same point, with one trajectory generated by the nominal set of control inputs, and the other by the off-nominal set of control inputs and the off-nominal dynamics. An upper bound on these growth dynamics can be obtained analytically during the design phase, and it allows us to obtain an inner approximation of the off-nominal system's reachable set at low computational cost, in an online manner.
While other methods have been proposed to compute reachable sets under system impairment, due to their computational complexity these have either used reduced-order models or been limited to offline applications [17]. In more extreme cases of system impairment, such as those where very little is known about the system's present capabilities, more conservative methods for computing reachable sets exist [18]. Here, we present a general algorithm that yields guaranteed inner and outer approximations, given limited knowledge of the failure modes as expressed by a bound on the trajectory deviation growth dynamics. We leverage the fact that the off-nominal system dynamics are related to the nominal system's dynamics, allowing us to reuse reachable sets computed for the nominal system, unlike in [18]. Given a sufficiently tight deviation growth bound, our approach can be applied online to high-dimensional systems with no additional computational cost for the growth in system dimension.

The paper is organized as follows. First, we present preliminary theory in Section II. Then, we present our main results in Section III, followed by a simulation example involving the heading dynamics of a ship, as well as two general scalable system examples, in Section IV. Finally, we draw conclusions in Section V. In Appendix I, we present a slightly more relaxed set of assumptions under which the presented theory continues to hold.

II. PRELIMINARIES

In the following, we denote by $\|\cdot\|$ the Euclidean norm. Given two sets $A, B \subseteq \mathbb{R}^n$, we denote by $A \oplus B$ the Minkowski sum $\{a + b : a \in A, b \in B\}$. We denote a ball centered at the origin with radius $r > 0$ by $\mathcal{B}_r$, and write $\mathcal{B}(x, r) := \{x\} \oplus \mathcal{B}_r$. We denote by $\times$ the Cartesian product, and define $\mathbb{R}_+ := [0, \infty)$. We define the distance between two sets $A, B \subseteq \mathbb{R}^n$ as

$$d_h(A, B) := \sup_{a \in A} \inf_{b \in B} d(a, b), \tag{1}$$

where $d$ is the Euclidean metric. We denote the Hausdorff distance by

$$d_H(A, B) := \max\{d_h(A, B), d_h(B, A)\}. \tag{2}$$

An alternative characterization of the Hausdorff distance reads [19, pp. 280-281]:

$$d_H(A, B) = \inf\{\epsilon \ge 0 : A \subseteq B^{+\epsilon},\ B \subseteq A^{+\epsilon}\}, \tag{3}$$

where $A^{+\epsilon}$ denotes the $\epsilon$-fattening of $A$, i.e., $A^{+\epsilon} := \bigcup_{a \in A} \{x \in \mathbb{R}^n : \|x - a\| \le \epsilon\}$. Given a point $x \in \mathbb{R}^n$ and a set $A \subseteq \mathbb{R}^n$, we write $d(x, A) := \inf_{a \in A} d(x, a)$. We denote by $\partial A$ the boundary of $A$ in the topology induced by the Euclidean norm. For a function $f: X \to Y$, we denote by $f^{-1}$ the inverse of this function if one exists, and by $\mathrm{dom}(f)$ the domain of the function (in this case $X$). We denote a multifunction by $F: X \rightrightarrows Y$, where $F$ maps elements of $X$ to subsets of $Y$. Given a multifunction $F$, we define a differential inclusion as the set of ordinary differential equations $\dot{x} \in F(x)$ with velocities in $F(x)$. Given two vectors $x, y \in \mathbb{R}^n$, we denote by $x \preceq y$ a component-wise nonstrict inequality, i.e., $x_i \le y_i$ for $i = 1, \dots, n$. By $|x|$ we denote the component-wise absolute value, such that $|x|_i = |x_i|$. Given a set $A$, we denote its cardinality by $\#A$. We use the abbreviation 'a.e.' (almost every) for statements that are true everywhere except possibly on zero-measure sets. Given a real value $x \in \mathbb{R}$, we denote its ceiling by $\lceil x \rceil = \min([x, \infty) \cap \mathbb{N})$. For a closed interval $[a, b] \subseteq \mathbb{R}$, we denote its length by $\mathrm{length}([a, b]) := b - a$. Given matrices $A^{(1)}, \dots, A^{(k)}$, with $A^{(i)} \in \mathbb{R}^{n_i \times m_i}$ for each $i = 1, \dots, k$, we define $\mathrm{diag}(\{A^{(1)}, \dots, A^{(k)}\})$ to be the block diagonal matrix formed by these matrices.
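As a concrete illustration of the metrics (1) and (2), the following brute-force sketch (ours, not part of the paper; function names hypothetical) evaluates the directed distance and the Hausdorff distance for finite point samples of A and B.

```python
import numpy as np

def directed_distance(A: np.ndarray, B: np.ndarray) -> float:
    """d_h(A, B) = sup_{a in A} inf_{b in B} ||a - b||, for finite samples, cf. (1)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return D.min(axis=1).max()

def hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """d_H(A, B) = max{d_h(A, B), d_h(B, A)}, cf. (2)."""
    return max(directed_distance(A, B), directed_distance(B, A))

# Example: the interval [-2, 2] versus the shrunken interval [-1.9, 1.9];
# the exact Hausdorff distance between these sets is 0.1.
A = np.linspace(-2.0, 2.0, 81).reshape(-1, 1)
B = np.linspace(-1.9, 1.9, 77).reshape(-1, 1)
print(hausdorff(A, B))  # 0.1
```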
A. Problem Statement

Consider a dynamical system of the form

$$\dot{x}(t) = f(x(t), u(t)), \quad x(0) = x_0, \tag{4}$$

where $t \ge 0$, $x \in \mathbb{R}^n$ is the state, and $u \in \mathcal{U} \subseteq \mathbb{R}^m$ is the control input, with $\mathcal{U}$ some admissible set of control inputs. The dynamics have the form $f: \mathbb{R}^n \times \mathcal{U} \to \mathbb{R}^n$. We refer to these dynamics as the 'nominal' dynamics. We consider an impairment in the system dynamics, as well as in the system's control authority, such that $\bar{u}(t) \in \bar{\mathcal{U}} \subseteq \mathcal{U} \subseteq \mathbb{R}^m$. The modified dynamics then read:

$$\dot{\bar{x}}(t) = \bar{f}(\bar{x}(t), \bar{u}(t)), \quad \bar{x}(0) = x_0. \tag{5}$$

We refer to these modified dynamics as the 'off-nominal' dynamics.

Definition 1 (Forward reachable set). We define a function $u: \mathbb{R}_+ \to \mathcal{U}$ as an admissible input signal if a unique solution to (4) exists given that input signal. The set of admissible control signals is defined as all possible admissible input signals $\mathcal{U}_{[\,]} := \{u : \mathbb{R}_+ \to \mathcal{U}\}$. We define a trajectory $\phi: \mathbb{R}_+ \times \mathbb{R}^n \times \mathcal{U}_{[\,]} \to \mathbb{R}^n$ such that $\phi(t) = \phi(t \mid x_0, u)$ satisfies (4) given initial state $x(0) = x_0 \in \mathbb{R}^n$ and input signal $u(t) \in \mathcal{U}$, i.e., $\phi(t \mid x_0, u) := x_0 + \int_0^t f(\phi(s), u(s))\,\mathrm{d}s$. From the dynamics of (4), we define a multifunction $F(t, x) := f(x, \mathcal{U}): \mathbb{R}_+ \times \mathbb{R}^n \rightrightarrows \mathbb{R}^n$. This multifunction defines an ordinary differential inclusion $\dot{x}(t) \in F(t, x(t))$, of which any instance of (4) is a part. We define the solution set of this ordinary differential inclusion as follows: $\mathcal{S}(X_0) := \{\phi(\cdot \mid x_0, u) : x_0 \in X_0, u \in \mathcal{U}_{[\,]}\}$. Given a set of initial states $X_0 \subseteq \mathbb{R}^n$, we define the forward reachable set (FRS) at time $t \in \mathbb{R}_+$ as

$$\mathcal{R}^{\to}(t, X_0) := \bigcup_{x_0 \in X_0} \{x(t) : x \in \mathcal{S}(x_0)\} = \{\phi(t \mid x_0, u) : x_0 \in X_0, u \in \mathcal{U}_{[\,]}\}.$$

We consider the following main problem, comprised of two parts: one relating to obtaining inner approximations of reachable sets, and the other concerned with obtaining outer approximations of reachable sets. In this work, we treat both the case of impaired control authority and that of changed dynamics simultaneously.

Problem 1 (Off-nominal FRS approximation). Given the nominal dynamics $\dot{x}(t) = f(x(t), u(t))$, the off-nominal dynamics $\dot{\bar{x}}(t) = \bar{f}(\bar{x}(t), \bar{u}(t))$, a set of admissible control inputs $\mathcal{U}$, (an inner (outer) approximation of) the forward reachable set $\mathcal{R}^{\to}$ at time $T$, and the corresponding initial set of states $X_0$, find an inner (outer) approximation of the reachable set $\bar{\mathcal{R}}^{\to}$ at time $T$ for the dynamics $\dot{\bar{x}}(t) = \bar{f}(\bar{x}(t), \bar{u}(t))$ and the admissible control inputs $\bar{\mathcal{U}} = h(\mathcal{U})$, for some control mapping $h: \mathcal{U} \to \bar{\mathcal{U}}$.

As mentioned in the introduction, inner approximations of the off-nominal reachable set are useful for safety-critical control, when guaranteed reachability is demanded. However, when dealing with collision avoidance, outer approximations of the off-nominal reachable set of a moving target are needed. This justifies the need for two separate approximation objectives.
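Definition 1 also suggests a simple, non-guaranteed way to visualize a forward reachable set: sample initial states and piecewise-constant admissible input signals, and integrate (4). The sketch below is our illustration with hypothetical names and a stable linear test system standing in for f; it under-approximates $\mathcal{R}^{\to}(t, X_0)$ only in the sampling sense and carries none of the guarantees developed in Section III.

```python
import numpy as np

def sample_frs(f, X0_samples, U_low, U_high, t_final, n_inputs=2000,
               n_segments=20, rng=np.random.default_rng(0)):
    """Monte Carlo under-sampling of the forward reachable set R->(t, X0).

    f          : callable f(x, u) -> dx/dt
    X0_samples : array (k, n) of initial states
    U_low/high : per-channel bounds defining the admissible input box U
    Returns an array of terminal states phi(t_final | x0, u)."""
    dt = t_final / (n_segments * 10)
    out = []
    for _ in range(n_inputs):
        x = X0_samples[rng.integers(len(X0_samples))].astype(float).copy()
        # Piecewise-constant admissible input signal, constant on each segment.
        U = rng.uniform(U_low, U_high, size=(n_segments, len(U_low)))
        for seg in range(n_segments):
            for _ in range(10):                  # forward Euler sub-steps
                x = x + dt * f(x, U[seg])
        out.append(x.copy())
    return np.array(out)

# Example with a stable linear system and U = [-1, 1].
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([0.0, 1.0])
pts = sample_frs(lambda x, u: A @ x + B * u[0],
                 np.zeros((1, 2)), [-1.0], [1.0], t_final=3.0)
print(pts.shape, pts.min(axis=0), pts.max(axis=0))
```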
B. Generalized Nonlinear Trajectory Deviation Growth Bound

As mentioned in the introduction, we wish to find an upper bound on the minimum normed distance between two trajectories emanating from the same point, one governed by (4) and the other by (8). We call this upper bound the trajectory deviation growth bound. To this end, we first consider a means of upper-bounding the norm of the solution of a given ordinary differential equation (ODE). This particular ODE describes the rate of change of the deviation between two trajectories, which we refer to as the trajectory deviation growth dynamics. We consider the following general nonlinear time-varying dynamics:

$$\dot{y}(t) = h(t, y(t), u(t)), \quad y(0) = y_0 \in \mathbb{R}^n. \tag{6}$$

Our goal is to find an upper bound on the magnitude of $y(t)$ over a finite period of time, given particular assumptions on the control input and the function $h$. We make the following assumption on the growth rate of $y$:

Assumption 1. For all $y \in \mathbb{R}^n$, $u \in \mathcal{U}$, $0 \le t < \infty$,

$$\|h(t, y, u)\| \le \lambda(t)\,\omega(\|y\|, \|u\|) + \psi(t),$$

where $\lambda$, $\psi$ are continuous and positive, and $\omega$ is continuous, monotonic, nondecreasing, and positive. In addition, $\omega$ is uniformly monotonically nondecreasing in $\|u\|$.

Theorem 1 (Extended Bihari inequality [16, Theorem 3.1]). Let $y(t)$ be a solution to the equation $\dot{y} = h(t, y, u)$, $0 \le t_0 \le t < \infty$, where $h(t, y, u): [t_0, \infty) \times \mathbb{R}^n \times \mathcal{U} \to \mathbb{R}^n$ is continuous for $0 \le t < \infty$, and $\mathcal{U} \subseteq \mathbb{R}^m$ is compact and satisfies $\max_{u \in \mathcal{U}} \|u\| = c$. Let Assumption 1 hold. Then,

$$\|y(t)\| \le G^{-1}\Big(G\Big(\|y(t_0)\| + \int_{t_0}^{t} \psi(s)\,\mathrm{d}s\Big) + \int_{t_0}^{t} \lambda(s)\,\mathrm{d}s\Big), \tag{7}$$

where the expression on the right-hand side is strictly increasing in $t$. In (7), we define $G(r) := \int_{r_0}^{r} \frac{\mathrm{d}s}{\omega(s, c)}$, $r > 0$, for arbitrary $r_0 > 0$ and for all $t \ge t_0$ for which it holds that $G\big(\|y(t_0)\| + \int_{t_0}^{t} \psi(s)\,\mathrm{d}s\big) + \int_{t_0}^{t} \lambda(s)\,\mathrm{d}s \in \mathrm{dom}(G^{-1})$.

Remark 1. Theorem 1 is a generalization of the Gronwall-Bellman inequality. It reduces to the Gronwall-Bellman inequality upon taking $\omega(s) = s$ and $G(r) = \log r$ (see [20, Remark 2.3.2, p. 109] for a more in-depth discussion).

Corollary 1. For any $x_0 \in X_0 \subseteq \mathbb{R}^n$, where $X_0$ is compact, and any initial time $t_{\mathrm{init}} \in [t_0, \infty)$ and final time $T \in [t_{\mathrm{init}}, \infty)$, consider a trajectory $x(t)$ satisfying $x(t_{\mathrm{init}}) = x_0$ and $\dot{x}(t) = f(t, x(t), u(t))$, with $u(t) \in \mathcal{U}$. Consider a trajectory $\bar{x}(t)$ with $\bar{x}(t_{\mathrm{init}}) = x_0$ and $\dot{\bar{x}}(t) = f(t, \bar{x}(t), \bar{u}(t))$, such that $\bar{u}(t) \in \bar{\mathcal{U}}$ satisfies $\sup_{t \in [t_{\mathrm{init}}, T]} \|u(t) - \bar{u}(t)\| \le \epsilon$. Let $\tilde{\lambda}$, $\tilde{\omega}$, $\tilde{\psi}$ apply, in the sense of Assumption 1, to the deviation dynamics of $\tilde{y} := \bar{x} - x$. Then

$$\|\tilde{y}(t)\| \le G^{-1}\Big(G\Big(\int_{t_0}^{t} \tilde{\psi}(s)\,\mathrm{d}s\Big) + \int_{t_0}^{t} \tilde{\lambda}(s)\,\mathrm{d}s\Big) =: \delta(t, \epsilon)$$

for all $t \in [t_{\mathrm{init}}, T]$.

Proof. Given the premise, this claim follows directly from Theorem 1. □

Corollary 2. Under the hypotheses of Corollary 1, if the initial deviation additionally satisfies $\|\tilde{y}(t_{\mathrm{init}})\| \le r$, then

$$\|\tilde{y}(t)\| \le G^{-1}\Big(G\Big(r + \int_{t_0}^{t} \tilde{\psi}(s)\,\mathrm{d}s\Big) + \int_{t_0}^{t} \tilde{\lambda}(s)\,\mathrm{d}s\Big) =: \delta(t, \epsilon, r)$$

for all $t \in [t_{\mathrm{init}}, T]$.

Proof. Similarly to Corollary 1, given the premise, this claim follows directly from Theorem 1. □

1) Generalization to off-nominal dynamics: We now consider the following nonlinear time-varying off-nominal dynamics:

$$\dot{\bar{y}}(t) = \bar{f}(t, \bar{y}(t), \bar{u}(t)), \quad \bar{y}(0) = y_0 \in \mathbb{R}^n, \tag{8}$$

which gives rise to the following assumption relating these unknown dynamics to the known nominal system dynamics:

Assumption 2. For all $y \in \mathbb{R}^n$, $u \in \mathcal{U}$, $0 \le t < \infty$, we have $\|\bar{f}(t, y, u) - f(t, y, u)\| \le \mu(t)$, where $\mu$ is a positive, continuous function on $[t_0, \infty)$.

This assumption gives rise to the following lemma.

Lemma 1. Let Assumption 2 hold true. Consider functions $\bar{\omega}$, $\bar{\lambda}$, $\bar{\psi}$, in the sense of Assumption 1, which apply to the following dynamics:

$$\dot{\tilde{y}}(t) = f(t, x(t) + \tilde{y}(t), u(t) + \tilde{u}(t)) - f(t, x(t), u(t)), \quad \tilde{y}(t_0) = 0.$$

We consider a nominal control signal $u \in \mathcal{U}_{[\,]}$, and an off-nominal control signal of the form $(u + \bar{u}) \in \mathcal{U}_{[\,]}$ such that $\sup_{\bar{u}(t),\, t \in [t_0, \infty)} \|\bar{u}(t)\| \le \epsilon$. In addition, consider the following off-nominal trajectory deviation dynamics:

$$\dot{\hat{y}}(t) = \bar{f}(t, x(t) + \hat{y}(t), u(t) + \hat{u}(t)) - f(t, x(t), u(t)), \quad \hat{y}(t_0) = 0.$$

Control signals $u$ and $\hat{u}$ are defined similarly to those of the nominal dynamics: we have a nominal control signal $u \in \mathcal{U}_{[\,]}$, and an off-nominal control signal of the form $(u + \hat{u}) \in \mathcal{U}_{[\,]}$ such that $\sup_{\hat{u}(t),\, t \in [t_0, \infty)} \|\hat{u}(t)\| \le \epsilon$. We define $\hat{\psi}(t) := \bar{\psi}(t) + \mu(t)$. We then have $\|\hat{y}(t)\| \le \delta(t, \epsilon)$, where $\delta(t, \epsilon)$ is obtained as in Corollary 1 with $\tilde{\lambda} = \bar{\lambda}$, $\tilde{\psi} = \hat{\psi}$, $\tilde{\omega} = \bar{\omega}$.

Proof. This result follows straightforwardly by application of the triangle inequality to the trajectory deviation growth dynamics and Corollary 1. □

III. MAIN RESULTS

We now present the main results of this paper: a method for inner and outer approximation of the forward reachable sets of off-nominal systems that are subject to both diminishment of control authority and changed dynamics. Unlike in [16], a new hyperrectangular version of the Bihari inequality is required to obtain inner and outer approximations of more general reachable sets. As will be specified later, the only requirements imposed on these reachable sets are that they be nonempty, compact, and connected. To construct a hyperrectangular Bihari inequality, we present a modified nonlinear bound on the deviation dynamics:

Definition 2 (($\lambda$, $\omega$, $\psi$)-bounded growth). We say that a function $h: [t_0, \infty) \times \mathbb{R}^n \times \mathcal{U} \to \mathbb{R}^n$ has ($\lambda$, $\omega$, $\psi$)-bounded growth if for all $y \in \mathbb{R}^n$, $u \in \mathcal{U}$, $0 \le t < \infty$, the following inequality holds for each $i \in \{1, \dots, n\}$:

$$|h_i(t, y, u)| \le \lambda_i(t)\,\omega_i(|y_i|, \|u\|) + \psi_i(t).$$

Lemma 2. Suppose the trajectory deviation dynamics of Corollary 1 have ($\tilde{\lambda}$, $\tilde{\omega}$, $\tilde{\psi}$)-bounded growth in the sense of Definition 2. Then, for each $i \in \{1, \dots, n\}$,

$$|\tilde{y}_i(t)| \le G_i^{-1}\Big(G_i\Big(\int_{t_0}^{t} \tilde{\psi}_i(s)\,\mathrm{d}s\Big) + \int_{t_0}^{t} \tilde{\lambda}_i(s)\,\mathrm{d}s\Big) =: \delta_i(t, \epsilon), \tag{9}$$

where $G_i(r) := \int_{r_0}^{r} \mathrm{d}s / \tilde{\omega}_i(s, c)$.

Proof. The proof follows by repeated application of Corollary 1 to each dimension. □
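A minimal numerical sketch of the bound in Theorem 1 and Corollary 1 follows (our illustration, not the paper's implementation). G is tabulated by quadrature on a logarithmic grid and inverted by interpolation, which mirrors the look-up-table strategy discussed later in Section IV-B; the choice ω(s, c) = s, which reduces the bound to the Gronwall case of Remark 1, and all names are assumptions.

```python
import numpy as np

def bihari_delta(omega, lam, psi, t_grid, r0=1e-9):
    """Evaluate delta(t) = G^{-1}( G(r0 + int psi) + int lam ) on t_grid,
    with G(r) = int_{r0}^{r} ds / omega(s), cf. Theorem 1 / Corollary 1.

    omega    : callable growth function omega(s) (c fixed and absorbed)
    lam, psi : callables of time, accepting arrays
    """
    # Tabulate G on a log-spaced grid and invert by interpolation (LUT).
    s = np.logspace(np.log10(r0), 3, 4000)
    G = np.concatenate(([0.0],
        np.cumsum(np.diff(s) / omega(0.5 * (s[1:] + s[:-1])))))
    # Running integrals of psi and lambda by the trapezoidal rule.
    int_psi = np.concatenate(([0.0], np.cumsum(np.diff(t_grid) * 0.5 *
                              (psi(t_grid[1:]) + psi(t_grid[:-1])))))
    int_lam = np.concatenate(([0.0], np.cumsum(np.diff(t_grid) * 0.5 *
                              (lam(t_grid[1:]) + lam(t_grid[:-1])))))
    y = np.interp(int_psi + r0, s, G) + int_lam     # G(int psi) + int lam
    return np.interp(y, G, s)                       # G^{-1} via the same LUT

# Sanity check against the Gronwall case omega(s) = s, lam = 1, psi = eps:
# for small r0, delta(t) is approximately eps * t * exp(t).
t = np.linspace(0.0, 2.0, 201)
d = bihari_delta(lambda s: s, lambda t: np.ones_like(t),
                 lambda t: 0.1 * np.ones_like(t), t)
print(d[-1], 0.1 * t[-1] * np.exp(t[-1]))   # close for small r0
```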
In what follows, we will equivalently express the bound (9) as $|\tilde{y}(t)| \preceq \delta(t, \epsilon)$, where $\delta(t, \epsilon) := \begin{bmatrix} \delta_1(t, \epsilon) & \cdots & \delta_n(t, \epsilon) \end{bmatrix}^{\top}$. Whereas Corollary 1 bounds the trajectory deviation by a ball, Lemma 2 provides a hyperrectangular trajectory deviation bound. This distinction is key in the theorem that follows. We first introduce two new definitions relating to the hyperrectangular trajectory deviation growth bound.

Definition 3 (Hyperrectangular fattening). Given a compact set $A \subseteq \mathbb{R}^n$, a hyperrectangular fattening by $r \in \mathbb{R}^n$ with $r \succeq 0$ is defined as:

$$A^{\boxplus r} := \bigcup_{a \in A} \{a\} \oplus \times_{i=1}^{n} [-r_i, r_i].$$

Definition 4 (Hyperrectangular distance). Given two compact sets $A, B \subseteq \mathbb{R}^n$, we denote by $\rho_R(A, B)$ their hyperrectangular distance, defined as follows. On each Cartesian axis $i = 1, \dots, n$, consider each point $a \in A$ and $b \in B$. We denote by $h_i(a, B)$ the following:

$$h_i(a, B) = \min\{|a_i - b_i| : b \in B\}.$$

If $B = \emptyset$, we say $h_i(a, B) = \infty$. We denote the hyperrectangular distance per axis as

$$\rho_{R,i}(A, B) = \max\Big\{\max_{a \in A} h_i(a, B),\ \max_{b \in B} h_i(b, A)\Big\}.$$

Combining each component, we define $\rho_R(A, B) := \begin{bmatrix} \rho_{R,1}(A, B) & \cdots & \rho_{R,n}(A, B) \end{bmatrix}^{\top}$.

Remark 2. The value of $h_i(a, B)$ shown above corresponds to the shortest distance from $a$, along a line from $a$ in the direction of the $i$-th coordinate axis, at which there exists a hyperplane with normal in that direction that intersects $B$ (see Fig. 1 for an illustration). If such an intersecting hyperplane does not exist, we say $h_i(a, B) = \infty$. We then have $\rho_{R,i}(A, B) = \max\{\max_{a \in A} h_i(a, B), \max_{b \in B} h_i(b, A)\}$, as shown in Fig. 1. From Definition 4 it trivially follows that for two compact sets $A, B \subseteq \mathbb{R}^n$, we have $A^{\boxplus r} \supseteq B$ and $B^{\boxplus r} \supseteq A$ if and only if $r \succeq \rho_R(A, B)$. This is analogous to the fattening-based characterization of the Hausdorff distance in (3).
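The following sketch (ours; names hypothetical) evaluates the componentwise hyperrectangular distance of Definition 4 on finite samples, and it illustrates why this distance can be much tighter per axis than the Euclidean Hausdorff distance when a set's extent differs strongly across dimensions.

```python
import numpy as np

def hyperrect_distance(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Componentwise hyperrectangular distance rho_R(A, B) of Definition 4,
    for finite samples A, B of shapes (k, n) and (l, n)."""
    # |a_i - b_i| for all pairs, shape (k, l, n).
    D = np.abs(A[:, None, :] - B[None, :, :])
    h_AB = D.min(axis=1).max(axis=0)   # max_a min_b |a_i - b_i|, per axis i
    h_BA = D.min(axis=0).max(axis=0)   # max_b min_a |a_i - b_i|, per axis i
    return np.maximum(h_AB, h_BA)

# Two axis-aligned boxes: [0,1] x [0,10] versus [0.1,0.9] x [1,9] (sampled).
rng = np.random.default_rng(1)
A = rng.uniform([0.0, 0.0], [1.0, 10.0], size=(2000, 2))
B = rng.uniform([0.1, 1.0], [0.9, 9.0], size=(2000, 2))
print(hyperrect_distance(A, B))   # approximately [0.1, 1.0]
```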
Before we can proceed, we must impose a number of mild conditions on the differential inclusion defined by dynamics (4) and (5), as well as on the initial set of states $X_0$. In particular, we wish to show that the reachable set $\mathcal{R}^{\to}(t, X_0)$ is connected. This property is key in proving the main result of this work. We first present a prerequisite lemma on the connectedness of the solution set of a differential inclusion. To this end, we require the definition of the following metric space, as well as two propositions that provide sufficient conditions for $F$ to produce solution sets with connected and compact values.

Definition 5. Let $W$ be the function space

$$W := \big\{x \in C([0, \infty), \mathbb{R}^n) : \dot{x} \in L^1_{\mathrm{loc}}([0, \infty), \mathbb{R}^n)\big\},$$

endowed with the distance metric

$$d_W(x, y) := \|x(0) - y(0)\| + \sum_{k=1}^{\infty} \frac{1}{2^k}\, \frac{\int_0^k \|\dot{x}(s) - \dot{y}(s)\|\,\mathrm{d}s}{1 + \int_0^k \|\dot{x}(s) - \dot{y}(s)\|\,\mathrm{d}s}. \tag{10}$$

Lemma 3. Consider the differential inclusion

$$\dot{x}(t) \in F(t, x(t)), \quad \text{a.e. } t \in [0, T], \quad x(0) = x_0 \in \mathbb{R}^n,$$

where $F: [0, \infty) \times \mathbb{R}^n \rightrightarrows \mathbb{R}^n$ is a compact-valued multifunction. Let $F(t, x)$ be path-connected for all $(t, x) \in \mathbb{R} \times \mathbb{R}^n$, and let $F(t, x)$ be continuous and Lipschitz in $t$ and $x$. Then for any $x_0 \in \mathbb{R}^n$, the set of solutions

$$\mathcal{S}(x_0) := \{x \in W : x(0) = x_0,\ \dot{x}(t) \in F(t, x(t)) \text{ a.e. } t \in [0, \infty)\}$$

is path-connected in the space $(W, d_W)$.

Remark 3. Since the conditions of Lemma 3 on $F$ will be required throughout this work, we consider the applicability of these conditions to commonly encountered classes of dynamical systems. We list two here:
1) Affine-in-control systems: Consider functions $g: [0, \infty) \times \mathbb{R}^n \to \mathbb{R}^n$ and $h: [0, \infty) \times \mathbb{R}^n \to \mathbb{R}^{n \times m}$ that form the differential equation $\dot{x}(t) = g(t, x(t)) + h(t, x(t))\,u(t)$. From this system, we can introduce a differential inclusion defined by the multifunction $F(t, x) := \{g(t, x)\} \oplus h(t, x)\,\mathcal{U}$, where $\mathcal{U} \subseteq \mathbb{R}^m$ is nonempty, compact, and path-connected. If $g$ and $h$ are continuous and Lipschitz in their arguments, then $F$ will satisfy the conditions of Lemma 3.
2) General nonlinear systems: For a dynamical system of the form (4), a sufficient condition for $F(t, x) := f(x, \mathcal{U})$ to satisfy the conditions of Lemma 3 is for $\mathcal{U}$ to be nonempty, compact, and path-connected, and for $f(x, u)$ to be continuous and Lipschitz in $x$ and $u$. Relaxations of the conditions of Lemma 3 are presented in Appendix I.

We proceed to prove that path-connectedness of $\mathcal{S}(x_0)$ implies that the values of $\mathcal{S}(x_0)(t) := \{x(t) : x \in \mathcal{S}(x_0)\}$ are connected for all $t \in [0, \infty)$.

Proposition 1 (Path-connectedness of $\mathcal{S}(x_0)(t)$). For a multifunction $F$ satisfying the hypotheses of Lemma 3, and some $x_0 \in \mathbb{R}^n$, the set $\mathcal{S}(x_0)(t) = \{x(t) : x \in \mathcal{S}(x_0)\}$ is path-connected for all $t \in [0, \infty)$.

Proof. We consider the case where $\#\mathcal{S}(x_0) > 1$; the case of a singleton is trivial. To show that the values of $\mathcal{S}(x_0)$ are path-connected, let us first note that for any $x, y \in \mathcal{S}(x_0)$, there exists a continuous path $\gamma: [0, 1] \to \mathcal{S}(x_0)$ such that $\gamma(0) = x$ and $\gamma(1) = y$. For all $t$, for any $p, q \in \mathcal{S}(x_0)(t)$, there exists at least one pair of functions $(x, y) \subseteq \mathcal{S}(x_0)$ such that $x(t) = p$, $y(t) = q$. Let $\gamma_{x,y}: [0, 1] \to \mathcal{S}(x_0)$ be a path that connects $x$ and $y$, which exists by path-connectedness of $\mathcal{S}(x_0)$. Then, $\gamma_{x,y}(\cdot)(t): [0, 1] \to \mathbb{R}^n$ is a path between $p$ and $q$. Hence, the values of $\mathcal{S}(x_0)(t)$ are path-connected for all $t \in [0, \infty)$. □

In Proposition 1, we have shown that the sets $\mathcal{S}(x_0)(t)$ for $t \in [0, \infty)$ are path-connected. In our main theorem, we will also require that these sets are compact and nonempty. The following proposition guarantees this.

Proposition 2 ($\mathcal{S}(x_0)$ forms a path-connected continuum). Let $F$ satisfy the hypotheses of Lemma 3. Then, for any $x_0 \in \mathbb{R}^n$, the solution set $\mathcal{S}(x_0)$ is a path-connected continuum in $W$, i.e., a nonempty, compact, path-connected subset of $W$. In addition, the sets $\mathcal{S}(x_0)(t)$ are path-connected continua in $\mathbb{R}^n$ for all $t \in [0, \infty)$.

Proof. The fact that $\mathcal{S}(x_0)$ is a nonempty compact subset of $W$ follows from [22, Thm. 5.1, p. 228]. It is clear that the values of $\mathcal{S}(x_0)$ are nonempty, since $\#\mathcal{S}(x_0) > 0$. Given some $t \in [0, \infty)$, we define a functional $E_t: W \to \mathbb{R}^n$ as $E_t(x) := x(t)$. If $E_t$ can be shown to be continuous, then $E_t$ is a preserving map in the sense of [23, p. 21], i.e., the image $E_t(\mathcal{S}(x_0)) = \mathcal{S}(x_0)(t) \subseteq \mathbb{R}^n$ preserves the compactness and path-connectedness of $\mathcal{S}(x_0)$. We proceed to show that $E_t$ is indeed a continuous linear functional; we require linearity to show that $E_t$ is uniformly continuous on all of $W$. To show that $E_t$ is linear, consider two functions $x, y \in W$ and a scalar $\alpha \in \mathbb{R}$; clearly $E_t(x + \alpha y) = x(t) + \alpha\,y(t) = E_t(x) + \alpha\,E_t(y)$, so it suffices to show that $E_t$ is continuous at any one $x \in W$. Continuity of the functional is shown next using the $\epsilon$-$\delta$ criterion [24, p. 52]: we show that for each $\epsilon > 0$, there exists some $\delta' > 0$ such that $\|E_t(x) - E_t(y)\| < \epsilon$ for all $y \in W$ such that $d_W(x, y) < \delta'$. To this end, let us define $I_k := \int_0^k \|\dot{x}(s) - \dot{y}(s)\|\,\mathrm{d}s$. By definition of the solution set, we have for any $x, y \in \mathcal{S}(x_0)$:

$$d_W(x, y) = \|x(0) - y(0)\| + \sum_{k=1}^{\infty} \frac{1}{2^k} \frac{I_k}{1 + I_k} = \sum_{k=1}^{\infty} \frac{1}{2^k} \frac{I_k}{1 + I_k} = \sum_{k=1}^{\infty} \frac{1}{2^k} \frac{1}{1 + I_k^{-1}},$$

since $x(0) = y(0) = x_0$. We know by path-connectedness of $\mathcal{S}(x_0)$ that for any $x \in \mathcal{S}(x_0)$, given some $\delta' > 0$, there exists $y \in \mathcal{S}(x_0) \setminus \{x\}$ such that $d_W(x, y) < \delta'$. In general, we can upper-bound the difference between the values of $x$ and $y$ at time $t$ as follows:

$$\|x(t) - y(t)\| = \Big\|x(0) + \int_0^t \dot{x}(s)\,\mathrm{d}s - y(0) - \int_0^t \dot{y}(s)\,\mathrm{d}s\Big\| = \Big\|\int_0^t (\dot{x}(s) - \dot{y}(s))\,\mathrm{d}s\Big\| \le \int_0^t \|\dot{x}(s) - \dot{y}(s)\|\,\mathrm{d}s,$$

which follows from Jensen's inequality [25, p. 109]. Given some $\epsilon > 0$ and $t \in (0, \infty)$, let $k := \lceil t \rceil \in \mathbb{N}$. Since $\int_0^k \|\dot{x}(s) - \dot{y}(s)\|\,\mathrm{d}s < \epsilon$ implies $d(x(t), y(t)) < \epsilon$, it suffices to show that there exists some $\delta' > 0$ such that any $y \in \mathcal{S}(x_0) \setminus \{x\}$ that satisfies $d_W(x, y) < \delta'$ yields $\int_0^k \|\dot{x}(s) - \dot{y}(s)\|\,\mathrm{d}s < \epsilon$.
We choose $\delta'$ as follows:

$$d_W(x, y) = \sum_{j=1}^{\infty} \frac{1}{2^j} \frac{1}{1 + I_j^{-1}} \ge \sum_{j=1}^{k} \frac{1}{2^j} \frac{1}{1 + I_j^{-1}} + \sum_{j=k+1}^{\infty} \frac{1}{2^j} \frac{1}{1 + I_j^{-1}} \ge \sum_{j=1}^{k} \frac{1}{2^j} \frac{1}{1 + I_j^{-1}} =: \delta'(k, \epsilon).$$

We may evaluate $\delta'(k, \epsilon)$, with $I_j = \epsilon$ for $j \le k$, as

$$\delta'(k, \epsilon) = \big(1 - 2^{-k}\big)\frac{1}{1 + \epsilon^{-1}} > 0.$$

If we consider $y \in \mathcal{S}(x_0) \setminus \{x\}$ such that $d_W(x, y) < \delta'(k, \epsilon)$, we have therefore shown that we obtain $d(x(t), y(t)) < \epsilon$; at least one such $y$ exists by path-connectedness of $\mathcal{S}(x_0)$. We have thus proven continuity of $E_t$. Having shown that $E_t$ is a continuous (linear) operator (and therefore a preserving map), we have proven that $E_t(\mathcal{S}(x_0))$ is a path-connected continuum for all $t \in [0, \infty)$ and $x_0 \in \mathbb{R}^n$. □

Given the result of Proposition 2, we can now show that, under the above conditions and a condition on the initial set of states, the reachable set $\mathcal{R}^{\to}(t, X_0)$ is also path-connected.

Lemma 4 (Path-connectedness of the reachable set). Let $F$ satisfy the hypotheses of Lemma 3, and let $X_0$ be a path-connected set of initial states. Then $\mathcal{R}^{\to}(t, X_0)$ is path-connected for all $t \in [0, \infty)$.

Proof. We draw upon [26, Cor. 4.5, p. 233], which states that the choice of $F$ in Lemma 3 is sufficient for the solution set $\mathcal{S}: \mathbb{R}^n \rightrightarrows W$ to be continuous on $\mathbb{R}^n$; in other words, the solution set depends continuously on the initial value. We characterize the reachable set as $\mathcal{R}(t) := \mathcal{R}^{\to}(t, X_0) = \bigcup_{x_0 \in X_0} \mathcal{S}(x_0)(t)$. It is clear that for any two values $p, q \in \mathcal{R}(t)$, there exist $x_0, x_0' \in X_0$ such that $p \in \mathcal{S}(x_0)(t)$ and $q \in \mathcal{S}(x_0')(t)$. Since $X_0$ is path-connected and $\mathcal{S}$ is continuous, there exists a continuous path $\gamma_0: [0, 1] \to X_0$ connecting $x_0$ and $x_0'$. Therefore, the solution sets $\mathcal{S}(x_0)$ and $\mathcal{S}(x_0')$ are connected by a path $\gamma = \mathcal{S} \circ \gamma_0: [0, 1] \rightrightarrows W$. Since $\mathcal{S}'(x_0, x_0') := \bigcup_{s \in [0, 1]} \gamma(s)$ is path-connected, its values are also path-connected, analogous to the latter part of the proof of Proposition 1. Hence, $\mathcal{R}(t)$ is path-connected for all $t \in [0, \infty)$. □

We can now provide a means of inner and outer approximating the off-nominal reachable set based on a hyperrectangular trajectory deviation growth bound.

Theorem 2. Let the differential inclusions defined by (4) and (5) satisfy the hypotheses of Lemma 3, let $X_0$ be a nonempty, compact, path-connected set of initial states, and let $\delta(t, \epsilon)$ be a hyperrectangular trajectory deviation bound as in Lemma 2 (adjusted as in Lemma 1 in the case of changed dynamics), with $\delta^* := \delta(t_0 + T, \epsilon)$. Then:
(i) For all $t \in [t_0, t_0 + T]$ and every $\bar{x} \in \bar{\mathcal{R}}^{\to}(t, X_0)$, there exists $x \in \mathcal{R}^{\to}(t, X_0)$ such that $|x - \bar{x}| \preceq \delta(t, \epsilon)$;
(ii) For all $t \in [t_0, t_0 + T]$, $\rho_R[\mathcal{R}^{\to}(t, X_0), \bar{\mathcal{R}}^{\to}(t, X_0)] \preceq \delta^*$;
(iii) For all $t \in [t_0, t_0 + T]$, $\mathcal{R}^{\to}(t, X_0) \setminus (\partial \mathcal{R}^{\to}(t, X_0))^{\boxplus \delta^*} \subseteq \bar{\mathcal{R}}^{\to}(t, X_0)$.

Proof. (i) This fact follows directly from Lemma 2.
(ii) From (i), the maximal componentwise distance between points of $\mathcal{R}^{\to}(t, X_0)$ and $\bar{\mathcal{R}}^{\to}(t, X_0)$ is upper-bounded by $\delta(t, \epsilon)$. In Theorem 1 it was shown that $\delta(t, \epsilon)$ is increasing in $t$, meaning that the hyperrectangular distance bound holds for all times $t \le t_0 + T$.
(iii) We define $X := \mathcal{R}^{\to}(t, X_0)$ and $Y := \bar{\mathcal{R}}^{\to}(t, X_0)$ for any $t \in [t_0, t_0 + T]$. Note that in this proof, unlike in [16], $X$ and $Y$ need not be convex, making the proof more technically challenging. We wish to show that $Z := X \setminus (\partial X)^{\boxplus \delta^*}$ is a subset of $Y$. The inclusion $Z \subseteq Y$ can equivalently be shown by demonstrating that for all $x \in X \setminus Y$, we have $x \in (\partial X)^{\boxplus \delta^*}$; we prove the latter claim by contradiction. Assume that there indeed exists $x \in X \setminus Y$ such that $x \notin (\partial X)^{\boxplus \delta^*}$. This point will then have some $i \in \{1, \dots, n\}$ such that $h_i(x, \partial X) := \min\{|x_i - p_i| : p \in \partial X\} > \delta_i^*$. From the characterization of $\delta^*$ in Lemma 2 and the definition of the hyperrectangular distance in Definition 4, we have $\rho_R(X, Y) \preceq \delta^*$. In light of the contradiction hypothesis above, this inequality would then necessarily yield the following equivalent contradiction:

$$h_i(x, \partial X) > \max\Big\{\max_{x' \in X} \min\{|x_i' - y_i| : y \in Y\},\ \max_{y' \in Y} \min\{|y_i' - x_i| : x \in X\}\Big\}, \tag{11}$$

which implies $\min\{|x_i - y_i| : y \in Y\} > \max_{x' \in X} \min\{|x_i' - y_i| : y \in Y\}$. Let $y^* \in Y$ be such that $h_i(x, Y) = \min\{|x_i - y_i| : y \in Y\}$. By the hypotheses, in particular the compactness of $Y$, there exists some $x^* \in X$ such that $|x^* - y^*| \preceq \delta^*$. We identify two cases: a) $x \in \partial X \cap (X \setminus Y)$, and b) $x \in X \setminus (Y \cup \partial X)$. In case a), we find $h_i(x, \partial X) = 0$, which produces the desired contradiction. We now consider case b). Let us denote by $\Pi_i A$ the projection of all points of a set $A \subseteq \mathbb{R}^n$ onto the $i$-th Cartesian axis, such that $\Pi_i A \subseteq \mathbb{R}$.
Since $X$ and $Y$ are both compact, connected, and nonempty, the projections $\Pi_i X$ and $\Pi_i Y$ are closed intervals in $\mathbb{R}$ for each $i = 1, \dots, n$. This fact follows trivially by considering that the projection operation is continuous, and continuous maps are preserving maps in the sense of [23, p. 21], i.e., their images preserve connectedness and compactness. From [27, Thm. 12.8, p. 116], any connected subspace of $\mathbb{R}$ is an interval, which shows that the $\Pi_i X$, $\Pi_i Y$ are compact (closed) intervals. We can then note that for any $y^*$, there exists some $x^* \in X$ such that $|x_i^* - y_i^*| \le \delta_i^*$ by Lemma 2. Finally, let $w \in \partial(\Pi_i X)$ be the other boundary point of the interval $\Pi_i X$ such that $w \ne x_i^*$. We can identify twelve arrangements of $(x_i, y_i^*, w, x_i^*)$, barring cases of symmetry. Some of these arrangements are inadmissible, as shown below. In what follows, we must have $x_i^* \ne x_i$, since $x$ would otherwise be on the boundary of $X$, which was treated as case a). Also, necessarily, $x_i \ne y_i^*$, since we have $x \notin Y$. We prove that for each admissible arrangement, the claim of (11) does not hold; an illustration of some of these arrangements is given in Fig. 2. For some $a, b, c, d \in \mathbb{R}$, we denote the ordering $a < b < c < d$ by the shorthand notation $a, b, c, d$. We have:
a) $x_i^*, x_i, y_i^*, w$: not admissible, since $x \notin X \setminus Y$;
b) $x_i^*, x_i, w, y_i^*$: not admissible, since $x \notin X \setminus Y$, and inconsistent with the definition of $y^*$;
c) $x_i^*, y_i^*, x_i, w$: in this case, by Lemma 2 we have $|x_i^* - y_i^*| \le \delta_i^*$; since $|x_i - y_i^*| < |x_i^* - y_i^*|$, the claim of (11) fails; the remaining admissible arrangements follow analogously.
Having considered all cases, in all admissible scenarios it follows that the statement in (11) is false. This in turn contradicts the claim that there exists $x \in X \setminus Y$ such that $x \notin (\partial X)^{\boxplus \delta^*}$. Therefore, we have proven that $Z \subseteq Y$. □

We now present two corollaries that cover the case of a changed set of initial conditions, as well as guaranteed overapproximations of the reachable set.

IV. SIMULATION RESULTS

We consider three numerical examples: a simplified representation of the heading dynamics of a sea-faring vessel, a lower triangular dynamical system, and an interconnected system of linear subsystems. The restriction to lower-dimensional systems stems from computational limitations in obtaining the nominal reachable sets with sufficient accuracy, as well as from a desire to keep derivations concise. We will show how Theorem 2 and Corollary 4 can be applied to these systems. For these examples, we have used the CORA MATLAB toolkit [28] to compute the nominal and off-nominal reachable sets for illustrative purposes; in reality, such tools are not required to apply the theory presented here. In practice, the nominal reachable set would be computed prior to the system's operation using a similar toolkit. The methods used in such toolkits often cannot be used online because of hardware limitations and poor scalability, hence the need for an approach such as ours.

In practice, it is difficult to obtain a hyperrectangular slimming of the form $X \setminus (\partial X)^{\boxplus \delta}$ using widely used software packages. For this reason, we propose an alternative using a conservative ball-based slimming operation. It is obvious that the following holds:

$$X \setminus (\partial X)^{+\|\delta\|} \subseteq X \setminus (\partial X)^{\boxplus \delta}, \tag{12}$$

where $\|\delta\|$ denotes the Euclidean norm of the vector $\delta$. This follows from the fact that the ball $\mathcal{B}_{\|\delta\|}$ includes the hyperrectangle $\times_{i=1}^{n} [-\delta_i, \delta_i]$. In the following, we will show approximations based on naive ball-based slimming using single elements $\delta_i$, which give an indication of the shape of a true hyperrectangular slimming in that particular dimension. We also give a guaranteed inner approximation by applying a ball-based slimming operation with radius $\|\delta\|$.
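The conservative ball-based slimming of (12) can be realized on a grid, as in the following sketch (our illustration, assuming a grid-based set representation; names hypothetical): a cell is kept only if its distance to the sampled complement exceeds the slimming radius, which, up to grid resolution, removes the $\|\delta\|$-fattening of the boundary.

```python
import numpy as np

def inner_approx_ball_slimming(inside, xs, ys, radius):
    """Conservative inner approximation X \\ (dX)^{+r} of (12) on a grid:
    a cell is kept iff its distance to the sampled complement of X exceeds
    `radius` (conservative up to grid resolution).

    inside : boolean array (len(ys), len(xs)), indicator of X
    Returns a boolean mask of the slimmed set."""
    X, Y = np.meshgrid(xs, ys)
    out_pts = np.column_stack([X[~inside], Y[~inside]])   # complement samples
    pts = np.column_stack([X[inside], Y[inside]])         # interior samples
    # Distance from each inside cell to the nearest complement cell.
    d = np.min(np.linalg.norm(pts[:, None, :] - out_pts[None, :, :], axis=-1),
               axis=1)
    keep = inside.copy()
    keep[inside] = d > radius
    return keep

# Nonconvex X: union of two unit disks centered at (-0.7, 0) and (0.7, 0).
xs = ys = np.linspace(-2.0, 2.0, 61)
X, Y = np.meshgrid(xs, ys)
inside = ((X - 0.7)**2 + Y**2 <= 1.0) | ((X + 0.7)**2 + Y**2 <= 1.0)
slim = inner_approx_ball_slimming(inside, xs, ys, radius=0.2)
print(inside.sum(), slim.sum())   # the slimmed set is strictly smaller
```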
Compared to [16], this approximation approach yields tighter approximations, since the bounds obtained there are greater than or equal to $\|\delta\|$, as a bound on the Euclidean norm of the trajectory deviation is used there.

A. Norrbin's Ship Steering Dynamics

We first consider Norrbin's model of the heading dynamics of a ship sailing at constant velocity [29]:

$$\dot{x}(t) = \begin{bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{bmatrix} = f(x(t), u(t)) = \begin{bmatrix} x_2 \\ -\frac{v^2}{L^2}\big(x_2 + x_2^3\big) + \frac{k\,v^2}{L^2}\,u \end{bmatrix},$$

where $x_1$ is the heading (or yaw) angle and $x_2$ is the heading rate; in this example, $u$ denotes the rudder angle, $v$ denotes the fixed cruise speed, and $L$ denotes the vessel length. As can be observed in the dynamics, a vessel's ability to make turns is strongly correlated with its velocity (higher speeds provide greater resistance, but induce stronger rudder authority), as well as with the length of the vessel: intuitively, a longer vessel is harder to turn due to its inertia and the hydrodynamic resistance of the hull. The dynamics are of second order, as a rudder deflection naturally induces a yaw moment. We can find the following bound on the trajectory deviation growth:

$$|\tilde{f}(\bar{x}, \bar{u})| = |f(x + \tilde{x}, u + \tilde{u}) - f(x, u)| \preceq \begin{bmatrix} |\tilde{x}_2| \\ \frac{v^2}{L^2}\big(|\tilde{x}_2| + |\tilde{x}_2|^3 + 3|\tilde{x}_2|^2 X_2 + 3|\tilde{x}_2|\,X_2^2\big) + \frac{k\,v^2}{L^2}\,|\tilde{u}| \end{bmatrix}, \tag{13}$$

where $X_2 = \max_{x \in \mathcal{R}^{\to}(t, X_0)} |x_2|$; $X_2$ can be determined since the nominal reachable set $\mathcal{R}^{\to}(t, X_0)$ is available to us. We note that (13) contains an integrator in state $\tilde{x}_1$, which allows us to obtain a hyperrectangular trajectory deviation bound as follows. We first compute the deviation bound on state $\tilde{x}_2$, such that $|\tilde{x}_2(t)| \le \delta_2(t)$. We then compute an upper bound on $\tilde{x}_1$ of the form $\delta_1(t) = \int_{t_0}^{t} \delta_2(s)\,\mathrm{d}s$, which gives $|\tilde{x}_1(t)| \le \delta_1(t)$. Alternatively, a more conservative expression for $\delta_1$ can be obtained by defining it as $\delta_1(t) := \delta_2(t)\,t$, which follows from the fact that $\delta_2$ is strictly increasing (see the proof of Theorem 1).

1) Diminished Control Authority: We first consider the case of diminished control authority, i.e., the case in which the system dynamics remain the same, but the control inputs are drawn from $\bar{\mathcal{U}}$ instead of $\mathcal{U}$. We evaluate the reachable set at $t = 0.5$ s, $t = 1$ s, and $t = 3$ s, yielding the results shown in Fig. 3. We give a guaranteed inner approximation based on the conservative ball-based slimming approach, as well as guaranteed intervals on each Cartesian axis using the entries of $\delta(t)$. These guaranteed intervals are shown as cross-hatched areas; they indicate what a hyperrectangular slimming would have produced, in addition to guaranteeing that there exists at least one state in the off-nominal reachable set with one of its coordinates on one of the intervals. Unlike the results in [16], the quality of the inner approximations degrades little with time (see Fig. 3c). This feature can be attributed to the fact that we are using a hyperrectangular growth bound in this work, as opposed to a more conservative norm-based bound. An application to computing a guaranteed reachable set of the positions of the ship after control authority diminishment based on Norrbin's model has been prepared as a video.

2) Changed Dynamics: In addition to the diminished control authority, we now also consider changed dynamics obtained by replacing the cruise speed $v$ in the above model with an off-nominal speed $v_{\mathrm{s}}$, where we consider $v_{\mathrm{s}} < v$ and $v_{\mathrm{s}} > v$ to capture a slowdown and a speedup of the vessel, respectively.

a) Slowdown: We first consider $v_{\mathrm{s}} = 0.95\,v = 4.75$ m/s. This slowdown causes the reachable set to shrink and shift slightly towards higher heading angles, since there is insufficient velocity to reach lower angles.
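For reference, the sketch below simulates the heading dynamics reconstructed above with a standard RK4 integrator and compares a nominal trajectory with a slowed-down one. The cruise speed v = 5 m/s follows from v_s = 0.95 v = 4.75 m/s in the text; the vessel length L = 10 m, the gain k = 1, and the constant rudder signal are assumptions made purely for illustration.

```python
import numpy as np

V, L, K = 5.0, 10.0, 1.0   # cruise speed [m/s] from the text; L and K assumed

def norrbin(x, u, v=V):
    """Norrbin heading dynamics as reconstructed above: x = (heading, rate)."""
    x1, x2 = x
    return np.array([x2, -(v**2 / L**2) * (x2 + x2**3) + K * v**2 / L**2 * u])

def rk4(f, x0, u_of_t, t_final, dt=1e-3):
    """Fixed-step fourth-order Runge-Kutta integration with zero-order hold."""
    x, t = np.array(x0, dtype=float), 0.0
    while t < t_final - 1e-12:
        u = u_of_t(t)
        k1 = f(x, u); k2 = f(x + 0.5 * dt * k1, u)
        k3 = f(x + 0.5 * dt * k2, u); k4 = f(x + dt * k3, u)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return x

# Nominal versus slowed-down (v_s = 0.95 v) response to a constant rudder input.
x_nom = rk4(lambda x, u: norrbin(x, u, V), [0.0, 0.0], lambda t: 0.3, 3.0)
x_off = rk4(lambda x, u: norrbin(x, u, 0.95 * V), [0.0, 0.0], lambda t: 0.3, 3.0)
print(np.abs(x_nom - x_off))   # componentwise deviation, to compare with the bound
```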
The bound given in Lemma 1 is used, where we define $\mu(t)$ as follows:

$$\mu(t) := \frac{|v_{\mathrm{s}}^2 - v^2|}{L^2}\big(X_2 + X_2^3\big) + \frac{|v^2 - v_{\mathrm{s}}^2|}{L^2}\,k \max_{u \in \mathcal{U}} |u|,$$

where $X_2 = \max_{x \in \mathcal{R}^{\to}(t, X_0)} |x_2|$. Combining this bound with the trajectory deviation growth bound given at the beginning of this section, we obtain the conservative inner approximation shown in Fig. 4. As can clearly be seen, not only is the off-nominal reachable set smaller, but it has also drifted to the top-left. This change is intuitively correct, since rudder inputs become more effective at slower velocity: the vessel can make tighter turns at slower speeds. This phenomenon is reflected in the upward shift in heading rate and heading angle.

b) Speedup: We now demonstrate outer approximations in the case of changed dynamics. We still consider a diminishment in control authority, but in this case $v_{\mathrm{s}} = 1.4\,v = 7$ m/s, indicating a speedup. In practice, one would like to know what the worst-case trajectory could be in such a case, for example when attempting to avoid a high-speed vessel. Instead of shrinking the nominal set, we now fatten it as shown in Corollary 4. Our $\mu(t)$ in this setting is as follows:

$$\mu(t) := \frac{v_{\mathrm{s}}^2 - v^2}{L^2}\big(X_2 + X_2^3\big) + \frac{|v^2 - v_{\mathrm{s}}^2|}{L^2}\,k \max_{u \in \mathcal{U}} |u|.$$

With a trajectory deviation growth bound similar to that of the slowdown case, we obtain an outer approximation as shown in Fig. 5, this time with an exact hyperrectangular fattening. In this case, it is also possible to apply a conservative ball-based fattening using the same radius as given previously. In Fig. 5, it is clear that the off-nominal reachable set has shifted towards lower heading angles and rates, since the vessel has less effective rudder authority at higher cruise speeds due to its inertia. As a result, the outer approximation includes a large area of unused space towards the top right, since it needs to account for both the translation and the growth of the off-nominal reachable set with respect to the nominal reachable set.

B. Cascaded System

To demonstrate that the approach given in Theorem 2 is scalable to high-dimensional systems, we present the following academic example. We consider a lower triangular system; such systems often arise in practice when dealing with interconnected dynamical systems [30]. Namely, we consider the system

$$\dot{x}(t) = \begin{bmatrix} \dot{x}_1(t) & \cdots & \dot{x}_n(t) \end{bmatrix}^{\top} = A\,x(t) + B\,u(t) + g(t), \tag{14}$$

where $x \in \mathbb{R}^n$ and $u \in \mathcal{U} \subseteq \mathbb{R}^m$, $A \in \mathbb{R}^{n \times n}$ is a lower triangular matrix, $B \in \mathbb{R}^{n \times m}$ is arbitrary, and $g: [t_0, \infty) \to \mathbb{R}^n$ is a differentiable function. The contribution of $g(t)$ is that of a nonlinear drift, possibly due to phenomena such as actuator bias or periodic disturbances. We consider both the case of diminished control authority and that of changed dynamics below.
1) Diminished Control Authority: Under a diminishment of the control authority alone, the deviation growth of each state satisfies

$$|\dot{\tilde{x}}_i(t)| \le \sum_{j=1}^{i-1} |a_{i,j}|\,\delta_j(t) + |a_{i,i}|\,|\tilde{x}_i(t)| + \sum_{j=1}^{m} |b_{i,j}|\,\epsilon, \tag{15}$$

where each $\delta_j(t)$ is computed as per Lemma 2. We now show how to obtain the hyperrectangular trajectory deviation bound. Given the lower triangular structure of matrix $A$, by (15) we first compute $\delta_1(t)$ from the growth bound

$$|\dot{\tilde{x}}_1(t)| \le |a_{1,1}|\,|\tilde{x}_1(t)| + \sum_{j=1}^{m} |b_{1,j}|\,\epsilon.$$

By an application of Lemma 1, we can obtain $\delta_1(t)$. Repeated application of (15) and Lemma 1 then yields the hyperrectangular deviation bound $\delta(t)$.

2) Changed Dynamics: In the case of changed dynamics, each bound of the form (15) gains an additive term:

$$|\dot{\tilde{x}}_i(t)| \le \sum_{j=1}^{i-1} |a_{i,j}|\,\delta_j(t) + |a_{i,i}|\,|\tilde{x}_i(t)| + \sum_{j=1}^{m} |b_{i,j}|\,\epsilon + \mu_i(t), \tag{16}$$

which is obtained by the same arguments as in Lemma 1.

As an illustration of the bound given in (15), we consider a numerical instance of (14); the resulting approximation quality is summarized in Table I. It can clearly be observed that the volume fractions of the inner approximations remain reasonably tight with increasing system dimension when considering the tightness of the hyperrectangular distance bound. These results demonstrate that our method is capable of producing reasonably tight approximations even for systems of increased dimension by exploiting partially decoupled system structure. In comparison, the ball-based slimming used in [16] yields worse results, as can be seen by comparing the fourth and fifth columns of Table I. For instance, a hyperrectangular shrinking operation defined by $\delta = [0.1\ \ 10]^{\top}$ yields a sufficient ball-based shrinking operation with radius $\|\delta\| \approx 10$, which would likely shrink away most of the first Cartesian dimension in practice. A similar phenomenon can be observed when considering dimensions 1 and 3 in Table I; using hyperrectangular bounds prevents excessive slimming of the reachable set.

3) Computational Complexity: To show that the theory presented in this work is scalable on systems such as (14), we consider the computational complexity of a basic algorithmic implementation that computes $\delta_i$ for each $i$, as well as of verifying whether a state is guaranteed to lie in the off-nominal reachable set. Both of these tasks are subject to hard real-time constraints in practice, making it essential to study how their computational complexity grows.

a) Computing the Trajectory Deviation Bound: We note that we must perform numerical integration to compute $G$, $\int \lambda(s)\,\mathrm{d}s$, and $\int \psi(s)\,\mathrm{d}s$ in the Bihari inequality (7). To compute the inverse $G^{-1}$, one may use a root-finding scheme or an approximate look-up table (LUT). We consider here a LUT approach. When using an explicit non-adaptive numerical integration scheme such as Euler's method or Runge-Kutta, it suffices to consider an a priori fixed integration step $h > 0$. Let us consider the reachable set on an interval $t \in [0, T]$, and take $h = T/N$ with $N \in \mathbb{N}$. By the results of [31], the computational complexity of a Runge-Kutta scheme is $O(N)$. We need to perform three rounds of numerical integration per dimension (for $G$, and for the integrals of $\lambda$ and $\psi$), which together take $O(N)$. We store all values of $G$ and their arguments in a lookup table of size $N$. Since values can be retrieved from an array in constant time, the complexity of the numerical integration to populate the LUT, combined with lookup, is $O(1) + O(N) = O(N)$. This process must be repeated for all dimensions, which gives computational complexity $O(nN)$. Therefore, the value of $\delta(t)$, which is instrumental in producing guaranteed inner and outer approximations, can be computed in linear time with respect to the system dimension $n$.
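The dimension-by-dimension computation of δ(t) described above admits a direct implementation. The sketch below (ours, with hypothetical names and test data) integrates the scalar comparison ODEs implied by (15) on a time grid, at cost linear in the dimension n, as claimed; up to the integration error of the explicit scheme, the exact solutions of these comparison ODEs upper-bound the componentwise deviations.

```python
import numpy as np

def cascaded_deviation_bound(A, B, eps, t_grid):
    """Hyperrectangular deviation bound delta_i(t) for the lower triangular
    system (14) under input deviation bounded by eps, obtained by integrating
    the scalar comparison ODEs of (15) dimension by dimension.
    Cost is O(n * len(t_grid)) up to the row sums."""
    n = A.shape[0]
    delta = np.zeros((n, len(t_grid)))
    dt = np.diff(t_grid)
    for i in range(n):                      # one scalar comparison ODE per state
        psi_const = np.sum(np.abs(B[i])) * eps
        for k, h in enumerate(dt):
            coupling = np.abs(A[i, :i]) @ delta[:i, k] if i else 0.0
            rate = np.abs(A[i, i]) * delta[i, k] + coupling + psi_const
            delta[i, k + 1] = delta[i, k] + h * rate   # forward Euler step
    return delta

# Example: 3-state lower triangular system with eps = 0.1.
A = np.array([[-1.0, 0.0, 0.0], [0.5, -2.0, 0.0], [0.2, 0.3, -0.5]])
B = np.eye(3)
t = np.linspace(0.0, 2.0, 2001)
d = cascaded_deviation_bound(A, B, 0.1, t)
print(d[:, -1])   # delta_i(T): entries of the hyperrectangular bound
```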
b) Verifying Reachability of a State: We now consider the complexity of verifying whether a state lies in the computed inner approximation of the reachable set. Let us assume that we have access to a signed distance function $s: \mathbb{R}^n \to \mathbb{R}$ of the nominal reachable set at time $t$ (see, e.g., [32, p. 811] for more information on signed distance functions), which is negative inside the set, and that we can evaluate $s$ using $O(n)$ primitive operations. Then, to evaluate whether or not a point $x \in \mathbb{R}^n$ lies in the inner approximation of the off-nominal reachable set, it suffices to check the following:
1) Check if $s(x) \le 0$: we must first check whether $x$ lies in the nominal reachable set. If this is false, $x$ is not guaranteed to lie in the off-nominal reachable set.
2) Check if $s(x) \le -\min_i \delta_i(t)$: we must verify that $x$ lies at least distance $\min_i \delta_i(t)$ away from the boundary of the nominal reachable set. If this is false, $x$ is not guaranteed to lie in the off-nominal reachable set.
   a) Check if $s(x) \le -\|\delta(t)\|$. This verification is based on the ball-based slimming operation of (12). If this is true, $x$ is guaranteed to lie in the off-nominal reachable set; if false, continue to the next step.
   b) Perform gradient ascent on $s$ starting at $x$, until reaching $x'$ satisfying $s(x') = 0$; this $x'$ is the point on the boundary of the nominal reachable set that is closest to $x$. Verify whether $\delta(t) \preceq |x - x'|$. If this inequality is true, $x$ is guaranteed to lie in the off-nominal reachable set.
In the above algorithm, it takes at least one evaluation of $s$ to verify whether $x$ is guaranteed to be in the off-nominal reachable set; doing so requires $O(n)$ operations and corresponds to step 1). An evaluation of $\delta(t)$ costs $O(nN)$ operations as discussed previously, which yields a complexity of $O(nN)$. Evaluating the norm of $\delta(t)$ can be done in linear time as part of step 2a), but performing gradient ascent in step 2b) may require a significant number of evaluations of $s$. It is possible to truncate the gradient ascent based on a maximum number of evaluations of $s$, say $N_{\mathrm{eval}}$. Given some $x'' \in \mathbb{R}^n$ obtained after $N_{\mathrm{eval}} - 1$ evaluations of $s$, it is clear that $x' \in \{x''\}^{+|s(x'')|}$. We can then check whether, for each $i = 1, \dots, n$, it holds that $\delta_i(t) + |s(x'')| \le |x_i - x_i''|$. If this inequality is true, then $x$ is guaranteed to lie in the off-nominal reachable set, and if not, then $x$ cannot be verified with certainty. Therefore, it is possible to verify guaranteed reachability with complexity $O(nN)$.
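Steps 1) and 2a) of this verification routine reduce to two comparisons against the signed distance value, as in the following sketch (our illustration; the disk SDF and the values of δ are assumptions). The sharper componentwise test of step 2b) would additionally run the truncated gradient ascent described above.

```python
import numpy as np

def verify_guaranteed_reachable(x, sdf, delta):
    """Steps 1) and 2a) of the verification routine above, given a signed
    distance function `sdf` of the nominal reachable set (negative inside).
    Returns True only when x is guaranteed to lie in the off-nominal set;
    the sharper componentwise test 2b) would refine a False outcome."""
    s = sdf(x)
    if s > 0.0:                           # step 1: x must lie in the nominal set
        return False
    return s <= -np.linalg.norm(delta)    # step 2a: conservative ball test

# Nominal set: disk of radius 2 (exact SDF known); delta = (0.1, 0.3).
sdf = lambda x: np.linalg.norm(x) - 2.0
delta = np.array([0.1, 0.3])
print(verify_guaranteed_reachable(np.array([1.0, 0.5]), sdf, delta))  # True
print(verify_guaranteed_reachable(np.array([1.9, 0.5]), sdf, delta))  # False
```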
As can be observed in Table II, the inner approximations based on hyperrectangular slimming outperform those based on ball-based slimming operations. In particular, in states 3 and 4, the ball-based slimming operations eliminate the entire reachable set, which is not the case with hyperrectangular slimming operations. The computations applied to the system in this example are scalable while preserving relatively tight bounds, provided that the system structure permits decoupling of subsystems as shown here.

V. CONCLUSION

In this work, we have introduced a new technique for efficiently computing both inner and outer approximations of a reachable set in the case of changed dynamics and diminished control authority, given basic knowledge of the trajectory deviation growth as well as a precomputed nominal reachable set. This work expands on previous work by extending the theory to changes in dynamics, and by lifting the assumption of convexity of the reachable sets. To obtain an inner approximation of the reachable set under diminished control authority, we have given an integral inequality that provides an upper bound on the minimal trajectory deviation between the nominal and off-nominal systems. We have extended the classical norm bound on the trajectory deviation to a hyperrectangular bound, allowing us to compute both inner and outer approximations of the off-nominal reachable set based on the nominal set, regardless of the convexity of the reachable set. Similarly to our previous results, these results can be applied online at a low computational cost. We have demonstrated our approach on three examples: a model of the heading dynamics of a vessel, a lower triangular system, and an interconnected linear system. In general, the use of a hyperrectangular growth bound is superior to a norm bound for systems that have one or more integrators. The numerical examples indicate that the use of hyperrectangular slimming operations produces tighter inner approximations, coupled with periodic reinitialization of the reachable set. As was mentioned in previous work, the tightness of both the inner and outer approximations is strongly related to the quality of the trajectory deviation bound, as well as to any additional drift that appears as part of a change in dynamics. We have shown that the ability to compute these approximations online can have practical application to the control of dynamical systems in off-nominal conditions. This was shown in the second example, where the computational complexity was shown to be linear in the system dimension for a lower triangular system. Finally, in the third example, it was shown how system structure can be leveraged when dealing with interconnected systems in the context of formulating an efficient hyperrectangular growth bound that consists of several coupled ball-based growth bounds. The latter approach was shown to be applicable to larger systems, provided that it is possible to decouple some subsystems from each other. In future work, we aim to study the utility of a bounding method based on non-axis-aligned hyperrectangles, as could be described by zonotopes, insofar as obtaining tighter growth bounds and approximations is concerned. A potential avenue for this work would lie in considering principal components of the system using singular value decomposition [34], or in considering the system structure itself (e.g., when the set of velocities of a system lies in a subspace).
In the same direction, (normalizing) state-space transformations may also prove useful in obtaining tighter approximations by reducing magnitude differences between states. In addition, generalized slimming and fattening operations that are based on sets that are not centered at the origin may also prove key to obtaining tighter approximations in the case of changes in dynamics. Finally, real-time applications of the theory presented here will be studied in future work, with a focus on safety-critical predictive control.

APPENDIX I
GENERALIZATIONS TO THE THEORY

In the theory presented in Section III, a number of assumptions can be weakened to address a larger class of dynamical systems; we present these relaxations below. For the result of Lemma 3, it suffices that the multifunction F : [0, ∞) × ℝⁿ ⇉ ℝⁿ satisfies the following properties [21, Thm. 1, p. 1010]:
1) F is ℒ ⊗ ℬ(ℝⁿ)-measurable, as defined in [21, p. 1007];
2) F is Lipschitz with respect to x, i.e., there exists k ∈ L¹_loc([0, ∞), ℝ), such that k(t) > 0, and for any x, y ∈ ℝⁿ it holds that d_H(F(t, x), F(t, y)) ≤ k(t) ‖x − y‖ for a.e. t ∈ [0, ∞);
3) There exists β ∈ L¹_loc([0, ∞), ℝ) such that d_H({0}, F(t, 0)) ≤ β(t) for a.e. t ∈ [0, ∞).
If F(t, x) is continuous, Lipschitz in x, and has closed, path-connected values, as assumed in the main part of the paper, it satisfies these assumptions. Namely, 1) is satisfied by continuity of F, while 2) and 3) are satisfied by the Lipschitz condition on F(t, x) in t and x. For the claim of Proposition 2, it suffices that, in addition to assumptions 1)-3) above, the multifunction F possesses the Scorza-Dragoni property [35, Def. 19.12, p. 91]:

Definition 6. A multifunction F : [0, ∞) × ℝⁿ ⇉ ℝⁿ with closed values is said to have the Scorza-Dragoni property if, for all ε > 0, there exists a closed subset I_ε ⊂ [0, ∞) such that μ([0, ∞) ⧵ I_ε) ≤ ε, where μ is the Lebesgue measure, and F is continuous on I_ε × ℝⁿ.

It trivially follows that F satisfies the Scorza-Dragoni property if it is continuous and has closed values. Finally, for Lemma 4 to hold, conditions 1)-3) above and the Scorza-Dragoni property again form sufficient conditions; in its proof, the solution set is indeed continuous if conditions 1)-2) are met, by Corollary 4.5 in [26].

Melkior Ornik received the Ph.D. degree from the University of Toronto, Toronto, ON, Canada, in 2017. He is currently an Assistant Professor with the Department of Aerospace Engineering and the Coordinated Science Laboratory, University of Illinois Urbana-Champaign, Urbana, IL, USA. His research focuses on developing theory and algorithms for learning and planning of autonomous systems operating in uncertain, complex, and changing environments, as well as in scenarios where only limited knowledge of the system is available.

Manuscript received September 25, 2021. H. El-Kebir is with the Dept. of Aerospace Engineering at the University of Illinois Urbana-Champaign, Urbana, IL 61801 USA (e-mail: [email protected]). A. Pirosmanishvili is with the Dept. of Aerospace Engineering at the University of Illinois Urbana-Champaign, Urbana, IL 61801 USA (e-mail: [email protected]). M. Ornik is with the Dept. of Aerospace Engineering and the Coordinated Science Laboratory at the University of Illinois Urbana-Champaign, Urbana, IL 61801 USA (e-mail: [email protected]).

Theorem 1 (Extended Bihari inequality [16, Theorem 3.1]). Let ξ(t) be a solution to the equation ξ̇(t) = …

Remark 1. Theorem 1 is a generalization of the Gronwall-Bellman inequality. It can be reduced to the Gronwall-Bellman inequality by taking ω(u) = u and G(u) = log u (see [20, Remark 2.3.2, p. 109] for a more in-depth discussion).
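For reference, the classical Gronwall-Bellman estimate to which Remark 1 alludes reads as follows (a standard statement, quoted from the general literature rather than from [20]):

```latex
u(t) \le a + \int_{t_0}^{t} g(s)\, u(s)\, \mathrm{d}s
\quad \Longrightarrow \quad
u(t) \le a \exp\!\left( \int_{t_0}^{t} g(s)\, \mathrm{d}s \right).
```

Indeed, with ω(u) = u one has G(u) = log u and G⁻¹(v) = eᵛ, so a Bihari-type bound of the form G⁻¹(G(a) + ∫ g) collapses to the exponential estimate above.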
Corollary 1. For any x₀ ∈ X₀ ⊆ ℝⁿ, where X₀ is compact, and any initial time t_init ∈ [t₀, ∞) and final time T ∈ [t_init, ∞), consider a trajectory x(t) satisfying x(t_init) = x₀ and ẋ(t) = f(t, x(t), u(t)), with u(t) ∈ U. Consider a trajectory x̄(t) with x̄(t_init) = x₀ and ẋ̄(t) = f(t, x̄(t), ū(t)), such that ū(t) ∈ Ū satisfies sup_{t∈[t_init,T]} ‖u(t) − ū(t)‖ ≤ ε_u. Let f̃(t) := f(t, x(t), u(t)) − f(t, x̄(t), ū(t)), x̃(t) := x(t) − x̄(t), and ũ(t) := u(t) − ū(t). Let the following bound hold for t ∈ [t_init, T], and for any u(t) and ū(t) satisfying the previous hypotheses: ‖f̃(t)‖ ≤ g̃(t) ω̃(‖x̃(t)‖, ‖ũ(t)‖) + l̃(t), with g̃, ω̃, l̃ satisfying the assumptions given in Assumption 1. Then, x̃(t) satisfies …

Corollary 2. For any x₀ ∈ X₀ ⊆ ℝⁿ, where X₀ is compact, and any initial time t_init ∈ [t₀, ∞) and final time T ∈ [t_init, ∞), consider a trajectory x(t) satisfying x(t_init) = x₀ and ẋ(t) = f(t, x(t), u(t)), with u(t) ∈ U. Consider a trajectory x̄(t) with x̄(t_init) = x̄₀, where x̄₀ ∈ X̄₀ ⊆ ℝⁿ, and ẋ̄(t) = f(t, x̄(t), ū(t)), such that ū(t) ∈ Ū satisfies sup_{t∈[t_init,T]} ‖u(t) − ū(t)‖ ≤ ε_u. Let f̃(t) := f(t, x(t), u(t)) − f(t, x̄(t), ū(t)), x̃(t) := x(t) − x̄(t), and ũ(t) := u(t) − ū(t). Let d_H(X₀, X̄₀) ≤ ε₀. Let the following bound hold for t ∈ [t_init, T], and for any u(t) and ū(t) satisfying the previous hypotheses: ‖f̃(t)‖ ≤ g̃(t) ω̃(‖x̃(t)‖, ‖ũ(t)‖) + l̃(t), with g̃, ω̃, l̃ satisfying the assumptions given in Assumption 1. Then, x̃(t) satisfies …

… where g̃, l̃ are continuous and positive, and ω̃ is continuous, monotonic, nondecreasing, and positive in both of its arguments. Given this definition, we can now formulate a generalization of Corollary 1:

Lemma 2. For any x₀ ∈ X₀ ⊆ ℝⁿ, where X₀ is compact, and any initial time t_init ∈ [t₀, ∞) and final time T ∈ [t_init, ∞), consider a trajectory x(t) satisfying x(t_init) = x₀ and ẋ(t) = f(t, x(t), u(t)), with u(t) ∈ U. Consider a trajectory x̄(t) with x̄(t_init) = x₀ and ẋ̄(t) = f(t, x̄(t), ū(t)), such that ū(t) ∈ Ū satisfies sup_{t∈[t_init,T]} ‖u(t) − ū(t)‖ ≤ ε_u. Let f̃(t) := f(t, x(t), u(t)) − f(t, x̄(t), ū(t)), x̃(t) := x(t) − x̄(t), and ũ(t) := u(t) − ū(t). Let f̃ be of (g̃, ω̃, l̃)-bounded growth for t ∈ [t_init, T]. Then, x̃(t) satisfies … for each i ∈ {1, …, n} and for all t ∈ [t_init, T], where the ξ_i are defined as in Theorem 1.

Fig. 1: Illustration of the hyperrectangular distance between two compact sets, as determined by distances to intersecting hyperplanes. Distances d_{h,i}(A, B) are shown for each axis, as well as ρ_{R,i}(A, B), to show that ρ_{R,i}(A, B) ≥ d_{h,i}(A, B).

For a differential inclusion ẋ(t) ∈ F(t, x(t)), a.e. t ∈ [0, T], x(0) = x₀ ∈ ℝⁿ, where F : [0, ∞) × ℝⁿ ⇉ ℝⁿ is a compact-valued multifunction, let F satisfy the hypotheses of Lemma 3. …

(f + g)(x) = f(x) + g(x), (λf)(x) = λ f(x). If L is linear, by [24, Thm. 1.1, p. 54] it suffices to show that …

Lemma 4. For a differential inclusion ẋ(t) ∈ F(t, x(t)), a.e. t ∈ [0, T], x(0) = x₀ ∈ ℝⁿ, where F : [0, ∞) × ℝⁿ ⇉ ℝⁿ satisfies all conditions listed in Proposition 2, given a path-connected continuum X₀ ⊆ ℝⁿ, the reachable set X→(t, X₀) is path-connected for all t ∈ [0, ∞).

Fig. 2: Illustration of some of the arrangements considered in producing the various contradictions in the proof of Theorem 2(iii).

Theorem 2 (General FRS inner approximation with changed dynamics). Let f : [0, ∞) × ℝⁿ × U → ℝⁿ and f̄ : [0, ∞) × ℝⁿ × Ū → ℝⁿ, where U, Ū ⊆ ℝᵐ are such that ρ_R(U, Ū) ⪯ ε_u. Let F(t, x) = f(t, x, U) and F̄(t, x) = f̄(t, x, Ū), and let X₀ ⊆ ℝⁿ, the set of initial states, and initial time t₀ ∈ ℝ₊ be given. Let X₀, F, and F̄ satisfy the conditions of Lemma 4. Let the hypotheses of Lemma 2 be satisfied with ε̄ = ε_u, t_init = t₀, and T > t_init. Let ξ(t, ε_u) be obtained as in Lemma 2. Then: (i) For each x₀ ∈ X₀ there exists a trajectory x(t) emanating from x(t₀) = x₀ with ẋ(t) ∈ F(t, x(t)) and a trajectory y(t) satisfying y(t₀) = x₀ and ẏ(t) ∈ F̄(t, y(t)) such that |x(t) − y(t)| ⪯ ξ(t, ε_u) for all t ∈ [t₀, T]; (ii) Let τ ∈ [0, T − t₀], and let ξ* = ξ(t₀ + τ, ε_u). For all …
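One natural reading of the fattening operation ⊞ appearing in part (iii) of Theorem 2 and in Corollary 3 below (an assumption on our part, consistent with how the symbol is used in this text) is the Minkowski sum with an axis-aligned hyperrectangle:

```latex
A \boxplus c \;:=\; A + \prod_{i=1}^{n} [-c_i,\, c_i]
\;=\; \{\, a + d \;:\; a \in A,\ |d_i| \le c_i \text{ for } i = 1,\dots,n \,\},
```

so that the corresponding slimming operation removes the fattened boundary, A ⧵ (∂A ⊞ c), componentwise rather than by a single ball radius.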
Proof (fragment, continuing the case analysis of Theorem 2(iii), with points denoted x, y, x*, y*): … d(x, y*) < ξ*. d) x*, y*, x, y: Not admissible, since y ∉ X ⧵ Y. e) x*, x, y, y*: We have d(x*, y*) ≤ ξ* and d(x, y*) < d(x*, y*), which implies d(x, y*) < ξ*. f) x*, x, y*, y: Not admissible, since y ∉ X ⧵ Y. g) x, x*, y*, y: Not admissible, since y ∉ X ⧵ Y. h) x, x*, y, y*: Not admissible, since y ∉ X ⧵ Y, and not consistent with the definition of x*. i) x, y*, x*, y: Not admissible, since y ∉ X ⧵ Y. j) x, y, x*, y*: Not admissible, since y ∉ X ⧵ Y, and inconsistent with the definition of x*. k) y*, x*, x, y: We have d(x, y) ≥ d(x, y*), d(x*, y) > d(x, y), as well as d(x*, y) ≤ ξ*. From this it follows that d(x, y*) < ξ*. l) y*, x, x*, y: We have d(x, y*) < d(x*, y*) ≤ ξ*.

Corollary 3. Let the hypotheses of Theorem 2 hold. In addition, let there be an off-nominal set of initial conditions X̄₀ that is nonempty, closed, and connected, such that d_H(X₀, X̄₀) ≤ ε₀. Define ξ(t, ε_u, ε₀) as in Corollary 2. Then: (i) For each x₀ ∈ X₀ there exists a trajectory x(t) emanating from x(t₀) = x₀ with ẋ(t) ∈ F(t, x(t)) and a trajectory y(t) satisfying y(t₀) = y′₀, for some y′₀ ∈ ({x₀} + B_{ε₀}) ∩ X̄₀, and ẏ(t) ∈ F̄(t, y(t)) such that |x(t) − y(t)| ⪯ ξ(t, ε_u, ε₀) for all t ∈ [t₀, T]; (ii) Let τ ∈ [0, T − t₀], and let ξ** = ξ(t₀ + τ, ε_u, ε₀). For all t ∈ [t₀, t₀ + τ], ρ_R[X̄→(t, X̄₀), X→(t, X₀)] ⪯ ξ**; (iii) For all t ∈ [t₀, t₀ + τ], X→(t, X₀) ⧵ (∂X→(t, X₀)) ⊞ ξ** ⊆ X̄→(t, X̄₀).

Corollary 4. Let the hypotheses of Theorem 2 and Corollary 3 hold. Then: (i) For each x₀ ∈ X₀ there exists a trajectory x(t) emanating from x(t₀) = x₀ with ẋ(t) ∈ F(t, x(t)), and a trajectory y(t) satisfying y(t₀) = y₀ with y₀ ∈ X̄₀ such that ‖x₀ − y₀‖ ≤ ε₀ and ẏ(t) ∈ F̄(t, y(t)), such that |x_i(t) − y_i(t)| ⪯ ξ_i(t, 2μ + ε_u, ε₀) for all t ∈ [t₀, T] and i = 1, …, n, where μ := max_{u∈U} ‖u‖; (ii) For all t ∈ [t₀, T], ρ_R[X̄→(t, X̄₀), X→(t, X₀)] ⪯ ξ(t, 2μ + ε_u, ε₀); (iii) For all t ∈ [t₀, T], X̄→(t, X̄₀) ⊆ (X→(t, X₀)) ⊞ ξ(t, 2μ + ε_u, ε₀).

Proof. (i) This claim follows from Lemma 1, where we have used the following inequality: ‖u(t) − ū(t)‖ ≤ ‖u(t)‖ + ‖ū(t)‖ ≤ μ + μ + d_H(U, Ū) ≤ 2μ + ε_u, which follows from the triangle inequality, as well as the definition of the Hausdorff distance. (ii) This fact follows directly from Lemma 1. (iii) The proof here is similar to that of Theorem 2(iii), where we consider that any point in X̄→ has a counterpart in X→ that is at most distance ξ(t, 2μ + ε_u) away. The proof is immediate from this consideration and Lemma 2. □

Remark 4. In Corollaries 3 and 4, the Hausdorff distance upper bound on the initial set of states does not decrease the quality of the inner approximation with increasing time, similarly to the quantity 2μ + ε_u. In fact, the approximations decrease in tightness with increasing time solely on account of the 'looseness' of the functions g̃, ω̃, l̃ of Lemma 2 that upper-bound the trajectory deviation growth.

ẋ(t) := f(x, δ), with U = [−25°, 25°], and the impaired control set is Ū = [−20°, 20°], hence ε_u = d_H(U, Ū) = 5°. We consider the initial set of states to be a singleton: X₀ = {[0°, 5°/s]ᵀ}. The nominal velocity is taken as V = 5 m/s, and the length of the vessel is L = 45 m. A large area of the actual off-nominal set is lost due to the need for coping with …

¹ Demonstration of the computation of a guaranteed reachable set for the Norrbin ship model under diminished rudder authority: https: …

Fig. 3: Inner approximation of the off-nominal reachable set of the Norrbin model in the case of diminished control authority, as obtained using a ball-based slimming operation. The cross-hatched areas each denote an interval on the i-th Cartesian dimension in which it is guaranteed that there exists at least one state x in the off-nominal reachable set such that x_i = c, where c lies in the interval.
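For context, a common form of Norrbin's nonlinear steering model (see, e.g., [29]), which we assume underlies the heading dynamics used in this example, reads:

```latex
\dot{\psi} = r, \qquad T\,\dot{r} + n_3\, r^3 + n_1\, r = K\,\delta,
```

with ψ the heading, r the yaw rate, δ the rudder angle, and T, K gain and time constants that scale with the speed-to-length ratio V/L; the exact coefficient values used by the authors are not given in this excerpt.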
Fig. 4: Inner approximation of the off-nominal reachable set of the Norrbin model in the case of decreased speed, as obtained using a conservative ball-based slimming operation.

b) Speedup:

Fig. 5: Outer approximation of the off-nominal reachable set of the Norrbin model in the case of increased speed, as obtained using a hyperrectangular fattening.

1) Diminished Control Authority: We consider an off-nominal set of admissible control inputs Ū ⊆ U such that d_H(U, Ū) ≤ ε_u. Let ẋ̃(t) = ẋ(t) − ẋ̄(t) = A x̃(t) + B ũ(t), where x(t) is a solution of the nominal system, and x̄(t) corresponds to the off-nominal system. It is then straightforward to show that a hyperrectangular trajectory deviation bound can be computed as follows:

|ẋ̃_i(t)| ≤ Σ_{j=1}^{i−1} |a_{i,j}| ξ_j(t) + |a_{i,i}| |x̃_i(t)| + Σ_{j=1}^{m} |b_{i,j}| ε_u. (15)

2) Changed Dynamics: Let us now consider the case where f(t) in (14) is replaced by f̄(t), such that |f(t) − f̄(t)| ≤ δ(t). Let ẋ̃(t) = ẋ(t) − ẋ̄(t) = A x̃(t) + B ũ(t) + f(t) − f̄(t), where x(t) is a solution of the nominal system, and x̄(t) corresponds to the off-nominal system. It then suffices to modify (15) by adding the drift term δ_i(t) to the right-hand side, as in (16). We take the admissible set of control inputs to be U = ×_{i=1}^{m} [−1, 1], and the diminished set of control inputs is Ū = ×_{i=1}^{m} [−0.9, 0.9], so that ε_u = d_H(U, Ū) = 0.1. We take n = 5 and N = 8. By applying the bound on the trajectory deviation growth given in (16), we can compute inner approximations of the impaired reachable set as per Theorem 2. We consider here a final time of T = 0.25, and an initial set of states X₀ = {0}. The results are given as length fractions of the projections in Table I.

Definition 5. … (𝒦(ℝⁿ), d_H) is a complete metric space [21, Prop. 1, p. 1007].

Lemma 3 ([21, Thm. 1, p. 1010]). Consider a differential inclusion ẋ(t) ∈ F(t, x(t)), …

Table I. Using the notation of Theorem 2, for each i-th Cartesian dimension, the first column gives the ratio length[π_i(X ⧵ (∂X) ⊞ ξ(T))] ∕ length[π_i(X)], and the second column shows length[π_i(X ⧵ (∂X) + B_{‖ξ(T)‖})] ∕ length[π_i(X)].

TABLE I: Projected length ratios of the lower triangular system example by dimension using hyperrectangular and ball-based slimming operations.

Dim. | Hyperrect. inner-approx. / Off-nominal length | Ball-based inner-approx. / Off-nominal length
1 | 88.3% | 29.8%
2 | 87.7% | 41.7%
3 | 87.0% | 52.8%
4 | 63.8% | 18.8%
5 | 58.0% | 30.3%

TABLE II: Projected length ratios of the interconnected system example by dimension using hyperrectangular and ball-based slimming operations.

Dim. | Hyperrect. inner-approx. / Off-nominal length | Ball-based inner-approx. / Off-nominal length
1 | 61.0% | 34.3%
2 | 95.5% | 89.7%
3 | 67.4% | 0%
4 | 71.7% | 0%
5 | 80.4% | 13.6%

REFERENCES

[1] M. Blanke, M. Kinnaert, J. Lunze, and M. Staroswiecki, Diagnosis and Fault-Tolerant Control. Berlin, Germany: Springer Berlin Heidelberg, 2006.
[2] S. Vaskov, U. Sharma, S. Kousik, M. Johnson-Roberson, and R. Vasudevan, "Guaranteed safe reachability-based trajectory design for a high-fidelity model of an autonomous passenger vehicle," in 2019 American Control Conference, Philadelphia, USA, 2019, pp. 705-710.
[3] S. Coogan, "Mixed monotonicity for reachability and safety in dynamical systems," in 59th IEEE Conference on Decision and Control, Jeju, South Korea, 2020, pp. 5074-5085.
[4] M. Althoff, "Reachability analysis of nonlinear systems using conservative polynomialization and non-convex sets," in 16th International Conference on Hybrid Systems: Computation and Control. Philadelphia, USA: ACM Press, 2013, pp. 173-182.
[5] M. Althoff, G. Frehse, and A. Girard, "Set propagation techniques for reachability analysis," Annual Review of Control, Robotics, and Autonomous Systems, vol. 4, no. 1, pp. 369-395, May 2021.
[6] S. Bansal, M. Chen, S. Herbert, and C. J. Tomlin, "Hamilton-Jacobi reachability: A brief overview and recent advances," in 56th IEEE Conference on Decision and Control. Melbourne, Australia: IEEE, 2017, pp. 2242-2253.
[7] E. Goubault and S. Putot, "Inner and outer reachability for the verification of control systems," in 22nd ACM International Conference on Hybrid Systems: Computation and Control. Montreal, Canada: ACM, 2019, pp. 11-22.
[8] T. Schoels, L. Palmieri, K. O. Arras, and M. Diehl, "An NMPC approach using convex inner approximations for online motion planning with guaranteed collision avoidance," in 2020 IEEE International Conference on Robotics and Automation, Paris, France, 2020, pp. 3574-3580.
[9] S. Kaynama, J. Maidens, M. Oishi, I. M. Mitchell, and G. A. Dumont, "Computing the viability kernel using maximal reachable sets," in 15th ACM International Conference on Hybrid Systems: Computation and Control. Beijing, China: ACM Press, 2012, pp. 55-63.
[10] A. Liniger and J. Lygeros, "Real-time control for autonomous racing based on viability theory," IEEE Transactions on Control Systems Technology, vol. 27, no. 2, pp. 464-478, Mar. 2019.
[11] F. Gruber and M. Althoff, "Computing safe sets of linear sampled-data systems," IEEE Control Systems Letters, vol. 5, no. 2, pp. 385-390, 2021.
Althoff, "Computing safe sets of linear sampled-data systems," IEEE Control Systems Letters, vol. 5, no. 2, pp. 385-390, 2021. Forward inner-approximated reachability of non-linear continuous systems. E Goubault, S Putot, 20th ACM International Conference on Hybrid Systems: Computation and Control. Pittsburgh, USAACME. Goubault and S. Putot, "Forward inner-approximated reachability of non-linear continuous systems," in 20th ACM International Conference on Hybrid Systems: Computation and Control. Pittsburgh, USA: ACM, 2017, pp. 1-10. Estimates of reachable sets of control systems with nonlinearity and parametric perturbations. T F Filippova, Proceedings of the Steklov Institute of Mathematics. the Steklov Institute of Mathematics292T. F. Filippova, "Estimates of reachable sets of control systems with nonlinearity and parametric perturbations," Proceedings of the Steklov Institute of Mathematics, vol. 292, no. S1, pp. 67-75, Apr. 2016. Inner-approximating reachable sets for polynomial systems with time-varying uncertainties. B Xue, M Franzle, N Zhan, IEEE Transactions on Automatic Control. 654B. Xue, M. Franzle, and N. Zhan, "Inner-approximating reachable sets for polynomial systems with time-varying uncertainties," IEEE Transactions on Automatic Control, vol. 65, no. 4, pp. 1468-1483, Apr. 2020. Robust maneuvering envelope estimation based on reachability analysis in an optimal control formulation. T Lombaerts, S Schuet, K Wheeler, D Acosta, J Kaneshige, 2013 Conference on Control and Fault-Tolerant Systems. Nice, FranceIEEET. Lombaerts, S. Schuet, K. Wheeler, D. Acosta, and J. Kaneshige, "Robust maneuvering envelope estimation based on reachability analysis in an optimal control formulation," in 2013 Conference on Control and Fault-Tolerant Systems. Nice, France: IEEE, 2013, pp. 318-323. Online inner approximation of reachable sets of nonlinear systems with diminished control authority. H El-Kebir, M Ornik, 2021 Conference on Control and Its Applications. Philadelphia, PASociety for Industrial and Applied MathematicsH. El-Kebir and M. Ornik, "Online inner approximation of reachable sets of nonlinear systems with diminished control authority," in 2021 Conference on Control and Its Applications. Philadelphia, PA: Society for Industrial and Applied Mathematics, 2021. Investigating impaired aircraft's flight envelope variation predictability using least-squares regression analysis. R Norouzi, A Kosari, M H Sabour, Journal of Aerospace Information Systems. 171R. Norouzi, A. Kosari, and M. H. Sabour, "Investigating impaired aircraft's flight envelope variation predictability using least-squares regression analysis," Journal of Aerospace Information Systems, vol. 17, no. 1, pp. 3-23, Jan. 2020. T Shafa, M Ornik, arXiv:2108.11045Reachability of Nonlinear Systems with Unknown Dynamics. T. Shafa and M. Ornik, "Reachability of Nonlinear Systems with Unknown Dynamics," arXiv:2108.11045, 2021. . J R Munkres, Topology , Prentice HallUpper Saddle River, USA2nd edJ. R. Munkres, Topology, 2nd ed. Upper Saddle River, USA: Prentice Hall, 2000. B G Pachpatte, Inequalities for Differential and Integral Equations. San Diego, USAAcademic PressB. G. Pachpatte, Inequalities for Differential and Integral Equations. San Diego, USA: Academic Press, 1998. Arcwise connectedness of sets of solutions to differential inclusions. V Staicu, Journal of Mathematical Sciences. 1201V. Staicu, "Arcwise connectedness of sets of solutions to differential inclusions," Journal of Mathematical Sciences, vol. 120, no. 1, pp. 
[22] A. A. Tolstonogov and I. A. Finogenko, "On solutions of a differential inclusion with lower semicontinuous nonconvex right-hand side in a Banach space," Mathematics of the USSR-Sbornik, vol. 53, no. 1, pp. 203-231, 1986.
[23] J. Gerlits, I. Juhász, L. Soukup, and Z. Szentmiklóssy, "Characterizing continuity by preserving compactness and connectedness," Topology and its Applications, vol. 138, no. 1-3, pp. 21-44, 2004.
[24] A. E. Taylor and D. C. Lay, Introduction to Functional Analysis, 2nd ed. Malabar, Florida, USA: R. E. Krieger Pub. Co, 1986.
[25] G. B. Folland, Real Analysis: Modern Techniques and Their Applications, 2nd ed., ser. Pure and Applied Mathematics. New York: Wiley, 1999.
[26] Q. J. Zhu, "On the solution set of differential inclusions in Banach space," Journal of Differential Equations, vol. 93, no. 2, pp. 213-237, 1991.
[27] W. A. Sutherland, Introduction to Metric and Topological Spaces, 2nd ed. Oxford, UK: Oxford University Press, 2009.
[28] M. Althoff, D. Grebenyuk, and N. Kochdumper, "Implementation of Taylor models in CORA 2018," in 5th International Workshop on Applied Verification for Continuous and Hybrid Systems, Oxford, UK, 2018, pp. 145-173.
[29] T. Fossen and M. Paulsen, "Adaptive feedback linearization applied to steering of ships," in The First IEEE Conference on Control Applications. Dayton, USA: IEEE, 1992, pp. 1088-1093.
[30] X. Zhang, L. Liu, G. Feng, and C. Zhang, "Output feedback control of large-scale nonlinear time-delay systems in lower triangular form," Automatica, vol. 49, no. 11, pp. 3476-3483, 2013.
[31] D. S. Ruhela and R. N. Jat, "Comparative study of complexity of algorithms for ordinary differential equations," International Journal of Advanced Research in Computer Science & Technology, vol. 2, no. 2, pp. 329-334, 2014.
[32] N. D. Katopodes, "Level set method," in Free-Surface Flow. Elsevier, 2019, pp. 804-828.
[33] M. Saif and Y. Guan, "Decentralized state estimation in large-scale interconnected dynamical systems," Automatica, vol. 28, no. 1, pp. 215-219, 1992.
Guan, "Decentralized state estimation in large-scale interconnected dynamical systems," Automatica, vol. 28, no. 1, pp. 215- 219, 1992. Nonlinear model order reduction based on local reduced-order bases. D Amsallem, M J Zahr, C Farhat, International Journal for Numerical Methods in Engineering. 9210D. Amsallem, M. J. Zahr, and C. Farhat, "Nonlinear model order reduction based on local reduced-order bases," International Journal for Numerical Methods in Engineering, vol. 92, no. 10, pp. 891-916, Dec. 2012. Topological Fixed Point Theory of Multivalued Mappings. L Górniewicz, SpringerDordrecht, The Netherlands; NetherlandsL. Górniewicz, Topological Fixed Point Theory of Multivalued Map- pings. Dordrecht, The Netherlands: Springer Netherlands, 1999.
[]
[ "GENERAL RELATIVITY WITH NONZERO COSMOLOGICAL CONSTANT Λ AS A GAUGE THEORY", "GENERAL RELATIVITY WITH NONZERO COSMOLOGICAL CONSTANT Λ AS A GAUGE THEORY" ]
[ "Marta Dudek [email protected] \nInstitute of Mathematics\nInstitute of Mathematics and Cosmology Group\nUniversity of Szczecin\nWielkopolska 1570-451SzczecinEUPoland\n", "Janusz Garecki [email protected] \nUniversity of Szczecin\nWielkopolska 1570-451SzczecinEUPoland\n" ]
[ "Institute of Mathematics\nInstitute of Mathematics and Cosmology Group\nUniversity of Szczecin\nWielkopolska 1570-451SzczecinEUPoland", "University of Szczecin\nWielkopolska 1570-451SzczecinEUPoland" ]
[]
We show in a new way that the general relativity action (and Lagrangian) in the recent Einstein-Palatini formulation is equivalent in four dimensions to the action (and Lagrangian) of a gauge field. Firstly, we present the Einstein-Palatini (EP) action with cosmological constant Λ ≠ 0 and derive the Einstein field equations from it. Then we consider the action integral in terms of the corrected curvature Ω_cor. We will see that in terms of Ω_cor the EP action takes the form typical for a gauge field. Finally, we give a geometrical interpretation of the corrected curvature Ω_cor. This paper is a continuation of the previous paper [17] and it also gives an amended version of the lecture delivered by one of the authors [M.D.] at the Hypercomplex Seminar 2017 in Będlewo. Keywords: action integral, fiber bundle, connection in a principal fiber bundle and its curvature, pull-back of forms, Lie groups and their algebras.
null
[ "https://arxiv.org/pdf/1802.01371v1.pdf" ]
119,228,377
1802.01371
710e015d4a7c9915ce7305ce43fa882542c3ebe9
GENERAL RELATIVITY WITH NONZERO COSMOLOGICAL CONSTANT Λ AS A GAUGE THEORY

Marta Dudek ([email protected]), Institute of Mathematics, University of Szczecin, Wielkopolska 15, 70-451 Szczecin, Poland
Janusz Garecki ([email protected]), Institute of Mathematics and Cosmology Group, University of Szczecin, Wielkopolska 15, 70-451 Szczecin, Poland

Received 24 July 2017

We show in a new way that the general relativity action (and Lagrangian) in the recent Einstein-Palatini formulation is equivalent in four dimensions to the action (and Lagrangian) of a gauge field. Firstly, we present the Einstein-Palatini (EP) action with cosmological constant Λ ≠ 0 and derive the Einstein field equations from it. Then we consider the action integral in terms of the corrected curvature Ω_cor. We will see that in terms of Ω_cor the EP action takes the form typical for a gauge field. Finally, we give a geometrical interpretation of the corrected curvature Ω_cor. This paper is a continuation of the previous paper [17] and it also gives an amended version of the lecture delivered by one of the authors [M.D.] at the Hypercomplex Seminar 2017 in Będlewo.

Keywords: action integral, fiber bundle, connection in a principal fiber bundle and its curvature, pull-back of forms, Lie groups and their algebras.

1. Einstein-Palatini action for general relativity

The Einstein-Palatini action with cosmological constant Λ ≠ 0 in the new formulation [3] reads

S_EP = (1/4κ) ∫_D (ϑ^i ∧ ϑ^j ∧ Ω^kl + (Λ/6) ϑ^i ∧ ϑ^j ∧ ϑ^k ∧ ϑ^l) η_ijkl, (1)

where Ω is the curvature of the spin connection ω and κ = 8πG/c⁴. All indices take values (0, 1, 2, 3) and D means an established 4-dimensional compact domain in spacetime. ϑ^a denote 1-forms of the Lorentzian coreper in terms of which the spacetime looks locally Minkowskian, i.e., g = η_ik ϑ^i ⊗ ϑ^k, η_ik = diag(1, −1, −1, −1). η_ijkl is the completely antisymmetric Levi-Civita pseudotensor: η_0123 = √|g|, where g := det(g_ik). In a Lorentzian coreper √|g| = 1. The spin connection ω is a general metric connection (or Levi-Civita connection) in the Lorentzian coreper.

For convenience we will write the cosmological constant Λ ≠ 0 in the form Λ = εΛ̄, where Λ̄ > 0 and ε = ±1. In consequence, if ε = 1, then Λ = Λ̄ > 0, and if ε = −1, then Λ = −Λ̄ < 0. In geometrical units G = c = 1 the formula (1) takes the following form in terms of ε and Λ̄ > 0:

S_EP = (1/32π) ∫_D ( η_ijkl ϑ^i ∧ ϑ^j ∧ Ω^kl + ε (Λ̄/6) η_ijkl ϑ^i ∧ ϑ^j ∧ ϑ^k ∧ ϑ^l ). (2)

Adding to the geometric part S_EP the matter action

S_m = ∫_D L_mat(φ_A, Dφ_A, ϑ^i), (3)

where φ_A means a tensor-valued matter form and Dφ_A its absolute exterior derivative, we obtain the full action

S = S_EP + S_m = (1/32π) ∫_D ( η_ijkl ϑ^i ∧ ϑ^j ∧ Ω^kl + ε (Λ̄/6) η_ijkl ϑ^i ∧ ϑ^j ∧ ϑ^k ∧ ϑ^l ) + ∫_D L_mat(φ_A, Dφ_A, ϑ^i). (4)

After some calculations one gets that the variation δS = δS_EP + δS_m with respect to ϑ^i, ω^i_j and φ_A reads

δS = ∫_D { (1/8π) δϑ^i ∧ [ (1/2) Ω^kl ∧ η_kli + εΛ̄ η_i + 8π t_i ] + (1/2) δω^i_j ∧ [ (1/8π) Dη^j_i + s^j_i ] + δφ_A ∧ L^A } + an exact form. (5)

The three-forms, energy-momentum t_i, classical spin s^j_i and L^A, are defined by the following form of the variation δL_m:

δL_m = δϑ^i ∧ t_i + (1/2) δω^i_j ∧ s^j_i + δφ_A ∧ L^A + an exact form. (6)

η_kli, η^j_i, η_i mean the forms introduced in the past by A. Trautman [11]. The variations δϑ^i, δω^i_j and δφ_A vanish on the boundary ∂D of the compact domain D. Einstein's equations, like all the other physical field equations, arise from a variational principle, which is called the Principle of Stationary Action or Hamilton's Principle.
In our case it has the following form:

δS = 0. (7)

It leads us to the following sets of field equations:

(1/2) Ω^kl ∧ η_kli + εΛ̄ η_i = −8π t_i, (8)
Dη^j_i = −8π s^j_i, (9)

and

L^A = 0. (10)

L^A = 0 represent the equations of motion for the matter field. These equations are not essential for our further considerations, so we will omit them. We are interested only in the gravitational field equations, which are given by the equations (8)-(9). In vacuum, where t_i = s^j_i = 0, also Dη^j_i = 0 and we get the standard vacuum Einstein equations with cosmological constant Λ = εΛ̄,

(1/2) Ω^kl ∧ η_kli ± Λ̄ η_i = 0, (11)

and pseudo-Riemannian geometry. In general, we have the Einstein-Cartan equations and Riemann-Cartan geometry (a metric geometry with torsion, see e.g. [11]). The standard GR is obtained also if δL_m/δω^i_k = 0 ⟹ s^k_i = 0 ⟹ Dη^k_i = 0, i.e., if we confine ourselves to spinless matter. Namely, one has in this case the following gravitational equations:

(1/2) Ω^kl ∧ η_kli + εΛ̄ η_i = −8π t_i. (12)

One can show that (1/2) Ω^kl ∧ η_kli = −G^s_i η_s, where the Einstein tensor G^s_i is defined as follows:

G^s_i = R^s_i − (1/2) δ^s_i R. (13)

Putting t_i = T^s_i η_s we get from (12)

−G^s_i η_s + εΛ̄ δ^s_i η_s = −8π T^s_i η_s, (14)

or

G^s_i − εΛ̄ δ^s_i = 8π T^s_i. (15)

(15) are the standard Einstein equations with cosmological constant Λ in tensorial notation with a symmetric matter tensor: T_ik = T_ki.

2. Einstein-Palatini action integral for General Relativity in vacuum and with nonzero cosmological constant Λ as an action integral for a gauge field

Now, getting back to the Einstein-Palatini action in vacuum,

S_EP = (1/4κ) ∫_D (ϑ^i ∧ ϑ^j ∧ Ω^kl + (Λ/6) ϑ^i ∧ ϑ^j ∧ ϑ^k ∧ ϑ^l) η_ijkl = (1/4κ) ∫_D ( ϑ^i ∧ ϑ^j ∧ Ω^kl η_ijkl + (Λ/6) ϑ^i ∧ ϑ^j ∧ ϑ^k ∧ ϑ^l η_ijkl ), (16)

and defining the duality operator [1] ⋆ acting on antisymmetric index pairs by

(⋆X)_ij := −(1/2) η_ijkl X^kl ⟹ η_ijkl X^kl = −2 (⋆X)_ij, (17)

one gets

η_ijkl Ω^kl = −2 ⋆Ω_ij, (18)
η_ijkl ϑ^k ∧ ϑ^l = −2 ⋆(ϑ_i ∧ ϑ_j). (19)

Thus the Einstein-Palatini action takes the following form:

S_EP = −(1/2κ) ∫_D ( ϑ^i ∧ ϑ^j ∧ ⋆Ω_ij + (Λ/6) ϑ^i ∧ ϑ^j ∧ ⋆(ϑ_i ∧ ϑ_j) ) = −(1/2κ) ∫_D tr( ϑ ∧ ϑ ∧ ⋆Ω + (Λ/6) ϑ ∧ ϑ ∧ ⋆(ϑ ∧ ϑ) ). (20)

Let us introduce the corrected curvature Ω_cor:

Ω_cor := Ω + (Λ/3) ϑ ∧ ϑ ⟹ ϑ ∧ ϑ = −(3/Λ) (Ω − Ω_cor). (21)

Substituting the last formula into the Einstein-Palatini action we get

S_EP = −(1/2κ) ∫_D tr( ϑ ∧ ϑ ∧ ⋆Ω + (Λ/6) ϑ ∧ ϑ ∧ ⋆(ϑ ∧ ϑ) )
= (1/2κ) ∫_D tr( (3/Λ) (Ω − Ω_cor) ∧ ⋆Ω − (Λ/6)(9/Λ²) (Ω − Ω_cor) ∧ ⋆(Ω − Ω_cor) )
= (3/4Λκ) ∫_D tr( 2 (Ω − Ω_cor) ∧ ⋆Ω − (Ω − Ω_cor) ∧ ⋆(Ω − Ω_cor) )
= (3/4Λκ) ∫_D tr( 2 Ω ∧ ⋆Ω − 2 Ω_cor ∧ ⋆Ω − Ω ∧ ⋆Ω + Ω_cor ∧ ⋆Ω + Ω ∧ ⋆Ω_cor − Ω_cor ∧ ⋆Ω_cor )
= (3/4Λκ) ∫_D tr( Ω ∧ ⋆Ω − Ω_cor ∧ ⋆Ω + Ω ∧ ⋆Ω_cor − Ω_cor ∧ ⋆Ω_cor ). (22)

Because −Ω_cor ∧ ⋆Ω + Ω ∧ ⋆Ω_cor cancels under the trace, we finally have

S_EP = (3/4Λκ) ∫_D tr( Ω ∧ ⋆Ω − Ω_cor ∧ ⋆Ω_cor ). (23)

The expression tr(Ω ∧ ⋆Ω) ∝ η_ijkl Ω^ij ∧ Ω^kl is in four dimensions a topological invariant called the Euler form, which does not influence the equations of motion [12]. Hence, in four dimensions the Einstein-Palatini action is equivalent to

S_EP = −(3ε/4Λ̄κ) ∫_D tr( Ω_cor ∧ ⋆Ω_cor ), (24)

where ε = ±1. We see that the Einstein-Palatini action in four dimensions is effectively a functional quadratic in the corrected Riemannian curvature, i.e., it has the form of the action for a gauge field. The only difference is that in (24) we have the star operator ⋆, which is different from the Hodge star operator. Namely, our star operator acts on "interior" indices (tetrad indices), not on forms, as the Hodge duality operator does [2, 12].
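As a quick consistency check (using the conventions implied by (21), with ε = 1), the corrected curvature vanishes on the maximally symmetric vacuum:

```latex
\Omega^{ab}_{\mathrm{dS}} = -\frac{\bar{\Lambda}}{3}\, \vartheta^{a} \wedge \vartheta^{b}
\quad \Longrightarrow \quad
\Omega^{ab}_{\mathrm{cor}} = \Omega^{ab}_{\mathrm{dS}} + \frac{\bar{\Lambda}}{3}\, \vartheta^{a} \wedge \vartheta^{b} = 0 .
```

The sign of the de Sitter curvature two-form here is fixed so as to agree with the statement, made just below, that Ω_cor = 0 on the de Sitter solution.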
It is interesting that Ω_cor = 0 for the de Sitter spacetime, which is the fundamental vacuum solution to the Einstein equations (8) if ε = 1, and that Ω_cor = 0 for the AdS spacetime if ε = −1. The AdS spacetime is the fundamental solution of the equations (8) if Λ = εΛ̄ < 0.

We would like to emphasize that in the case Λ = εΛ̄ = 0 ⟺ R = ∞ the above trick with Ω_cor breaks down. Namely, we have in this case Ω_cor = Ω. This result formally trivializes the S_EP action (see formula (22)) to the strange form S_EP = 0 and has no physical meaning. The case Λ = εΛ̄ < 0 needs the introduction of the anti-de Sitter spacetime (AdS) and its isometry group SO(3,2) (see Section 3). The anti-de Sitter spacetime has very strange causal properties (see e.g. [18]). In consequence, it seems that the physical meaning of the case Λ = εΛ̄ < 0 is problematic.

3. Geometrical interpretation of the corrected curvature Ω_cor

We begin with ε = 1, i.e., with Λ = εΛ̄ = Λ̄ > 0. This is the de Sitter case, because for Λ > 0 the Einstein equations (11) admit the de Sitter spacetime as the fundamental solution (see, e.g., [18]). This spacetime is realized as the hyperboloid

(χ⁰)² − (χ¹)² − (χ²)² − (χ³)² − (χ⁴)² = −R², (25)

with radius R, in the 5-dimensional pseudo-Euclidean spacetime M(4,1) which possesses the metric η_AB = diag(1, −1, −1, −1, −1) [18]. Let P(M⁴, GdS) denote the principal bundle of de Sitter bases over a manifold M⁴ (spacetime) with the de Sitter group (GdS) [5, 13] as structure group. The de Sitter group is isomorphic to the group SO(4,1), which acts on the spacetime M(4,1) as the rotation group. Let ω be the 1-form of a connection in the principal fibre bundle P(M⁴, GdS). The form ω has values in the algebra g of the group GdS. The algebra g is also the algebra of the group SO(4,1). This algebra splits (as a vector space) into the direct sum

g = so(3,1) ⊕ R^(3,1). (26)

Here so(3,1) denotes the algebra of the group SO(3,1), isomorphic to the Lorentz group L, and R^(3,1) is a 4-dimensional vector space of generalized translations (translations in the curved de Sitter spacetime). One can identify the de Sitter spacetime with the quotient SO(4,1)/SO(3,1). Let us define so(3,1) =: h, R^(3,1) =: p. Then we have [1, 2]

g = h ⊕ p, (27)

and

[h, h] ⊂ h, [h, p] ⊂ p, [p, p] ⊂ h. (28)

This means that the Lie algebra g is a symmetric Lie algebra [1, 2]. On the other hand, the spaces which satisfy (27)-(28) are called globally symmetric Riemannian spaces [13].

Let P(M⁴, L) denote the principal bundle of Lorentz bases over the manifold M⁴. There exists a morphism of principal bundles

f : P(M⁴, L) → P(M⁴, GdS), (29)

analogical to the morphism of the bundle of linear frames and the bundle of affine frames [4]. This morphism is created by the embedding of the SO(3,1) group into SO(4,1). It creates the pull-back f*ω of the form ω onto the bundle P(M⁴, L). Here ω is the connection 1-form in the bundle P(M⁴, GdS). Let us denote this pull-back by A. A is a 1-form on P(M⁴, L) with values in the direct sum [4]

so(3,1) ⊕ R^(3,1). (30)

Hence, we have a natural decomposition [2, 3, 4, 13]

A = f*ω = ω + θ/R, (31)

where ω is a 1-form on P(M⁴, L) with values in the algebra so(3,1), and θ is a 1-form on P(M⁴, L) with values in R^(3,1). ω is a connection on the bundle P(M⁴, L). R is the radius of the de Sitter spacetime (see e.g. [18]). On the base M⁴ the 1-form θ can be identified with the 1-form ϑ already used in this paper: θ = ϑ. In the following we will work on the base space M⁴ and write (31) in the form A = ω + ϑ/R.

Let us compute the curvature 2-form Ω of the pulled-back A, where

A^A_B: A^a_b = ω^a_b (A, B = 0, 1, 2, 3, 4), A^a_4 = (1/R) ϑ^a, A^4_a = (1/R) ϑ_a (a, b = 0, 1, 2, 3), (32)

and A_AB = −A_BA.
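To make the splitting (27)-(28) concrete, the generalized translations can be realized by rescaled rotation generators of so(4,1) (a standard construction; the commutator conventions below are ours and may differ from the authors' by signs):

```latex
P_a := \frac{1}{R}\, J_{a4}, \qquad
[P_a,\, P_b] = \frac{\eta_{44}}{R^{2}}\, J_{ab} = -\frac{1}{R^{2}}\, J_{ab} \in \mathfrak{so}(3,1),
\qquad \frac{1}{R^{2}} = \frac{\bar{\Lambda}}{3} .
```

This exhibits [p, p] ⊂ h explicitly, with the strength of the translation commutator set by the cosmological constant, which is the algebraic origin of the Λ/3 term in Ω_cor.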
From the definition we have

Ω^A_B = dA^A_B + A^A_K ∧ A^K_B. (33)

Hence

Ω^a_b = dA^a_b + A^a_k ∧ A^k_b = dω^a_b + A^a_d ∧ A^d_b + A^a_4 ∧ A^4_b = Ω_ω{}^a_b + (1/R) ϑ^a ∧ (1/R) ϑ_b = Ω_ω{}^a_b + (1/R²) ϑ^a ∧ ϑ_b, (34)

because in this case A^4_b = (1/R) ϑ_b, and

Ω^i_4 = dA^i_4 + A^i_k ∧ A^k_4 = (1/R) dϑ^i + A^i_b ∧ A^b_4 = (1/R) (dϑ^i + ω^i_b ∧ ϑ^b) = (1/R) D_ω ϑ^i = (1/R) Θ^i_ω. (35)

In the last formula we have used the usual antisymmetry of the connection form,

A_BC = −A_CB → A^4_4 = A_44 = 0. (36)

(Indices A, B, C, … are raised and lowered with the pseudo-Euclidean metric η_AB = η^AB = diag(1, −1, −1, −1, −1), and the indices a, b, c, … are raised and lowered with the metric η_ab = η^ab = diag(1, −1, −1, −1).) So, we have obtained the final result

Ω_AB: Ω^a_b = Ω_ω{}^a_b + (1/R²) ϑ^a ∧ ϑ_b = Ω_ω{}^a_b + (Λ/3) ϑ^a ∧ ϑ_b, Ω^i_4 = (1/R) D_ω ϑ^i = (1/R) Θ^i_ω. (37)

The cosmological constant is Λ = 3/R² > 0 and Θ means the torsion 2-form of the connection ω. In Section 2 we gave the definition of the corrected curvature Ω_cor for the case Λ = εΛ̄ > 0 as follows:

Ω_cor := Ω + (Λ/3) ϑ ∧ ϑ. (38)

As one can see, this curvature is the so(3,1) part of the curvature Ω of the connection A = f*ω = ω + ϑ/R. If the torsion Θ of the connection ω is 0, then Ω_cor equals the full curvature of A.

Let us now consider the case ε = −1. Then Λ = εΛ̄ = −Λ̄ < 0. From the beginning we must remember that this case seems to have less physical meaning than the case Λ > 0. In the case Λ < 0 we have to take into account the principal bundle of anti-de Sitter bases over the spacetime manifold M⁴. This principal bundle we will denote P(M⁴, AdS), where AdS means the anti-de Sitter group. One can identify this group with the rotation group SO(3,2) in the pseudo-Euclidean 5-dimensional spacetime M(3,2) with the metric G_AB = G^AB = diag(1, −1, −1, −1, 1). On the other hand, the anti-de Sitter group is the isometry group of the anti-de Sitter spacetime (see e.g. [18]). The AdS spacetime is the fundamental solution to the Einstein equations (11) if ε = −1, i.e., if Λ = −Λ̄ < 0. This solution can be realized as the 4-dimensional hyperboloid

(χ⁰)² − (χ¹)² − (χ²)² − (χ³)² + (χ⁴)² = R², (39)

with imaginary radius iR, immersed in the 5-dimensional spacetime M(3,2) with metric G_AB = diag(1, −1, −1, −1, 1) (see e.g. [3, 5]). Let ω be the 1-form of a connection in the principal bundle P(M⁴, AdS). The form ω has values in the algebra g of the group SO(3,2). For the algebra g the formulas (27), (28) are correct. Let us consider a morphism

f : P(M⁴, L) → P(M⁴, AdS), (40)

generated by the embedding of the Lorentz group L into the SO(3,2) group. This morphism creates the pull-back f*ω of the form ω onto the bundle P(M⁴, L). Let us denote this pull-back by A. A is a 1-form on P(M⁴, L) with values in the direct sum

so(3,1) ⊕ R^(3,1) = g. (41)

Hence we have a natural decomposition [analogical to (31)]:

A = f*ω = ω + θ/R. (42)

Here ω determines a metric connection on the bundle P(M⁴, L), and θ is a 1-form on P(M⁴, L) with values in the space R^(3,1) of generalized translations in the anti-de Sitter spacetime. [θ is analogical to the soldering form on the bundle of linear frames P(M⁴, GL).] R means the radius of the AdS spacetime. In this case one has Λ = εΛ̄ = −Λ̄ = −3/R². In the following we once more confine ourselves to the base manifold M⁴ (= spacetime). Then, as in the case Λ > 0, θ = ϑ and

A = ω + ϑ/R. (43)

Let us calculate the curvature 2-form Ω of the pulled-back A. Starting with

A^A_B: A^a_b = ω^a_b (A, B = 0, 1, 2, 3, 4), A^a_4 = (1/R) ϑ^a, A^4_a = −(1/R) ϑ_a (a, b = 0, 1, 2, 3), (44)

and A_AB = −A_BA, we obtain, after calculations analogical to those performed in the case Λ = Λ̄ > 0 [now A, B, C, … are raised and lowered with the metric G_AB = G^AB = diag(1, −1, −1, −1, 1)]:

Ω_AB: Ω^a_b = Ω_ω{}^a_b − (1/R²) ϑ^a ∧ ϑ_b, Ω^i_4 = (1/R) Θ^i_ω, (45)

where Λ̄ = 3/R² > 0. Here Ω_ω is the curvature of the connection ω and Θ_ω is its torsion. We see that in the case Λ = εΛ̄ < 0 the so(3,1) part of the curvature Ω is equal to

Ω_ω{}^a_b − (1/R²) ϑ^a ∧ ϑ_b = Ω_ω{}^a_b − (Λ̄/3) ϑ^a ∧ ϑ_b, (46)

i.e., it is equal to Ω_cor given by (21) with ε = −1. By using this Ω_cor one can easily obtain the form (24) (with ε = −1) for the Einstein-Palatini action (16) with ε = −1.
One can write the obtained results for Λ = εΛ̄ ≠ 0, Λ̄ > 0, ε = ±1 in the common form:

A^A_B: A^a_b = ω^a_b, A^a_4 = (1/R) ϑ^a, A^4_a = (ε/R) ϑ_a, (47)

Ω_AB: Ω^a_b = Ω_ω{}^a_b + ε(Λ̄/3) ϑ^a ∧ ϑ_b, Ω^i_4 = (1/R) Θ^i, Ω^4_a = (ε/R) Θ_a, (48)

where

ε = 1 for Λ > 0, ε = −1 for Λ < 0. (49)

In Section 2 we gave the definition of the corrected curvature Ω_cor as follows:

Ω_cor := Ω + ε(Λ̄/3) ϑ ∧ ϑ, Λ̄ > 0. (50)

One can see that this curvature is the curvature of the connection pulled back from the bundles P(M⁴, SO(4,1)) or P(M⁴, SO(3,2)) onto the bundle P(M⁴, L) if Θ = 0. If Θ ≠ 0, then Ω_cor is the so(3,1)-part of this curvature.

Conclusion

In this article we have shown that in four dimensions the action integral for GR with Λ ≠ 0 can be written in a form very similar to the form of the action integral for a typical gauge field. There is only one difference: the star. Instead of the Hodge star, we have a slightly different star called the duality operator [2, 12]. Our result is important because it shows that there is no need to generalize GR and construct very complicated gravitational theories in order to obtain a gravitational theory as a gauge theory. Ordinary GR, formulated in terms of tetrads and spin connection with cosmological constant Λ ≠ 0, is already a gauge theory. The gauge group of this theory is the Lorentz group SO(3,1) or its double covering SL(2,C). The above facts are very interesting in connection with the universality of the Einstein theory (alternative theories are not necessary) [15, 16] and in connection with attempts at quantizing this theory (a gauge field can be successfully quantized). Some scientists [1, 2, 3] were concerned with this problem and they came to similar conclusions as ours, but they applied in their works Cartan's approach to the connection in a principal bundle [2, 13, 14]. This approach is not well known among geometers and relativists. We have used only the standard theory of connections in a principal bundle, which was created by Ehresmann, Cartan's student [4, 8]. His approach is commonly used in differential geometry and in relativity.

Appendix 1: η forms and operations with them [11]

Following [11] we define

η_ijkl = √|g| ε_ijkl, (A.1)

where ε_ijkl is the Levi-Civita pseudotensor with the properties

ε_ijkl = 1 if the sequence of indices ijkl is an even permutation of the sequence 0, 1, 2, 3; −1 if it is an odd permutation; 0 otherwise, (A.2)

and we take η_0123 = √|g|. In a Lorentzian coreper √|g| = 1. One has [11]

η_ijk = ϑ^l η_ijkl, (A.3)
η_ij = (1/2) ϑ^k ∧ η_ijk, (A.4)
η_i = (1/3) ϑ^j ∧ η_ij, (A.5)
η = (1/4) ϑ^i ∧ η_i, (A.6)
ϑ^n ∧ η_kli = δ^n_i η_kl + δ^n_l η_ik + δ^n_k η_li, (A.7)
ϑ^m ∧ η_kl = δ^m_l η_k − δ^m_k η_l, (A.8)
ϑ^j ∧ η_i = δ^j_i η. (A.9)

The forms η, η_i, η_ij, η_ijk are Hodge dual to the forms 1, ϑ^i, ϑ^i ∧ ϑ^j, ϑ^i ∧ ϑ^j ∧ ϑ^k, respectively [11].
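As a small worked example of how these identities interlock, (A.9) follows from (A.5), (A.6) and (A.8):

```latex
\vartheta^{m} \wedge \eta_{i}
= \tfrac{1}{3}\, \vartheta^{m} \wedge \vartheta^{j} \wedge \eta_{ij}
= -\tfrac{1}{3}\, \vartheta^{j} \wedge \left( \delta^{m}_{j}\, \eta_{i} - \delta^{m}_{i}\, \eta_{j} \right)
= -\tfrac{1}{3}\, \vartheta^{m} \wedge \eta_{i} + \tfrac{4}{3}\, \delta^{m}_{i}\, \eta ,
```

where (A.6) gives ϑ^j ∧ η_j = 4η. Collecting the ϑ^m ∧ η_i terms yields (4/3) ϑ^m ∧ η_i = (4/3) δ^m_i η, i.e., exactly (A.9).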
Acknowledgements. The authors would like to thank Prof. J. Ławrynowicz for the possibility of delivering a lecture at the Hypercomplex Seminar 2017.

References

[1] D. K. Wise, "MacDowell-Mansouri gravity and Cartan geometry," CQG 27 (2010) 155010 (arXiv:gr-qc/0611154v2, 15 May 2009).
[2] D. K. Wise, "Symmetric space Cartan connections and gravity in three and four dimensions," arXiv:0904.1738v2 [math.DG], 3 August 2009.
[3] A. Randono, "Gauge gravity: a forward-looking introduction," arXiv:1010.5822v1 [gr-qc], 27 October 2010.
[4] S. Kobayashi, K. Nomizu, Foundations of Differential Geometry, Vol. 1 and Vol. 2, Interscience Publishers, a division of John Wiley and Sons, New York, London 1963.
[5] F. Gürsey, "Introduction to group theory," an article in Groups and Topology in Relativity, C. DeWitt and B. DeWitt (editors), Gordon and Breach, London 1964.
[6] A. Dubničkova, Topological Groups for Physicists, Dubna 1987 (in Russian).
[7] J. Mozrzymas, Applications of Group Theory in Modern Physics, National Scientific Publishers PWN, Wrocław 1967 (in Polish).
[8] J. Gancarzewicz, Foundations of Modern Differential Geometry, SCRIPT, Warsaw 2010 (in Polish).
[9] R. Sulanke, P. Wintgen, Differentialgeometrie und Faserbündel, VEB Deutscher Verlag der Wissenschaften, Berlin 1972.
[10] W. Kopczyński, A. Trautman, Spacetime and Gravitation, National Scientific Publishers PWN, Warszawa 1984 (in Polish; an English translation exists).
[11] A. Trautman, "Einstein-Cartan theory," Symposia Mathematica 12 (1973) 139.
[12] K. Hayashi, T. Shirafuji, "Gravity from Poincaré gauge theory of the fundamental particles. Part V," Progress of Theoretical Physics 65 (1981) 525.
[13] W. Drechsler, M. E. Mayer, Fiber Bundle Techniques in Gauge Theories, Lecture Notes in Physics Vol. 67, Springer-Verlag, Berlin, Heidelberg, New York 1977.
[14] R. W. Sharpe, Differential Geometry: Cartan's Generalization of Klein's Erlangen Program, Springer-Verlag, New York, Berlin, Heidelberg 2000.
[15] J. Kijowski, International Journal of Geometric Methods in Modern Physics 13 (2016) 1640008.
[16] M. A. Schweizer, Gauge Theory and Gravitation, PhD thesis, Zurich 1980.
[17] M. Dudek, J. Garecki, "General relativity with cosmological constant Λ > 0 as a gauge theory," submitted to International Journal of Geometric Methods in Modern Physics.
[18] L. M. Sokołowski, Foundations of Tensor Analysis, Wydawnictwo Uniwersytetu Warszawskiego 2010.
[]
[ "MODEL INDEPENDENT RESULTS FOR HEAVY QUARKONIUM", "MODEL INDEPENDENT RESULTS FOR HEAVY QUARKONIUM" ]
[ "Joan Soto [email protected] \nDepartament d'Estructura i Constituents de la Matèria\nUniversitat de Barcelona Diagonal 647E-08028BarcelonaCataloniaSpain\n" ]
[ "Departament d'Estructura i Constituents de la Matèria\nUniversitat de Barcelona Diagonal 647E-08028BarcelonaCataloniaSpain" ]
[]
We review a number of results for the spectrum and inclusive decays of heavy quarkonium systems which can be derived from QCD under well controlled approximations. They essentially follow from the hierarchy of scales in these systems, which can be efficiently exploited using non-relativistic effective field theories. In particular, we discuss under which conditions non-relativistic potential models emerge as effective theories of QCD.
10.1142/s0217732304014690
[ "https://export.arxiv.org/pdf/hep-ph/0406104v1.pdf" ]
15,781,154
hep-ph/0406104
b10a01ced95cbb7e4cac8569bab98a4b7df7709a
MODEL INDEPENDENT RESULTS FOR HEAVY QUARKONIUM

9 Jun 2004

Joan Soto ([email protected])
Departament d'Estructura i Constituents de la Matèria, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona, Catalonia, Spain

Keywords: Heavy Quarkonium; Effective Field Theories; Non-Relativistic QCD
PACS Nos: 14.40.Gx, 12.38.Bx, 12.38.Cy, 12.38.Lg

We review a number of results for the spectrum and inclusive decays of heavy quarkonium systems which can be derived from QCD under well-controlled approximations. They essentially follow from the hierarchy of scales in these systems, which can be efficiently exploited using non-relativistic effective field theories. In particular, we discuss under which conditions non-relativistic potential models emerge as effective theories of QCD.

Introduction

Heavy quarkonium systems have played a prominent role in our current understanding of the Standard Model. Indeed, both the charm and bottom quantum numbers were discovered through heavy quarkonium systems, J/ψ,¹ the lightest vector charmonium, and Υ(1S),² the lightest vector bottomonium, respectively. They have also been important for our understanding of QCD, the sector of the Standard Model which concerns strong interactions. Indeed, since the heavy quark masses m are larger than Λ_QCD, the typical hadronic scale, two important properties of QCD play a role in these systems, asymptotic freedom and confinement. On the one hand, asymptotic freedom explains the narrow width of the lower-lying states.³ On the other hand, they are the closest objects in nature to two ideal static color sources, whose energy behavior at large distances serves as an order parameter for confinement (in the absence of light quarks).⁴

It was soon realized that, due to asymptotic freedom, for sufficiently heavy quark masses heavy quarkonium systems should be similar to positronium and amenable to a weak coupling analysis.³ Unfortunately, the actual charm and bottom masses turned out not to be heavy enough to allow the observed spectrum to be explained in the weak coupling regime.⁵ However, they appeared to be heavy enough to allow for a good phenomenological description of the spectrum by means of simple non-relativistic potential models (see Ref. 6 for a review). What to take as the potential was, and still is, the main input of such models. The question then arose whether such a potential could in principle be obtained from QCD if reliable non-perturbative techniques were at hand. Formulas were produced for it in terms of expectation values of Wilson loops (to be evaluated non-perturbatively) in a 1/m expansion up to order 1/m², including spin-dependent and velocity-dependent terms.⁴,⁷ However, when one-loop results for the potential became available from direct QCD perturbative calculations,⁸ it was realized that some of them were not correctly reproduced by the perturbative evaluation of the Wilson loop formulas. Inclusive heavy quarkonium decays to light particles were calculated using a factorization hypothesis. Namely, the short distance annihilation process was computed in QCD at the quark level, and the long distance (non-perturbative) effects were taken into account by the wave function (or derivatives of it) at the origin, which was calculated using potential models or dropped from suitable ratios. However, again, when one-loop results became available it was noticed that IR divergences appeared in some of the short distance calculations.⁹
The understanding of the IR divergences became possible due to the introduction of Non-Relativistic QCD (NRQCD).¹⁰ It was shown that color octet operators, which are absent in potential models, were necessary to cancel the above IR divergences, and hence the factorization hypotheses used so far were wrong.¹¹ The long distance part needs not only wave functions at the origin but also matrix elements of color octet operators, which are not computable in terms of potential models. Thus the lesson seemed to be that potential models cannot incorporate all relevant features of QCD for heavy quarkonium systems. However, an indication appeared that this may not necessarily be the case. If one recalculates the formulas for the QCD potential in terms of Wilson loops from NRQCD instead of directly from QCD, the discrepancies with the direct QCD calculation mentioned above disappear, provided the matching coefficients of NRQCD are calculated at one loop.¹²

One of the aims of this brief review is to illustrate that suitable potential models can indeed be regarded as effective theories of QCD, and hence totally equivalent to it, in a very particular kinematical regime, and that, as such, NRQCD color octet operators have a precise representation in them. This produces a number of model independent results for the inclusive decay widths to light particles and for the NRQCD matrix elements. The second aim is to illustrate that in the weak coupling regime, which corresponds to a different kinematical situation, potential models are not an effective theory of QCD. This regime is well understood and a number of higher order calculations are available.

Before entering the issues above, let us mention that heavy quarkonium physics is experiencing a revival. Recently, new states have been discovered and new processes have been measured, some compatible with theoretical expectations,¹³ others not,¹⁴ which is triggering theoretical research. We refer the reader to Ref. 15 for updates on the current status of the field, to Ref. 16 for an extensive theoretical review, and to Ref. 17 for a recent experimental account.

Heavy quarkonium as a non-relativistic system

A system is called non-relativistic if, in the center of mass frame, the typical three-momentum of a particle p is much smaller than its mass m. This implies that the non-relativistic energy E := √(m² + p²) − m ∼ p²/m is much smaller than p. Hence a hierarchy of scales exists, m ≫ p ≫ E, which may be exploited in order to simplify calculations. In addition, other scales may also be important depending on the particular non-relativistic system. For heavy quarkonium, Λ_QCD is also important. In fact, it already enters the definition of a heavy quark, namely a quark whose mass fulfills m ≫ Λ_QCD. Such a definition, together with asymptotic freedom, which implies α_s(m) ≪ 1, suggests that heavy quarkonium, namely a heavy quark and a heavy antiquark (not necessarily of the same flavor), is indeed a non-relativistic system. Rather than exploiting the inequalities m ≫ p ≫ E and m ≫ Λ_QCD in every individual calculation, it is more convenient to build effective field theories (EFTs), which implement them at the Lagrangian level. This is the approach we will follow.
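For orientation (a standard estimate from the general literature, not a quotation from this paper): for a Coulombic bound state the virial theorem ties the scales together as

```latex
p \sim m\, v, \qquad E \sim \frac{p^{2}}{m} \sim m\, v^{2}, \qquad
v \sim \alpha_{\mathrm{s}}(m v) \ll 1 ,
```

so that the hierarchy m ≫ p ≫ E holds parametrically in the weak coupling regime, with each scale separated from the next by one power of the small velocity v.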
Non-Relativistic QCD The Non-Relativistic QCD (NRQCD) Lagrangian has the following aspect:^{10}

$$
\begin{aligned}
\mathcal{L}_{\rm NRQCD} ={}& \psi^\dagger \Big( iD_0 + \frac{\mathbf{D}^2}{2m} + c_F \frac{g\,\boldsymbol{\sigma}\cdot\mathbf{B}}{2m} + c_D \frac{g\,[\mathbf{D}\cdot,\mathbf{E}]}{8m^2} + i c_S \frac{g\,\boldsymbol{\sigma}\cdot[\mathbf{D}\times,\mathbf{E}]}{8m^2} + \cdots \Big)\psi \\
&+ \chi^\dagger \Big( iD_0 - \frac{\mathbf{D}^2}{2m} - c_F \frac{g\,\boldsymbol{\sigma}\cdot\mathbf{B}}{2m} + c_D \frac{g\,[\mathbf{D}\cdot,\mathbf{E}]}{8m^2} + i c_S \frac{g\,\boldsymbol{\sigma}\cdot[\mathbf{D}\times,\mathbf{E}]}{8m^2} + \cdots \Big)\chi \\
&+ \frac{f_1({}^1S_0)}{m^2} O_1({}^1S_0) + \frac{f_1({}^3S_1)}{m^2} O_1({}^3S_1) + \frac{f_8({}^1S_0)}{m^2} O_8({}^1S_0) + \frac{f_8({}^3S_1)}{m^2} O_8({}^3S_1) \\
&+ \frac{f_1({}^1P_1)}{m^4} O_1({}^1P_1) + \frac{f_1({}^3P_0)}{m^4} O_1({}^3P_0) + \frac{f_1({}^3P_1)}{m^4} O_1({}^3P_1) + \frac{f_1({}^3P_2)}{m^4} O_1({}^3P_2) + \cdots
\end{aligned}
\tag{1}
$$

where

$$
\begin{aligned}
O_1({}^1S_0) &= \psi^\dagger\chi\,\chi^\dagger\psi\,, &
O_1({}^3S_1) &= \psi^\dagger\boldsymbol{\sigma}\chi\cdot\chi^\dagger\boldsymbol{\sigma}\psi\,,\\
O_8({}^1S_0) &= \psi^\dagger T^a\chi\,\chi^\dagger T^a\psi\,, &
O_8({}^3S_1) &= \psi^\dagger T^a\boldsymbol{\sigma}\chi\cdot\chi^\dagger T^a\boldsymbol{\sigma}\psi\,,\\
O_1({}^1P_1) &= \psi^\dagger\big(-\tfrac{i}{2}\overleftrightarrow{\mathbf{D}}\big)\chi \cdot \chi^\dagger\big(-\tfrac{i}{2}\overleftrightarrow{\mathbf{D}}\big)\psi\,,\span\span
\end{aligned}
\tag{2}
$$

$$
\begin{aligned}
O_1({}^3P_0) &= \tfrac{1}{3}\,\psi^\dagger\big(-\tfrac{i}{2}\overleftrightarrow{\mathbf{D}}\cdot\boldsymbol{\sigma}\big)\chi\; \chi^\dagger\big(-\tfrac{i}{2}\overleftrightarrow{\mathbf{D}}\cdot\boldsymbol{\sigma}\big)\psi\,,\\
O_1({}^3P_1) &= \tfrac{1}{2}\,\psi^\dagger\big(-\tfrac{i}{2}\overleftrightarrow{\mathbf{D}}\times\boldsymbol{\sigma}\big)\chi \cdot \chi^\dagger\big(-\tfrac{i}{2}\overleftrightarrow{\mathbf{D}}\times\boldsymbol{\sigma}\big)\psi\,,\\
O_1({}^3P_2) &= \psi^\dagger\big(-\tfrac{i}{2}\overleftrightarrow{D}{}^{(i}\sigma^{j)}\big)\chi\; \chi^\dagger\big(-\tfrac{i}{2}\overleftrightarrow{D}{}^{(i}\sigma^{j)}\big)\psi\,.
\end{aligned}
$$

Here ψ is a Pauli spinor which annihilates a heavy quark and χ a Pauli spinor which creates a heavy antiquark. c_F, c_D, c_S, f₁, f₈, etc. are matching coefficients which encode (non-analytic) contributions from (relativistic) energy scales of order m, and may have a factorization-scale (μ) dependence. The NRQCD Lagrangian is obtained from QCD by integrating out, for the heavy quarks, energy fluctuations about the heavy quark mass and three-momenta higher than, or of the order of, m, and, for the gluon fields, four-momenta higher than, or of the order of, m. This can be done in perturbation theory in α_s(m),^{10} since α_s(m) ≪ 1 (see Refs. 18, 19 for an efficient way of doing such a calculation). Hence NRQCD is equivalent to QCD at any desired order in α_s(m) and 1/m. Note that the NRQCD Lagrangian is organized in inverse powers of m, which means that only the hierarchy m ≫ Λ_QCD, p, E has been exploited. Hence, any dimensionful field in it does not have a definite size but may take the value of any of the remaining scales (Λ_QCD, p, E). In spite of this, a concrete velocity (v) counting was put forward in the original papers under the assumption that Λ_QCD ∼ E =: mv² (then p ∼ mv), which was useful to organize calculations systematically. As we will make clear in the following sections, this is only one of the various counting possibilities that NRQCD admits, and it may not be suitable for all heavy quarkonium states. In any counting, however, the scale dependence of the matching coefficients cancels against the scale dependence induced by UV divergences in NRQCD calculations, and hence each μ-dependence is eventually traded for one of the remaining dynamical scales (Λ_QCD, p, E). Note that the NRQCD Lagrangian is manifestly invariant under rotations but not under Lorentz transformations. The Lorentz symmetry is, however, non-linearly realized and provides constraints on some of the matching coefficients (for instance, c_S = 2c_F − 1). These constraints were first uncovered in Ref. 20 using reparametrization invariance. In Ref. 21 it was shown that they follow from the Poincaré algebra. The NRQCD Lagrangian contains non-hermitian terms due to imaginary parts of the matching coefficients of the four-quark operators (f₁, f₈, ...) (see Ref. 22 for a recent update). This is due to the fact that a heavy quark and a heavy antiquark of the same flavor may annihilate into hard gluons (of energy ∼ m), which have been integrated out. These non-hermitian pieces must be there in order to guarantee the equivalence of NRQCD and QCD.
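Returning to the velocity counting mentioned above: for orientation, the standard size estimates assigned to the NRQCD building blocks under the original assumption Λ_QCD ∼ mv² are listed below. These are the textbook NRQCD estimates, not a result specific to this review.

```latex
% Standard NRQCD velocity counting (Lambda_QCD ~ mv^2), as in the
% original NRQCD papers; sizes are estimates on quarkonium states.
\begin{align*}
  \psi,\chi \sim (mv)^{3/2}, \qquad
  D_0 \sim mv^2, \qquad
  \mathbf{D} \sim mv, \qquad
  g\mathbf{E} \sim m^2 v^3, \qquad
  g\mathbf{B} \sim m^2 v^4, \\
  \frac{\mathbf{D}^2}{2m} \sim mv^2 \;(\text{leading}), \qquad
  c_F\,\frac{g\,\boldsymbol{\sigma}\cdot\mathbf{B}}{2m} \sim mv^4 \;(\text{subleading}).
\end{align*}
```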
These non-hermitian pieces contain crucial information about inclusive decay widths to light degrees of freedom. For instance, for P-wave states one obtains at leading order in the original velocity counting

$$
\Gamma(\chi_Q(nJS)\to {\rm LH}) = \frac{2}{m^2}\left[\,{\rm Im}\,f_1({}^{2S+1}P_J)\,\frac{\langle\chi_Q(nJS)|O_1({}^{2S+1}P_J)|\chi_Q(nJS)\rangle}{m^2} + {\rm Im}\,f_8({}^{2S+1}S_S)\,\langle\chi_Q(nJS)|O_8({}^1S_0)|\chi_Q(nJS)\rangle\right],
\tag{3}
$$

where χ_Q(nJS) stands for a heavy quarkonium P-wave state of principal quantum number n, total angular momentum J and spin S. This is to be compared with the potential-model result, which is recovered by dropping the second term and identifying

$$
\langle\chi_Q(nJS)|O_1({}^{2S+1}P_J)|\chi_Q(nJS)\rangle = \frac{3 C_A}{2\pi}\,\big|R^{(0)\,\prime}_{n1}(0)\big|^2,
\tag{4}
$$

where $R^{(0)}_{n1}(r)$ is the wave function. The second term in (3) is, however, crucial in order to cancel the scale (μ) dependence of the matching coefficient of the first term at one loop.^{11} For instance, in the ${}^3P_0$ case it reads^{23,a}

$$
{\rm Im}\,f_1({}^3P_0) = 3 C_F\left(\frac{C_A}{2}-C_F\right)\pi\,\alpha_s^2(2m)\left\{1+\frac{\alpha_s}{\pi}\left[\left(-\frac{7}{3}+\frac{\pi^2}{4}\right)C_F + \left(\frac{427}{81}-\frac{\pi^2}{144}\right)C_A + \frac{4}{27}\,n_f\left(-\frac{29}{6}-\log\frac{\mu}{2m}\right)\right]\right\}.
\tag{5}
$$

Let us also mention that the NRQCD formalism has also been used for the description of semi-inclusive decays (see Ref. 24 and references therein) and of inclusive production (see Ref. 25 and references therein). We will not discuss these two applications here. Potential NRQCD NRQCD does not take advantage of the inequality p ≫ E. In particular, it contains gluons of typical energy p, which cannot be produced in processes at the energy scale E. Simplifications should occur if one further integrates out degrees of freedom with energies larger than E, which leads to Potential NRQCD (pNRQCD).^{26} Unlike in NRQCD, the degrees of freedom, and hence the Lagrangian, of pNRQCD depend on the interplay of Λ_QCD with p and E. We shall discuss two situations below: the weak-coupling regime (k ≫ E ≳ Λ_QCD) and the strong-coupling regime (k ≳ Λ_QCD ≫ E), where k is the typical momentum transfer, which, for low-lying states, is of the order of p. Weak coupling regime If k ≫ E ≳ Λ_QCD we can first integrate out energies ∼ k. The EFT thus obtained is pNRQCD in the weak-coupling regime. It has the following aspect:^{26,27}

$$
\mathcal{L}_{\rm pNRQCD} = {\rm Tr}\Big\{ S^\dagger (i\partial_0 - h_s) S + O^\dagger (iD_0 - h_o) O \Big\} + {\rm Tr}\Big\{ O^\dagger\, \mathbf{r}\cdot g\mathbf{E}\, S + {\rm H.c.} + \frac{O^\dagger\, \mathbf{r}\cdot g\mathbf{E}\, O}{2} + \frac{O^\dagger O\, \mathbf{r}\cdot g\mathbf{E}}{2} \Big\} + \cdots
\tag{6}
$$

where S = S(R, r, t) and O = O(R, r, t) are singlet and octet wave-function fields respectively, R is the center-of-mass coordinate (whose dynamics is trivial at lower orders and has been neglected above), r is the relative coordinate, and h_s and h_o are quantum-mechanical Hamiltonians,

$$
\begin{aligned}
h_s ={}& -\frac{\boldsymbol{\nabla}^2}{m} - C_F\frac{\alpha_s}{r} + \cdots - \frac{C_A}{2}\,\frac{\delta^{(3)}(\mathbf{r})}{m^2}\Big[\,4 f_1({}^1S_0) - 2\mathbf{S}^2\big(f_1({}^1S_0)-f_1({}^3S_1)\big)\Big] \\
&+ \frac{C_A}{m^4}\, T^{ij}_{SJ}\, \nabla^i \delta^{(3)}(\mathbf{r})\, \nabla^j\, f_1({}^{2S+1}P_J) + \cdots \\
h_o ={}& -\frac{\boldsymbol{\nabla}^2}{m} + \Big(\frac{C_A}{2}-C_F\Big)\frac{\alpha_s}{r} + \cdots - \frac{T_F}{2}\,\frac{\delta^{(3)}(\mathbf{r})}{m^2}\Big[\,4 f_8({}^1S_0) - 2\mathbf{S}^2\big(f_8({}^1S_0)-f_8({}^3S_1)\big)\Big]
\end{aligned}
\tag{7}
$$

$T^{ij}_{SJ}$ projects on states of spin S and total angular momentum J (see Ref. 28 for a precise definition). Only gluons and heavy quarks of energies smaller than k ∼ 1/r are present in (6). However, the only constraint on the three-momentum of the heavy quark is still that it must be smaller than m. The potentials in (7) play the role of matching coefficients. They can be calculated perturbatively in α_s(k) (α_s(k) ≪ 1 since k ≫ Λ_QCD) and in the 1/m expansion. Beyond tree level, this calculation produces UV divergences, and also IR divergences if the smaller scales (E, Λ_QCD) are expanded.
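For orientation, keeping only the Coulomb term of $h_s$ gives the familiar hydrogen-like leading-order solution (reduced mass $m/2$, coupling $C_F\alpha_s$); these standard formulas, quoted here as an assumption-free textbook input rather than a result of the matching, make the announced scalings $p \sim m\alpha_s$ and $E \sim m\alpha_s^2$ concrete.

```latex
% Leading-order solution of the singlet Hamiltonian h_s with only the
% Coulomb term -C_F alpha_s / r kept (reduced mass m/2).  Standard
% hydrogen-like results, quoted for orientation only.
\begin{align*}
  E_n &= -\,\frac{m\,(C_F\alpha_s)^2}{4n^2} \;\sim\; m\alpha_s^2 , &
  p_n &\sim \frac{m\,C_F\alpha_s}{2n} \;\sim\; m\alpha_s ,\\
  |\psi_{n00}(0)|^2 &= \frac{(m\,C_F\alpha_s)^3}{8\pi n^3}\,, &
  a &= \frac{2}{m\,C_F\alpha_s}\quad(\text{Bohr radius}).
\end{align*}
```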
Once properly renormalized, these UV divergences cancel part of the scale dependence of the NRQCD matching coefficients. The IR divergences, together with the remaining scale dependences of the NRQCD matching coefficients, cancel against scale dependences induced by properly renormalized UV divergences in pNRQCD calculations (see Refs. 29, 30 for illustrations in QED). Note also that if we drop the octet field we recover a particular potential model. This is enough if we neglect non-perturbative contributions (meaning contributions for which the scale Λ_QCD plays a role) and are only interested in corrections up to O(α_s²).^{31,32} Beyond that order, or if we wish to take into account non-perturbative contributions, the remaining gluon and octet fields are crucial. Note also that, analogously to NRQCD, the pNRQCD Lagrangian is manifestly invariant under rotations, but not under Lorentz transformations. The constraints from the full Poincaré algebra have been worked out in Ref. 21. Let us mention here that, in spite of the fact that the top quark decays through the weak interactions before forming hadronic states, this regime is also relevant for the study of the top-antitop system near its production threshold.^{33} Spectrum The corrections to the spectrum up to order α_s² had already been obtained before the introduction of pNRQCD^{31} (and, in fact, making no use of NRQCD). The one-loop potentials of the singlet field had been calculated directly from QCD.^8 The two-loop potential, which was also necessary at this order, was calculated using static heavy quark propagators.^{34} NRQCD and pNRQCD just make the calculation simpler. Beyond that order, or if one is interested in non-perturbative contributions due to the scale Λ_QCD, all degrees of freedom of pNRQCD play a role and correct results cannot be obtained by just calculating potentials to a higher order.^b In order to proceed further one has to specify the size of Λ_QCD with respect to E. If Λ_QCD ∼ E, the leading non-perturbative effects are parameterized by non-local condensates and compete in size with the α_s² perturbative corrections. If E ≫ Λ_QCD, one can carry out weak-coupling calculations with the (ultrasoft) gluons in (6). The physical observables can then be organized in powers of α_s (at different scales), since p ∼ k ∼ mα_s and E ∼ mα_s². The logarithmic contributions to the corrections at O(α_s³) were calculated in Ref. 36 and the finite parts for the ground state in Ref. 37. Not only that: the use of EFTs, in this case NRQCD and pNRQCD, allows one to resum IR QCD logarithms. For heavy quarkonium systems this was first proposed in Ref. 12 within NRQCD, later addressed in a slightly different EFT framework called vNRQCD,^{38} and implemented in the NRQCD-pNRQCD framework in Refs. 39, 40, which produced the first correct NNLL resummations for the complete spectrum^{39} (see also Ref. 41). NLL resummations for the hyperfine splitting have been obtained recently.^{42} Non-perturbative contributions are parameterized in this case by local condensates.^5 Inclusive Decays The information on the parton subprocesses of inclusive decay widths to light particles (light hadrons, photons or leptons) is encoded in the imaginary parts of the NRQCD matching coefficients. These are inherited in pNRQCD as imaginary parts of local potentials (δ(r) and derivatives of it), which eventually makes the decay width proportional to the wave function at the origin (or derivatives of it).
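The arithmetic behind this last statement is simple enough to transcribe directly. The sketch below evaluates the leading-order factorization formula (3) with the potential-model identification (4); all numerical inputs (m, |R'(0)|², Im f₁) are purely illustrative placeholders, not fitted quarkonium parameters.

```python
# Minimal numerical sketch of the leading-order factorization formula
# (3)-(4): Gamma = (2/m^2) [ Im f_1 <O_1>/m^2 + Im f_8 <O_8> ], with
# the potential-model identification <O_1> = (3 C_A / 2 pi) |R'(0)|^2.
# Input values are illustrative placeholders (natural units, GeV).
import math

C_A = 3.0  # number of colors in SU(3)

def p_wave_width(m, Rprime0_sq, im_f1, im_f8=0.0, octet_me=0.0):
    """LO width of a P-wave state to light hadrons."""
    singlet_me = 3.0 * C_A / (2.0 * math.pi) * Rprime0_sq   # eq. (4)
    return 2.0 / m**2 * (im_f1 * singlet_me / m**2 + im_f8 * octet_me)

# Illustrative call: m ~ 1.5 GeV, |R'(0)|^2 ~ 0.1 GeV^5, Im f_1 ~ 0.05.
print(p_wave_width(1.5, 0.1, 0.05))
```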
Corrections up to α_s² to the wave function at the origin were known before the introduction of pNRQCD.^{32} In the case E ≫ Λ_QCD, the double-logarithmic corrections at α_s³ were first obtained in this framework^{43} and, today, the single-logarithmic corrections at that order are also known.^{44} The resummation of logs at NLL has already been carried out.^{40,45} Semi-inclusive radiative decays have recently been addressed within this framework as well.^{46}

(Footnote b: However, the leading^5 and next-to-leading^{35} non-perturbative contributions in the case E ≫ Λ_QCD were obtained before the introduction of pNRQCD.)

Strong coupling regime If k ≳ Λ_QCD ≫ E, the integration of all degrees of freedom with energies larger than E cannot be carried out in an expansion in α_s. In this case the degrees of freedom of pNRQCD are the singlet field interacting with a potential, plus the pseudo-Goldstone bosons.^{27} If the latter are ignored, the form of the pNRQCD Lagrangian reduces to that of a potential model,

$$\mathcal{L}_{\rm pNRQCD} = {\rm Tr}\big\{ S^\dagger (i\partial_0 - h) S \big\},\tag{8}$$

where h is a quantum-mechanical Hamiltonian. Again, h is manifestly invariant under rotations, but not under Lorentz transformations. The constraints of full Poincaré invariance have been discussed in Ref. 47. In the particular case k ≫ Λ_QCD, we may first integrate out energies of the order of k, which can be done in an expansion in α_s(k) and 1/m in exactly the same way as in the weak-coupling regime. We are thus led to the same Lagrangian (6), which may be renamed pNRQCD', since it is not our final EFT yet. We still have to integrate out energies ∼ Λ_QCD. This cannot be done in perturbation theory in α_s anymore, but one can use the fact that k, p ≫ Λ_QCD ≫ E. If one assumes that the octet field develops a gap ∼ Λ_QCD, it can be integrated out and we are left with the singlet field only. Namely, we recover the degrees of freedom of a potential model, as explicitly shown in (8). In the general case k ∼ Λ_QCD, the integration of energies of order k cannot be done perturbatively in α_s. If one assumes that the potentials are analytic in 1/m,

$$h = -\frac{\boldsymbol{\nabla}^2}{m} + V_0 + \frac{V_1}{m} + \frac{V_2}{m^2} + \cdots + \frac{V_4}{m^4} + \cdots,\tag{9}$$

one can obtain them by matching (8) to NRQCD in the 1/m expansion. One then obtains the non-perturbative potentials in terms of expectation values of operator insertions in Wilson loops.^{48} In this way one is able to reproduce and correct earlier results.^7 In particular, it was noticed in this approach that the 1/m potential had been missed before:

$$V_1(r) = -\lim_{T\to\infty} \frac{g^2}{4T} \int_{-T/2}^{T/2}\! dt \int_{-T/2}^{T/2}\! dt'\; |t-t'|\, \Big( \big\langle\!\big\langle \mathbf{E}(t)\cdot\mathbf{E}(t') \big\rangle\!\big\rangle - \big\langle\!\big\langle \mathbf{E}(t) \big\rangle\!\big\rangle \cdot \big\langle\!\big\langle \mathbf{E}(t') \big\rangle\!\big\rangle \Big),\tag{10}$$

where the averages are taken on a rectangular Wilson loop of size T × r: ⟨⟨···⟩⟩ means the expectation value of the depicted fields joined by Wilson lines along the rectangle, divided by the expectation value of the Wilson loop on the same rectangle. Let us make a parenthesis here and exemplify how the mismatch between the earlier Wilson-loop approach and the explicit QCD one-loop calculations mentioned in the introduction is resolved in the present formalism. Consider, for instance, one of the terms in V₂ contributing to the hyperfine splitting,

$$V_2 = \mathbf{S}^2\, V^{(1,1)}_{S^2}(r) + \cdots,\tag{11}$$

$$V^{(1,1)}_{S^2}(r) = \frac{2 c_F^2}{3}\, i \lim_{T\to\infty} \int_0^T\! dt\; \big\langle\!\big\langle\, g\mathbf{B}_1(t)\cdot g\mathbf{B}_2(0)\, \big\rangle\!\big\rangle - 4\big(d_{sv} + d_{vv} C_F\big)\, \delta^{(3)}(\mathbf{r})\,,$$

where d_sv and d_vv are suitable combinations of the f₁(^{2S+1}S_S) and f₈(^{2S+1}S_S) matching coefficients of the four-fermion operators in (1).^{19}
In the earlier Wilson-loop approach one would obtain the same expression with c_F = 1 and d_sv = d_vv = 0; namely, the short-distance contributions coming from loops and virtual annihilation processes at scales of the order of m were missing. If one calculates the chromomagnetic correlator at one loop, one finds a contribution proportional to α_s² log(rμ) which adds to an α_s² log(m/μ) contribution in d_sv + d_vv C_F, producing the full QCD α_s² log(rm) contribution. Let us remark that the short-distance behavior of these potentials can be calculated in perturbation theory in α_s(1/r), and hence they must coincide with the ones in h_s of the weak-coupling regime (7). Therefore, they become increasingly singular at short distances as we go further in the 1/m expansion. Hence the Hamiltonian h is not well defined in standard quantum mechanics. In order to make sense of it we must understand this Hamiltonian as an EFT. As such, we should regulate it, establish a counting, and treat the subleading pieces as perturbations. The scale dependence induced by the regularization should cancel exactly against that in the NRQCD matching coefficients, much in the same way as was observed in Ref. 30 for QED. The power counting in h depends on the typical values of p and r in the concrete bound state we wish to analyze, and hence a simple power counting cannot be fixed a priori. Only a few statements can be made in general. The kinetic term and V₀ must always be assigned the same size (mv², since p ∼ mv) and taken as leading order. Although V₁ is suppressed by α_s² and hence of order mv⁴ in the weak-coupling regime, it may in our case also be leading order, since in the strong-coupling regime α_s ∼ 1 and dimensional counting allows for a size V₁ ∼ (mv)². The terms in V₂ are at most of order mv³ (although in the weak-coupling regime they are of order mv⁴ due to extra α_s suppressions) and hence they can be treated as perturbations. One should be aware that on general grounds the form of the potentials in (9) is not unique. Unitary transformations are allowed in quantum mechanics which change the aspect of the Hamiltonian but do not change the physics. In an EFT one had better stick to transformations which preserve the counting. Even within those, one can reshuffle contributions from one term to another in the potentials.^{48} In general, different ways of performing the matching from NRQCD (or directly from QCD) lead to different forms of the potential, related by unitary transformations. For instance, in the weak-coupling regime, matching on-shell matrix elements rather than using the 1/m expansion, or matching in the Feynman gauge rather than in the Coulomb gauge, produces different forms of the potential. Recently, it has been shown that contributions to the potentials which are non-analytic in 1/m exist. A procedure to compute the ones due to the three-momentum scale √(mΛ_QCD) was put forward in Ref. 49. They give rise to subleading contributions with respect to the 1/m potentials. All these potentials (analytic and non-analytic) can be evaluated on the lattice.^{50} Further non-analytic terms may appear due to the three-momentum scale mα_s when this scale is much larger than Λ_QCD; these have not been taken into account so far. Spectrum Once the non-perturbative potentials are obtained from a lattice calculation (or by means of other non-perturbative methods^{51,52}), one may think that the Schrödinger equation can be solved and the spectrum obtained,^{50} in total analogy with potential models.^6
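The numerical side of such a potential-model exercise is routine; a minimal sketch follows. It assumes a Cornell-type potential V(r) = −C_F α_s/r + σr as an illustrative stand-in for the lattice potentials discussed in the text (it is not the matched pNRQCD potential), and all parameter values are placeholders in GeV units.

```python
# Solve the l = 0 radial Schroedinger equation for a two-body system of
# reduced mass m/2 with an assumed Cornell-type potential
#   V(r) = -C_F alpha_s / r + sigma r
# by finite differences.  Parameters below are illustrative only.
import numpy as np

C_F, alpha_s, sigma, m = 4.0 / 3.0, 0.35, 0.18, 4.7   # GeV units
mu = m / 2.0                                          # reduced mass

# Radial grid; start slightly off r = 0, where the Coulomb term blows up.
N, r_max = 1200, 15.0                                 # r_max in GeV^-1
r = np.linspace(r_max / N, r_max, N)
h = r[1] - r[0]

V = -C_F * alpha_s / r + sigma * r

# Hamiltonian for u(r) = r R(r) with u(0) = u(r_max) = 0:
#   H u = -u'' / (2 mu) + V u   (three-point finite differences).
main = 1.0 / (mu * h**2) + V
off = -1.0 / (2.0 * mu * h**2) * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]
print("lowest S-wave levels (GeV, relative to 2m):", E)
```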
A fully consistent calculation, however, requires the lattice calculation of the potentials to be translated to the MS-bar scheme or similar, in order to match the available NRQCD matching coefficients, or vice versa. Furthermore, for the same reason, one has to use the MS-bar scheme (or translate to it) in the quantum-mechanical perturbation theory calculations. The advantage is that one now has a counting and a well-defined procedure which allows, at least in principle, systematic improvement of the calculation by adding higher-order potentials and by going to higher orders in quantum-mechanical perturbation theory. The contribution to the spectrum from potentials which are non-analytic in 1/m turns out to be very suppressed. The role of pseudo-Goldstone modes has not been addressed in this framework so far. Inclusive decays As in the weak-coupling regime, the imaginary parts of the NRQCD matching coefficients are inherited in local terms of the pNRQCD Lagrangian. In particular, the local terms in h_s (7) also exist in h. More importantly, the color-octet operators of NRQCD also have a representation in terms of local potentials. For instance, if we restrict ourselves to the P-wave contributions, we have

$$
{\rm Im}\,V_4 = C_A\, T^{ij}_{SJ}\, \nabla^i \delta^{(3)}(\mathbf{r})\, \nabla^j\, {\rm Im}\,f_1({}^{2S+1}P_J) + \frac{T_F}{9}\, E_3\, \boldsymbol{\nabla}\,\delta^{(3)}(\mathbf{r})\,\boldsymbol{\nabla}\, \Big[\, 4\,{\rm Im}\,f_8({}^1S_0) - 2\mathbf{S}^2\big({\rm Im}\,f_8({}^1S_0) - {\rm Im}\,f_8({}^3S_1)\big)\Big] + \cdots
\tag{12}
$$

Then the decay width of P-wave states to light hadrons at leading order now reads^{53}

$$
\Gamma(\chi_Q(nJS)\to {\rm LH}) = \frac{C_A}{\pi}\, \frac{\big|R^{(0)\,\prime}_{n1}(0)\big|^2}{m^4} \left[\, 3\,{\rm Im}\,f_1({}^{2S+1}P_J) + \frac{2 T_F}{3 C_A}\, {\rm Im}\,f_8({}^{2S+1}S_S)\, E_3 \right],
\tag{13}
$$

where E₃ is a non-perturbative parameter defined as

$$
E_3 = \frac{1}{N_c} \int_0^\infty dt\; t^3\, \big\langle\, g\mathbf{E}(t)\cdot g\mathbf{E}(0)\, \big\rangle\,.
\tag{14}
$$

By comparing with (3), one may rephrase (13) as

$$
\langle\chi_Q(nJS)|O_8({}^1S_0)|\chi_Q(nJS)\rangle = \frac{T_F}{3}\, \frac{\big|R^{(0)\,\prime}_{n1}(0)\big|^2}{\pi m^2}\, E_3\,.
\tag{15}
$$

Namely, one is able to obtain NRQCD color-octet matrix elements in terms of (derivatives of) wave functions at the origin, which contain all the flavor and principal-quantum-number dependence, plus extra universal (depending on Λ_QCD only) non-perturbative parameters. One can check in perturbation theory that the scale dependence of E₃ cancels exactly the scale dependence of Im f₁(^{2S+1}P_J) in (5). The unknown non-perturbative parameters, together with the wave functions at the origin, may drop from suitable ratios. They may also be extracted from data. This allowed a prediction for bottomonium states to be put forward in terms of data extracted from charmonium,^{53} which turned out to be in reasonable agreement with the experimental results when they came out.^{54} If all the states below threshold for bottomonium and charmonium were in the strong-coupling regime, and if we restrict ourselves to potentials which are analytic in 1/m, one would obtain a reduction of the number of unknown NRQCD matrix elements by roughly a factor of two^{28} at order 1/m⁴ (i.e. LO for P-wave states and NLO for S-wave states). Non-analytic terms ∼ √(mΛ_QCD) give rise to subleading contributions (provided that √(mΛ_QCD) ≫ mα_s(√(mΛ_QCD))), which, however, may be of the same order as analytic corrections to the leading analytic contributions.^{49} They would slightly increase the number of unknown matrix elements.
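Eq. (15) is just a product of known factors, so its practical use is a one-liner; the sketch below transcribes it directly. The numbers passed in are illustrative placeholders only: in practice |R'(0)|² comes from a potential-model or lattice calculation and E₃ from data, as discussed above.

```python
# Direct transcription of eq. (15): the NRQCD color-octet matrix
# element in terms of the wave function at the origin and the
# universal non-perturbative parameter E_3 (natural units).
import math

T_F = 0.5  # SU(3) normalization, Tr(T^a T^b) = T_F delta^{ab}

def octet_matrix_element(Rprime0_sq, m, E3):
    """<chi_Q | O_8(1S0) | chi_Q> of eq. (15)."""
    return T_F / 3.0 * Rprime0_sq / (math.pi * m**2) * E3

# Illustrative placeholders: |R'(0)|^2 ~ 0.1 GeV^5, m ~ 1.5 GeV, E3 ~ 5.
print(octet_matrix_element(Rprime0_sq=0.1, m=1.5, E3=5.0))
```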
Discussion So far we have discussed a theoretical framework with almost no reference to any heavy quarkonium state in nature. If we wish to apply it to a concrete state, we first have to figure out whether this state belongs to the weak- or the strong-coupling regime, if any. This is not easy to establish a priori, since the scale that plays the role of Λ_QCD, or the typical value of k, or even of E, cannot be extracted directly from the experimental observables. One may try the weak-coupling regime first and check: (i) whether the expansion in α_s shows good convergence, and (ii) whether the leading non-perturbative effects are small. These conditions appear to be fulfilled by the Υ(1S) system, and to a lesser extent by the B_c and the J/ψ.^c The constraint (ii) is very restrictive, since non-perturbative effects in the weak-coupling regime grow with a large power of the principal quantum number.^d Hence, most likely, no excited state belongs to the weak-coupling regime. This does not mean that they all belong to the strong-coupling regime, since there is also the possibility that pNRQCD does not apply to them. Indeed, the constraint k ∼ Λ_QCD ≫ E does not allow states with k ∼ E ∼ Λ_QCD. If we accept Heavy Quark Effective Theory counting rules,^{58} these are states close to, or beyond, the heavy-light pair production threshold. Hence neither the weak- nor the strong-coupling regime is in principle applicable to these states. In order to make things concrete, let us advance what we believe to be a reasonable (although not entirely conservative) assignment. For the Υ system, the Υ(1S) and η_b(1S) belong to the weak-coupling regime, whereas the remaining states below the heavy-light pair production threshold may well be considered to be in the strong-coupling regime. For the ψ system, the J/ψ and η_c(1S) seem to be on the borderline between the weak- and strong-coupling regimes, the lowest-lying P-wave states are in the strong-coupling regime, and the remaining states (including the ψ(2S) and η_c(2S)) are either too close to or beyond the heavy-light pair production threshold, so that most likely pNRQCD is not applicable to them. For the B_c system, the pseudoscalar and vector ground states may well be in the weak-coupling regime,^{57,42} the lowest-lying P-wave states and the first radial excitation of the S-wave states in the strong-coupling regime, whereas for the remaining states pNRQCD would not be applicable. Conclusions EFT techniques allow one to exploit efficiently the various hierarchies of scales appearing in heavy quarkonium. They simplify and systematize earlier approaches (for instance, potential models) and are powerful enough to produce new results, which stem from QCD under well-controlled approximations. This is so both in the weak- and in the strong-coupling regime. In the weak-coupling regime a number of explicit calculations have been carried out at higher orders in α_s. In the strong-coupling regime, non-trivial results for the NRQCD matrix elements have been obtained. The phenomenological consequences of these results have not been fully exploited yet. Acknowledgments Thanks are given to N. Brambilla, D. Eiras, A. Pineda and A. Vairo for enjoyable collaborations which led to many of the results presented here. Thanks are also given to A. Pineda for comments on the manuscript. We acknowledge financial support from a CICYT-INFN 2003 collaboration contract, the MCyT and Feder (Spain) grant FPA2001-3598, the CIRIT (Catalonia) grant 2001SGR-00065 and the network EURIDICE (EU) HPRN-CT2002-00311.

(Footnote a: The μ-independent piece of this result differs slightly from the earlier calculations in Refs. 9.)
(Footnote c: For this to be so, one has to take properly into account the renormalon singularities.^{55,56})
(Footnote d: In spite of this, there are indications that using the weak-coupling regime for excited states in the Υ system may give reasonable results.^{56})

J. J. Aubert et al., Phys. Rev. Lett. 33 (1974) 1404; J. E. Augustin et al., Phys. Rev. Lett. 33 (1974) 1406.
S. W. Herb et al., Phys. Rev. Lett. 39 (1977) 252.
T. Appelquist and H. D. Politzer, Phys. Rev. Lett. 34 (1975) 43.
K. G. Wilson, Phys. Rev. D 10 (1974) 2445.
M. B. Voloshin, Nucl. Phys. B 154 (1979) 365; Sov. J. Nucl. Phys. 36 (1982) 143 [Yad. Fiz. 36 (1982) 247]; H. Leutwyler, Phys. Lett. B 98 (1981) 447.
W. Lucha, F. F. Schoberl and D. Gromes, Phys. Rept. 200 (1991) 127.
L. Susskind, in Les Houches, Session XXIX, eds. R. Balian and C. H. Llewellyn Smith (North-Holland Publishing Company, Amsterdam, 1977); W. Fischler, Nucl. Phys. B 129 (1977) 157; L. S. Brown and W. I. Weisberger, Phys. Rev. D 20 (1979) 3239; E. Eichten and F. L. Feinberg, Phys. Rev. D 23 (1981) 2724; M. E. Peskin, in Proceedings of the 11th SLAC Institute, SLAC Report No. 207, p. 151, ed. P. McDonough (1983); D. Gromes, Z. Phys. C 26 (1984) 401; A. Barchielli, E. Montaldi and G. M. Prosperi, Nucl. Phys. B 296 (1988) 625; (E) ibid. B 303 (1988) 752; A. Barchielli, N. Brambilla and G. Prosperi, Nuovo Cimento A 103 (1990) 59.
S. N. Gupta, S. F. Radford and W. W. Repko, Phys. Rev. D 26 (1982) 3305.
R. Barbieri, R. Gatto and R. Kogerler, Phys. Lett. B 60 (1976) 183; R. Barbieri, R. Gatto and E. Remiddi, Phys. Lett. B 61 (1976) 465; Nucl. Phys. B 162 (1980) 220; R. Barbieri, M. Caffo, R. Gatto and E. Remiddi, Phys. Lett. B 95 (1980) 93; Nucl. Phys. B 192 (1981) 61.
W. E. Caswell and G. P. Lepage, Phys. Lett. B 167 (1986) 437; B. A. Thacker and G. P. Lepage, Phys. Rev. D 43 (1991) 196; G. T. Bodwin, E. Braaten and G. P. Lepage, Phys. Rev. D 51 (1995) 1125 [Erratum: ibid. D 55 (1997) 5853] [hep-ph/9407339].
G. T. Bodwin, E. Braaten and G. P. Lepage, Phys. Rev. D 46 (1992) 1914 [hep-lat/9205006].
Y. Q. Chen, Y. P. Kuang and R. J. Oakes, Phys. Rev. D 52 (1995) 264 [hep-ph/9406287].
G. Bonvicini [CLEO Collaboration], hep-ex/0404021; S. E. Csorna et al. [CLEO Collaboration], hep-ex/0207060; J. Z. Bai et al. [BES Collaboration], Phys. Lett. B 550 (2002) 24 [hep-ph/0209354].
S. K. Choi et al. [Belle Collaboration], Phys. Rev. Lett. 91 (2003) 262001 [hep-ex/0309032]; K. Abe et al. [Belle Collaboration], Phys. Rev. Lett. 89 (2002) 142001 [hep-ex/0205104].
G. S. Bali, Phys. Rept. 343 (2001) 1 [hep-ph/0001312].
T. Skwarnicki, Int. J. Mod. Phys. A 19 (2004) 1030 [hep-ph/0311243].
A. V. Manohar, Phys. Rev. D 56 (1997) 230 [hep-ph/9701294].
A. Pineda and J. Soto, Phys. Rev. D 58 (1998) 114011 [hep-ph/9802365].
M. E. Luke and A. V. Manohar, Phys. Lett. B 286 (1992) 348 [hep-ph/9205228].
N. Brambilla, D. Gromes and A. Vairo, Phys. Lett. B 576 (2003) 314 [hep-ph/0306107].
A. Vairo, Mod. Phys. Lett. A 19 (2004) 253 [hep-ph/0311303].
A. Petrelli, M. Cacciari, M. Greco, F. Maltoni and M. L. Mangano, Nucl. Phys. B 514 (1998) 245 [hep-ph/9707223].
S. Fleming and A. K. Leibovich, Phys. Rev. D 67 (2003) 074035 [hep-ph/0212094].
G. T. Bodwin, hep-ph/0312173.
A. Pineda and J. Soto, Nucl. Phys. Proc. Suppl. 64 (1998) 428 [hep-ph/9707481].
N. Brambilla, A. Pineda, J. Soto and A. Vairo, Nucl. Phys. B 566 (2000) 275 [hep-ph/9907240].
N. Brambilla, D. Eiras, A. Pineda, J. Soto and A. Vairo, Phys. Rev. D 67 (2003) 034018 [hep-ph/0208019].
A. Pineda and J. Soto, Phys. Lett. B 420 (1998) 391 [hep-ph/9711292]; Phys. Rev. D 59 (1999) 016005 [hep-ph/9805424].
A. Czarnecki, K. Melnikov and A. Yelkhovsky, Phys. Rev. A 59 (1999) 4316 [hep-ph/9901394].
A. Pineda and F. J. Yndurain, Phys. Rev. D 58 (1998) 094022 [hep-ph/9711287]; Phys. Rev. D 61 (2000) 077505 [hep-ph/9812371].
K. Melnikov and A. Yelkhovsky, Nucl. Phys. B 528 (1998) 59 [hep-ph/9802379]; A. A. Penin and A. A. Pivovarov, Phys. Atom. Nucl. 64 (2001) 275 [Yad. Fiz. 64 (2001) 323] [hep-ph/9904278].
A. H. Hoang et al., Eur. Phys. J. direct C 2 (2000) 3 [hep-ph/0001286].
Y. Schroder, Phys. Lett. B 447 (1999) 321 [hep-ph/9812205]; M. Peter, Nucl. Phys. B 501 (1997) 471 [hep-ph/9702245].
A. Pineda, Nucl. Phys. B 494 (1997) 213 [hep-ph/9611388].
N. Brambilla, A. Pineda, J. Soto and A. Vairo, Phys. Lett. B 470 (1999) 215 [hep-ph/9910238].
B. A. Kniehl, A. A. Penin, M. Steinhauser and V. A. Smirnov, Phys. Rev. D 65 (2002) 091503 [hep-ph/0106135]; Nucl. Phys. B 635 (2002) 357 [hep-ph/0203166]; A. A. Penin and M. Steinhauser, Phys. Lett. B 538 (2002) 335 [hep-ph/0204290].
M. E. Luke, A. V. Manohar and I. Z. Rothstein, Phys. Rev. D 61 (2000) 074025 [hep-ph/9910209]; A. V. Manohar and I. W. Stewart, Phys. Rev. D 62 (2000) 014033 [hep-ph/9912226].
A. Pineda, Phys. Rev. D 65 (2002) 074007 [hep-ph/0109117]; Phys. Rev. A 66 (2002) 062108 [hep-ph/0204213].
A. Pineda, Phys. Rev. D 66 (2002) 054022 [hep-ph/0110216].
A. H. Hoang and I. W. Stewart, Phys. Rev. D 67 (2003) 114020 [hep-ph/0209340].
B. A. Kniehl, A. A. Penin, A. Pineda, V. A. Smirnov and M. Steinhauser, hep-ph/0312086; A. A. Penin, A. Pineda, V. A. Smirnov and M. Steinhauser, hep-ph/0403080.
B. A. Kniehl and A. A. Penin, Nucl. Phys. B 577 (2000) 197 [hep-ph/9911414].
B. A. Kniehl, A. A. Penin, M. Steinhauser and V. A. Smirnov, Phys. Rev. Lett. 90 (2003) 212001 [hep-ph/0210161].
A. Pineda, Acta Phys. Polon. B 34 (2003) 5295.
X. Garcia i Tormo and J. Soto, hep-ph/0401233.
N. Brambilla, D. Gromes and A. Vairo, Phys. Rev. D 64 (2001) 076010 [hep-ph/0104068].
N. Brambilla, A. Pineda, J. Soto and A. Vairo, Phys. Rev. D 63 (2001) 014023 [hep-ph/0002250]; A. Pineda and A. Vairo, Phys. Rev. D 63 (2001) 054007 [Erratum: ibid. D 64 (2001) 039902] [hep-ph/0009145].
N. Brambilla, A. Pineda, J. Soto and A. Vairo, Phys. Lett. B 580 (2004) 60 [hep-ph/0307159].
G. S. Bali, K. Schilling and A. Wachter, Phys. Rev. D 56 (1997) 2566 [hep-lat/9703019].
N. Brambilla and A. Vairo, hep-ph/9904330.
F. Jugeau and H. Sazdjian, Nucl. Phys. B 670 (2003) 221 [hep-ph/0305021].
N. Brambilla, D. Eiras, A. Pineda, J. Soto and A. Vairo, Phys. Rev. Lett. 88 (2002) 012003 [hep-ph/0109130].
D. Cinabro et al. [CLEO Collaboration], hep-ex/0207062.
A. Pineda, JHEP 0106 (2001) 022 [hep-ph/0105008].
N. Brambilla, Y. Sumino and A. Vairo, Phys. Lett. B 513 (2001) 381 [hep-ph/0101305]; Phys. Rev. D 65 (2002) 034001 [hep-ph/0108084]; S. Recksiegel and Y. Sumino, Phys. Lett. B 578 (2004) 369 [hep-ph/0305178].
N. Brambilla and A. Vairo, Phys. Rev. D 62 (2000) 094019 [hep-ph/0002075].
M. Neubert, Phys. Rept. 245 (1994) 259 [hep-ph/9306320].
[]
[ "Critical equation of state from the average action", "Critical equation of state from the average action" ]
[ "J Berges \nInstitut für Theoretische Physik\nUniversität Heidelberg\nPhilosophenweg 1669120HeidelbergGermany\n", "N Tetradis \nTheoretical Physics\nUniversity of Oxford\n\n", "C Wetterich \nInstitut für Theoretische Physik\nUniversität Heidelberg\nPhilosophenweg 1669120HeidelbergGermany\n", "\nKeble Rd\nOX1 3NPOxfordU.K\n" ]
[ "Institut für Theoretische Physik\nUniversität Heidelberg\nPhilosophenweg 1669120HeidelbergGermany", "Theoretical Physics\nUniversity of Oxford\n", "Institut für Theoretische Physik\nUniversität Heidelberg\nPhilosophenweg 1669120HeidelbergGermany", "Keble Rd\nOX1 3NPOxfordU.K" ]
[]
The scaling form of the critical equation of state is computed for O(N )-symmetric models. We employ a method based on an exact flow equation for a coarse grained free energy. A suitable truncation is solved numerically.
10.1103/physrevlett.77.873
[ "https://export.arxiv.org/pdf/hep-th/9507159v1.pdf" ]
14,669,782
hep-th/9507159
564426b742651246a5861fa5121fc875fb6853c0
Critical equation of state from the average action (arXiv:hep-th/9507159v1, 28 Jul 1995). J. Berges (Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany), N. Tetradis (Theoretical Physics, University of Oxford, Keble Rd, OX1 3NP, Oxford, U.K.), C. Wetterich (Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany). The scaling form of the critical equation of state is computed for O(N)-symmetric models. We employ a method based on an exact flow equation for a coarse-grained free energy. A suitable truncation is solved numerically. A precise computation of the critical equation of state near a second-order phase transition is an old problem. From a general renormalisation-group analysis [1] one can prove the Widom scaling form [2] $H = \varphi^\delta \bar f\big((T-T_c)/\varphi^{1/\beta}\big)$ for the relation between the magnetic field H, the magnetisation φ and the difference from the critical temperature T − T_c. In several models the critical exponents β and δ have been computed with high accuracy [3], but the scaling function $\bar f$ is more difficult to access. Previous attempts include an expansion in 4 − ε dimensions to second order in ε (third order for the Ising model) [3]. A particular difficulty for a direct computation in three dimensions arises from the existence of massless Goldstone modes in the phase with spontaneous symmetry breaking for models with continuous symmetry (e.g. Heisenberg models with O(N) symmetry for N > 1). They introduce severe infrared problems within perturbative or loop expansions. Recently a non-perturbative method has been proposed which can deal systematically with infrared problems. It is based on the average action Γ_k [4], which is a coarse-grained free energy with an infrared cutoff. More precisely, Γ_k includes the effects of all fluctuations with momenta q² > k² but not those with q² < k². In the limit k → 0 the average action becomes the standard effective action (the generating functional of the 1PI Green functions), while for k → ∞ it equals the classical or microscopic action. It is formulated in continuous space and all symmetries of the model are preserved. There is a simple functional-integral representation [4] of Γ_k also for k > 0, so that its couplings can, in principle, also be estimated by alternative methods. The exact non-perturbative flow equation [5] for the scale dependence of Γ_k takes the simple form of a renormalisation-group improved one-loop equation [4]:

$$k\,\frac{\partial}{\partial k}\,\Gamma_k[\phi] = \frac{1}{2}\,{\rm Tr}\left\{\left(\Gamma_k^{(2)}[\phi] + R_k\right)^{-1} k\,\frac{\partial}{\partial k} R_k\right\}.\tag{1}$$

The trace involves a momentum integration and a summation over internal indices. Most importantly, the relevant infrared properties appear directly in the form of the exact inverse average propagator Γ_k^{(2)}, which is the matrix of second functional derivatives with respect to the fields. There is always only one momentum integration (multi-loops are not needed), which is, for a suitable cutoff function R_k(q²) (with R_k(0) ∼ k², R_k(q² → ∞) ∼ e^{−q²/k²}), both infrared and ultraviolet finite. The flow equation (1) is a functional differential equation and an approximate solution requires a truncation. Our truncation is the lowest order in a systematic derivative expansion of Γ_k [4,6,7]:

$$\Gamma_k = \int d^d x \left\{ U_k(\rho) + \frac{1}{2}\, Z_k\, \partial_\mu\phi^a \partial^\mu\phi^a \right\}.\tag{2}$$

Here φ^a denotes the N-component real scalar field and ρ = ½ φ^a φ^a.
We keep for the potential term the most general O(N)-symmetric form U_k(ρ), since U_0(ρ) encodes the equation of state. The wave-function renormalisation is approximated by one k-dependent parameter Z_k. The next order in the derivative expansion would be the generalization to a ρ-dependent wave-function renormalisation Z_k(ρ), plus a function Y_k(ρ) accounting for a possible different index structure of the kinetic term for N ≥ 2 [4,6]. Going further would require the consideration of terms with four derivatives, and so on. Concerning the equation of state for the present model, the omission of higher-derivative terms in the average action typically generates an uncertainty of the order of the anomalous dimension η. The main reason is that for η = 0 the kinetic term in the k-dependent inverse propagator must be exactly proportional to q² both for q² → 0 and q² → ∞. For the three-dimensional scalar theory η is known to be small, and the derivative expansion is therefore expected to give a reliable approximation. This holds for an arbitrary constant "background" field φ^a. Similar, although less stringent, arguments indicate a weak ρ-dependence of the kinetic term. For the scaling solution for N = 1 this weak ρ-dependence has been established explicitly [7]. In this letter we compute the effective potential (Helmholtz free energy) lim_{k→0} U_k(ρ) ≡ U(ρ) for the O(N) model directly in three dimensions from a solution of eqs. (1), (2). We extract the Widom scaling form of the equation of state and give semi-analytical expressions for N = 1 and N = 3. Its asymptotic behavior yields the universal critical exponents and amplitude ratios. An alternative parametrisation of the equation of state in terms of renormalised quantities is used in order to compute universal couplings. For a study of the behavior in the vicinity of the phase transition it is convenient to work with dimensionless renormalised fields,

$$\tilde\rho = Z_k\, k^{-1}\rho\,,\qquad u_k(\tilde\rho) = k^{-3}\, U_k\big(\rho(\tilde\rho)\big)\,.\tag{3}$$

With the truncation of eq. (2) the exact evolution equation for $u_k' \equiv \partial u_k/\partial\tilde\rho$ [4,6] reduces to the partial differential equation

$$\frac{\partial u_k'}{\partial t} = (-2+\eta)\,u_k' + (1+\eta)\,\tilde\rho\, u_k'' - \frac{N-1}{4\pi^2}\, u_k''\; l^3_1\!\left(u_k';\eta\right) - \frac{1}{4\pi^2}\,\big(3 u_k'' + 2\tilde\rho\, u_k'''\big)\; l^3_1\!\left(u_k' + 2\tilde\rho\, u_k'';\eta\right),\tag{4}$$

where t = ln(k/Λ), with Λ the ultraviolet cutoff of the theory. The anomalous dimension η is given in our truncation by [4,6]

$$\eta = -\frac{\partial}{\partial t}\ln Z_k = \frac{2}{3\pi^2}\,\kappa\,\lambda^2\, m^3_{2,2}(2\lambda\kappa)\,,\tag{5}$$

with κ the location of the minimum of the potential, u_k'(κ) = 0, and λ the quartic coupling, u_k''(κ) = λ. The "threshold" functions $l^3_1$ and $m^3_{2,2}$ result from the momentum integration on the r.h.s. of eq. (1) and account for the decoupling of modes with effective mass larger than k. They equal constants of order one for vanishing arguments and decay fast for arguments much larger than one. For the choice of the cutoff function R_k employed here, their explicit form can be found in refs. [6,8]. To obtain the equation of state one has to solve the partial differential equation (4) for k → 0. Algorithms adapted to the numerical solution of eq. (4) have been developed previously [8] and we refer to this work for details. The integration starts at some short-distance scale k⁻¹ = Λ⁻¹ (t = 0), where the average potential equals the microscopic or classical potential (no integration of fluctuations has been performed). We start with a quartic classical potential parametrized as $u_\Lambda'(\tilde\rho) = \lambda_\Lambda(\tilde\rho - \kappa_\Lambda)$.
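To indicate what such an integration looks like in practice, here is a deliberately simplified sketch. It differs from the paper's algorithms [8] in two respects, both flagged in the comments: the anomalous dimension is set to η = 0, and the threshold function l³₁ is replaced by a simple rational stand-in with the right qualitative decoupling behaviour but not belonging to the paper's cutoff R_k, so any critical couplings it produces are only qualitative.

```python
# Illustrative explicit-Euler integrator for the flow equation (4).
# Simplifications relative to the paper: eta = 0, and l^3_1(w) is
# replaced by the stand-in 1/(1+w)^2 (constant for w -> 0, decaying for
# w -> infinity); this is NOT the threshold function of the paper's
# cutoff, so the numbers below are qualitative only.
import numpy as np

N_FIELD = 3                       # number of field components N
n_grid, rho_max = 256, 2.0        # grid in the dimensionless rho
rho = np.linspace(0.0, rho_max, n_grid)
d_rho = rho[1] - rho[0]

def l31(w):
    return 1.0 / (1.0 + w) ** 2   # stand-in threshold function

# Quartic initial condition u'_Lambda = lambda_L * (rho - kappa_L).
lam_L, kappa_L = 5.0, 0.05
up = lam_L * (rho - kappa_L)

dt = -1e-4                        # t = ln(k/Lambda) decreases toward IR
for step in range(20_000):        # integrate down to t = -2
    upp = np.gradient(up, d_rho)
    uppp = np.gradient(upp, d_rho)
    flow = (-2.0 * up + rho * upp
            - (N_FIELD - 1) / (4 * np.pi**2) * upp * l31(up)
            - 1.0 / (4 * np.pi**2) * (3 * upp + 2 * rho * uppp)
              * l31(up + 2 * rho * upp))
    up += dt * flow

# Minimum kappa(k) of the potential: first zero crossing of u'.
i = np.argmax(up > 0.0)
print("kappa at the final scale:", rho[i])
```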
In the phase with spontaneous symmetry breaking the order parameter $\rho_0 = \lim_{k\to 0} Z_k^{-1} k\,\kappa$ takes a non-vanishing value. In the symmetric phase the order parameter vanishes, i.e. ρ₀ = 0 for k = 0. The two phases are separated by a scaling solution for which $u_k'(\tilde\rho)$ becomes independent of k. For any given λ_Λ there is a critical value κ_cr for which the evolution leads to the scaling solution. A measure of the distance from the phase transition is the difference δκ_Λ = κ_Λ − κ_cr. If κ_Λ is interpreted as a function of temperature, the deviation δκ_Λ is proportional to the deviation from the critical temperature, i.e. δκ_Λ = A(T)(T_c − T) with A(T_c) > 0. The external field H is related to the derivative of the effective potential U′ = ∂U/∂ρ by H_a = U′ φ_a. The critical equation of state relating the temperature, the external field and the order parameter can then be written in the scaling form (φ = √(2ρ))

$$\frac{U'}{\varphi^{\delta-1}} = f(x)\,,\qquad x = -\,\frac{\delta\kappa_\Lambda}{\varphi^{1/\beta}}\,,\tag{6}$$

with critical exponents δ and β. For φ → ∞ our numerical solution for U′ obeys U′ ∼ φ^{δ−1} to high accuracy. The inferred value of δ is displayed in the table, and we have checked the scaling relation δ = (5 − η)/(1 + η). The value of the critical exponent η is obtained from eq. (5) for the scaling solution [6]. We have also verified explicitly that f depends only on the scaling variable x for the value of β given in the table. In figs. 1 and 2 we plot log(f) and log(df/dx) as functions of log|x| for N = 1 and N = 3. Fig. 1 corresponds to the symmetric phase (x > 0) and fig. 2 to the phase with spontaneous symmetry breaking (x < 0). One can easily extract the asymptotic behavior from the logarithmic plots. The curves become constant both for x → 0⁺ and x → 0⁻, with the same value, consistently with the regularity of f(x) at x = 0. For the universal function one obtains

$$\lim_{x\to 0} f(x) = D\,,\tag{7}$$

and H = Dφ^δ on the critical isotherm. For x → ∞ one observes that log(f) becomes a linear function of log(x) with constant slope γ. In this limit the universal function takes the form

$$\lim_{x\to\infty} f(x) = (C^+)^{-1}\, x^\gamma\,,\tag{8}$$

or

$$\lim_{\varphi\to 0} U' = (C^+)^{-1}\, |\delta\kappa_\Lambda|^\gamma\, \varphi^{\,\delta-1-\gamma/\beta} = \bar m^2\,,\tag{9}$$

and we have verified the scaling relation γ/β = δ − 1. One observes that the zero-field magnetic susceptibility, or equivalently the inverse unrenormalised squared mass $\bar m^{-2} = \chi$, is non-analytic for δκ_Λ → 0 in the symmetric phase: χ = C⁺|δκ_Λ|^{−γ}. In this phase the correlation length ξ = (Z₀χ)^{1/2}, which is equal to the inverse of the renormalised mass m_R, behaves as ξ = ξ⁺|δκ_Λ|^{−ν} with ν = γ/(2 − η). In the phase with spontaneous symmetry breaking (x < 0) the plot of log(f) in fig. 2 shows a singularity at x = −B^{−1/β}, i.e. f(x = −B^{−1/β}) = 0. The order parameter for H = 0 therefore behaves as φ = B(δκ_Λ)^β. Below the critical temperature the longitudinal and transversal magnetic susceptibilities χ_L and χ_T are different for N > 1 (f′ = df/dx):

$$\chi_L^{-1} = \frac{\partial^2 U}{\partial\varphi^2} = \varphi^{\delta-1}\left(\delta f(x) - \frac{x}{\beta}\, f'(x)\right),\qquad \chi_T^{-1} = \frac{1}{\varphi}\,\frac{\partial U}{\partial\varphi} = \varphi^{\delta-1} f(x)\,.\tag{10}$$

This is related to the existence of massless Goldstone modes in the (N − 1) transverse directions, which imply that the transversal susceptibility diverges for vanishing external field. Fluctuations of these massless modes also induce a divergence of the zero-field longitudinal susceptibility. This can be seen from the singularity of the plot of log(f′) for N = 3 in fig. 2. The first derivative of the universal function with respect to x vanishes as H → 0, i.e. f′(x = −B^{−1/β}) = 0 for N ≥ 2.
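Stepping back to how curves like those of figs. 1 and 2 are assembled in practice: for several near-critical initial conditions δκ_Λ one rescales the computed U′(φ) to f = U′/φ^{δ−1} and plots it against x = −δκ_Λ/φ^{1/β}; the curves must collapse onto a single function. The sketch below demonstrates only this bookkeeping, on synthetic data built from a mock scaling function (not the paper's numerical solution).

```python
# Scaling-collapse bookkeeping for eq. (6), on synthetic data: U' is
# generated from a mock scaling function, rescaled back, and the
# collapse is verified (all printed deviations are zero by
# construction; with real flow data they would be small but nonzero).
import numpy as np

beta, delta = 0.336, 4.75            # N = 1 exponents from the table

def mock_f(x):                       # stand-in for the true f(x)
    return 15.0 * (1.0 + 0.8 * x) ** 1.258

for d_kappa in (-1e-3, -2e-3, -4e-3):    # symmetric phase, x > 0
    phi = np.linspace(0.05, 0.5, 5)
    x = -d_kappa / phi ** (1.0 / beta)
    u_prime = phi ** (delta - 1.0) * mock_f(x)   # "measured" U'
    f = u_prime / phi ** (delta - 1.0)           # rescaled data
    print(np.round(f / mock_f(x) - 1.0, 12))     # collapse check
```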
For N = 1 there is a non-vanishing constant value of f′(x = −B^{−1/β}), with a finite zero-field susceptibility χ = C⁻(δκ_Λ)^{−γ}, where (C⁻)^{−1} = B^{δ−1−1/β} f′(−B^{−1/β})/β. For a non-vanishing physical infrared cutoff k the longitudinal susceptibility remains finite also for N ≥ 2: χ_L ∼ (kρ₀)^{−1/2}. In the ordered phase the correlation length for N = 1 behaves as ξ = ξ⁻(δκ_Λ)^{−ν} and, also for N > 1, the renormalised minimum ρ_{0R} = Z₀ρ₀ of the potential U scales as ρ_{0R} = E(δκ_Λ)^ν. The amplitudes of the singularities near the phase transition, D, C^±, ξ^±, B and E, are shown in the table. They are not universal, since different short-distance physics will result in different wave-function renormalisations Z_φ and Z_{φ²}. All models in the same universality class can, however, be related by a multiplicative rescaling of φ and δκ_Λ (or T_c − T), resulting in x → c_x x and f → c_f f. Ratios of amplitudes which are invariant under this rescaling are universal. We display the universal combinations C⁺/C⁻, ξ⁺/ξ⁻, R_χ = C⁺DB^{δ−1}, $\tilde R_\xi = (\xi^+)^{\beta/\nu} D^{1/(\delta+1)} B$ and ξ⁺E in the table. The asymptotic behavior observed for the universal function can be used to obtain a semi-analytical expression for f(x). We find that the following fit reproduces the numerical values of both f and df/dx within 1% (apart from the immediate vicinity of the zero of f for N = 3, cf. eq. (17)):

$$f_{\rm fit}(x) = D\,\big(1 + B^{1/\beta} x\big)^{a}\,\big(1 + \Theta\, x\big)^{\Delta}\,\big(1 + c\, x\big)^{\gamma - a - \Delta},\tag{11}$$

with $c = \big(C^+ D\, B^{a/\beta}\, \Theta^\Delta\big)^{-1/(\gamma-a-\Delta)}$. The parameter a is determined by the order of the pole of f⁻¹ at x = −B^{−1/β}, i.e. a = 1 (a = 2) for N = 1 (N > 1). The fitting parameters are chosen as Θ = 0.569 (1.312) and Δ = 0.180 (−0.595) for N = 1 (3). There is an alternative parametrisation of the equation of state in terms of renormalised quantities. In the symmetric phase (δκ_Λ < 0) we consider the dimensionless quantity

$$F(s) = \frac{U_R'}{m_R^2} = C^+ x^{-\gamma} f(x)\,,\qquad s = \frac{\rho_R}{m_R} = \frac{1}{2}\,(\xi^+)^3 (C^+)^{-1}\, x^{-2\beta}\,,\tag{12}$$

with ρ_R = Z₀ρ and $U_R^{(n)} = Z_0^{-n} U^{(n)}$. The derivatives of F at s = 0 yield the universal couplings

$$\frac{dF}{ds}(0) = \frac{U_R''(0)}{m_R} \equiv \frac{\lambda_R}{m_R}\,,\qquad \frac{d^2F}{ds^2}(0) = U_R'''(0) \equiv \nu_R\,,\tag{13}$$

and similarly for higher derivatives. They determine the behavior of f for x ≫ 1:

$$f(x) = (C^+)^{-1} x^\gamma + \frac{1}{2}\,\frac{\lambda_R}{m_R}\,(\xi^+)^3 (C^+)^{-2}\, x^{\gamma-2\beta} + \frac{1}{8}\,\nu_R\,(\xi^+)^6 (C^+)^{-3}\, x^{\gamma-4\beta} + \dots\tag{14}$$

In the ordered phase (δκ_Λ > 0) we consider the ratio

$$G(\tilde s) = \frac{U_R'}{\rho_{0R}^2} = \frac{1}{2}\, B^2 E^{-3} (-x)^{-\gamma} f(x)\,,\qquad \tilde s = \frac{\rho_R}{\rho_{0R}} = B^{-2}(-x)^{-2\beta}\,.\tag{15}$$

The values of the universal couplings

$$\frac{dG}{d\tilde s}(1) = \frac{U_R''(\rho_{0R})}{\rho_{0R}} \equiv \frac{\tilde\lambda_R}{\rho_{0R}}\,,\qquad \frac{d^2G}{d\tilde s^2}(1) = U_R'''(\rho_{0R}) \equiv \tilde\nu_R\,,\tag{16}$$

as well as of λ_R/m_R and ν_R, are given in the table. One observes that for N > 1 the renormalised quartic coupling $\tilde\lambda_R$ vanishes in the ordered phase. This results from the presence of massless fluctuations. For x near −B^{−1/β} the scaling function is approximated by

$$f(x) = E^3 B^{-6}\, (-x)^\gamma\, \big[(-x)^{-2\beta} - B^2\big]\left(\frac{2 B^2\,\tilde\lambda_R}{\rho_{0R}} + \tilde\nu_R\,\big[(-x)^{-2\beta} - B^2\big]\right) + \dots\tag{17}$$

In summary, our numerical solution of eq. (4) gives a very detailed picture of the critical equation of state. The numerical uncertainties are estimated by comparison of results obtained through two independent integration algorithms [8]. They are small, typically less than 0.3% for critical exponents and 1−3% for amplitudes. The scaling relations between the critical exponents are fulfilled to within a deviation of 2 × 10⁻⁴. The dominant quantitative error stems from the truncation of the exact flow equation and is related to the size of the anomalous dimension, η ≃ 4%. This is consistent with the fact that the critical exponents and amplitudes calculated here typically deviate by a few percent from the more precise values obtained by other methods [3].
If the equation of state is needed with a higher accuracy, one has to extend the truncation beyond the level of the present work.

Fig. 1: Logarithmic plot of f and df/dx for x > 0.
Fig. 2: Logarithmic plot of f and df/dx for x < 0.

Table 1: Parameters for the equation of state.

N | β | γ | δ | ν | η | λ_R/m_R | ν_R | λ̃_R/ρ_0R | ν̃_R
1 | 0.336 | 1.258 | 4.75 | 0.643 | 0.044 | 9.69 | 108 | 61.6 | 107
3 | 0.388 | 1.465 | 4.78 | 0.747 | 0.038 | 7.45 | 57.4 | 0 | ≃ 250

N | C⁺ | D | B | ξ⁺ | E | C⁺/C⁻ | ξ⁺/ξ⁻ | R_χ | R̃_ξ | ξ⁺E
1 | 0.0742 | 15.88 | 1.087 | 0.257 | 0.652 | 4.29 | 1.86 | 1.61 | 0.865 | 0.168
3 | 0.0743 | 8.02 | 1.180 | 0.263 | 0.746 | − | − | 1.11 | 0.845 | 0.196

[1] K. G. Wilson, Phys. Rev. B 4 (1971) 3174; 3184; K. G. Wilson and I. G. Kogut, Phys. Rep. 12 (1974) 75; F. J. Wegner, in Phase Transitions and Critical Phenomena, vol. 6, eds. C. Domb and M. S. Greene (Academic Press, 1976).
[2] B. Widom, J. Chem. Phys. 43 (1965) 3898.
[3] E. Brezin, D. J. Wallace and K. G. Wilson, Phys. Rev. Lett. 29 (1972) 591; Phys. Rev. B 7 (1973) 232; J. Zinn-Justin, Quantum Field Theory and Critical Phenomena (Oxford Science Publications, 1989); G. Parisi, J. Stat. Phys. 23 (1980) 49; Statistical Field Theory (Addison-Wesley, 1988); P. Butera and M. Comi, University of Milano preprint IFUM-TH-498, 1995.
[4] C. Wetterich, Nucl. Phys. B 352 (1991) 529; Z. Phys. C 57 (1993) 451; ibid. C 60 (1993) 461; Phys. Lett. B 301 (1993) 90.
[5] Earlier versions of exact renormalisation-group equations can be found in ref. [1] and in: F. J. Wegner and A. Houghton, Phys. Rev. A 8 (1973) 401; S. Weinberg, Critical phenomena for field theorists, in Erice Subnucl. Phys. 1 (1976); J. F. Nicoll and T. S. Chang, Phys. Lett. A 62 (1977) 287; J. Polchinski, Nucl. Phys. B 231 (1984) 269; A. Hasenfratz and P. Hasenfratz, Nucl. Phys. B 270 (1986) 687.
[6] N. Tetradis and C. Wetterich, Nucl. Phys. B 422 (1994) 541.
[7] T. R. Morris, Phys. Lett. B 329 (1994) 241.
[8] J. Adams, J. Berges, S. Bornholdt, F. Freire, N. Tetradis and C. Wetterich, University of Kiel, University of Heidelberg and University of Oxford preprints CAU-THP-95-10, HD-THEP-95-15 and OUTP-95-12-P, 1995.
[]
[ "Offline Drawing of Dynamic Trees: Algorithmics and Document Integration", "Offline Drawing of Dynamic Trees: Algorithmics and Document Integration", "Offline Drawing of Dynamic Trees: Algorithmics and Document Integration", "Offline Drawing of Dynamic Trees: Algorithmics and Document Integration" ]
[ "Malte Skambath [email protected] \nDepartment of Computer Science\nKiel University\nGermany\n", "Till Tantau [email protected] \nInstitute of Theoretical Computer Science\nUniversität zu Lübeck\nGermany\n", "Malte Skambath [email protected] \nDepartment of Computer Science\nKiel University\nGermany\n", "Till Tantau [email protected] \nInstitute of Theoretical Computer Science\nUniversität zu Lübeck\nGermany\n" ]
[ "Department of Computer Science\nKiel University\nGermany", "Institute of Theoretical Computer Science\nUniversität zu Lübeck\nGermany", "Department of Computer Science\nKiel University\nGermany", "Institute of Theoretical Computer Science\nUniversität zu Lübeck\nGermany" ]
[]
While the algorithmic drawing of static trees is well-understood and well-supported by software tools, creating animations depicting how a tree changes over time is currently difficult: software support, if available at all, is not integrated into a document production workflow and algorithmic approaches only rarely take temporal information into consideration. During the production of a presentation or a paper, most users will visualize how, say, a search tree evolves over time by manually drawing a sequence of trees. We present an extension of the popular T E X typesetting system that allows users to specify dynamic trees inside their documents, together with a new algorithm for drawing them. Running T E X on the documents then results in documents in the svg format with visually pleasing embedded animations. Our algorithm produces animations that satisfy a set of natural aesthetic criteria when possible. On the negative side, we show that one cannot always satisfy all criteria simultaneously and that minimizing their violations is NP-complete.
10.1007/978-3-319-50106-2_44
[ "https://arxiv.org/pdf/1608.08385v1.pdf" ]
16,348,998
1608.08385
6d953ee01f87d5d07b8da09c836fe5cd81bbe430
Offline Drawing of Dynamic Trees: Algorithmics and Document Integration
Malte Skambath [email protected], Department of Computer Science, Kiel University, Germany; Till Tantau [email protected], Institute of Theoretical Computer Science, Universität zu Lübeck, Germany
Introduction
Trees are undoubtedly among the most extensively studied graph structures in the field of graph drawing; algorithms for drawing trees date back to the origins of the field [26,40]. However, the extensive, ongoing research on how trees can be drawn efficiently, succinctly, and pleasingly focuses on either drawing a single, "static" tree or on interactive drawings of "dynamic" trees [11,12,27], which are trees that change over time. In contrast, the problem of drawing dynamic trees noninteractively in an offline fashion has received less attention. It is this problem that lies at the heart of our paper. Consider how an author could explain, in a paper or in a presentation, how a tree-based data structure such as a search tree works. In order to explain the dynamic behavior, our author might wish to show how the data structure evolves for a sequence of update operations. A typical drawing of the evolving sequence might look as in Figure 1, which has been created "manually" by running the Reingold-Tilford algorithm [29] on each tree in the sequence independently. (Animations in this document will only be rendered in the svg version [32]; see Section 2.3 for a discussion of the reasons.) While the result is satisfactory, there are (at least) two shortcomings:
First Shortcoming: Flawed Layout. In the first step, the layout of the root's children changes (their horizontal distance decreases) even though there is no structural change at the root. While in the present graph the effect is small, one can construct examples where a single node removal causes a change in distances on all levels, obscuring where the actual structural change occurred. Since the whole sequence of trees (the whole "dynamic tree") is given by the author, the problem can be addressed by not running the Reingold-Tilford algorithm on each tree individually, but by running it on the "supergraph" resulting from uniting all trees in the sequence, resulting in the visualization in Figure 2.
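For concreteness, the supergraph is nothing more than the union of the node and edge sets of all snapshots. The following minimal Python sketch (ours, for illustration; not the paper's implementation) computes it:

    # Sketch: build the supergraph of a dynamic tree, i.e. the union of the
    # node and edge sets of the trees T_1, ..., T_k.
    def supergraph(trees):
        """trees: list of (nodes, edges) pairs; edges are (parent, child) tuples."""
        all_nodes, all_edges = set(), set()
        for nodes, edges in trees:
            all_nodes |= set(nodes)
            all_edges |= set(edges)
        return all_nodes, all_edges

    # Example: a search tree before and after deleting node 12.
    t1 = ({10, 5, 2, 7, 6, 15, 12},
          {(10, 5), (5, 2), (5, 7), (7, 6), (10, 15), (15, 12)})
    t2 = ({10, 5, 2, 7, 6, 15},
          {(10, 5), (5, 2), (5, 7), (7, 6), (10, 15)})
    nodes, edges = supergraph([t1, t2])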
Unfortunately, this simple supergraph approach introduces new problems: First, the nodes "2" and "7" are unnecessarily far apart -the nodes "3" and "6" could use the same space since they are never both members of the same tree. Second, it is easy to construct sequences of trees whose union is not a tree itself. We address these problems in Section 3, where we present a new algorithm for computing layouts of dynamic trees. For dynamic trees whose supergraphs are trees or at least acyclic, the algorithm finds an optimal layout (with respect to natural aesthetic criteria) of the dynamic tree in linear time. For cyclic supergraphs, which are also important in practice since they arise for instance from the rotations necessary to balance search trees in data structures such as avl trees [1], we show that one has to break the cycles in order to lay out the graph according to the criteria we develop. While we show that it is NP-complete to find a minimal set of break points, a simple greedy heuristic for finding breakpoints turns out to produce visually pleasing results.
Second Shortcoming: Presentation as a Sequence of Snapshots. In order to depict the evolving nature of her dynamic tree, our author depicted different "snapshots" of the tree at different times and arranged these snapshots in a sequence. While the temporal dimension needs to be turned into something else when our medium of communication is printed paper, for documents presented using appropriate electronic devices we can visualize dynamic trees using animations. Such an animation needs much less space on a page and, perhaps more importantly, our visual system is much better at spotting movement than at identifying structural changes between adjacent objects. In Section 2 we present a system for creating animations on-the-fly during a run of the T E X program on a text document: First, we have augmented the popular Tik Z graphic package [37] (a macro package for T E X for creating graphics) by commands that compute and embed animations in the output files. Due to the way the system works, these commands have almost no overhead regarding compilation speed or resulting file size. Second, we have implemented a prototype of our algorithm from Section 3 for drawing dynamic trees that uses these animation commands. As a result, when an author specifies the above dynamic graph appropriately in a T E X document and then runs T E X on it to convert it, the resulting file will contain the normal text and graphics as well as an embedded animation of the dynamic tree. When the document is viewed on electronic devices with a modern browser, the animation runs right inside the document.
Related Work. Approaches to drawing static trees date back to the early 1970s, namely to the work of Knuth, Wetherell and Shannon, and Sweet [26,40,35]. A standard algorithm still in use today is due to Reingold and Tilford [29], see also [38]. They suggested that symmetric tree structures should be drawn symmetrically and provided an algorithm that supports this objective well and runs in linear time. Instead of visualizing trees as node-link diagrams, one can also use tree maps [25], three dimensional cone trees [30], or sunburst visualizations [33]. Approaches to drawing general dynamic graphs are more recent. The sequence-of-snapshot visualizations sketched before as well as animations are standard ways of visualizing them [19].
One can also generally treat time as another spatial dimension, which turns nodes into tubes through space [23]. There are many further techniques that are not restricted to node-link diagrams [8,9,22,28]; for an extensive overview of the whole state of the art including a taxonomy of different visualization techniques see Beck et al. [5], or [21] for a more tree-specific overview. Diehl, Görg and Kerren [14,15] introduced a general concept, called foresighted layout, for drawing dynamic graphs offline. They propose to collapse nodes in the supergraph that never exist at the same time and to then draw the supergraph. While this approach produces poor results for trees, the results are better for hierarchical graphs [20]. Approaches tailored specifically to drawing dynamic trees are currently almost always online approaches. The algorithms, which expect a sequence of update operations as input [27,12], are integrated into interactive software and create or adjust the layout for each change. An early algorithm designed for dynamic trees was developed by Moen [27]. Later Cohen et al. [11,12] presented algorithms for different families of graphs that include trees. Concerning the integration of tree drawing algorithms into text processing software, first implementations for the typesetting system T E X date back to Eppstein [18] and Brüggemann-Klein and Wood [6]. A more recent implementation of the Reingold-Tilford algorithm by the second author is now part of the graph drawing engine in Tik Z [36].
Organisation of this Paper. This paper is structured into two parts: In the first part, Section 2, we present the system we have developed for generating animations of dynamic graphs that are embedded into documents. Our core argument is that the system's seamless integration into a widely used system such as T E X is crucial for its applicability in practice. In the second part, Section 3, partly as a case study, partly as a study of independent interest, we investigate how a dynamic tree can be drawn using animations. We derive aesthetic criteria that animations and even image sequences of dynamic trees should meet and present an algorithm that does meet them. Full proofs can be found in the appendix, which also contains a gallery of dynamic trees drawn using our prototype.
Dynamic Trees in Documents
The problem for which we wish to develop a practical solution in the rest of this paper is the following: Visualize one or more dynamic trees inside a document created by an author from some manuscript. To make the terminology precise, by dynamic graph we refer to a sequence (G 1 , . . . , G n ), where each G i = (V i , E i , φ i ) is a directed, annotated graph with vertex set V i , edge set E i , and an annotation function φ i : V i ∪ E i → A that assigns additional information to each node and edge from some set A of annotations like ordering or size information. A dynamic tree is a dynamic graph where each T i is a tree with the edges pointing away from the root. A manuscript is a plain text written by an author that can be transformed by a program into an (output) document, a typically multi-page text document with embedded graphics or embedded animations. Note that the problem is an offline problem since the manuscript contains a full description of the dynamic graph and algorithms have full access to it. In the rest of this section we explain how the practical obstacles arising from the problem are solved by the system we have developed; in Section 3 we investigate algorithmic questions.
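The formal model translates directly into code. A minimal Python sketch (ours, with hypothetical names; not part of the described system) of the data structure, including the check that every snapshot is a tree with edges pointing away from the root:

    from dataclasses import dataclass, field

    @dataclass
    class AnnotatedGraph:
        nodes: set = field(default_factory=set)
        edges: set = field(default_factory=set)   # directed (u, v) pairs
        phi: dict = field(default_factory=dict)   # node or edge -> annotation

    DynamicGraph = list  # a dynamic graph is simply a list of AnnotatedGraph

    def is_tree(g: AnnotatedGraph) -> bool:
        """Tree with edges away from the root: n-1 edges, exactly one node
        of in-degree 0, and every node reachable from that root."""
        indeg = {v: 0 for v in g.nodes}
        children = {v: [] for v in g.nodes}
        for (u, v) in g.edges:
            indeg[v] += 1
            children[u].append(v)
        roots = [v for v, d in indeg.items() if d == 0]
        if len(roots) != 1 or len(g.edges) != len(g.nodes) - 1:
            return False
        seen, stack = set(), [roots[0]]
        while stack:
            v = stack.pop()
            if v in seen:
                return False
            seen.add(v)
            stack.extend(children[v])
        return len(seen) == len(g.nodes)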
In the introduction we saw an example of how a dynamic tree can be visualized using a series of "snapshots" shown in a row. While this way of depicting a dynamic tree is a sensible, traditional way of solving the problem (drawings on printed paper "cannot change over time"), documents are now commonly also read on electronic devices that are capable of displaying changing content and, in particular, animations. We claim that using an animation instead of a sequence of snapshots has two major advantages: First, sequences of snapshots need a lot of space on a page even for medium-sized examples. We did a cursory survey of standard textbooks on computer science and found that typically only three to four snapshots are shown and that the individual trees are often rather small. For an animation, the length of the sequence is only limited by the (presumed) attention span of the reader and not by page size. Second, our visual system is much better at spotting movement than at identifying structural changes between adjacent objects. When operations on trees such as adding or deleting a leaf or moving whole subtrees are visualized using movements, readers can identify and focus on these operations on a subconscious level. Given the advantages offered by animations, it is surprisingly difficult to integrate animations into documents. Of course, there is a lot of specialized software for creating animations, and graphics output formats like pdf or svg allow the inclusion of movie files in documents. However, this requires authors to use -apart from their main text processor like T E X or Word -one or more programs for generating animations, and they then have to somehow "link" the (often very large) outputs of these different programs together. The resulting workflows are typically so complicated that authors rarely employ them. Even when they are willing to use and integrate multiple tools into their workflow, authors face the problem that using different tools makes it next to impossible to keep a visually consistent appearance of the document [36]. Very few, if any, animation programs will be able to render for instance T E X formulas inside to-be-animated nodes correctly and take the sizes of these formulas into account. We have developed a system that addresses the above problems; more precisely, we have augmented an existing system that is in wide-spread use -T E X -by facilities for specifying dynamic trees, for computing layouts for them, and for generating animations that are embedded into the output files. Our extensions are built on top of Tik Z's graph drawing engine [36], which has been part of standard T E X distributions since 2014. Authors first specify the dynamic trees they wish to draw inside T E X manuscripts using a special syntax, which we describe in Section 2.1 (conceptually, this is similar to specifying for instance formulas inside the T E X manuscripts). Next, authors apply a graph drawing algorithm to the specified dynamic graph by adding an appropriate option to the specification and then running the T E X program as explained in Section 2.2. Lastly, in Section 2.3, we discuss which output formats are supported by our system, how the output can be viewed on electronic devices, and how a fallback for printed paper can be generated.
The Input: Specifying Dynamic Trees
In order to make dynamic trees accessible to graph drawing algorithms, we first have to specify them.
For dynamic graphs and, in particular, for dynamic trees, there are basically two different methods available to us: We can specify each graph or tree in the dynamic graph sequence explicitly. Alternatively, we can specify a sequence of update operations that transform one graph into the next such as, for the dynamic trees of search trees, the sequence of insert and delete operations that give rise to the individual trees. Besides being easy and natural to use, the second method also provides algorithms with rich semantic information concerning the change from one graph to the next in the sequence. Despite the fact that the second method is more natural in several contexts and more semantically rich, for our prototype we use the first method: Authors specify dynamic graphs by explicitly specifying the sequence of graphs that make up the dynamic graph. We have two reasons for this choice: First, specifying the sequence of graphs explicitly imposes the least restrictions on what kind of dynamic graphs can be drawn, in principle. In contrast, the set of update operations necessary to describe the changes occurring just for the standard data structures balanced search trees, heaps, and union-find trees is large and hard to standardize. For instance, should the root rotation occurring in avl trees be considered a standard update operation or not? Second, it is easy to convert a sequence of update operations into a sequence of graphs, while the reverse direction is harder and, sometimes, not possible. Our system can easily be extended to accept different sequences of update operations as input and convert them on-the-fly into a sequence of graphs that is then processed further (a sketch of such a conversion follows at the end of this subsection). There are different possible formats for specifying individual graphs and, in particular, trees of graph sequences, including graphml, an xml-based markup language; the dot format, used by graphviz [17]; the gml format, used by the Open Graph Drawing Framework [10]; or the format of the \graph command of Tik Z [37], which is similar to the dot format. As argued in [36], it is not purely a matter of taste which format is used; rather, good formats make it easy for humans to notate all information about a graph that is available to them. For instance, for static graphs the order in which vertices are specified is almost never random, but reflects information about them that the author had and that algorithms should take into account. Since the algorithm and system we have implemented are built on top of the graph drawing engine of Tik Z [36], we can use all of the different syntax flavors offered by this system, but authors will typically use the \graph command. Each graph in the sequence of graphs is surrounded by curly braces and, following the opening brace, we say [when=i] to indicate that we now specify the ith graph in the sequence. The graph is then specified by listing the edges; please see [36] and [37] for details on the syntax and its use in Tik Z. The result is a specification of the dynamic graph such as the following for the example graph from Figures 1 and 2:
\tikz \graph {
  {[when=1] 10->{ 5->{ 2, 7->6 }, 15->12 } },
  {[when=2] 10->{ 5->{ 2, 7->6 }, 15 } },
  {[when=3] 10->{ 5->{ 2, 7 }, 15 } },
  {[when=4] 10->{ 5->{ 2->{ , 3 }, 7 }, 15 } }
};
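As a sketch of the on-the-fly conversion mentioned above (ours; the operation names insert and delete are hypothetical and stand for whatever update operations a data structure offers):

    # Sketch: expand a sequence of update operations into the explicit
    # sequence of tree snapshots that the system then processes.
    def expand(operations):
        """operations: list like [("insert", parent, child), ("delete", node)].
        For simplicity we assume deletions remove leaves. Returns the list
        of (nodes, edges) snapshots after each operation."""
        nodes, edges, snapshots = set(), set(), []
        for op in operations:
            if op[0] == "insert":
                _, parent, child = op
                nodes |= {parent, child}
                edges.add((parent, child))
            elif op[0] == "delete":
                _, v = op
                nodes.discard(v)
                edges = {(a, b) for (a, b) in edges if v not in (a, b)}
            snapshots.append((set(nodes), set(edges)))
        return snapshots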
Document Processing and Algorithm Invocation
Once a dynamic graph has been specified as part of a larger T E X document, we need to process it. This involves both running a dynamic graph drawing algorithm to determine the positions of the nodes and the routing of the edges as well as producing commands that create the desired animation. The framework provided by the graph drawing engine [36] of Tik Z is well-suited for the first task. All the author has to do is to load an appropriate graph drawing library and then use a special key with the \tikz command: The key animated binary tree layout causes the graph drawing engine to process the dynamic graph. It will parse the dynamic graph, convert it to an object-oriented model, and pass it to an algorithm from the evolving library, which is written in the Lua programming language [24] (when the algorithm is also implemented in the Lua language, it can be used directly by T E X without special configurations or runtime linking, but it can also be implemented in C or C++ at the cost of a more complicated deployment). The framework also handles the later rendering of the nodes and edges and their correct scaling and embedding into the output document. Thus, the algorithm's implementation only needs to address the problem of computing node positions from an object-oriented model of the dynamic graph. The implementation need not (indeed, cannot) produce or process graphical output and primitives. Once the algorithm has computed the positions for nodes and edges of the graphs in the sequence, actual animations need to be generated. For this, Tik Z itself was extended by a new animation subsystem, which can be used independently of the graph drawing engine and allows users to specify and embed arbitrary animations in their documents. The animation subsystem adds animation annotations to the output file, which are statements like "move this graphics group by 1cm to the right within 2s" or "change the opacity of this node from opaque to transparent within 200ms." More formally, they are xml elements in the Synchronized Multimedia Integration Language [7] (a sketch of such an annotation is given at the end of this section). For the animation of dynamic graphs, the graph drawing engine can now map the computed positions of the nodes at different times to Tik Z commands that add appropriate movement and opacity-change annotations to the output.
The Output: Scalable Vector Graphics
The annotation-based way of producing animations has two important consequences: Firstly, adding the annotations to the output does not have a noticeable effect on the speed of compilation (computing the necessary xml statements is quite easy) nor on the file size (annotations are small). However, secondly, the job of rendering the graph animations with, say, 30 frames per second does not lie with T E X, but with the viewer application, and we need both a format and viewer applications that support this. Currently, there is only one graphics format that supports these annotation-based animations: The Scalable Vector Graphics (svg) format [13], which is a general purpose graphics language that is in wide-spread use. All modern browsers support it, including the parsing and rendering of svg animations. The dvisvgm program, which is part of standard T E X distributions, transforms arbitrary T E X documents into svg files that, when viewed in a browser, are visually indistinguishable from pdf files produced by T E X -except, of course, for the animations of the dynamic graphs. While we argued that animations are a superior way of visualizing dynamic graphs, there are situations where they are not feasible: First, documents are still often printed on paper. Second, the popular pdf format does not support annotation-based animations and, thus, is not able to display Tik Z's animations. Third, it is sometimes desirable or necessary to display "stills" or "snapshots" of animations at interesting time steps alongside the animation. In these situations, authors can say make snapshot of=t to replace the animation by a static picture of what the animated graphic would look like at time t. Since the computation of the snapshot graphic is done by T E X and since no animation code is inserted into the output, this method works with arbitrary output formats, including pdf.
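For readers unfamiliar with SMIL: at the svg level, such an annotation is a single xml element. The following Python sketch (ours) emits a movement annotation; animateTransform and its attributes are standard svg/SMIL, while the helper function itself is purely illustrative and not part of the Tik Z animation subsystem:

    # Sketch: emit an SMIL annotation that moves a graphics group to a new
    # position within a given duration, as an svg file could carry it.
    def move_annotation(group_id, x, y, duration_s):
        """x, y are target translation values in svg user units (SMIL
        translate values are unitless); fill="freeze" keeps the end state."""
        return (f'<animateTransform xlink:href="#{group_id}" '
                f'attributeName="transform" type="translate" '
                f'to="{x} {y}" dur="{duration_s}s" fill="freeze"/>')

    print(move_annotation("node-7", 28.35, 0, 2))  # roughly "1cm right in 2s"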
Algorithmic Aspects of Drawing Dynamic Trees
Given a dynamic tree T = (T 1 , . . . , T k ) consisting of a sequence of trees T i = (V i , E i , φ i ), we saw in the introduction that neither drawing each tree independently and then "morphing" the subsequent drawings to create an animation nor laying out just the supergraph super(T) = (∪_i V_i, ∪_i E_i) and then animating just the opacity of the nodes and edges will lead to satisfactory drawings of dynamic trees. Our aim is to devise a new algorithm that addresses the shortcomings of these approaches and that meets a number of sensible aesthetic criteria that we formulate in Section 3.1. The algorithm, presented in Section 3.2, has been implemented as a prototype [31] and we have used it to create the animations of dynamic trees in the present paper. While the prototype implementation does not even run in linear time (as would be possible by Theorem 3.2), it only needs fractions of a second for the example graphs from this paper.
Aesthetic Criteria for Drawing Dynamic Trees
Already in 1979, Wetherell and Shannon [40] explicitly defined aesthetic criteria for the layout of trees. Two years later Reingold and Tilford [29] refined these static criteria towards more symmetric drawings in which isomorphic subtrees must have the same layout. While the criteria were originally formulated for binary trees only, one can allow any number of children when there is an ordering on the children of each node.
Criterion (Ranking). The vertical position of a node equals its topological distance from the root.
Criterion (Ordering). The horizontal positions of a node's children respect their topological order in the tree.
Criterion (Centering). Nodes are horizontally centered between their leftmost and rightmost child if there are at least two children.
Criterion (Symmetry). All topologically order-isomorphic subtrees are drawn identically. Topologically mirrored subtrees are drawn horizontally mirrored.
As numerous applications show, these rather sensible criteria lead to aesthetically pleasing drawings of static trees. We extend these criteria to the dynamic case. Ideally, we would like to keep all of the above criteria, but will see in a moment that this is not always possible. Our first dynamic criterion forbids the unnecessary movement of nodes in drawings like the one shown on the right, which shows the same problem as the example in the introduction did: The horizontal offset between n and c changes from T i to T i+1 even though there is no structural change at n. (Note that when a node disappears in the step from T i to T i+1 and then reappears in T i+2 , the stability criterion does not require it to appear at the same position as before.)
Criterion (Stability). The horizontal offset between a node n and a child c may not change between the layouts of trees T i and T i+1 if c does not change its position in the ordering of the children of n.
While the stability criterion forbids relative movements of connected nodes, it allows whole subtrees to move without changing their inner layout. This emphasizes the important parts of changes since multiple objects moving with the same speed are perceived as one connected group [4,39]. The criterion reduces movements and draws common structures identically, thereby reducing errors in understanding [2] and making it easier for viewers to correctly recognize the changes in the tree sequence [3].
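The Stability Criterion can be checked mechanically on a computed sequence of layouts; the following sketch (ours, not part of the algorithms of this paper) returns all violating parent-child pairs between two consecutive snapshots:

    # Sketch: check the Stability Criterion between layouts of T_i and T_{i+1}.
    # layout: dict node -> (x, y); children: dict node -> ordered child list.
    def stability_violations(layout_i, layout_j, children_i, children_j):
        violations = []
        for n, kids_i in children_i.items():
            kids_j = children_j.get(n, [])
            for r, c in enumerate(kids_i):
                # The criterion only applies if c keeps its position in the
                # ordering of n's children from one snapshot to the next.
                if r < len(kids_j) and kids_j[r] == c \
                        and n in layout_j and c in layout_j:
                    off_i = layout_i[c][0] - layout_i[n][0]
                    off_j = layout_j[c][0] - layout_j[n][0]
                    if abs(off_i - off_j) > 1e-9:
                        violations.append((n, c))
        return violations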
While all of the above criteria are reasonable, unfortunately, there is no way of meeting all of them simultaneously, see the appendix for the proof:
Lemma 3.1. No drawing of the dynamic tree T = (T 1 , T 2 ) from Figure 3 meets all of the criteria Ranking, Ordering, Centering, Symmetry, and Stability.
In view of the lemma, we will need to weaken one or more of our criteria, while still trying to meet them at least in "less problematic" cases than the dynamic tree from Figure 3. Furthermore, even when the criteria can be met, this may not always be desirable. Consider the right example, which seems like a "reasonable" drawing of a dynamic tree. The Stability Criterion enforces the large distance between b and c already in T 1 , but the Symmetry Criterion would now actually enforce the same distance between 2 and 3, which seems undesirable here. As a replacement of the Symmetry Criterion we propose a Weak Symmetry Criterion that our algorithm will be able to meet in many important cases, including the troublesome example from Lemma 3.1. Nevertheless, there are still dynamic trees that cannot be drawn in this way, see Lemma 3.3; these also turn out to be the algorithmically difficult cases.
Criterion (Weak Symmetry). Let n and n′ be nodes such that for all i ∈ {1, . . . , k} the subtrees rooted at n and at n′ in T i are order-isomorphic (or all mirrored). Then in all drawings of the T i the subtrees rooted at n and n′ must all be drawn identically (or all mirrored).
An Algorithm for Drawing Arbitrary Dynamic Trees
Our starting point for an algorithm that meets the aesthetic criteria just formulated is the classical Reingold-Tilford algorithm [29]. It will be useful to review this algorithm briefly, formulated in a "bottom-up" fashion: While there is a node that has not yet been processed, pick a node n whose children c 1 , . . . , c m have all already been processed (this is immediately the case for all leafs, where m = 0). For each child c r a layout L(c r ) will have been computed for the subtree T (c r ) of T rooted at c r . The algorithm now shifts the L(c r ) vertically so that all c r lie on the same horizontal line (Ranking Criterion), then shifts them horizontally so that c 1 comes first, followed by c 2 , and so on (Ordering Criterion), such that no overlap of the L(c r ) occurs. Finally, n is centered above its children (Centering Criterion). The Symmetry Criterion is satisfied automatically by this algorithm since the same shifts occur for symmetric subtrees. Using appropriate data structures, the algorithm can be implemented in linear time. Our Algorithm A.1, see the appendix for pseudo-code, uses the same basic idea as the Reingold-Tilford algorithm, but introduces two new ideas.
Idea 1: Treat Nodes as Three-Dimensional Objects. In our algorithm, we treat nodes and subtrees as "three dimensional" objects with time as the third dimension. Given a dynamic tree T = (T 1 , . . . , T k ), the algorithm does not process the T i one at a time (as online algorithms have to do), but instead considers for each node n of the supergraph super(T ) the sequence (T 1 (n), . . . , T k (n)) of trees rooted at n in the different T i and computes a whole sequence of layouts (L 1 (n), . . . , L k (n)) for these trees: The core operation of the Reingold-Tilford algorithm, the shifting of a layout L(c r ) until it almost touches the previous layout L(c r−1 ), is replaced by a shifting of the whole sequence (L_1(c^r_1), . . .
, L_k(c^r_k)), where c^i_j denotes the i-th child of n in T j , until at least one layout L_j(c^r_j) (one of the gray layouts in the example) almost touches its sibling's layout L_j(c^{r−1}_j) (one of the dark layouts).
Idea 2: Processing the Supergraph Using a Topological Ordering. For static trees, there is a clear order in which the nodes should be processed by the Reingold-Tilford algorithm: from the leafs upwards. For a dynamic tree, this order is no longer clear -just consider the example from Figure 3: Should we first process node 1 or node a? Our algorithm addresses this ordering problem as follows: We compute the supergraph super(T ) and then check whether it is acyclic. If so, it computes a topological ordering of super(T ) and then processes the nodes in this order. Observe that this guarantees that whenever a node is processed, complete layouts for its children will already have been computed.
Theorem 3.2. Let T be a dynamic tree whose supergraph is acyclic. Then Algorithm A.1 draws T in linear time such that all of the criteria Ranking, Ordering, Centering, Weak Symmetry, and Stability are met.
Theorem 3.2 settles the problem of drawing dynamic trees with acyclic supergraphs nicely. In contrast, for a cyclic supergraph, things get much harder:
Lemma 3.3. No drawing of T = (T 1 , T 2 , T 3 ) from Figure 3 meets all of the criteria Ranking, Ordering, Centering, Weak Symmetry, and Stability.
The lemma tempts us to just "give up" on cyclic supergraphs. However, these arise naturally in prune-and-regraft operations and from rotations in search trees -which are operations that we would like to visualize. We could also just completely ignore the temporal criteria and return to drawing each tree individually in such cases -but we might be able to draw everything nicely except for a single "small" cycle "somewhere" in the supergraph. We propose to deal with the cycle problem by "cutting" the cycles with as few "temporal cuts" as possible. These are defined as follows: Let G = (G 1 , . . . , G k ) be a dynamic graph, let n be a node of the supergraph super(G), and let i ∈ {1, . . . , k − 1}. The temporal cut of G at n and i is a new dynamic graph G′ that is identical to G, except that for all j ∈ {i + 1, . . . , k} in which G j contains the node n, this node is replaced by the same new node n′ (and all edges to or from n are replaced by edges to or from n′). Temporal cuts can be used to remove cycles from the supergraph of a dynamic graph, which allows us to then run our Algorithm A.1 on the resulting graph; indeed, simply "cutting everything at all times" turns every supergraph into a (clearly acyclic) collection of non-adjacent edges and isolated nodes. However, we wish to minimize the number of temporal cuts since, when we visualize G using an animation, the different locations that may be assigned to n and n′ will result in a movement of the node n to the new position of n′. By the above discussion, we would like to find an algorithm that solves the following problem temporal-cut-minimization: Given a dynamic tree T , find a minimal number of temporal cuts, such that the resulting dynamic tree T′ has an acyclic supergraph. Unfortunately, this problem turns out to be difficult:
Theorem 3.4. The decision version of temporal-cut-minimization is NP-complete.
In light of the above theorem, we have developed and implemented a simple greedy heuristic, Algorithm A.2, for finding temporal cuts that make the supergraph acyclic, which our prototype runs prior to invoking Algorithm A.1: Given a dynamic tree, the heuristic simply adds the trees T i and their edges incrementally to the supergraph. However, whenever adding an edge e = (v, w) of T i to the supergraph creates a cycle, we cut w at i − 1.
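A compact Python sketch of this greedy strategy (ours; it simplifies Algorithm A.2 from the appendix in that edges of the current tree that were added before a cut are not renamed retroactively):

    # Sketch: add trees incrementally to the supergraph; whenever an edge
    # (v, w) of T_i would close a cycle, cut w at i-1 by renaming it to the
    # fresh node (w, i) from step i onwards.
    def reachable(src, dst, edges):
        adj = {}
        for a, b in edges:
            adj.setdefault(a, []).append(b)
        stack, seen = [src], set()
        while stack:
            v = stack.pop()
            if v == dst:
                return True
            if v not in seen:
                seen.add(v)
                stack.extend(adj.get(v, []))
        return False

    def greedy_cuts(trees):
        """trees: list of edge sets; returns the cut trees (edge sets)."""
        E, rename, cut_trees = set(), {}, []
        resolve = lambda x: resolve(rename[x]) if x in rename else x
        for i, edges in enumerate(trees):
            cur = set()
            for v, w in edges:
                v, w = resolve(v), resolve(w)
                if reachable(w, v, E | cur):   # (v, w) would close a cycle
                    rename[w] = (w, i)         # "cut w at i - 1"
                    w = rename[w]
                cur.add((v, w))
            E |= cur
            cut_trees.append(cur)
        return cut_trees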
Conclusion and Outlook
We have presented a system for offline drawings of dynamic trees using animations that are embedded in (text) documents. The system has been implemented [31] as an extension of the popular T E X system and will become part of a future version of Tik Z (currently available in the development version at http://pgf.cvs.sourceforge.net). The generated animations are light-weight both in terms of file size and generation time, but require that the documents (or, at least, the graphic files) are stored in the svg format. Our new algorithm is a natural extension of the Reingold-Tilford algorithm to the dynamic case, but while the original algorithm runs in linear time on all trees, we showed that the dynamic case leads to NP-complete problems. Fortunately, in practice, the hard subproblems can be solved satisfactorily using a greedy strategy -at least, that has been our finding for a limited number of examples such as the above animation; a perceptual study of animated drawings of dynamic graphs has not (yet) been conducted. We see our algorithm as a first step towards a general set of algorithms for drawing dynamic graphs using animations, which we believe to have a great (and not yet fully realized) potential as parts of text documents. A next logical step would be a transferal of the Sugiyama method [16,34] to the dynamic offline case.
A Technical Appendix
In this appendix, Section A.1 presents some example animations for dynamic trees that were generated using our prototype implementation. In Section A.2 we present pseudo-code for our two algorithms: The main Algorithm A.1 for drawing dynamic graphs with an acyclic supergraph, and the greedy heuristic Algorithm A.2 for finding temporal cuts quickly. Finally, Section A.3 contains the proofs for the lemmas and theorems from the main text.
A.1 A Gallery of Animations of Dynamic Trees
In the following, we present a number of animations that show how different dynamic graphs evolve over time. These animations will play only in the svg version of this paper [32]; the pdf version displays only the (rather boring) initial tree T 1 of the dynamic tree. As explained in Section 2.3, it would have been possible to generate snapshots of the animations, but since the whole point of this paper is to show animations embedded in documents, we have only included the animations. Our first example is the "troublesome" dynamic tree from Figure 3 as rendered by our algorithm. As we argued in detail in Lemma 3.3, it is not possible to draw this dynamic tree without violating the Stability Criterion at least once. In the drawing in Figure 3, this criterion is actually violated twice, namely for the node a between T 1 and T 2 and again for the node 1 between T 2 and T 3 . In contrast, our algorithm, which tries to minimize these violations, succeeds in only having to "cut once" and, hence, renders the dynamic tree with a single "movement" resulting from this cut. Our second example is a rendering of an avl tree that gets updated repeatedly, namely by 24 update operations, consisting both of insertions and deletions. The avl tree is balanced, meaning that rotations are used to keep the height of the tree logarithmically bounded. Each rotation introduces a cyclic dependency in the supergraph, all of which have to be removed by Algorithm A.2 and all of which result in movements in the animation. While the Stability Criterion dictates that such movements should not happen, the animation shows that -besides being unavoidable -they are actually helpful in the example since they highlight the places where rotations occur. As a final example we have a rendering of a simple non-balanced binary search tree (20 steps). In this search tree new nodes are inserted as new leafs of the tree and in this example each node that is deleted has only one single child, which takes its parent's position.
In contrast to the previous example of an avl tree, neither a rotation occurs nor does the ascendant-descendant relation change for any pair of nodes from one tree T i to the next. Hence, the supergraph is acyclic and the produced animation meets all of the criteria Ranking, Ordering, Centering, Stability, and Weak Symmetry.
A.3 Proofs Omitted from the Main Text
Proof (of Lemma 3.1). We use the first two trees T 1 and T 2 from Figure 3. For a node x of a tree T i , let us write h^x_i for the horizontal distance from x to the next node to the right of it on its height (for instance, h^c_2 is the distance from c to f in T 2 ). In T 2 , the subtrees rooted at 1 and at a are clearly order-isomorphic and, hence, have to be drawn identically by the Symmetry Criterion. The same is true for the subtrees rooted at 2, 5, b, and e. Hence, h^2_2 = h^3_2 = h^4_2 = h^b_2 = h^c_2 = h^d_2. Consider T 1 and observe that as in T 2 the nodes 2 and 5 are the first and second child of the node 1 and the nodes b and e are the first and second of a. By the Stability Criterion, we get h^2_1 = h^2_2 and h^b_1 = h^b_2. Now, in the tree T 1 , by the Ranking and Ordering Criteria, the vertices d, 2, 5, g must come in this order as shown. However, the distance between d and g in T 1 , which must be equal to h^b_1 = h^b_2, is clearly greater than h^2_1 = h^2_2; and h^b_2 > h^2_2 is a contradiction to h^b_2 = h^2_2.
Proof (of Theorem 3.2). Algorithm A.1 automatically meets the criteria Ranking, Ordering, and Centering: As in independent runs of the Reingold-Tilford algorithm for each T i , only the horizontal distance between neighboring children of the same node differs, which does not influence the ordering of children. Furthermore, as the algorithm processes all nodes in a topological order, the layout of a node n depends only on the previously computed subtree layouts L_i(c^j_i) of n's children and thus the Weak Symmetry Criterion holds automatically, too. Since processing a node n just shifts the children (with their subtrees) of a node n relative to n and with the same horizontal distances dist^r(n) between every (r−1)-th and r-th child in each T i , the produced layout meets the Stability Criterion: A node only "moves" in two cases. First, its parent or its position relative to its siblings can change; but then the Stability Criterion makes no requirements.
Second, there may be a change at some ancestor further up; but then there is no change of the offset between n and its parent node since the inner layout of the related subtree is already fixed. Concerning the claimed linear runtime, the only difficult part is to see how the computation of the necessary shifts can be done in linear time: A naïve implementation would remember and then traverse the "left" and "right" sides of the different layouts repeatedly to compute the point of "least distance" between adjacent layouts. Reingold and Tilford had a clever idea of introducing a skipping data structure that removes this requirement: One can compute the necessary shift distance between two given layouts of subtrees in constant time using this data structure. This yields a linear runtime for the classical Reingold-Tilford algorithm and also a linear runtime for our algorithm since we can use the same data structure and need to compute the same shift distances.
Proof (of Lemma 3.3). The tree T = (T 1 , T 2 , T 3 ) from Figure 3 cannot be drawn while meeting the criteria. The argument is essentially the same as in Lemma 3.1, only we can now no longer argue that in T 2 we must have h^2_2 = h^b_2 since the trees rooted at a and 1 no longer have the same overall "history" and the Weak Symmetry Criterion does not apply to them. However, it does apply to the trees rooted at 2, 5, b, and e and, hence, we have h^2_2 = h^3_2 = h^4_2 and h^b_2 = h^c_2 = h^d_2. Furthermore, as in Lemma 3.1, the Stability Criterion still tells us h^2_1 = h^2_2 = h^2_3 and h^b_1 = h^b_2 = h^b_3. Finally, as in Lemma 3.1, the Ranking and Ordering Criteria still yield that in T 1 the nodes d, 2, 5, g must be ordered as shown in Figure 3 and in T 3 the nodes 4, b, e, t must be ordered as shown. From these orderings we can conclude h^b_1 > h^2_1 and h^2_3 > h^b_3, which is a contradiction since h^b_1 = h^b_2 = h^b_3 and h^2_1 = h^2_2 = h^2_3.
Proof (of Theorem 3.4). For the decision version of temporal-cut-minimization we are given a dynamic tree T = (T 1 , . . . , T k ) and a number c and must decide whether c temporal cuts suffice to turn T into a graph T′ with an acyclic supergraph super(T′). Containment of this problem in NP is clear since we can simply guess the temporal cuts and checking whether a graph is acyclic can easily be done in polynomial time. To show hardness we reduce from the NP-complete problem vertex-cover, which contains all (coded) pairs (G, k) of undirected graphs G = (V, E) and numbers k such that there is a set C ⊆ V with |C| ≤ k and for all {u, v} ∈ E we have u ∈ C or v ∈ C. Let G = (V, E) with V = {v_1, . . . , v_n} and k be an input for the reduction. We must compute an instance for temporal-cut-minimization consisting of a dynamic tree T and a number c. We set c = k. For each v ∈ V let v_in and v_out be two new nodes and V̂ = {v_in, v_out | v ∈ V} be the set of nodes in the supergraph of the dynamic tree. For each node v_i ∈ V let T_i = (V̂, Ê_i) with Ê_i = {(v_{i,out}, w_in) | {v_i, w} ∈ E}. These trees contain directed outgoing edges from v_{i,out} to w_in for each vertex w that is connected with v_i in G. Finally, let T_{|V|+1} = (V̂, Ê_{|V|+1}) with (v_in, v_out) ∈ Ê_{|V|+1} for each node v ∈ V. By construction, each T_i is a tree or forest (it is natural to allow forests, but one can also turn forests into trees by adding a global root and making all forest roots children of this global root). Clearly, the reduction is computable in polynomial time.
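The construction is mechanical; the following sketch (ours, for illustration) builds the dynamic forest from a given graph, modeling the nodes v_in and v_out as pairs:

    # Sketch of the reduction: from an undirected graph G = (V, E) build
    # the dynamic forest T_1, ..., T_{|V|+1}; the budget is c = k.
    def reduction(V, E):
        """V: list of vertices; E: iterable of frozenset({u, v}) edges."""
        trees = []
        for v in V:
            # T_i has an edge v_out -> w_in for every neighbor w of v in G
            trees.append({((v, "out"), (w, "in"))
                          for e in E if v in e for w in e - {v}})
        # T_{|V|+1} has an edge v_in -> v_out for every vertex v
        trees.append({((v, "in"), (v, "out")) for v in V})
        return trees

    # The example of Figure 4:
    example = reduction(["a", "b", "c"],
                        [frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c")]])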
Figure 4 shows an example for this reduction. Note that for each edge in G we get exactly one atomic cycle in the supergraph and all cycles in the supergraph are alternating paths of in-out (short, straight) and out-in (long, bent) edges. It remains to show that (G, k) ∈ vertex-cover if and only if (T, k) ∈ temporal-cut-minimization. Let (G, k) be an instance of vertex-cover. Then there is a vertex cover C of size |C| ≤ k for the graph G = (V, E). The length of the sequence T is |V| + 1. We do the following |C| ≤ k temporal cuts: (v_in, n) for each v ∈ C. We claim that these cuts turn T into a new dynamic tree T′ for which super(T′) is acyclic: By construction, no in-node v_in has incoming edges in T_{n+1} and all out-nodes have exactly one in-node. A given vertex v_in can be divided by our temporal cuts into at most two nodes v_in and v′_in. This implies that if a node v is in the vertex cover C, then in T′ neither v_in nor the vertex v_out can be on a cycle. If there is a cycle left, then there is an alternating path in the supergraph with at least two in-out edges. Those correspond to connected nodes x and y in G. As the in-out edges are on that path, neither (x_in, n) nor (y_in, n) is one of our temporal cuts. Hence, neither x nor y is in the vertex cover C. Since x and y are connected, C cannot be a valid vertex cover, and it follows that super(T′) is acyclic. For the other direction, let there be a set R of at most k temporal cuts that turn T into T′ with super(T′) being acyclic. If necessary, we replace all (v_in, j) and (v_out, j) in R by (v_in, n) since all cycles contain an in-edge of the last tree T_{n+1}. Let C = {v ∈ V | (v_in, n) ∈ R}. Then C is a vertex cover in G: If there were an edge {u, v} in G not covered by C, then neither (v_in, n) nor (u_in, n) would be in R and the cycle u_in, u_out, v_in, v_out, u_in would still be in the supergraph. Hence, we get the claim.
Fig. 1. A "manually" created drawing of a dynamic tree: Each tree in the sequence has been drawn using the Reingold-Tilford [29] algorithm.
Fig. 2. The dynamic tree from Figure 1, redrawn by drawing a "supergraph" (the union of all trees in the sequence) and then using the positions of nodes in this supergraph for the individual drawings.
Fig. 3. A "problematic" dynamic tree. Already the dynamic tree T = (T 1 , T 2 ) cannot be drawn while meeting all of the criteria Ranking, Ordering, Centering, Symmetry, and Stability, as shown in Lemma 3.1. The whole dynamic tree T = (T 1 , T 2 , T 3 ) cannot even be drawn when the Symmetry Criterion is replaced by the Weak Symmetry Criterion, see Lemma 3.3.
A.2 Algorithms in Pseudo-Code
Algorithm A.1 (Drawing Dynamic Trees with Acyclic Supergraphs)
  input: a dynamic tree T = (T 1 , . . . , T k )
  // Make the supergraph acyclic
  if super(T) is not acyclic then
      call Algorithm A.2
      update T and recompute super(T), which is now acyclic
  sort the vertices of super(T) topologically so that {v_1, . . . , v_n} is the vertex set of super(T) and for all edges (v_i, v_j) we have j < i
  // Iterate over all nodes
  for i ← 1 to n do
      m ← the maximum number of children v_i has over time
      // By the sorting, for every child c^r_j of v_i, the layout L_j(c^r_j) will already have been computed
      for r ← 2 to m do
          foreach T_j that contains v_i do
              // Use the Reingold-Tilford data structure to compute in linear time:
              dist^r_j(v_i) ← minimum horizontal distance from c^{r−1}_j to c^r_j so that all of L_j(c^1_j), . . . , L_j(c^{r−1}_j) is to the left of L_j(c^r_j) with a fixed minimal padding
          // Synchronize the distance between neighboring subtrees
          dist^r(v_i) ← max_j {dist^r_j(v_i)}
          // Shift the subtree
          foreach T_j that contains v_i do
              x_j(c^r_j) ← x_j(c^{r−1}_j) + dist^r(v_i)
              update the data structure of Reingold and Tilford for the used shift
      // Update the relative shift such that L(v_i) is centered:
      width(v_i) ← Σ_{r=2..m} dist^r(v_i)
      foreach T_j that contains v_i do
          x_j(v_i) ← 0   // the initial horizontal shift of v_i in L_j(v_i)
          for r ← 1 to m do
              x_j(c^r_j) ← x_j(c^r_j) − (1/2) width(v_i)
  // Compute absolute coordinates
  foreach snapshot T_i do
      compute the depth of each node v in T_i as the vertical coordinate of L_i(v)
      compute the horizontal position of all nodes by accumulation of all shift values on the path from the root to the node in a tree traversal
  return (L_1, . . . , L_k)

Algorithm A.2 (Greedy Heuristic for Making Supergraphs Acyclic)
  input: a dynamic tree T = (T 1 , . . . , T k )
  V ← ∅; E ← ∅   // the incrementally built supergraph
  for i ← 1 to k do
      let T′_i = (V′_i, E′_i) be a new tree with V′_i = E′_i = ∅
      foreach edge (v, w) ∈ E_i do
          // Check if the edge creates a cycle in the supergraph
          if v is reachable from w in (V ∪ V′_i, E ∪ E′_i) then
              replace w by w′ in all trees T_j with j ≥ i and in T′_i
              add the edge (v, w′) to T′_i
          else
              add the edge (v, w) to T′_i
      // Add possibly renamed nodes and edges to the still-acyclic supergraph
      V ← V ∪ V′_i
      E ← E ∪ E′_i

Fig. 4. Example of the reduction from vertex-cover to temporal-cut-minimization. The input graph is G = ({a, b, c}, {{a, b}, {b, c}, {a, c}}). The supergraph of the dynamic graph T = (T 1 , T 2 , T 3 , T 4 ) is T̂.
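The step that distinguishes Algorithm A.1 from independent Reingold-Tilford runs is the synchronization dist^r(v_i) ← max_j dist^r_j(v_i). A stripped-down Python sketch of this step (ours; the contour data structure that makes the per-snapshot distances computable in constant time is omitted):

    # Sketch of the synchronized shift: the r-th child of a node is shifted
    # by the same distance in every snapshot, namely the maximum over the
    # per-snapshot minimal distances dist^r_j.
    def child_offsets(min_dists):
        """min_dists: dict r -> list of dist^r_j over all snapshots that
        contain the node, for r = 2..m. Returns x-offsets of children 1..m
        relative to child 1; identical in all snapshots by construction."""
        x = [0.0]
        for r in sorted(min_dists):
            x.append(x[-1] + max(min_dists[r]))   # dist^r(v) = max_j dist^r_j(v)
        return x

    def centered(x):
        """Shift all children so that the parent sits above their middle."""
        width = x[-1] - x[0]
        return [xi - width / 2 for xi in x]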
References
[1] G. M. Adelson-Velsky and E. M. Landis. An Algorithm for the Organization of Information. Doklady Akademii Nauk USSR, 3(2):1259-1263, 1962.
[2] D. Archambault, H. Purchase, and B. Pinaud. Animation, small multiples, and the effect of mental map preservation in dynamic graphs. IEEE Transactions on Visualization and Computer Graphics, 17(4):539-552, 2011.
[3] D. Archambault and H. C. Purchase. The mental map and memorability in dynamic graphs. In Proceedings of the Visualization Symposium (PacificVis) 2012, pages 89-96. IEEE Press, 2012.
[4] L. Bartram and C. Ware. Filtering and brushing with motion. Information Visualization, 1(1):66-79, 2002.
[5] F. Beck, M. Burch, S. Diehl, and D. Weiskopf. The state of the art in visualizing dynamic graphs. In State of the Art Reports of the 16th Eurographics Conference on Visualization, EuroVis 2014, pages 83-103. Eurographics Association, 2014.
[6] A. Brüggemann-Klein and D. Wood. Drawing trees nicely with T E X. Electronic Publishing, 2(2):101-115, 1989.
[7] D. Bulterman, J. Jansen, P. Cesar, S. Mullender, E. Hyche, M. DeMeglio, J. Quint, H. Kawamura, D. Weck, X. García Pañeda, D. Melendi, S. Cruz-Lara, M. Hanclik, D. F. Zucker, and T. Michel. Synchronized Multimedia Integration Language (SMIL 3.0), W3C Recommendation 01 December 2008. Technical Report REC-SMIL3-20081201, The World Wide Web Consortium (W3C), 2008. Available at http://www.w3.org/TR/2008/REC-SMIL3-20081201.
[8] M. Burch, F. Beck, and D. Weiskopf. Radial edge splatting for visualizing dynamic directed graphs. In Proceedings of the International Conference on Computer Graphics Theory and Applications, IVAPP 2012, pages 603-612. SciTe Press, 2012.
[9] M. Burch and S. Diehl. TimeRadarTrees: Visualizing dynamic compound digraphs. Computer Graphics Forum, 27(3):823-830, 2008.
[10] M. Chimani, C. Gutwenger, M. Jünger, K. Klein, P. Mutzel, and M. Schulz. The open graph drawing framework. Poster at the 15th International Symposium on Graph Drawing 2007 (GD 2007), 2007.
[11] R. F. Cohen, G. Di Battista, R. Tamassia, and I. G. Tollis. Dynamic graph drawings: Trees, series-parallel digraphs, and planar st-digraphs. SIAM Journal on Computing, 24(5):970-1001, 1995.
[12] R. F. Cohen, G. Di Battista, R. Tamassia, I. G. Tollis, and P. Bertolazzi. A framework for dynamic graph drawing. In Proceedings of the 8th Annual Symposium on Computational Geometry, SCG 1992, pages 261-270. ACM Press, 1992.
[13] E. Dahlström, P. Dengler, A. Grasso, C. Lilley, C. McCormack, D. Schepers, and J. Watt. Scalable Vector Graphics (SVG) 1.1 (Second Edition), W3C Recommendation 16 August 2011. Technical Report REC-SVG11-20110816, The World Wide Web Consortium (W3C), 2011. Available at http://www.w3.org/TR/2011/REC-SVG11-20110816.
[14] S. Diehl, C. Görg, and A. Kerren. Foresighted graphlayout. Technical report A/02/2000, FB Informatik, University Saarbrücken, Saarbrücken, Germany, 2000.
[15] S. Diehl, C. Görg, and A. Kerren. Preserving the mental map using foresighted layout. In Proceedings of the 3rd Joint Eurographics-IEEE TCVG Conference on Visualization, volume 1, pages 175-184. The Eurographics Association, 2001.
[16] P. Eades and K. Sugiyama. How to draw a directed graph. Journal of Information Processing, 13(4):424-436, 1990.
[17] J. Ellson, E. R. Gansner, E. Koutsofios, S. C. North, and G. Woodhull. Graphviz and Dynagraph - static and dynamic graph drawing tools. In M. Junger and P. Mutzel, editors, Graph Drawing Software, Mathematics and Visualization, pages 127-148. Springer-Verlag, 2004.
[18] D. Eppstein. Trees in T E X. TUGboat, 6(1):31-35, 1985.
[19] P. Federico, W. Aigner, S. Miksch, F. Windhager, and L. Zenk. A visual analytics approach to dynamic social networks. In Proceedings of the 11th International Conference on Knowledge Management and Knowledge Technologies, i-KNOW '11, pages 47:1-47:8. ACM Press, 2011.
[20] C. Görg, P. Birke, M. Pohl, and S. Diehl. Dynamic graph drawing of sequences of orthogonal and hierarchical graphs. In Proceedings of the 12th International Symposium on Graph Drawing, GD 2004, volume 3383 of Lecture Notes in Computer Science, pages 228-238. Springer-Verlag, 2004.
[21] M. Graham and J. Kennedy. A survey of multiple tree visualisation. Information Visualization, 9(4):235-252, 2010.
[22] M. Greilich, M. Burch, and S. Diehl. Visualizing the evolution of compound digraphs with TimeArcTrees. Computer Graphics Forum, 28(3):975-982, 2009.
[23] G. Groh, H. Hanstein, and W. Wörndl. Interactively visualizing dynamic social networks with DySoN. In Proceedings of the Workshop on Visual Interfaces to the Social and the Semantic Web, VISSW2009, 2009.
[24] R. Ierusalimschy. Programming in Lua. Lua.org, 2nd edition, 2006.
[25] B. Johnson and B. Shneiderman. Tree-maps: A space-filling approach to the visualization of hierarchical information structures. In Proceedings of the 2nd IEEE Conference on Visualization '91, VIS '91, pages 284-291. IEEE Press, 1991.
[26] D. E. Knuth. Optimum binary search trees. Acta Informatica, 1(1):14-25, 1971.
[27] S. Moen. Drawing dynamic trees. IEEE Software, 7(4):21-28, 1990.
[28] K. Reda, C. Tantipathananandh, A. Johnson, J. Leigh, and T. Berger-Wolf. Visualizing the evolution of community structures in dynamic social networks. Computer Graphics Forum, 30(3):1061-1070, 2011.
[29] E. M. Reingold and J. S. Tilford. Tidier drawings of trees. IEEE Transactions on Software Engineering, 7(2):223-228, 1981.
[30] G. G. Robertson, J. D. Mackinlay, and S. K. Card. Cone trees: Animated 3D visualizations of hierarchical information. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '91, pages 189-194. ACM Press, 1991.
[31] M. Skambath. Algorithmic drawing of evolving trees. Master's thesis, Institute of Theoretical Computer Science, Universität zu Lübeck, Germany, 2016.
[32] M. Skambath and T. Tantau. Offline drawing of dynamic trees: Algorithmics and document integration. SVG version of this document, available at http://www.informatik.uni-kiel.de/~msk/pub/2016-dynamic-trees/main.html.
[33] J. Stasko and E. Zhang. Focus+context display and navigation techniques for enhancing radial, space-filling hierarchy visualizations. In Proceedings of the IEEE Symposium on Information Visualization 2000, INFOVIS '00, pages 57-65. IEEE Press, 2000.
Technical Report 8, International Institute for Advanced Study of Social Information Science, Fujitsu, 1979. Empirical estimates of program entropy. R E Sweet, Stan-CS-78-698Stanford, CA, USADepartment of Computer Science, Stanford UniversityReportR. E. Sweet. Empirical estimates of program entropy. Report Stan-CS-78-698, Department of Computer Science, Stanford University, Stanford, CA, USA, 1978. Graph drawing in Tik Z. T Tantau, Journal of Graph Algorithms and Applications. 174T. Tantau. Graph drawing in Tik Z. Journal of Graph Algorithms and Applications, 17(4):495-513, 2013. The TikZ and pgf Packages, Manual for version 3.0.0. T Tantau, T. Tantau. The TikZ and pgf Packages, Manual for version 3.0.0, 2015. Available online at http://sourceforge.net/projects/pgf/. A node-positioning algorithm for general trees. Software: Practice and Experience. J Q Walker, I I , 20J. Q. Walker, II. A node-positioning algorithm for general trees. Software: Practice and Experience, 20(7):685-705, 1990. Motion to support rapid interactive queries on node-link diagrams. C Ware, R Bobrow, ACM Transactions on Applied Perception. 11C. Ware and R. Bobrow. Motion to support rapid interactive queries on node-link diagrams. ACM Transactions on Applied Perception, 1(1):3-18, 2004. Tidy drawings of trees. C Wetherell, A Shannon, IEEE Transactions on Software Engineering. 55C. Wetherell and A. Shannon. Tidy drawings of trees. IEEE Transactions on Software Engineering, 5(5):514-520, 1979.
[]
[ "Visual Named Entity Linking: A New Dataset and A Baseline", "Visual Named Entity Linking: A New Dataset and A Baseline" ]
[ "Wenxiang Sun [email protected] ", "Yixing Fan [email protected] ", "Jiafeng Guo [email protected] ", "Ruqing Zhang [email protected] ", "Xueqi Cheng ", "\nInstitute of Computing Technology\nCAS Key Lab of Network Data Science and Technology\nChinese Academy of Sciences\nBeijingChina\n", "\nUniversity of Chinese Academy of Sciences\nBeijingChina\n" ]
[ "Institute of Computing Technology\nCAS Key Lab of Network Data Science and Technology\nChinese Academy of Sciences\nBeijingChina", "University of Chinese Academy of Sciences\nBeijingChina" ]
[]
Visual Entity Linking (
10.48550/arxiv.2211.04872
[ "https://export.arxiv.org/pdf/2211.04872v1.pdf" ]
253,420,227
2211.04872
c0bb6496edb5ddf30713cdb425b7c3e4205f6ae3
Visual Named Entity Linking: A New Dataset and A Baseline

Introduction

An in-depth understanding of the visual content of an image is fundamental for many computer vision tasks. Visual Entity Linking (VEL) (Tilak et al., 2017; Maigrot et al., 2016) is a task that takes image understanding to the entity level. For example, given an image of the debate between Trump and Hillary, the goal of VEL is not only to recognize the regions of Trump and Hillary, but also to link them to the correct entities in knowledge bases (KBs) such as Wikidata (Vrandečić and Krötzsch, 2014), DBpedia (Auer et al., 2007), or YAGO (Fabian et al., 2007). Just as textual entity linking matters for many NLP tasks such as information extraction and information retrieval (Sevgili et al., 2022), visual tasks such as image retrieval (Datta et al., 2008) and image captioning (Tariq and Foroosh, 2017) would also benefit from entity-level, fine-grained comprehension of images.

In recent years, VEL has received increasing attention. Early works (Tilak et al., 2017; Weegar et al., 2014) try to link objects in images to general entities in KBs, e.g., 'Person' and 'Suit', as described in Figure 1(b). These works are restricted to coarse-level entity linking and fail to distinguish objects within the same class. Besides, there are also works that use deep image understanding to link objects to named entities in KBs (Müller-Budack et al., 2021; Zheng et al., 2022; Dost et al., 2020; Gan et al., 2021). However, they generally require detailed entity mention information in text, which plays a vital role in multi-modal entity linking, as shown in Figure 1(c).

Dataset | Modality | KB | Source | Lang | Size
AIDA (Hoffart et al., 2011) | T_m → T_e | Wikipedia | News | en | 1K docs
Flicker30K (Young et al., 2014) | — | — | Social media | en | 30k images
BreakingNews (Ramisa et al., 2017) | — | — | News | en | 100k images
SnapCaptionsKB (Moon et al., 2018) | T_m + V → T_e | Freebase | Social media | en | 12K captions
WIKIDiverse | T_m + V → T_e, V_e | Wikipedia | News | en | 8K captions
WIKIPerson | V_m → V_e; V_m → T_e; V_m → V_e, T_e | Wikipedia | News | en | 50k images

Table 1: Public datasets related to WIKIPerson. T_m, T_e, V_m, V_e, and V denote textual mention, textual entity, visual mention, visual entity, and visual information, respectively.

We argue that all the above tasks fail to handle named entity linking for images without any text annotations, which is often the case on social media platforms. In this work, we consider a purely Visual-based Named Entity Linking (VNEL) task, described in Figure 1(d). Given an image without a textual description, the goal is to link each visual mention in the image, with the whole image as context, to the corresponding named entity in a KB. Considering the format of entities in KBs, i.e., textual descriptions, images, and other structured attributes, we further introduce three sub-tasks according to the type of entity context used: visual-to-visual entity linking (V2VEL), visual-to-textual entity linking (V2TEL), and visual-to-visual-textual entity linking (V2VTEL).
We believe these tasks pose higher requirements and finer granularity for image understanding, cross-modal alignment, and multi-modal fusion. Following the definition of VNEL, currently available public EL datasets do not fit our research, as they either focus only on the textual modality or lack detailed annotations of entity information in each image. As a result, we release a new dataset called WIKIPerson, a high-quality human-annotated visual person linking dataset based on Wikipedia. Unlike previously commonly used EL datasets, a mention in WIKIPerson is only an image containing a PERSON entity with its bounding box; the corresponding label identifies a unique entity in Wikipedia. For each entity in Wikipedia, we provide textual descriptions as well as images to satisfy the needs of the three sub-tasks. In the experiments, we benchmark a series of baseline models on WIKIPerson under both zero-shot and fine-tuned settings. In detail, we adopt a universal contrastive learning framework to learn robust and effective representations for both mentions and entities. Experimental results show that existing models are able to obtain reasonably good performance on the different VNEL tasks, but there is still large room for further enhancement.

Figure 2: VNEL with its three sub-tasks.

The Visual Named Entity Linking Task

This section first presents a formal definition of the task. Then we introduce the complete building procedure of the human-annotated dataset, which covers a wide variety of Wikipedia person entities for further research. Finally, an in-depth data analysis is elaborated in detail.

Definition of VNEL and Three Sub-tasks

VNEL takes an image as input, extracts bounding boxes around objects, and then links them to entities in KBs. More precisely, given an image I, all visual mentions V_m, which are regions of the image, are first recognized with a bounding box. Then, all visual mentions V_m are linked with the corresponding entity e in the knowledge base E. The process of the VNEL task is visualized in Figure 2 and often consists of two stages, namely the visual mention detection stage and the visual entity linking stage. In this work, we follow existing works (Mulang' et al., 2020; Sil et al., 2018) and focus on the visual entity linking stage.

Generally, each entity e_i ∈ E is characterized by rich textual and visual descriptions, and each modality of the description can provide sufficient information for visual entity linking. To present the task more clearly, we further decompose VNEL into three sub-tasks according to the type of description used for the entity. In the first place, only the visual description V_{e_i} of the entity is used in the visual entity linking stage, which we denote as the V2VEL sub-task. The core of V2VEL is to match two visual objects. It is worth noting that entities in the KB may contain more than one image. To simplify this, we take the first image of e_i as V_{e_i} and leave multiple images per entity as future work. In the second place, only the textual description T_{e_i} of the entity is used in the visual entity linking stage, which we denote as the V2TEL sub-task.
The V2TEL task aims to evaluate the ability of image-text matching, which is central to cross-modal entity linking. Finally, both the visual description and the textual description (V_{e_i}, T_{e_i}) of the entity can be employed to link the visual mention, which we denote as the V2VTEL sub-task. The V2VTEL task can leverage the textual and visual modalities to complement each other in linking visual mentions.

Formally, let e_i represent the i-th entity in the KB with corresponding visual description V_{e_i} or textual description T_{e_i}, and let the whole image serve as the visual context V_c. The three sub-tasks of VNEL can then be formulated as:

e*(m)_{V→V} = argmax_{e_i ∈ E} Φ_α(V_m, V_{e_i} | V_c),
e*(m)_{V→T} = argmax_{e_i ∈ E} Φ_β(V_m, T_{e_i} | V_c),
e*(m)_{V→V+T} = argmax_{e_i ∈ E} Φ_γ(V_m, (V_{e_i}, T_{e_i}) | V_c),

where Φ denotes the score function between a mention and an entity.

Dataset Setups of WIKIPerson

To facilitate research on VNEL, we introduce WIKIPerson, a benchmark dataset designed for linking persons in images to named entities in a KB. The dataset building process is shown in Figure 3 and consists of three main steps. We first select the data source to build the input image collection, then filter and clean the collection to obtain a high-quality dataset, and finally have each image annotated by several experienced annotators. In the following, we describe each step in detail.

Data Source Collection

For the data source, we follow existing works (Ramisa et al., 2017; Tran et al., 2020; Liu et al., 2020; Biten et al., 2019) in using news collections, since images in news collections often contain many named entities at a high degree of specificity, e.g., specific people, which convey key information regarding the events presented in the images. In this paper, we choose VisualNews, which has the largest data scale among them with 1.2 million image-text pairs, as the original data source. In addition, VisualNews covers diverse news topics, consisting of more than one million images accompanied by news articles, image captions, author information, and other metadata. All this additional metadata helps us in the subsequent entity annotation procedure. However, only images and annotated mentions with bounding boxes are available in all VNEL sub-tasks. For the knowledge base, we employ the commonly used Wikipedia as the back end, which consists of a wide range of entities with abundant information. Specifically, we crawl the first image of each entity from Wikimedia Commons as the visual description and the text information from Wikipedia as the textual description.

Data Filter and Clean

In this work, we focus on PERSON mentions in images, since person is the most common named entity type, and leave research on other entity types for future work. For this purpose, we keep only images with PERSON mentions from the news collection and remove non-PERSON entities from the KB. Specifically, for each image-caption pair in the news collection, we use spaCy to analyze the text caption and filter out pairs without any PERSON entities. Moreover, we leverage the MTCNN model, a state-of-the-art face detection model, to check the number of PERSON mentions in each image. We then select images with fewer than 4 person mentions to reduce the complexity of the task.
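To make the filtering step concrete, the following is a minimal sketch of this kind of caption-and-face filter, assuming spaCy's en_core_web_sm model for PERSON detection and the MTCNN implementation from the facenet-pytorch package; the function name and the threshold constant are illustrative, not the authors' actual code.

```python
# Hypothetical sketch of the caption/face filtering step described above,
# assuming spaCy for PERSON detection and facenet-pytorch's MTCNN for faces.
import spacy
from facenet_pytorch import MTCNN
from PIL import Image

nlp = spacy.load("en_core_web_sm")   # generic English NER model
mtcnn = MTCNN(keep_all=True)         # keep all detected faces

def keep_sample(image_path: str, caption: str, max_faces: int = 3) -> bool:
    """Keep an image-caption pair only if the caption mentions a PERSON
    and the image contains between 1 and `max_faces` detected faces."""
    doc = nlp(caption)
    if not any(ent.label_ == "PERSON" for ent in doc.ents):
        return False
    boxes, _ = mtcnn.detect(Image.open(image_path).convert("RGB"))
    n_faces = 0 if boxes is None else len(boxes)
    return 1 <= n_faces <= max_faces
```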
Lastly, we remove repeated and blurred images to keep the quality of the dataset high.

Data Annotation

The primary goal of WIKIPerson is to link the PERSON mention in the image to the correct Wikipedia entity. Consequently, the annotators need to identify the person mention and label each mention with the corresponding Wikipedia entity in the form of a Wikidata id (https://en.wikipedia.org/wiki/Wikidata#Concept). In the earlier step, spaCy is used on the captions of the original image-text pairs to extract possible PERSON entities, and MTCNN is adopted to recognize the faces, supplying bounding boxes in the picture. The annotators therefore only need to check the faces in the bounding boxes and choose the corresponding entity from the results generated by searching the keywords of the PERSON entities detected in the caption. In this way, we largely reduce the labor of labeling the entity of each mention. Mentions that do not have corresponding entities in Wikipedia are filtered out in this procedure.

In the process of data annotation, we designed end-to-end labeling web demos to facilitate manual annotation. The information provided on the website includes news images, captions, news content, and possible candidate entities with pictures and descriptions to help the annotator make judgments. All annotators have linguistic knowledge and are instructed with detailed annotation principles. The annotators link the mention in each bounding box to the correct entity in Wikipedia. Finally, after labeling, we obtain a dataset of images, each comprising one or more mentions with bounding boxes and the corresponding entity WikiIds.

Dataset Analysis

Basic Statistics

Table 2 shows the statistics of WIKIPerson in detail. The dataset contains a total of 48k different news images, covering 13k out of 120K (i.e., |E| ≈ 120K) PERSON named entities, each of which corresponds to a celebrity in Wikipedia. Many entities appear multiple times in the data, which ensures that entities can be fully learned. Unlike many datasets in traditional EL, the image of a PERSON named entity in the news usually focuses on a single person, except for scenes such as group photos, debates, etc. As a result, the average number of mentions per image is about 1.08, and only about 3k images contain more than one mention.

Entity Distribution

WIKIPerson comprises diverse PERSON named entity types, such as politicians, singers, actresses, and sports players, from different news agencies. These entities are not confined to a single category but are widely distributed across topics, occupations, skin colors, and age ranges. The detailed information is shown on the left of Figure 5. It can be observed that, in addition to the politicians common in news, the dataset also includes artistic, sports, entertainment, and even criminal topics, which greatly increases the richness of the image information. This diversity lets the task pay attention to the alignment between the background information of the picture (i.e., the visual context) and the entity's meta information in the KB. Moreover, considering differences in entity popularity, we analyzed the link popularity of the entities in WIKIPerson compared to that in the whole Wikipedia.
As shown on the right of Figure 5, both the covered entities and the whole set of Wikipedia entities follow a long-tailed distribution, which ensures that the dataset is not biased toward a few significantly popular entities. Generally speaking, celebrities are likely to be reported in news articles, which makes the entities in our dataset more prevalent than those in the whole Wikipedia. To the best of our knowledge, WIKIPerson is the first diverse, human-annotated, PERSON-entity-aware dataset with high research value.

Baseline Methods

Generally, the VNEL task is to link mentions in the input image to the corresponding entities from a large-scale KB. Typically, an existing VNEL system is implemented as a two-stage process, i.e., a candidate retrieval stage and an entity disambiguation stage, to balance efficiency and effectiveness. In this work, we implement fast end-to-end linking directly over a large-scale collection by employing an efficient model. We take a widely used bi-encoder contrastive learning framework to learn robust and effective representations of both visual mentions and entities. Given a visual mention V_m and a candidate entity e_i, which is accompanied by a visual description V_{e_i} and/or a textual description T_{e_i}, the framework produces a relevance score between the mention and the entity.

The overall structure of the framework is shown in Figure 6 and consists of two major components, namely the mention encoder and the entity encoder. These two encoders extract features as embeddings f_m for the input image and f_e for the entity. For each encoder, we directly take existing pre-trained models as the implementation. Inspired by existing works on applying pre-trained models (Gao et al., 2021; Zhang et al., 2021b), we add a feed-forward layer to transform the vector generated by the encoder into the task-oriented embedding space. After that, a residual connection (He et al., 2016) is added to obtain F_m and F_{e_i}, followed by L2 normalization and a dot product to calculate the similarity score:

f_m = Encoder_m(V_m), f_{e_i} = Encoder_e(e_i),
F_m = f_m + ReLU(f_m W_1^m) W_2^m,
F_{e_i} = f_{e_i} + ReLU(f_{e_i} W_1^e) W_2^e,
e*(m) = argmax_{e_i ∈ E} F_m · F_{e_i},

where W_1^m and W_2^m are learnable parameters for mention representation learning, and W_1^e and W_2^e are learnable parameters for entity representation learning. Since the sub-tasks of VNEL have different types of inputs, we implement each baseline with different encoders:

• V2VEL Encoders: We adopt ResNet (Szegedy et al., 2017) in a single-modal way following (Schroff et al., 2015), pre-trained on VGGFace2 (Cao et al., 2018), to extract visual features. Here, both the mention encoder and the entity encoder use ResNet and share parameters.

• V2TEL Encoders: We directly take CLIP (Radford et al., 2021), which has been pre-trained on a large-scale image-text dataset, to implement the mention encoder and the entity encoder. For the entity encoder, we apply two types of textual information about the entity, i.e., the entity name (CLIP_N) and the entity name with description (CLIP_N_D), to study the influence of the entity's meta information.

• V2VTEL Encoders: We combine the encoders of V2VEL and V2TEL to implement V2VTEL. Specifically, we take a simple but effective strategy that uses one model to recall Top-K results and the other to re-rank them. For example, ResNet + CLIP means recalling with ResNet first and re-ranking the Top-K results with CLIP. We also test different combinations of the order of V2VEL and V2TEL encoders, whose results are listed in Section 4.1. (A detailed analysis of this strategy is given in the Appendix due to the page limit.)

In the training step, the contrastive loss of a single mention-entity sample is defined as

L(V_m, e_i) = −log [ exp(Φ(V_m, e_i^+)/τ) / ( exp(Φ(V_m, e_i^+)/τ) + Σ_{k≠i} exp(Φ(V_m, e_k^−)/τ) ) ],

where e_i^+ represents the ground-truth positive entity of V_m, e_k^− denotes the k-th candidate of V_m in the batch, all of which are negative samples, and τ is the temperature coefficient that helps control the smoothness of the softmax (Jang et al., 2016).
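Below is a minimal PyTorch sketch of the adapter equations and the in-batch contrastive loss above. The dimensions, class names, and the use of a cross-entropy over pairwise scores to realize the in-batch negatives are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the feed-forward adapter, residual connection,
# L2-normalized dot-product score, and in-batch contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    def __init__(self, dim: int = 512, hidden: int = 1024):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden, bias=False)   # W_1
        self.w2 = nn.Linear(hidden, dim, bias=False)   # W_2

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        return f + self.w2(torch.relu(self.w1(f)))     # residual connection

def contrastive_loss(F_m: torch.Tensor, F_e: torch.Tensor, tau: float = 0.07):
    """In-batch loss: row i of F_m matches row i of F_e; the other rows of
    the batch act as negatives (cf. the loss definition above)."""
    F_m = F.normalize(F_m, dim=-1)                     # L2 normalization
    F_e = F.normalize(F_e, dim=-1)
    logits = F_m @ F_e.t() / tau                       # all pairwise scores
    labels = torch.arange(F_m.size(0), device=F_m.device)
    return F.cross_entropy(logits, labels)
```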
Experiments

In the experiments, we split the images in WIKIPerson into a training set, a dev set, and a test set with a ratio of 6:2:2. Besides, to avoid the bias of popular entities affecting the evaluation, each named entity appears at most once in the test set. For evaluation, we report two widely used Top-k retrieval metrics: Recall@K (K = 1, 3, 5, 10) and Mean Reciprocal Rank (MRR@K, K = 3, 5, 10); the metrics are described in detail in the Appendix.

Results

All results are summarized in Table 3. Since all the encoders we adopt are pre-trained and can be directly applied to each task, we report both zero-shot and fine-tuned performance to show the effectiveness of all baselines.

Zero-shot vs. fine-tune. In the zero-shot setting, we directly use the embedding generated by the encoder as the feature. As we can see, ResNet achieves reasonably good performance on R@10 (i.e., 0.5076), which demonstrates the effectiveness of the pre-trained model. Moreover, CLIP, which is pre-trained on about 400M image-caption pairs, achieves better performance than ResNet with either CLIP_N or CLIP_N_D across all metrics. When combining ResNet with CLIP, we observe a distinct improvement for all combinations, which demonstrates the effectiveness of combining the visual and textual descriptions in VNEL. Comparing the zero-shot baselines with the fine-tuned ones, all models obtain significant improvements, e.g., an average improvement of 0.13 in MRR@10. These improvements verify the quality of the dataset and demonstrate that WIKIPerson can significantly boost the ability of visual named entity linking.

Sub-tasks of VNEL. We focus on the lower part of Table 3, where all models are fine-tuned on WIKIPerson.

1) The V2VEL sub-task: As the most fundamental part of VNEL, ResNet extracts features for both visual mentions and visual descriptions of entities and matches them in the visual feature space. However, it obtains generally low absolute numbers on the different evaluation metrics, e.g., 0.4212 on R@1, which leaves large room for improvement. A possible reason is that the images of entities in the KB are often earlier pictures, which show a very different state (e.g., age and occasion) from the entities appearing in news articles.

2) The V2TEL sub-task: CLIP obtains higher performance than ResNet by matching the visual mention with textual descriptions of the entity. These results show that cross-modal matching between the image and the text is very powerful in linking images with entities. Moreover, by comparing the two types of textual information about the entity, we can see that the entity description provides useful information for disambiguating entities, since CLIP_N_D outperforms CLIP_N on all metrics.

3) The V2VTEL sub-task: By combining the textual and visual information of each entity, the performance can be further boosted. For example, the relative improvements of ResNet+CLIP_N_D over ResNet and over CLIP_N_D on R@1 are about 73% and 23%, respectively. These results verify that the textual and visual modalities of the entity can complement each other in linking visual mentions with named entities.
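As a sketch of the recall-then-re-rank combination used for V2VTEL (e.g., ResNet + CLIP_N_D: one model recalls the Top-K candidates, the other re-ranks them), the following illustrates one possible implementation; score_a and score_b are placeholders for the two models' similarity scores and are assumptions, not the authors' interface.

```python
# Illustrative sketch of the recall-then-re-rank combination described above.
import torch

def recall_then_rerank(score_a: torch.Tensor, score_b: torch.Tensor, k: int = 100):
    """score_a, score_b: (num_entities,) similarity scores of the two models
    for one visual mention. Returns entity ids re-ranked by model b within
    model a's Top-K recall set."""
    topk = torch.topk(score_a, k).indices                # recall stage (model a)
    order = torch.argsort(score_b[topk], descending=True)
    return topk[order]                                   # re-rank stage (model b)
```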
Moreover, as for the different orders of combination between ResNet and CLIP, each method obtains relatively close performance, which confirms the effectiveness of the strategy of combining the V2VEL method and the V2TEL method.

Qualitative Analysis

To better understand the baseline methods across the different sub-tasks, we show several cases in Figure 7. The input image is on the left, and the top-3 predicted results are partitioned into two rows corresponding to the different baselines. The entity with a green border is the ground-truth entity. The first case of Figure 7 is a picture of the famous American golfer Tiger Woods. ResNet identifies the correct entity, and the other returned results have faces similar to the input image. CLIP_N_D also returns the ground-truth entity, at the second position of the top-3 results, and all three candidates are professional golfers. This shows that text descriptions alone may be unable to disambiguate between the correct entity and irrelevant entities. Analogously, the second case is an image of Bill Clinton speaking at his foundation. ResNet links it to the entity "Andy Gill", who looks very similar to Clinton, while CLIP_N_D correctly predicts the ground-truth entity at the first position, and all returned entities are related to Clinton. This verifies that CLIP_N_D can learn high-level associations between the image mention and the entity's meta information. The last case is an image of the famous Chinese tennis player Li Na. The image has a complicated background, and neither ResNet nor CLIP_N_D links the mention to the ground-truth entity within the top-3 returned results. This motivates the need for focused research on building effective VNEL models. All the above cases clearly show that ResNet pays more attention to pixel-level matching, while CLIP learns high-level semantic connections between mentions and entities. However, the dynamic nature of the input images highlights the difficulty of the task, especially for entities with outdated pictures. We believe this work can pave the way for better visual entity linking.

Related Work

Entity Linking. There is extensive research on EL, which is a classic NLP task. With the help of large-scale pre-trained language models (Devlin et al., 2018; Liu et al., 2019), several recent deep learning methods (Mulang' et al., 2020; Yamada et al., 2019; De Cao et al., 2020) achieve 90%+ accuracy on AIDA (Hoffart et al., 2011), a commonly used high-quality robust EL dataset. However, as noted in prior work, it seems that current methods have already touched the ceiling of the task. As a result, many more challenging EL-related tasks have been formulated, e.g., zero-shot entity linking (Logeswaran et al., 2019; Wu et al., 2019), engaging other features such as global coherence across all entities in a document, NIL prediction, joining the MD and ED steps together, or providing completely end-to-end solutions to address emerging entities (Sevgili et al., 2022).

Multi-modal Entity Linking. Recently, the Multi-modal Entity Linking (MEL) task (Moon et al., 2018) has also been proposed. Given a text with images attached, MEL uses both textual and visual information to map an ambiguous mention in the text to an entity in the KBs. Moon et al. (2018) prove that image information helps identify mentions in the fuzzy and short text of social media. Furthermore, Adjali et al. (2020) transfer the setting to Twitter and perform MEL on Twitter users.
Zhang et al. (2021a) propose an attention-based structure to eliminate distracting information from irrelevant images and build a multi-source social media multi-modal dataset. Wang et al. (2022) build a multi-modal entity linking dataset with diversified contextual topics and entity types. However, in all these works the text input plays a vital part, and the visual input only serves a complementary role to the text.

Multi-modal Dataset. At the same time, our work is also related to multi-modal image-text datasets, a hot topic in recent years. Flicker30k (Young et al., 2014) annotates 30k image-caption pairs from Flickr with five descriptive sentences per image, such as "a man is wearing a tie." In addition, MSCOCO Captions (Chen et al., 2015) scales up the size, with over one and a half million captions describing over 330,000 images. However, the captions in all these datasets are descriptive sentences and are not entity-aware. As a result, some works have started to build news-related datasets for entity-aware image captioning. For example, Ramisa et al. (2017) focus on news websites and crawl 100k image-caption pairs, and Biten et al. (2019) and Liu et al. (2020) expand the size of such datasets. Nevertheless, the detailed entity information is neither annotated nor linked to the KBs.

Conclusion and Future Work

To tackle the limitation that previous visual entity linking either relies on textual data to complement multi-modal linking or only links objects to general entities, we introduce a purely Visual-based Named Entity Linking task, where the input contains only the image. The goal of this task is to identify objects of interest in images and link them to the corresponding named entities in KBs. Considering the rich multi-modal contexts of each entity in KBs, we propose three different sub-tasks, i.e., the V2VEL sub-task, the V2TEL sub-task, and the V2VTEL sub-task. Moreover, we build a high-quality human-annotated visual person linking dataset, named WIKIPerson, which aims at recognizing persons in images and linking them to Wikipedia. Based on WIKIPerson, we introduce several baseline algorithms for each sub-task. According to the experimental results, WIKIPerson is a challenging dataset worth further exploration. In the future, we intend to build a larger-scale VNEL dataset with diverse entity types and to adopt more advanced models to achieve higher accuracy.

Limitations

Low extensibility of the entity information. In the V2VEL sub-task, each entity in the KB can have more than one attached image. However, in our paper, only the first image is selected for convenience, which inevitably omits additional information. At the same time, in the V2TEL sub-task, we only use the short descriptive sentences of the entity. How to integrate longer unstructured text information is also a problem worth exploring.

Ethics Statement

We collected data based on open-source datasets and databases. These data have been strictly manually reviewed and do not contain any pictures that are sexual or politically offensive. We are authorized by the relevant authority at our university to hire employees from the laboratory to build the platform and carry out the annotations. All employees are adults and were treated ethically. On average, they were paid £5-£10/hour.
A Baseline Details

Parameter Settings. In the architecture, we set the number of feed-forward layers to 2, with dimensions [512×1024, 1024×512], for both the mention and entity sides of the two models. The initial learning rate is set to 2e-4 for ResNet and 2e-6 for CLIP. Images are all resized to 224 × 224 pixels, following the common input size, and textual information is truncated to 77 tokens. The batch sizes for ResNet and CLIP are both set to 64. All methods are implemented in PyTorch (Paszke et al., 2019) and optimized with the AdamW algorithm (Loshchilov and Hutter, 2017).

Experimental setup. We train our models on two NVIDIA Tesla V100 GPUs, training each model for up to 20 epochs. For inference, we use Faiss (https://github.com/facebookresearch/faiss) to achieve fast recall in the large-scale embedding space, at about 500ms per instance.

B Evaluation Metrics

All evaluations and empirical analyses are reported with two widely used Top-k retrieval metrics: Recall and Mean Reciprocal Rank (MRR). The final result is the average score over all cases:

Recall@K = (1/Q) Σ_{i=1}^{Q} 1_{qk_i}(gt_i),
MRR@K = (1/Q) Σ_{i=1}^{Q} 1/rank_i,

where 1_A(x) denotes a 0/1-valued indicator function, and qk_i and gt_i are the Top-k result and the ground truth of query i. MRR evaluates where the first relevant item appears: for a single query, the reciprocal rank is 1/rank, where rank is the position of the highest-ranked correct answer; if no correct answer is returned for the query, the reciprocal rank is 0.

C More Examples from WIKIPerson

To demonstrate more details of our dataset, we pick two examples from it (Figure 8, Figure 9).

D Detailed Analysis

According to the experimental results, the re-ranking strategy improves performance to a certain degree, so we conduct a detailed analysis of the strategy to help understand the reason and to provide insights for future model designs. Firstly, we analyze the effect of the re-ranking sequence length, which is the main factor affecting the result. Specifically, we study the re-ranking sequence length and plot Recall@1 for ResNet + CLIP_N_D and CLIP_N_D + ResNet in Figure 10. From the results, we can see that both methods achieve higher performance as the re-ranking length increases at the beginning, and then start to decrease slightly. It can be inferred that when the re-ranking length grows to the size of |E|, the re-ranking model becomes equivalent to the single ResNet or CLIP_N_D. Besides, the two models have different inflection points and speeds of decline: CLIP_N_D + ResNet reaches its peak at a lower re-rank length and descends sharply, while ResNet + CLIP_N_D keeps improving until the re-rank length equals 600 and descends slowly. The reason for this phenomenon is that CLIP_N_D outperforms ResNet; as a result, a larger re-rank size is necessary for ResNet to guarantee recalling the ground truth.

Secondly, we notice that the Top-k results of CLIP_N_D and ResNet differ greatly. We therefore plot the precise overlap between the Top-k results of ResNet and CLIP_N_D in Figure 11. The original and fine-tuned models share the same trend: as K increases, the overlap decreases first and increases later, reaching its minimum when k nears 50. The fine-tuned model has a higher overlap than the zero-shot one. The overlap starts at 30.1%, which means that only 30.1% of the entities are identical among the Top-1 results of the two models, even though they achieve comparable performance; it then drops sharply to 15%. When k equals |E|, the overlap reaches 100%. This small overlap, combined with the high and comparable performance of the two models, ensures that using one model to re-rank the recall results of the other can improve performance significantly.
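For concreteness, the Recall@K and MRR@K definitions from Appendix B above can be computed as in the following sketch; the data layout (per-query lists of ranked entity ids) is an assumption, not the authors' evaluation code.

```python
# Sketch of Recall@K and MRR@K as defined in Appendix B. `ranked` holds,
# for each query, entity ids sorted by predicted score; `gt` holds the
# ground-truth entity id of each query.
def recall_at_k(ranked: list[list[int]], gt: list[int], k: int) -> float:
    return sum(g in r[:k] for r, g in zip(ranked, gt)) / len(gt)

def mrr_at_k(ranked: list[list[int]], gt: list[int], k: int) -> float:
    total = 0.0
    for r, g in zip(ranked, gt):
        if g in r[:k]:
            total += 1.0 / (r.index(g) + 1)   # reciprocal rank, else 0
    return total / len(gt)
```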
Figure 1: Different categories of Entity Linking. VNEL is a task to identify images individually without any text input and link visual mentions to specific named entities in KBs.
Figure 3: The procedure of building WIKIPerson.
Figure 4: Examples of the WIKIPerson dataset. Left: An image and its mention's bounding box with WikiId, which represents a unique entity in Wikipedia. Right: The ground-truth entity in the KB with both visual and textual information.
Figure 5: Left: Topic distribution of entities in WIKIPerson (e.g., other 13.4%, law_crime 4.5%, business 8.0%). Right: Link popularity distribution between entities in WIKIPerson and the whole Wikipedia.
Figure 6: The overall framework of the different baselines.
Figure 7: The qualitative case studies of Top-3 predicted entities. The result with a green border is the ground-truth entity of the input image.
Figure 8: The images of Taylor Swift (Q26876, a famous American singer-songwriter) in WIKIPerson.
Figure 9: The images of Indra Nooyi (Q264913, Indian American business executive and former CEO of PepsiCo) in WIKIPerson.
Figure 10: The Recall@1 of the models with different re-rank sizes.
Figure 11: Overlap of the Top-k results between CLIP_N_D and ResNet in the zero-shot and fine-tuned settings.
Table 2: Statistics of WIKIPerson. #E_cov and #M^I_avg denote the number of covered entities and the average number of mentions per image, respectively.
Table 3: Experimental results of baselines among the three sub-tasks under both zero-shot and fine-tuned settings.

Footnotes: https://github.com/FuxiaoLiu/VisualNews-Repository (VisualNews repository); more examples from WIKIPerson are shown in the Appendix; the description of the evaluation metrics can be found in the Appendix.

References

Omar Adjali, Romaric Besançon, Olivier Ferret, Herve Le Borgne, and Brigitte Grau. 2020. Multimodal entity linking for tweets. In European Conference on Information Retrieval, pages 463-478. Springer.
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007, volume 4825 of Lecture Notes in Computer Science, pages 722-735. Springer.
A. F. Biten, L. Gomez, M. Rusinol, and D. Karatzas. 2019. Good news, everyone! context driven entity-aware captioning for news images. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
N. D. Cao, G. Izacard, S. Riedel, and F. Petroni. 2020. Autoregressive entity retrieval.
Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. 2018. Vggface2: A dataset for recognising faces across pose and age. In 2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018), pages 67-74. IEEE.
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.
Ritendra Datta, Dhiraj Joshi, Jia Li, and James Z Wang. 2008. Image retrieval: Ideas, influences, and trends of the new age. ACM Computing Surveys (Csur), 40(2):1-60.
Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2020. Autoregressive entity retrieval. arXiv preprint arXiv:2010.00904.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Shahi Dost, Luciano Serafini, Marco Rospocher, Lamberto Ballan, and Alessandro Sperduti. 2020. Vtlinker: Visual-textual-knowledge entity linker. In ECAI 2020, pages 2897-2898. IOS Press.
M Fabian, Kasneci Gjergji, Weikum Gerhard, et al. 2007. Yago: A core of semantic knowledge unifying wordnet and wikipedia. In 16th International World Wide Web Conference, WWW, pages 697-706.
Jingru Gan, Jinchang Luo, Haiwei Wang, Shuhui Wang, Wei He, and Qingming Huang. 2021. Multimodal entity linking: a new dataset and a baseline. In Proceedings of the 29th ACM International Conference on Multimedia, pages 993-1001.
Peng Gao, Shijie Geng, Renrui Zhang, Teli Ma, Rongyao Fang, Yongfeng Zhang, Hongsheng Li, and Yu Qiao. 2021. Clip-adapter: Better vision-language models with feature adapters. arXiv preprint arXiv:2110.04544.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778.
Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the 2011 conference on empirical methods in natural language processing, pages 782-792.
Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144.
F. Liu, Y. Wang, T. Wang, and V. Ordonez. 2020. Visualnews: Benchmark and challenges in entity-aware image captioning.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity descriptions. arXiv preprint arXiv:1906.07348.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.
Cédric Maigrot, Vincent Claveau, Ewa Kijak, and Ronan Sicre. 2016. Mediaeval 2016: A multimodal system for the verifying multimedia use task. In MediaEval 2016: "Verifying Multimedia Use" task.
S. Moon, L. Neves, and V. Carvalho. 2018. Multimodal named entity disambiguation for noisy social media posts. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).
Isaiah Onando Mulang', Kuldeep Singh, Chaitali Prabhu, Abhishek Nadgeri, Johannes Hoffart, and Jens Lehmann. 2020. Evaluating the impact of knowledge graph context on entity disambiguation models. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 2157-2160.
Eric Müller-Budack, Jonas Theiner, Sebastian Diering, Maximilian Idahl, Sherzod Hakimov, and Ralph Ewerth. 2021. Multimodal news analytics using measures of cross-modal entity and context consistency. International Journal of Multimedia Information Retrieval, 10(2):111-125.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32.
A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, and J. Clark. 2021. Learning transferable visual models from natural language supervision.
A. Ramisa, Fei Yan, Francesc Moreno-Noguer, and K. Mikolajczyk. 2017. Breakingnews: Article annotation by image and text processing. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP(99):1-1.
Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 815-823.
Özge Sevgili, Artem Shelmanov, Mikhail Y. Arkhipov, Alexander Panchenko, and Chris Biemann. 2022. Neural entity linking: A survey of models based on deep learning. Semantic Web, 13(3):527-570.
Avirup Sil, Gourab Kundu, Radu Florian, and Wael Hamza. 2018. Neural cross-lingual entity linking. In Thirty-Second AAAI Conference on Artificial Intelligence.
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. 2017. Inception-v4, inception-resnet and the impact of residual connections on learning. In Thirty-first AAAI conference on artificial intelligence.
Amara Tariq and Hassan Foroosh. 2017. A context-driven extractive framework for generating realistic image descriptions. IEEE Trans. Image Process., 26(2):619-632.
Neha Tilak, Sunil Gandhi, and Tim Oates. 2017. Visual entity linking. In 2017 International Joint Conference on Neural Networks, IJCNN 2017, Anchorage, AK, USA, May 14-19, 2017, pages 665-672. IEEE.
Alasdair Tran, Alexander Mathews, and Lexing Xie. 2020. Transform and tell: Entity-aware news image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13035-13045.
Denny Vrandečić and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.
X. Wang, J. Tian, M. Gui, Z. Li, R. Wang, M. Yan, L. Chen, and Y. Xiao. 2022. Wikidiverse: A multimodal entity linking dataset with diversified contextual topics and entity types. arXiv e-prints.
Rebecka Weegar, Linus Hammarlund, Agnes Tegen, Magnus Oskarsson, Kalle Åström, and Pierre Nugues. 2014. Visual entity linking: A preliminary study. In Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence.
Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2019. Scalable zero-shot entity linking with dense entity retrieval. arXiv preprint arXiv:1911.03814.
Ikuya Yamada, Koki Washio, Hiroyuki Shindo, and Yuji Matsumoto. 2019. Global entity disambiguation with pretrained contextualized embeddings of words and entities. arXiv preprint arXiv:1909.00426.
P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Nlp.cs.illinois.edu.
Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. 2016. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Processing Letters, 23(10):1499-1503.
L. Zhang, Z. Li, and Q. Yang. 2021a. Attention-based multimodal entity linking with high-quality images. Database Systems for Advanced Applications.
Renrui Zhang, Rongyao Fang, Peng Gao, Wei Zhang, Kunchang Li, Jifeng Dai, Yu Qiao, and Hongsheng Li. 2021b. Tip-adapter: Training-free clip-adapter for better vision-language modeling. arXiv preprint arXiv:2111.03930.
Qiushuo Zheng, Hao Wen, Meng Wang, and Guilin Qi. 2022. Visual entity linking via multi-modal learning. Data Intell., 4(1):1-19.
[ "https://github.com/facebookresearch/faiss", "https://github.com/FuxiaoLiu/VisualNews-Repository" ]
[ "Zero-Shot and Few-Shot Learning for Lung Cancer Multi-Label Classification using Vision Transformer", "Zero-Shot and Few-Shot Learning for Lung Cancer Multi-Label Classification using Vision Transformer" ]
[ "Fu-Ming Guo ", "Yingfang Fan " ]
[]
[]
Lung cancer is the leading cause of cancer-related death worldwide. Lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) are the most common histologic subtypes of non-small-cell lung cancer (NSCLC). Histology is an essential tool for lung cancer diagnosis. Pathologists make classifications according to the dominant subtypes. Although morphology remains the standard for diagnosis, significant tools still need to be developed to support the diagnosis. In our study, we utilize the pre-trained Vision Transformer (ViT) model to perform multi-label classification of lung cancer on histologic slices (from the LC25000 dataset), in both Zero-Shot and Few-Shot settings. We then compare the performance of Zero-Shot and Few-Shot ViT in terms of accuracy, precision, recall, sensitivity, and specificity. Our study shows that the pre-trained ViT model has good performance in the Zero-Shot setting, competitive accuracy (99.87%) in the Few-Shot setting (epoch = 1), and an optimal result (100.00% on both the validation set and the test set) in the Few-Shot setting (epoch = 5). * [email protected], Association for Computing Machinery. † Harvard Medical School. Preprint.
10.48550/arxiv.2205.15290
[ "https://arxiv.org/pdf/2205.15290v2.pdf" ]
249,192,263
2205.15290
0953ada119f384f328b6102e6b7963b3bde7cc9e
Zero-Shot and Few-Shot Learning for Lung Cancer Multi-Label Classification using Vision Transformer

Introduction

Lung cancer is the leading cause of cancer-related death worldwide, caused not only by smoking but also by exposure to toxic chemicals. Non-small-cell lung cancer (NSCLC) is any malignant epithelial lung tumor that lacks a small-cell component (Mengoli et al. [2018]). NSCLC represents approximately 85% of all new lung cancer diagnoses (Gridelli et al. [2015]). Lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC) are the most common histologic subtypes of NSCLC (Herbst et al. [2018]). These subtypes are further subclassified into multiple subtypes according to WHO criteria (Travis et al. [2015]). Histology is an essential factor for individualizing treatment based on either safety or efficacy outcomes (Langer et al. [2010]). Adenocarcinomas are malignant epithelial tumors with glandular differentiation. They show clear morphologic patterns such as acinar, papillary, lepidic, and micropapillary (Travis et al. [2015]), although mixed-pattern adenocarcinomas are most common (Nasim et al. [2019]). Squamous cell carcinomas are often centrally located and derived from bronchial epithelial cells; unequivocal keratinization and well-formed classical intercellular bridges allow a diagnosis of squamous cell carcinoma (Travis et al. [2015]). Histologic distinctions may be unclear in poorly differentiated tumors and require confirmatory immunohistochemical stains. Heterogeneous histology within the same lesion occurs in many NSCLC tumors. Pathologists make classifications according to the dominant subtypes. Although morphology remains the standard for diagnosis, significant tools still need to be developed to support the diagnosis.

Using AI to analyze tissue sections is typically called computational pathology (Fuchs and Buhmann [2011]). Research in this area traces back to the middle of the last century, with the seminal application of image analysis algorithms to medical images. Image analysis algorithms can classify cell images based on quantitative cell characteristics, e.g., size, shape, and chromatin distribution, and support the diagnosis of diseases (Mendelsohn et al. [1965]).
[1965]). The early applications implemented computational features matched to a biological process, later replaced by radiomics using generic features of texture descriptors (Zwanenburg et al. [2020]). Automated classification of abnormal lesions using images is a challenging task owing to the fine-grained variability in the appearance of abnormal lesions. Deep convolutional neural networks (CNN) (LeCun et al. [1999]) show potential for general and highly variable tasks across many fine-grained object categories. In the recent boom (Dean [2022]) of Deep Learning (LeCun et al. [2015]), a series of CNN-based models unceasingly refreshed the state-of-the-art performance on various computer vision benchmarks (Deng et al. [2009]; Krizhevsky et al. [2009]). Nowadays, the Transformer (Vaswani et al. [2017]) shows an advantage in computer vision tasks (Dosovitskiy et al. [2020]) after becoming the de facto module for natural language processing tasks (Devlin et al. [2019]). The main advantage of the Deep Learning approach is automatically learning features from the data, instead of crafting meaningful features in feature engineering and conventional image analysis. The automated feature learning of the Deep Learning approach reduces the required domain knowledge and the implementation time. More importantly, the automated Deep Learning approach yields robust, hierarchical feature representations, which outperform traditional image analysis methods in most cases. Despite the powerful learning ability of the Deep Learning approach, there is still a central issue in applications in the medical domain: the domain shift. How do these state-of-the-art computer vision models perform on medical image analysis tasks? Specifically, the histological analysis of lung cancer in our case. To illustrate the working mechanism behind these models, we utilized the cutting-edge model interpretation work for Transformer models (Chefer et al. [2021]) and compared it with the analysis from the human expert. Our pathology expert provides the information about which regions help her decide whether the input image is a lung cancer lesion or not from the professional perspective. The comparison between the pathology cognition and the Grad-CAM visualization presents a high degree of consistency. This comparison helps us understand our deep learning model, and further rationalizes our proposed method. Methods The pre-train-finetune paradigm is the robust de facto pipeline in visual and language learning. In the finetune stage, a Multi-Layer Perceptron (MLP) (Rosenblatt [1958]; Aizerman [1964]) often follows a thoroughly pre-trained backbone network to function as a projector and aid transfer learning (Wang et al. [2021]) on new tasks (medical image classification in our case). Our customized model utilizes the paradigm above: ViT as the pre-trained backbone, followed by an MLP projector, and finally, a cross-entropy loss to classify and diagnose the cancer lesion (Figure 1). Model Architectures We utilize the ViT as the backbone; in Figure 1, we illustrate how the ViT backbone functions in our customized model. The multi-scale input image samples (768 x 768 pixels) are first normalized to 224 x 224 pixels. This follows the input image format of ImageNet, since ViT is pre-trained on ImageNet. For the 224 x 224 pixel input, the ViT backbone first splits the input image into 196 image patches, where each patch is 16 x 16 pixels. Image patches are treated the same way as tokens (words) in natural language processing applications.
The image patches are linearly embedded, with positional embeddings. The ViT backbone (Dosovitskiy et al. [2020]) then feeds the resulting sequence vectors into a standard Transformer encoder (Vaswani et al. [2017]). The process is visually illustrated in Figure 1. Experiments In this section, we describe the dataset we used and our experiments in the Zero-Shot and Few-Shot settings. Dataset We utilized the lung cancer part of the dataset LC25000 (Borkowski et al. [2019]). LC25000 has 25,000 color images in 5 classes. Each class contains 5,000 images of the histologic categories: colon adenocarcinoma, benign colonic tissue, lung adenocarcinoma, lung squamous cell carcinoma and benign lung tissue. All images are de-identified, HIPAA 3 compliant and validated. We utilize the lung cancer categories: lung adenocarcinoma, lung squamous cell carcinoma and benign lung tissue. We split the 15,000 images in the lung cancer categories into a training set (D train), a validation set (D validation) and a test set (D test), by the ratios 60%, 20%, 20%, after random sampling. Zero-Shot transfer learning First, we conducted the experiments in a Zero-Shot manner. The well pre-trained ViT model functions with frozen weights in this setting. We directly use the frozen pre-trained ViT model to make predictions on the test set (D test). We report the accuracy on the validation set and the test set in Table 1. Few-Shot transfer learning Then we conducted the experiments in a Few-Shot manner. We fine-tuned the pre-trained ViT model on the training set (D train) for 5 epochs, and validated the best model on the validation set (D validation) at each epoch. We find that the pre-trained ViT has prompt and strong learning ability, so that it quickly achieves the optimal accuracy after fine-tuning for only 5 epochs (100.00% on both the validation set (D validation) and the test set (D test)). We report the accuracies on the validation set and the test set in Table 1. Receiver operating characteristic (ROC) curve The receiver operating characteristic (ROC) curve is used to evaluate the quality of a classifier (Green et al. [1966]; Fawcett [2006]). Discussion A lot of work endeavors to make deep learning more sensible and explainable. In various deep learning applications, especially in medical imaging, it is crucial to make the deep learning model more interpretable. Selvaraju et al. [2017] have introduced the Gradient Weighted Class Activation Mapping (Grad-CAM) technique, which provides an interpretative view of deep learning models. Chefer et al. [2021] extend the research of Grad-CAM to Transformer models (Vaswani et al. [2017]). Grad-CAM uses the gradients of any target concept, flowing into the final convolutional layer, to produce a coarse localization map highlighting important regions in the image for predicting the concept. To illustrate the working mechanism behind these models, we utilized the cutting-edge model interpretation work for Transformer models (Chefer et al. [2021]) and compared it with the analysis from the human expert. In our application to classify the histology images, our visualizations (Figure 3) lend insights into failure modes of these models, showing that seemingly unreasonable predictions have reasonable explanations. Our visualizations (Figure 3) are robust to adversarial perturbations, are more faithful to the underlying model and help achieve model generalization by identifying dataset bias.
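To make the patch-based input pipeline described above concrete, the following is a minimal PyTorch sketch of how a 224 x 224 image is cut into 196 patches of 16 x 16 pixels and linearly embedded with positional embeddings. This illustrates the standard ViT tokenization, not the authors' code; the embedding dimension of 768 matches ViT-Base but is an assumption here.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split a 224x224 RGB image into 196 patches of 16x16 and embed them."""
    def __init__(self, img_size=224, patch=16, dim=768):
        super().__init__()
        self.num_patches = (img_size // patch) ** 2      # 14 * 14 = 196
        self.proj = nn.Linear(patch * patch * 3, dim)    # linear patch embedding
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        self.patch = patch

    def forward(self, x):                                # x: (B, 3, 224, 224)
        B, C, H, W = x.shape
        p = self.patch
        # cut into non-overlapping patches and flatten each patch to a vector
        x = x.unfold(2, p, p).unfold(3, p, p)            # (B, C, 14, 14, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, self.num_patches, C * p * p)
        return self.proj(x) + self.pos                   # (B, 196, 768)

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 196, 768])
```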
The interpretation work of deep learning models is essential to understanding the mechanism behind the success of our model, and to making the model more transparent. In our case, we utilize Grad-CAM to generate visualized explanations via gradient-based localization. The localizations in Grad-CAM help explain which regions (confirmed salient features) in the image input contribute more to the model's final decision and which regions contribute less (Figure 3). Furthermore, our pathology expert provides the information about which regions help her decide whether the input image is a lung cancer lesion or not from the professional perspective. The comparison between the pathology cognition and the Grad-CAM visualization presents a high degree of consistency. This comparison is novel, helps us understand our deep learning model, and further rationalizes our proposed method. Figure 1: Model Overview - Vision Transformer for Lung Cancer Classification. The x axis of the ROC curve is the false positive rate (FPR), and the y axis is the true positive rate (TPR). The closer the point or curve of a classifier lies to the top-left of the ROC plot, the better this classifier is. The Area under the ROC Curve (AUC ROC) tests whether positives are ranked higher than negatives. We report the AUC values of each model (for all three classes, LUAD, BENIGN and LUSC) in Figure 2. The AUC value of the ViT model (Few-Shot, epoch = 5) is 1.00000000 for all three classes (LUAD, BENIGN and LUSC), showing that the ViT model (Few-Shot, epoch = 5) is an optimal classifier in our case. Figure 2: Receiver operating characteristic (ROC) curves for Zero-Shot and Few-Shot ViT models (panels 2a, 2b). Figure 3: Attention Visualization: first row - original histological image; second row - Grad-CAM discriminative regions. Few-shot transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a wide range of downstream tasks, has emerged as a powerful technique in natural language and computer vision tasks. Pre-trained representations have shown substantial performance improvements using self-supervised learning and transfer learning. There are also emerging zero-shot learning techniques showing outstanding performance, like prompt-tuning (Li and Liang [2021]) and instruction tuning (Wei et al. [2021]). In our study, we utilize the pre-trained Vision Transformer (ViT from Dosovitskiy et al. [2020]) model to classify multi-label lung cancer on histologic slices (from the dataset LC25000 in Borkowski et al. [2019]), in both Zero-Shot and Few-Shot manners. Then we compare the performance of Zero-Shot and Few-Shot ViT on accuracy, precision, recall, sensitivity and specificity. Our study shows that the pre-trained ViT model has a good performance in the Zero-Shot setting, a competitive accuracy in the Few-Shot setting (epoch = 1) and an optimal result in the Few-Shot setting (epoch = 5). Zero-Shot Learning (Larochelle et al. [2008]; Socher et al. [2013]; Wei et al. [2021]) and Few-Shot Learning (Brown et al. [2020]) are the emerging paradigms to address this issue.
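The per-class ROC/AUC evaluation described above can be sketched with scikit-learn as follows. This is an illustrative snippet, not the authors' evaluation code: the label arrays are random placeholders standing in for real model predictions on the three classes.

```python
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc

# y_true: integer labels for the three classes; y_score: (N, 3) softmax outputs.
classes = ["LUAD", "BENIGN", "LUSC"]
rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=200)
y_score = rng.dirichlet(np.ones(3), size=200)

y_bin = label_binarize(y_true, classes=[0, 1, 2])  # one-vs-rest binarization
for i, name in enumerate(classes):
    fpr, tpr, _ = roc_curve(y_bin[:, i], y_score[:, i])  # FPR on x, TPR on y
    print(f"{name}: AUC = {auc(fpr, tpr):.4f}")
```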
3 Health Insurance Portability and Accountability Act of 1996 (HIPAA), https://www.cdc.gov/phlp/publications/topic/hipaa.html
Table 1: Zero-Shot and Few-Shot performance of ViT on LC25000
Part Name             Validation set (acc)   Test set (acc)
Zero-Shot             33.77%
Few-Shot (epoch = 1)  98.90%                 98.87%
Few-Shot (epoch = 2)  98.47%                 99.50%
Few-Shot (epoch = 3)  99.77%                 99.70%
Few-Shot (epoch = 4)  98.90%                 99.87%
Few-Shot (epoch = 5)  100.00%                100.00%
Acknowledgments and Disclosure of Funding We thank the computing resource support from the TPU Research Cloud. Both Guo FM and Fan Y contributed to manuscript revision, read, and approved the submitted version.
References
Mark A Aizerman. Theoretical foundations of the potential function method in pattern recognition learning. Automation and remote control, 25:821-837, 1964.
Andrew A Borkowski, Marilyn M Bui, L Brannon Thomas, Catherine P Wilson, Lauren A DeLand, and Stephen M Mastorides. Lung and colon cancer histopathological image dataset (lc25000). arXiv preprint arXiv:1912.12142, 2019.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
Hila Chefer, Shir Gur, and Lior Wolf. Transformer interpretability beyond attention visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 782-791, 2021.
Jeffrey Dean. A golden decade of deep learning: Computing systems & applications. Daedalus, 151(2):58-74, 2022.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. Ieee, 2009.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, 2019.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
Tom Fawcett. An introduction to roc analysis. Pattern recognition letters, 27(8):861-874, 2006.
Thomas J Fuchs and Joachim M Buhmann. Computational pathology: challenges and promises for tissue analysis. Computerized Medical Imaging and Graphics, 35(7-8):515-530, 2011.
David Marvin Green, John A Swets, et al. Signal detection theory and psychophysics, volume 1. Wiley New York, 1966.
Cesare Gridelli, Antonio Rossi, David P Carbone, Juliana Guarize, Niki Karachaliou, Tony Mok, Francesco Petrella, Lorenzo Spaggiari, and Rafael Rosell. Non-small-cell lung cancer. Nature reviews Disease primers, 1(1):1-16, 2015.
Roy S Herbst, Daniel Morgensztern, and Chris Boshoff. The biology and management of non-small cell lung cancer. Nature, 553(7689):446-454, 2018.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
Corey J Langer, Benjamin Besse, Antonio Gualberto, Elizabeth Brambilla, and Jean-Charles Soria. The evolving role of histology in the management of advanced non-small-cell lung cancer. Journal of clinical oncology, 28(36):5311-5320, 2010.
Hugo Larochelle, Dumitru Erhan, and Yoshua Bengio. Zero-data learning of new tasks. In AAAI, volume 1, page 3, 2008.
Yann LeCun, Patrick Haffner, Léon Bottou, and Yoshua Bengio. Object recognition with gradient-based learning. In Shape, contour and grouping in computer vision, pages 319-345. Springer, 1999.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning.
nature, 521(7553):436-444, 2015.
Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4582-4597, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.353.
Mortimer L Mendelsohn, Wilfred A Kolman, Benson Perry, and Judith MS Prewitt. Computer analysis of cell images. Postgraduate Medicine, 38(5):567-573, 1965.
MC Mengoli, FR Longo, F Fraggetta, A Cavazza, A Dubini, G Ali, F Guddo, E Gilioli, G Bogina, N Nannini, et al. The 2015 world health organization classification of lung tumors: new entities since the 2004 classification. Pathologica-Journal of the Italian Society of Anatomic Pathology and Diagnostic Cytopathology, 110(1):39-67, 2018.
F. Nasim, B. F. Sabath, and G. A. Eapen. Lung Cancer. Med Clin North Am, 103(3):463-473, May 2019.
Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, and Lihi Zelnik-Manor. Imagenet-21k pretraining for the masses. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021. URL https://openreview.net/forum?id=Zkj_VcZ6ol.
Frank Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6):386, 1958.
Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pages 618-626, 2017.
Richard Socher, Milind Ganjoo, Christopher D Manning, and Andrew Ng. Zero-shot learning through cross-modal transfer. Advances in neural information processing systems, 26, 2013.
Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE international conference on computer vision, pages 843-852, 2017.
William D Travis, Elisabeth Brambilla, Andrew G Nicholson, Yasushi Yatabe, John HM Austin, Mary Beth Beasley, Lucian R Chirieac, Sanja Dacic, Edwina Duhig, Douglas B Flieder, et al. The 2015 world health organization classification of lung tumors: impact of genetic, clinical and radiologic advances since the 2004 classification. Journal of thoracic oncology, 10(9):1243-1260, 2015.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
Yizhou Wang, Shixiang Tang, Feng Zhu, Lei Bai, Rui Zhao, Donglian Qi, and Wanli Ouyang. Revisiting the transferability of supervised pretraining: an mlp perspective. arXiv preprint arXiv:2112.00496, 2021.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, et al. The visual task adaptation benchmark. 2019.
Alex Zwanenburg, Martin Vallières, Mahmoud A Abdalah, Hugo JWL Aerts, Vincent Andrearczyk, Aditya Apte, Saeed Ashrafinia, Spyridon Bakas, Roelof J Beukinga, Ronald Boellaard, et al. The image biomarker standardization initiative: standardized quantitative radiomics for high-throughput image-based phenotyping. Radiology, 295(2):328-338, 2020.
[]
[ "CellCentroidFormer: Combining Self-attention and Convolution for Cell Detection", "CellCentroidFormer: Combining Self-attention and Convolution for Cell Detection" ]
[ "Royden Wagner [email protected]@uni-heidelberg.de ", "Karl Rohr ", "\nBiomedical Computer Vision Group\nIPMB\nBioQuant\n", "\nHeidelberg University\nGermany\n" ]
[ "Biomedical Computer Vision Group\nIPMB\nBioQuant", "Heidelberg University\nGermany" ]
[]
Cell detection in microscopy images is important to study how cells move and interact with their environment. Most recent deep learning-based methods for cell detection use convolutional neural networks (CNNs). However, inspired by the success in other computer vision applications, vision transformers (ViTs) are also used for this purpose. We propose a novel hybrid CNN-ViT model for cell detection in microscopy images to exploit the advantages of both types of deep learning models. We employ an efficient CNN, that was pre-trained on the ImageNet dataset, to extract image features and utilize transfer learning to reduce the amount of required training data. Extracted image features are further processed by a combination of convolutional and transformer layers, so that the convolutional layers can focus on local information and the transformer layers on global information. Our centroid-based cell detection method represents cells as ellipses and is end-to-end trainable. Furthermore, we show that our proposed model can outperform fully convolutional one-stage detectors on four different 2D microscopy datasets. Code is available at: https://github.com/roydenwa/cell-centroid-former
10.48550/arxiv.2206.00338
[ "https://arxiv.org/pdf/2206.00338v2.pdf" ]
249240254
2206.00338
5ad58b5193f4c88fa5e74790d30c7bc5d538c691
CellCentroidFormer: Combining Self-attention and Convolution for Cell Detection Royden Wagner [email protected]@uni-heidelberg.de Karl Rohr Biomedical Computer Vision Group, IPMB, BioQuant, Heidelberg University, Germany. Keywords: cell detection · transformer · self-attention · convolution. Cell detection in microscopy images is important to study how cells move and interact with their environment. Most recent deep learning-based methods for cell detection use convolutional neural networks (CNNs). However, inspired by the success in other computer vision applications, vision transformers (ViTs) are also used for this purpose. We propose a novel hybrid CNN-ViT model for cell detection in microscopy images to exploit the advantages of both types of deep learning models. We employ an efficient CNN, that was pre-trained on the ImageNet dataset, to extract image features and utilize transfer learning to reduce the amount of required training data. Extracted image features are further processed by a combination of convolutional and transformer layers, so that the convolutional layers can focus on local information and the transformer layers on global information. Our centroid-based cell detection method represents cells as ellipses and is end-to-end trainable. Furthermore, we show that our proposed model can outperform fully convolutional one-stage detectors on four different 2D microscopy datasets. Code is available at: https://github.com/roydenwa/cell-centroid-former Fig. 1. Feature maps and cell detection results for a fluorescence microscopy image. The feature map on the left was generated by a transformer encoder in the neck part of our model, the feature map on the right was generated by an adjacent convolutional layer. The transformer encoder focuses on the overall cell shapes (global features), while the convolutional layer focuses on the cell centroids (local features). Both feature maps have been enlarged, their original size is 48 × 48 pixels. Introduction Cell detection is an important task when studying biomedical microscopy images and time-lapse microscopy videos. Main applications are the quantification of cellular structures as well as studying how cells move and interact with their environment. Most recent deep learning-based methods for cell detection in microscopy images use convolutional neural networks (CNNs) (e.g., [5], [6], [12]). However, inspired by the success in other computer vision applications, vision transformers (ViTs) (e.g., [13]) are also used for this purpose. The comparison of ViTs and CNNs in computer vision applications reveals that the receptive fields of ViTs and CNNs are fundamentally different [14]. The receptive fields of ViTs capture local and global information in both earlier and later layers. The receptive fields of CNNs, on the other hand, initially capture local information and gradually grow to capture global information in later layers. This suggests that ViTs better preserve spatial information across the generated feature maps within the models, which is advantageous for object detection. However, a limitation of most current ViT models is that they require much more training data to reach or surpass the performance of CNNs in computer vision tasks [7]. This is a major limitation for biomedical applications, where annotated training samples are limited.
Another limitation of ViTs is that their core mechanism, multi-head self-attention, has a computational complexity of O(n²), where n is the length of the input sequence. Consequently, large computational resources are required for training such models. To exploit the advantages of both types of deep learning models, we propose a hybrid CNN-ViT model for cell detection in microscopy images. We employ an efficient CNN, that was pre-trained on the ImageNet [1] dataset, to extract image features and utilize transfer learning to reduce the amount of required training data. Extracted image features are further processed by a combination of convolutional and transformer layers, so that the convolutional layers can focus on local information and the transformer layers on global information. We propose a one-stage cell detection method that is end-to-end trainable. Overall, the contributions of our paper are twofold: 1. We introduce a novel deep learning model that combines self-attention and convolution for cell detection in microscopy images. 2. We propose a centroid-based cell detection method that represents cells as ellipses. Related Work Several related methods use CNNs [3,5,6,9,12,22,24] or combine CNNs with classical image analysis approaches [17] for cell detection in biomedical microscopy images. Amongst these methods, two approaches are common: (i) to perform cell detection by predicting a heatmap for cell positions [3,12,24] or (ii) to perform cell detection by predicting the coordinates of bounding boxes for cells [5,6,9]. The heatmap-based methods are well suited for cell counting, but are unsuitable for morphological studies since they only predict where cells are located and do not provide information on the cell dimensions. The bounding box-based methods are more flexible as they predict the cell positions and the cell dimensions (width and height). However, the bounding box-based methods use a cascade of two CNNs and are therefore not end-to-end trainable. To deal with sparsely annotated or small datasets, semi-supervised learning [3,12] or pre-training with synthetic data [24] is used. This complicates the training process, since pseudo-labels must be iteratively generated or synthetic data must be created in advance. The Cell-DETR [13] model is architecturally most related to ours. Cell-DETR is a hybrid CNN-ViT model for cell detection and segmentation. Prangemeier et al. use a CNN backbone to extract image features, a transformer encoder-decoder block to process image features and a model head with a multi-layer perceptron for cell detection. Due to the high computational complexity of the transformer encoder-decoder block, they use a small input size of 128 × 128 pixels. Therefore, a considerable amount of information is lost when downsizing high-resolution microscopy images. Method Model Architecture The proposed hybrid CNN-ViT model combines self-attention and convolution for cell detection. For this purpose, we use MobileViT blocks [11], which combine transformer encoders [2,19] with convolutional layers. In MobileViT blocks, input tensors are first processed by convolutional layers, then, the extracted features are unfolded into a sequence and passed through a transformer encoder. Finally, the output tensor of the transformer encoder is folded again into a 3D representation and concatenated with the input tensor.
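The unfold-transform-fold dataflow just described can be sketched in a few lines of PyTorch. This is a simplified, single-resolution illustration of the pattern (convolve, flatten the spatial grid into a token sequence, run a transformer encoder, fold back, concatenate with the input), not the MobileViT reference implementation; all layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class MiniMobileViTBlock(nn.Module):
    """Sketch of the conv -> unfold -> transformer -> fold -> concat pattern."""
    def __init__(self, channels=64, dim=96, heads=4):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)   # local features
        self.to_dim = nn.Conv2d(channels, dim, 1)
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                  batch_first=True)
        self.to_ch = nn.Conv2d(dim, channels, 1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)           # after concat

    def forward(self, x):                                          # x: (B, C, H, W)
        y = self.to_dim(self.local(x))                             # (B, D, H, W)
        B, D, H, W = y.shape
        seq = y.flatten(2).transpose(1, 2)                         # unfold: (B, H*W, D)
        seq = self.encoder(seq)                                    # global self-attention
        y = seq.transpose(1, 2).reshape(B, D, H, W)                # fold back to 3D
        return self.fuse(torch.cat([x, self.to_ch(y)], dim=1))     # concat with input

out = MiniMobileViTBlock()(torch.randn(1, 64, 48, 48))
print(out.shape)  # torch.Size([1, 64, 48, 48])
```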
Thereby, the convolutional layers extract local information and the self-attention mechanism in the transformer encoder associates local information from distant features to capture global information. We use MobileViT blocks in the neck part of our proposed model to enhance global information compared to a fully convolutional neck part. MobileViT blocks are a light-weight alternative to the original transformer encoder-decoder design [19]. However, due to their multi-head self-attention layers, MobileViT blocks still have a much higher computational complexity (CC) than convolutional layers:

CC_mhs-attn ≅ O(n²) if n ≫ d·h, else O(n²·d·h)   (1)
CC_conv ≅ O(n) if n ≫ d·k·f, else O(n·d·k·f)   (2)

where n is the sequence length, d is the sequence depth, h is the number of self-attention heads, k the size of the convolutional kernels, and f the number of convolutional filters. Thus, we combine MobileViT blocks in the neck part of our model with convolutional layers to extract more features without increasing the computational complexity excessively. In addition, we add layer normalization layers for regularization and to allow higher learning rates during training. As backbone of our proposed model, we use parts of an EfficientNetV2S [16] CNN model. EfficientNet models consist of six high-level blocks; we use five of these blocks to extract image features. We initialize the backbone with weights learned from training on the ImageNet dataset to leverage transfer learning and reduce the amount of required training data. EfficientNetV2S models are optimized for a fixed input size of 384 × 384 × 3 pixels. Therefore, we resize all input images to this input size. We represent cells by their centroid, their width, and their height. Our model contains two fully convolutional heads to predict these cell properties. The heads contain 2D convolution, batch normalization, and bilinear upsampling layers. We do not use further MobileViT blocks in the heads to reduce the computational complexity of our model, and since later convolutional layers have a large receptive field that allows them to capture global information [14]. The first head predicts a heatmap for cell centroids, and the second head predicts the cell dimensions (width and height) at the position of the corresponding cell centroid. The output dimensions of our model are 384 × 384, thus, the output stride is one and we do not need an additional offset head to account for offset errors (as in, e.g., [25]). Figure 2 shows the overall model architecture. Bounding Ellipses for Cell Detection Yang et al. [25] argue that traditional bounding box-based object detection methods are not optimized for biomedical applications. They state that most relevant objects in biomedical applications have round shapes, and thus they propose using bounding circles instead of bounding boxes. We extend this idea and use bounding ellipses for cell detection, since most cell types in biomedical applications have an ellipsoidal shape. Figure 3 shows how we generate training samples for the proposed centroid-based cell detection method. We approximate the centroid of a cell by an ellipse with width and height adjusted to the corresponding cell dimensions. We blur the centroid ellipses using a Gaussian kernel to reduce the loss value when a centroid is only missed by a small distance. Similar as in [26], the cell width and cell height are encoded in rectangles located at the position of the cell centroids.
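As an illustration of the training-sample generation sketched in Figure 3, the snippet below draws a blurred centroid ellipse plus matching width/height maps with OpenCV and NumPy. It is a minimal sketch under assumed ellipse axes, rectangle size, and blur kernel, not the authors' pipeline.

```python
import cv2
import numpy as np

def make_targets(cells, size=(384, 384)):
    """cells: list of (cx, cy, w, h) per cell; returns heatmap, width and height maps."""
    heat = np.zeros(size, np.uint8)
    wmap = np.zeros(size, np.float32)
    hmap = np.zeros(size, np.float32)
    for cx, cy, w, h in cells:
        # filled ellipse marking the centroid, axes scaled to the cell dimensions
        cv2.ellipse(heat, (cx, cy), (w // 4, h // 4), 0, 0, 360, 255, -1)
        # cell dimensions encoded in small rectangles at the centroid position
        wmap[cy - 5:cy + 5, cx - 5:cx + 5] = w
        hmap[cy - 5:cy + 5, cx - 5:cx + 5] = h
    heat = cv2.GaussianBlur(heat, (15, 15), 0)  # soften near-miss penalties
    return heat.astype(np.float32) / 255.0, wmap, hmap

heat, wmap, hmap = make_targets([(100, 120, 40, 24), (250, 200, 32, 32)])
print(heat.max(), wmap.max(), hmap.max())
```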
Experiments Datasets and Pre-processing We evaluate our hybrid CNN-ViT model using four different 2D microscopy datasets from the Cell Tracking Challenge [18]. The Fluo-N2DL-HeLa (HeLa) dataset consists of 2D fluorescence microscopy images of cells stably expressing H2b-GFP. The images of this dataset have a size of 700 × 1100 pixels. The Fluo-N2DH-SIM+ (SIM+) dataset consists of simulated 2D fluorescence microscopy images of HL60 cells stained with Hoechst dyes. The images of this dataset have a size of 773 × 739 pixels. The Fluo-N2DH-GOWT1 (GOWT1) dataset consists of 2D fluorescence microscopy images of mouse stem cells. These images have a size of 1024 × 1024 pixels. The PhC-C2DH-U373 (U373) dataset consists of 2D phase contrast microscopy images of glioblastoma-astrocytoma U373 cells on a polyacrylamide substrate. The images of this dataset have a size of 520 × 696 pixels. Similar as in [20], we use pseudocoloring to generate input images with three channels suitable for the pretrained backbone. We use the ncar pseudo spectral colormap 1, which is well suited for coloring cell nuclei and their immediate surroundings to distinguish these regions from the background. Figure 4 shows pseudocolored image crops of the SIM+ and GOWT1 datasets. Baseline Model Similar to semantic segmentation, our method treats cell detection as a pixel-wise regression objective. Therefore, we choose a CNN designed for semantic segmentation as baseline model in the following experiments. The Dual U-Net model [10] has an encoder-decoder architecture with two decoders. Most recently, the Dual U-Net model was used as part of a tracking and segmentation method [15] that achieved multiple top-3 rankings in the Cell Tracking Challenge 2. Analogously to our model, we use one decoder of the Dual U-Net model to predict the centroid heatmap and the other to predict the cell dimensions. Training Setup, Metrics, and Hyperparameters We use geometric data augmentations such as grid distortion, elastic transformation, shifting, and rotation to increase the size of each dataset to 2150 samples. The resulting datasets are each split into a training dataset (80%), a validation dataset (10%), and a test dataset (10%). All images are normalized using min-max scaling. Our CellCentroidFormer model is trained with pseudocolored training samples, the Dual U-Net model with normalized grayscale training samples. We use three Huber loss functions [4], one loss function per output, to train both models. The total loss is computed by a weighted sum of the three loss values:

L_Huber(y, ŷ) = ½ (y − ŷ)² if |y − ŷ| ≤ 1.0, else |y − ŷ| − ½   (3)

L_total = L_heatmap + ½ · L_height + ½ · L_width   (4)

The centroid heatmap loss (L_heatmap) contributes the most to the total loss because our method is inherently centroid-based. When decoding the predictions of our model, the width and height of the cells are only looked up at positions where cell centroids are located according to the centroid heatmap prediction. As performance metrics, we use the mean intersection over union (MeanIoU) and the structural similarity metric (SSIM) [21]. We convert the centroid heatmaps to binary masks to evaluate the detection performance. Therefore, we apply thresholding to the heatmaps.
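Before turning to the metrics, the loss in Eqs. (3) and (4) above can be transcribed directly to PyTorch; torch.nn.HuberLoss with delta = 1.0 matches Eq. (3), and the weights 1, 1/2, 1/2 match Eq. (4). The tensor names and shapes below are placeholders, not the authors' training code.

```python
import torch
import torch.nn as nn

huber = nn.HuberLoss(delta=1.0)  # Eq. (3)

def total_loss(pred, target):
    # pred/target: dicts with 'heatmap', 'height', 'width' maps of shape (B, 384, 384)
    return (huber(pred["heatmap"], target["heatmap"])
            + 0.5 * huber(pred["height"], target["height"])
            + 0.5 * huber(pred["width"], target["width"]))  # Eq. (4)

p = {k: torch.randn(2, 384, 384) for k in ("heatmap", "height", "width")}
t = {k: torch.randn(2, 384, 384) for k in ("heatmap", "height", "width")}
print(total_loss(p, t).item())
```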
Afterwards, we compute the MeanIoU value of the predicted and the ground truth centroid mask:

MeanIoU = (1/C) Σ_C TP_C / (TP_C + FP_C + FN_C)   (5)

For the two class labels (C), background and cell centroids, the true positive (TP), false positive (FP), and false negative (FN) pixels are determined. For the cell dimensions (height and width), we use the SSIM metric to quantify the performance. The metric measures the similarity between two images or matrices and is defined as:

SSIM(x₁, x₂) = [(2 μ_{x₁} μ_{x₂} + (0.01 L)²)(2 σ_{x₁x₂} + (0.03 L)²)] / [(μ²_{x₁} + μ²_{x₂} + (0.01 L)²)(σ²_{x₁} + σ²_{x₂} + (0.03 L)²)]   (6)

For the two inputs (x₁ and x₂), the mean (μ), the variance (σ), and the dynamic range (L) are computed. We train both models per dataset for 50 epochs with a batch size of 4 using a Nvidia® V100 GPU. We use Adam [8] as optimizer with an initial learning rate of 10⁻⁴ and reduce the learning rate at plateaus. Our model converges around the 30th epoch, whereas the baseline model converges around the 40th epoch. In these training epochs, the lowest learning rate of 10⁻⁶ is also reached for both models and, accordingly, the metrics do not change much in the following epochs. Our model converges faster since the pretrained backbone was already trained to extract image features. The performance difference for the cell centroid MeanIoU score is greater than for the cell dimensions SSIM score, but overall our model yields higher values for all considered metrics on both datasets (training and validation). Table 1 shows a comparison of the performance of the two models on the used test datasets. Additionally, we train a CircleNet [25] model with an EfficientNetV2S as backbone. As in [23,26], we combine the backbone with upsampling blocks, such that the CircleNet has an output stride of 4. CircleNets detect bounding circles for cells by predicting a heatmap for cell centers, the circle radius, and a local offset. We train the model with pseudocolored samples, use a Huber loss for the center heatmap predictions, and keep the rest of the training setup as in [25]. To compute the SSIM score for the cell dimensions, we compute the SSIM score for the radius prediction and the cell width map, the SSIM score for the radius prediction and the cell height map, and average them. Training Curves and Performance Comparison Our CellCentroidFormer model outperforms the Dual U-Net and the CircleNet on all considered datasets. As in Figure 5, the performance difference for the cell centroid MeanIoU score is greater than for the cell dimensions SSIM score. Our model and the Dual U-Net model yield higher cell dimensions SSIM scores than the CircleNet model since most cells have an ellipsoidal shape, which is more accurately represented by an ellipse than by a circle. On the SIM+ dataset, our model and the CircleNet model outperform the Dual U-Net model on the cell centroid MeanIoU score. On the HeLa dataset, the CircleNet performs worst because this dataset contains many small cells (see Figure 6), which are challenging to distinguish in a low-resolution output of 128 × 128. Table 2 shows a comparison of the training times, the inference times, and the overall number of parameters per model.
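The two metrics in Eqs. (5) and (6) can be sketched as follows; the MeanIoU is computed over the two classes (background, cell centroids) after thresholding, and the SSIM is taken from scikit-image. The threshold value and the random test arrays are assumptions for illustration only.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mean_iou(pred_heat, gt_mask, thresh=0.5):
    pred_mask = pred_heat > thresh              # binarize the centroid heatmap
    ious = []
    for cls in (False, True):                   # background, cell centroids
        p, g = (pred_mask == cls), (gt_mask == cls)
        tp = np.logical_and(p, g).sum()
        fp = np.logical_and(p, ~g).sum()
        fn = np.logical_and(~p, g).sum()
        ious.append(tp / max(tp + fp + fn, 1))  # Eq. (5), per class
    return float(np.mean(ious))

pred = np.random.rand(384, 384)
gt = np.random.rand(384, 384) > 0.9
print(mean_iou(pred, gt))
print(structural_similarity(pred, pred * 0.9, data_range=1.0))  # Eq. (6)
```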
However, our model only requires one third of the parameters of the Dual U-Net model and roughly half of the parameters of the CircleNet model (11.5 M vs. 33 M vs. 24.6 M) to achieve superior performance. Figure 6 shows example images of the considered datasets and the corresponding cell detection results of our model. Conclusion We have presented a novel hybrid CNN-ViT model that combines self-attention and convolution for cell detection. Our centroid-based cell detection method represents cells as ellipses and is end-to-end trainable. We have shown that the transformer layers in the neck part of our proposed model can focus on the overall cell shapes, while adjacent convolutional layers focus on the cell centroids. Our experiments reveal that pseudocoloring in combination with pretrained backbones improves the cell detection performance, whereas larger output strides worsen the performance. Furthermore, our CellCentroidFormer model outperforms fully convolutional one-stage detectors on four different 2D microscopy datasets despite having fewer parameters overall. Fig. 2. CellCentroidFormer model. Backbone: Five blocks of an EfficientNetV2S. Neck: MobileViT blocks and convolutional layers. Heads: Fully convolutional upsampling blocks. Fig. 3. Centroid representation of bounding ellipses. Instance segmentation masks are converted into training samples for cell detection. The training samples contain centroid heatmaps, height maps, and width maps. Fig. 4. Pseudocolored image crops of two considered datasets. Figure 5 shows the training curves for the HeLa dataset. Fig. 5. Training curves for the HeLa dataset. Fig. 6. Example microscopy images and cell detection results of our proposed method.

Table 1. Performance on different microscopy datasets. Metrics are evaluated after training for 50 epochs with the corresponding training datasets.
Dataset      Model                Backbone          Centroid MeanIoU ⇑   Dimensions SSIM ⇑
SIM+ Test    Dual U-Net           -                 0.8033               0.9631
             CircleNet            EfficientNetV2S   0.8308               0.9011
             CellCentroidFormer   EfficientNetV2S   0.8492               0.9858
GOWT1 Test   Dual U-Net           -                 0.9278               0.9909
             CircleNet            EfficientNetV2S   0.9108               0.9192
             CellCentroidFormer   EfficientNetV2S   0.9355               0.9959
U373 Test    Dual U-Net           -                 0.9170               0.9908
             CircleNet            EfficientNetV2S   0.8802               0.9241
             CellCentroidFormer   EfficientNetV2S   0.9256               0.9923
HeLa Test    Dual U-Net           -                 0.8675               0.9885
             CircleNet            EfficientNetV2S   0.7650               0.7507
             CellCentroidFormer   EfficientNetV2S   0.9287               0.9937

Table 2. Model comparison. Inference times (TI_GPU) are measured end-to-end and include the data transfer between the host (CPU) and the device (GPU).
Model                T_Epoch ⇓   TI_GPU ⇓   #Params ⇓
Dual U-Net           106 s       67 ms      33.0 M
CircleNet            64 s        59 ms      24.6 M
CellCentroidFormer   122 s       82 ms      11.5 M

1 https://www.ncl.ucar.edu/Document/Graphics/ColorTables/MPL_gist_ncar.shtml
2 http://celltrackingchallenge.net/participants/KIT-Sch-GE
Acknowledgements. Support of the DFG (German Research Foundation) within the SFB 1129 (project Z4) and the SPP 2202 (RO 2471/10-1), and the BMBF (German Federal Ministry of Education and Research) within de.NBI is gratefully acknowledged.
References
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: Conference on Computer Vision and Pattern Recognition (CVPR). pp. 248-255.
IEEE (2009)
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In: International Conference on Learning Representations (ICLR) (2021)
Fujii, K., Suehiro, D., Nishimura, K., Bise, R.: Cell Detection from Imperfect Annotation by Pseudo Label Selection Using P-classification. In: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). pp. 425-434. Springer (2021)
Huber, P.J.: Robust Estimation of a Location Parameter. Annals of Statistics 53, 73-101 (1964)
Hung, J., Goodman, A., Ravel, D., Lopes, S.C., Rangel, G.W., Nery, O.A., Malleret, B., Nosten, F., Lacerda, M.V., Ferreira, M.U., et al.: Keras r-cnn: library for cell detection in biological images using deep neural networks. BMC bioinformatics 21(1), 1-7 (2020)
Jiang, H., Li, S., Liu, W., Zheng, H., Liu, J., Zhang, Y.: Geometry-Aware Cell Detection with Deep Learning. Msystems 5(1), e00840-19 (2020)
Khan, S., Naseer, M., Hayat, M., Zamir, S.W., Khan, F.S., Shah, M.: Transformers in Vision: A Survey. ACM Comput. Surv. (2021)
Kingma, D.P., Ba, J.: Adam: A Method for Stochastic Optimization. International Conference on Learning Representations (ICLR) (2015)
Li, X., Xu, Z., Shen, X., Zhou, Y., Xiao, B., Li, T.Q.: Detection of Cervical Cancer Cells in Whole Slide Images Using Deformable and Global Context Aware Faster RCNN-FPN. Current Oncology 28(5), 3585-3601 (2021)
Li, X., Wang, Y., Tang, Q., Fan, Z., Yu, J.: Dual U-Net for the Segmentation of Overlapping Glioma Nuclei. IEEE Access 7, 84040-84052 (2019)
Mehta, S., Rastegari, M.: MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer.
In: International Conference on Learning Representations (ICLR) (2022)
Nishimura, K., Cho, H., Bise, R.: Semi-supervised Cell Detection in Time-Lapse Images Using Temporal Consistency. In: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). pp. 373-383. Springer (2021)
Prangemeier, T., Reich, C., Koeppl, H.: Attention-Based Transformers for Instance Segmentation of Cells in Microstructures. In: International Conference on Bioinformatics and Biomedicine (BIBM). IEEE (2020)
Raghu, M., Unterthiner, T., Kornblith, S., Zhang, C., Dosovitskiy, A.: Do Vision Transformers See Like Convolutional Neural Networks? In: Advances in Neural Information Processing Systems (NeurIPS) (2021)
Scherr, T., Löffler, K., Neumann, O., Mikut, R.: On Improving an Already Competitive Segmentation Algorithm for the Cell Tracking Challenge-Lessons Learned. bioRxiv (2021)
Tan, M., Le, Q.: EfficientNetV2: Smaller models and faster training. In: International Conference on Machine Learning (ICML). pp. 10096-10106. PMLR (2021)
Tyson, A.L., Rousseau, C.V., Niedworok, C.J., Keshavarzi, S., Tsitoura, C., Cossell, L., Strom, M., Margrie, T.W.: A deep learning algorithm for 3D cell detection in whole mouse brain image datasets. PLoS Computational Biology 17(5), e1009074 (2021)
Ulman, V., Maška, M., Magnusson, K.E., Ronneberger, O., Haubold, C., Harder, N., Matula, P., Matula, P., Svoboda, D., Radojevic, M., et al.: An objective comparison of cell-tracking algorithms. Nature Methods 14(12), 1141-1152 (2017)
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems (NeurIPS) 30 (2017)
Wagner, R., Rohr, K.: EfficientCellSeg: Efficient Volumetric Cell Segmentation Using Context Aware Pseudocoloring.
In: Medical Imaging with Deep Learning (MIDL) (2022)
Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. Transactions on Image Processing 13(4), 600-612 (2004)
Wollmann, T., Rohr, K.: Deep Consensus Network: Aggregating predictions to improve object detection in microscopy images. Medical Image Analysis 70, 102019 (2021)
Xiao, B., Wu, H., Wei, Y.: Simple baselines for human pose estimation and tracking. In: European conference on computer vision (ECCV). pp. 466-481 (2018)
Xie, W., Noble, J.A., Zisserman, A.: Microscopy cell counting and detection with fully convolutional regression networks. Computer methods in biomechanics and biomedical engineering: Imaging & Visualization 6(3), 283-292 (2018)
Yang, H., Deng, R., Lu, Y., Zhu, Z., Chen, Y., Roland, J.T., Lu, L., Landman, B.A., Fogo, A.B., Huo, Y.: CircleNet: Anchor-Free Glomerulus Detection with Circle Representation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). pp. 35-44. Springer (2020)
Zhou, X., Wang, D., Krähenbühl, P.: Objects as points. arXiv preprint arXiv:1904.07850 (2019)
[ "https://github.com/roydenwa/cell-centroid-former" ]
[ "The Non-Compact Weyl Equation", "The Non-Compact Weyl Equation" ]
[ "Anastasia Doikou [email protected]† ", "Theodora Ioannidou ", "\nDepartment of Engineering Sciences\nDepartment of Mathematics, Physics and Computational Sciences\nFaculty of Engineering\nUniversity of Patras\nGR-26500PatrasGreece\n", "\nAristotle University of Thessaloniki\nGR-54124ThessalonikiGreece\n" ]
[ "Department of Engineering Sciences\nDepartment of Mathematics, Physics and Computational Sciences\nFaculty of Engineering\nUniversity of Patras\nGR-26500PatrasGreece", "Aristotle University of Thessaloniki\nGR-54124ThessalonikiGreece" ]
[]
A non-compact version of the Weyl equation is proposed, based on the infinite dimensional spin zero representation of the sl₂ algebra. Solutions of the aforementioned equation are obtained in terms of the Kummer functions. In this context, we discuss the ADHMN approach in order to construct the corresponding non-compact BPS monopoles.
10.1007/jhep04(2011)072
[ "https://arxiv.org/pdf/1012.5643v2.pdf" ]
18853671
1012.5643
082604970928746e778201b81d41900d6a904f73
The Non-Compact Weyl Equation (7 Apr 2011, arXiv:1012.5643v2 [hep-th]) Anastasia Doikou [email protected]† Theodora Ioannidou. Department of Engineering Sciences, University of Patras, GR-26500 Patras, Greece. Department of Mathematics, Physics and Computational Sciences, Faculty of Engineering, Aristotle University of Thessaloniki, GR-54124 Thessaloniki, Greece. A non-compact version of the Weyl equation is proposed, based on the infinite dimensional spin zero representation of the sl₂ algebra. Solutions of the aforementioned equation are obtained in terms of the Kummer functions. In this context, we discuss the ADHMN approach in order to construct the corresponding non-compact BPS monopoles. Introduction The Nahm equations provide a system of non-linear ordinary differential equations

dT_i/ds = (1/2) ε_ijk [T_j, T_k]   (1)

for three n × n anti-hermitian matrices T_i (the so-called Nahm data) of complex-valued functions of the variable s, where n is the magnetic charge of the BPS monopole configuration. The tensor ε_ijk is the totally antisymmetric tensor. In the ADHMN approach, the construction of SU(n + 1) monopole solutions of the Bogomolny equation with topological charge n is translated to the following problem, which is known as the inverse Nahm transform [1]. Given the Nahm data for an n-monopole, the one-dimensional Weyl equation

[I_2n d/ds − I_n ⊗ x_j σ_j + i T_j ⊗ σ_j] v(x, s) = 0   (2)

for the complex 2n-vector v(x, s) must be solved. I_n denotes the n × n identity matrix, x = (x₁, x₂, x₃) is the position in space at which the monopole fields are to be calculated. In the minimal symmetry breaking case, the Nahm data T_i can be cast as (see Reference [2] for a more detailed discussion)

T_i = −(i/2) f_i τ_i,   i = 1, 2, 3   (3)

where the τ_i form the n-dimensional representation of SU(2) and satisfy:

[τ_i, τ_j] = 2i ε_ijk τ_k.   (4)

Let us choose an orthonormal basis for these solutions, satisfying

∫ v̂† v̂ ds = I.   (5)

Given v̂(x, s), the normalized vector computed from (2) and (5), the Higgs field Φ and the gauge potential A_i are given by

Φ = −i ∫ s v̂† v̂ ds,   (6)

A_i = ∫ v̂† ∂_i v̂ ds.   (7)

In [3,4], we applied the ADHMN construction to obtain the SU(n+1) (for generic values of n) BPS monopoles with minimal symmetry breaking, by solving the Weyl equation. In this paper, we present a non-compact approach to the ADHMN transform by introducing an infinite dimensional spin zero representation of the sl₂ algebra for the Nahm data. The aforementioned representation is expressed in terms of appropriate differential operators; hence, the Weyl equation is also written in terms of these differential operators, and not in terms of n × n matrices as in its conventional form (see, for example, Refs. [3,4]). In the Appendix, we present the equivalence between the two approaches, i.e. matrix versus differential operator description of the Weyl equation, which leads us to conjecture that the results of the present investigation should by construction satisfy the Bogomolny equation. This is mainly due to the structural similarity between the equations arising in the present case and the ones emerging in the finite dimensional case described in the Appendix and in Ref. [3]. Nevertheless, this is an intriguing issue, which merits further investigation, in particular when azimuthal dependence is also implemented along the lines described in [4].
The Weyl Equation. In order to construct the non-compact BPS monopole solutions of the Weyl equation, let us consider the $sl_2$ algebra and focus on the non-trivial spin zero representation. Consider first the general case, i.e. the spin $S \in \mathbb{R}$ representation of $sl_2$, of the form
$$\tau_1 = -(\xi^2 - 1)\frac{d}{d\xi} + S\,(\xi + \xi^{-1}), \qquad \tau_2 = -i\left[(1 + \xi^2)\frac{d}{d\xi} + S\,(\xi^{-1} - \xi)\right], \qquad \tau_3 = -2\xi\,\frac{d}{d\xi}. \tag{8}$$
Also take the inner product, in the basis of polynomials of $\xi$ on the unit circle ($\xi = e^{i\theta}$), to be of the form
$$\langle f, g \rangle \equiv \frac{1}{2i\pi} \oint \frac{1}{\xi}\, f^{*}\, g \; d\xi, \tag{9}$$
from which one immediately obtains
$$\langle \xi^m, \xi^n \rangle = \delta_{nm}. \tag{10}$$
Next consider the generic state
$$v = \sum_{k=-\infty}^{\infty} h_k\, \xi^k \left( b_1 \sqrt{\eta} + \frac{b_2}{\sqrt{\eta}} \right), \tag{11}$$
where $h_k = h_k(r,s)$ and $b_i = b_i(r,s)$ for $i = 1, 2$. Notice that, using the representation (8) for $S$ integer or half-integer, together with the inner product (9) and an appropriate orthonormal basis $\{\hat v_1, \ldots, \hat v_{n+1}\}$, where $n = 2S + 1$ is the dimension of the representation (see also the Appendix for more details),
$$\int_0^{n+1} \langle \hat v_i, \hat v_j \rangle\, ds = \delta_{ij}, \tag{12}$$
one may recover the Higgs field obtained in [3] from the formula
$$\Phi_{ij} = -i \int_0^{n+1} (s - n)\, \langle \hat v_i, \hat v_j \rangle\, ds. \tag{13}$$
Next we focus on the spin zero representation of $sl_2$, associated with the Möbius transformation and also relevant in high energy QCD (see, for example, Refs. [5,6]). Again we consider the spherically symmetric case (that is, $x_i = r\,\delta_{i3}$), where the Nahm data are given by (3) with $f_i = f = -\frac{1}{s}$. Substituting the Nahm data (3), where the $\tau_i$ are defined by (8) with $S = 0$, into the Weyl equation (2), and expressing the $\sigma_i$ in terms of the spin $\frac12$ representation, that is equation (8) for $S = \frac12$,
$$\sigma_1 = -(\eta^2 - 1)\frac{d}{d\eta} + \frac{\eta^{-1} + \eta}{2}, \qquad \sigma_2 = -i\left[(1 + \eta^2)\frac{d}{d\eta} + \frac{\eta^{-1} - \eta}{2}\right], \qquad \sigma_3 = -2\eta\,\frac{d}{d\eta}, \tag{14}$$
one gets
$$\left\{ \frac{d}{ds} + \frac{f(\xi^2 - 1)}{2}\frac{d}{d\xi}\left[ (\eta^2 - 1)\frac{d}{d\eta} - \frac{\eta^{-1} + \eta}{2} \right] - \frac{f(1 + \xi^2)}{2}\frac{d}{d\xi}\left[ (1 + \eta^2)\frac{d}{d\eta} + \frac{\eta^{-1} - \eta}{2} \right] + 2f\,\xi\frac{d}{d\xi}\,\eta\frac{d}{d\eta} + 2r\,\eta\frac{d}{d\eta} \right\} \sum_{k=-\infty}^{\infty} h_k\, \xi^k \left( b_1 \sqrt{\eta} + \frac{b_2}{\sqrt{\eta}} \right) = 0. \tag{15}$$
Next, setting $w_k = b_1 h_k$ and $u_k = b_2 h_k$ in (15), the following set of linear differential equations is obtained:
$$\dot w_k - \frac{k+1}{s}\, u_{k+1} - \left( \frac{k}{s} - r \right) w_k = 0, \qquad \dot u_{k+1} + \frac{k}{s}\, w_k + \left( \frac{k+1}{s} - r \right) u_{k+1} = 0, \qquad k \in (-\infty, \infty). \tag{16}$$
Here $\dot w_k$ and $\dot u_k$ denote the total derivatives of the functions $w_k$ and $u_k$ with respect to the argument $s$. Note that these results are analogous to the ones obtained in [3].

Let us now solve these equations. The coupled equations for $u_{k+1}$ and $w_k$ are equivalent, by expressing $u_{k+1}$ in terms of $w_k$,
$$u_{k+1} = \frac{s}{k+1}\, \dot w_k - \frac{k - rs}{k+1}\, w_k, \tag{17}$$
to the single second-order equation
$$s\,\ddot w_k + 2\,\dot w_k - \left[ r^2 s - 2r(k+1) \right] w_k = 0. \tag{18}$$
The solution of (18) is given in closed form, in terms of the Kummer functions, as
$$w_k = e^{-rs} \left[ c_1(r)\, M(-k, 2, 2rs) + c_2(r)\, U(-k, 2, 2rs) \right], \tag{19}$$
where the $c_i(r)$, $i = 1, 2$, are constants; $M(-k, 2, 2rs)$ is the regular confluent hypergeometric (Kummer) function and $U(-k, 2, 2rs)$ is the Tricomi confluent hypergeometric function, tabulated in Table 1. These functions are widely known as the Kummer functions of the first and second kind, respectively, and are linearly independent solutions of the Kummer equation [7]. Finally, the corresponding function $u_{k+1}$, given by (17), takes the simple form
$$u_{k+1} = \frac{k}{k+1}\, e^{-rs} \left[ -c_1(r)\, M(-k+1, 2, 2rs) + c_2(r)\,(k+1)\, U(-k+1, 2, 2rs) \right]. \tag{20}$$
The next step is to choose an orthogonal basis of the infinite dimensional space. Consider the following functions,
$$v_k = \xi^k \left( \sqrt{\eta}\, w_k + \frac{1}{\sqrt{\eta}}\, u_{k+1} \right), \tag{21}$$
which are orthogonal by construction. The norm of such a function is given by
$$\int_{-\infty}^{1} \langle v_k, v_k \rangle\, ds = \int_{-\infty}^{1} \left( w_k^2 + u_{k+1}^2 \right) ds = N_k. \tag{22}$$
As can be observed from Table 1, the arbitrary constant $c_2(r)$ in (19) and (20) should be set equal to zero in order to avoid divergences of (22) at $s \to -\infty$. Also, the norm (22) is well-defined only for $k \in (-\infty, -2]$.
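The closed-form solution can be checked numerically. The hypothetical sketch below (not from the paper) evaluates $w_k$ and $u_{k+1}$ from (19) and (20) with $c_1 = 1$ and $c_2 = 0$ using SciPy's regular Kummer function, and verifies by finite differences that the first equation of the system (16) is satisfied; the values $k = -3$ and $r = 0.7$ are arbitrary test choices.

```python
# Hypothetical check: the Kummer solution (19)-(20), with c_1 = 1, c_2 = 0,
# satisfies the first equation of the coupled system (16).
import numpy as np
from scipy.special import hyp1f1          # regular Kummer function M(a, b, z)

def w(k, r, s):                           # Eq. (19) with c_1 = 1, c_2 = 0
    return np.exp(-r * s) * hyp1f1(-k, 2, 2 * r * s)

def u(k, r, s):                           # Eq. (20) with c_1 = 1, c_2 = 0
    return -(k / (k + 1)) * np.exp(-r * s) * hyp1f1(-k + 1, 2, 2 * r * s)

k, r = -3, 0.7                            # arbitrary test values, k <= -2
s = np.linspace(-5.0, -0.5, 9)
h = 1e-6
w_dot = (w(k, r, s + h) - w(k, r, s - h)) / (2 * h)    # central difference
residual = w_dot - (k + 1) / s * u(k, r, s) - (k / s - r) * w(k, r, s)
print(np.max(np.abs(residual)))           # ~0 up to finite-difference error
```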
Some particular examples of the values of the norm $N_k$ can be written in closed form; for instance, with $c_2(r) = 0$ one finds $N_{-2} = c_1^2(r) \int_{-\infty}^{1} e^{2rs} \left[ 1 + 4(1 + rs)^2 \right] ds$. Similarly to the finite case, the associated Higgs field may then be obtained via the generic expression
$$\Phi_{kk} = -\frac{i}{N_k} \int_{-\infty}^{1} s \left( w_k^2 + u_{k+1}^2 \right) ds. \tag{24}$$

Conclusions. In this paper we discuss the ADHMN construction in the case of the non-compact $sl_2$ algebra. More precisely, we propose a generalized version of the Weyl equation in terms of differential operators. This (non-compact) Weyl equation is solved explicitly for the infinite dimensional spin zero representation of $sl_2$, and the associated solutions are expressed in terms of the so-called Kummer functions. Also, a suitable infinite set of orthogonal functions is chosen and, in analogy to the finite case (see, for example, [3] and References therein), expressions for the relevant Higgs fields are proposed. These expressions have a simple and elegant form, and should correspond to a kind of infinite BPS monopole configuration.

The next natural step is to verify that our results satisfy the Bogomolny equation. However, in order to do so we need to implement azimuthal dependence in the spherically symmetric solution presented here, along the lines of [4], and thus obtain the solution of the full non-compact Weyl equation. This requires the identification of a suitable transformation [4] that reduces the full problem to the "diagonal" one treated here. This is arguably a highly non-trivial task, and will be pursued in full detail elsewhere, together with the physical description of the full solution. In any case, the results presented here are already of great significance, given that they provide solutions of the non-compact Weyl equation, opening the path to the study of novel infinite-type monopole configurations. To conclude, it would be interesting to investigate any possible relevance of our findings to previous results on the classical version of the Nahm equations related to infinite monopoles [8,10] and to SU(∞) Yang-Mills theories [11,12]. Note that in [13] the Nahm equations are associated to the classical $sl_2$ algebra (Poisson bracket structure) and are linear, whereas in our study we consider the quantum $sl_2$ algebra and deal with the corresponding spin zero infinite dimensional (non-compact) representation. One final comment is in order: one should not confuse the spin zero infinite dimensional representation utilized here with the $n \to \infty$ limit of the representation used, for example, in [3,4]. The representation employed in the present investigation is qualitatively different from the $n \to \infty$ case, having, for instance, a vanishing Casimir ($C = 0$), as opposed to the $n \to \infty$ case where $C \to \infty$.

A. Appendix. In what follows we briefly describe the equivalence between the matrix description of the Weyl equation, presented in full generality in Ref. [3], and the differential operator description attempted here. More precisely, we express the Weyl equation in terms of differential operators via the spin $S$ representation of $su_2$, for $S$ integer or half-integer. That is, we focus on the finite representation of dimension $n = 2S + 1$ and explicitly show the equivalence with the generic results obtained in [3] via the matrix description. Finally, assume for the function $v$ a generic form analogous to (11), with $h_k = h_k(r,s)$ and $b_i = b_i(r,s)$ for $i = 1, 2$. Setting $w_k = b_1 h_k$ and $u_k = b_2 h_k$ in (25), the following set of linear differential equations is obtained:
$$\dot u_1 - \left( \frac{n-1}{2s} + r \right) u_1 = 0, \tag{27}$$
$$\dot u_{k+1} + \frac{k-n}{s}\, w_k - \left( \frac{n-1-2k}{2s} + r \right) u_{k+1} = 0, \tag{28}$$
$$\dot w_k - \frac{k}{s}\, u_{k+1} + \left( \frac{n+1-2k}{2s} + r \right) w_k = 0, \tag{29}$$
$$\dot w_n + \left( \frac{1-n}{2s} + r \right) w_n = 0, \tag{30}$$
where $\dot u_i$ and $\dot w_i$ are the total derivatives of $u_i(r,s)$ and $w_i(r,s)$ with respect to the argument $s$. Equations (27) and (30) can be immediately integrated, and their solutions are
$$u_1 = \kappa_1(r)\, s^{\frac{n-1}{2}}\, e^{rs}, \qquad w_n = \kappa_2(r)\, s^{\frac{n-1}{2}}\, e^{-rs}. \tag{31}$$
Note that these solutions coincide with the ones found in [3]. The coupled equations (28) and (29) are equivalent, by expressing $u_{k+1}$ in terms of $w_k$,
$$u_{k+1} = \frac{1}{k} \left[ s\,\dot w_k + \left( \frac{n+1-2k}{2} + rs \right) w_k \right], \tag{32}$$
to the single second-order equation
$$s^2\,\ddot w_k + 2s\,\dot w_k - \left[ r^2 s^2 + (n-1-2k)\, rs + \frac{n^2 - 1}{4} \right] w_k = 0, \tag{33}$$
which may be solved by substituting $w_k = \frac{W_k}{s}$ and $z = 2rs$. The latter equation is then reduced to the familiar Whittaker equation,
$$\frac{d^2 W_k}{dz^2} + \left[ -\frac{1}{4} + \frac{2k - n + 1}{2z} + \frac{1 - n^2}{4z^2} \right] W_k = 0, \tag{34}$$
and the solution coincides with the $W_k$ found in [3]. The next step is to choose an orthogonal basis of the $n$-dimensional space. Consider the following functions,
$$v_1 = \frac{\xi^{-S}}{\sqrt{\eta}}\, u_1, \qquad v_k = \xi^{k-1-S} \left( \sqrt{\eta}\, w_k + \frac{1}{\sqrt{\eta}}\, u_{k+1} \right), \qquad v_n = \xi^{n-1-S}\, \sqrt{\eta}\, w_n, \qquad k \in \{2, \ldots, n-1\}, \tag{35}$$
which are orthogonal by construction. The norm of such a function is given by
$$\int_0^{n+1} \langle v_k, v_k \rangle\, ds = \int_0^{n+1} \left( w_k^2 + u_{k+1}^2 \right) ds = N_k, \tag{36}$$
and one may readily recover the Higgs field obtained in [3] from the formula
$$\Phi_{kk} = -\frac{i}{N_k} \int_0^{n+1} (s - n) \left( w_k^2 + u_{k+1}^2 \right) ds. \tag{37}$$
Remark: it is clear that the present description is equivalent to the one discussed in [3].
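Returning to the non-compact case, the finiteness claim below Eq. (22) can also be probed numerically. The sketch below is our addition (not in the paper); it truncates the lower limit of the norm integral at s = -60 as a stand-in for minus infinity and uses the solutions (19) and (20) with c_1 = 1 and c_2 = 0.

```python
# Numerical evaluation of the norm (22): N_k = int (w_k^2 + u_{k+1}^2) ds
# over (-inf, 1], approximated by the interval [-60, 1].
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp1f1

def integrand(s, k, r):
    w = np.exp(-r * s) * hyp1f1(-k, 2, 2 * r * s)                       # Eq. (19)
    u = -(k / (k + 1)) * np.exp(-r * s) * hyp1f1(-k + 1, 2, 2 * r * s)  # Eq. (20)
    return w * w + u * u

r = 1.0
for k in (-2, -3, -4):
    N_k, _ = quad(integrand, -60.0, 1.0, args=(k, r))
    print(k, N_k)          # finite for k <= -2, as stated below Eq. (22)
```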
Footnote 1: $\Gamma(a, z)$ is the complementary, or upper incomplete, Gamma function, defined by $\Gamma(a, z) = \int_z^{\infty} t^{a-1} e^{-t}\, dt$.

Table 1: Explicit expressions of the Kummer functions $M(-k, 2, 2rs)$ and $U(-k, 2, 2rs)$ for $k = -2, \ldots, -5$.

References:
W. Nahm, The construction of all self-dual multimonopoles by the ADHM method, in Monopoles in Quantum Field Theory, eds. N.S. Craigie, P. Goddard and W. Nahm (World Scientific, Singapore, 1982).
N.S. Manton and P.M. Sutcliffe, Topological Solitons, Cambridge Monographs on Mathematical Physics, Cambridge University Press (2004).
A. Doikou and T. Ioannidou, JHEP 1008, 105 (2010).
A. Doikou and T. Ioannidou, arXiv:1010.5076.
L.N. Lipatov, Sov. Phys. JETP 63, 904 (1986).
L.D. Faddeev and G.P. Korchemsky, Phys. Lett. B342, 311 (1995).
M. Abramowitz and I. Stegun, Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables, Dover, New York (1972).
R.S. Ward, Phys. Lett. B234, 81 (1990).
R.S. Ward, Class.
Quantum Grav. 7, L95 (1990); Class. Quantum Grav. 7, L217 (1990).
H. Garcia-Compean and J.F. Plebanski, Phys. Lett. A234, 5 (1997).
E.G. Floratos, J. Iliopoulos and G. Tiktopoulos, Phys. Lett. B217, 285 (1989).
D.B. Fairlie, P. Fletcher and C.K. Zachos, J. Math. Phys. 31, 1088 (1990).
R.S. Ward, J. Geom. Phys. 8, 317 (1992).
[]
[ "A new method to study the number of colors in the final-state interactions of hadrons", "A new method to study the number of colors in the final-state interactions of hadrons" ]
[ "Ling-Yun Dai ", "Ulf-G Meißner \nHelmholtz Institut für Strahlen-und Kernphysik and Bethe Center for Theoretical Physics\nUniversität Bonn\nD-53115BonnGermany\n", "\n1a Institute for Advanced Simulation\nInstitut für Kernphysik\nJülich Center for Hadron Physics\nForschungszentrum Jülich\nD-52425JülichGermany\n" ]
[ "Helmholtz Institut für Strahlen-und Kernphysik and Bethe Center for Theoretical Physics\nUniversität Bonn\nD-53115BonnGermany", "1a Institute for Advanced Simulation\nInstitut für Kernphysik\nJülich Center for Hadron Physics\nForschungszentrum Jülich\nD-52425JülichGermany" ]
[]
We match the ππ → ππ scattering amplitudes of Chiral Perturbation Theory with those from dispersion relations that respect analyticity and coupled channel unitarity, as well as accurately describing experiment. Their dependence on the number of colors (NC ) is obtained. By varying NC the trajectories of the poles and residues (the couplings to ππ) of light mesons, the σ, f0(980), ρ(770) and f2(1270) are investigated. Our results show that the method proposed is a reliable way to study the NC dependence in hadron-hadron scattering with final-state interactions.
10.1016/j.physletb.2018.06.071
[ "https://arxiv.org/pdf/1706.10123v3.pdf" ]
119,024,390
1706.10123
15ea81f275c14522c4a99bc2b70a2e876e045211
A new method to study the number of colors in the final-state interactions of hadrons (6 Jul 2017; arXiv:1706.10123; Dated: July 7, 2017)
Ling-Yun Dai and Ulf-G. Meißner
Helmholtz-Institut für Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Universität Bonn, D-53115 Bonn, Germany; Institute for Advanced Simulation, Institut für Kernphysik and Jülich Center for Hadron Physics, Forschungszentrum Jülich, D-52425 Jülich, Germany

Abstract: We match the ππ → ππ scattering amplitudes of Chiral Perturbation Theory with those from dispersion relations that respect analyticity and coupled channel unitarity, as well as accurately describing experiment. Their dependence on the number of colors ($N_C$) is obtained. By varying $N_C$, the trajectories of the poles and residues (the couplings to ππ) of the light mesons σ, f₀(980), ρ(770) and f₂(1270) are investigated. Our results show that the proposed method is a reliable way to study the $N_C$ dependence in hadron-hadron scattering with final-state interactions.

The lightest scalar mesons are rather interesting, as they have the same quantum numbers as the QCD vacuum. Their nature is still a mystery [1-4]. The phenomenology of these states is complicated due to the contribution of important final-state interactions (FSI) [5]. Dispersion relations are the natural way to include FSI, see e.g. [6,7]. For some of the light mesons, like the σ and κ, the existence has been confirmed [8-11], and accurate pole locations and ππ couplings, including also the ρ(770), have been given in Refs. [12-14]. Concerning the nature of the scalar mesons, there is a cornucopia of models [15-28]. Among them, the large-$N_C$ trajectories of the poles are an effective diagnostic to distinguish ordinary from non-ordinary quark-antiquark structure, as considered in [29-33]. However, these analyses, based on unitarized Chiral Perturbation Theory (UχPT), lack crossing symmetry; unitarization itself also generates spurious poles and cuts. In contrast, dispersion relations respect analyticity, but including coupled channel unitarity and the $N_C$ dependence is difficult. Clearly, both analyticity and coupled channel unitarity are critical in the region of the K̄K threshold, close to which the f₀(980) is located. To solve this problem, we use an Omnès representation based on the phase of the relevant amplitudes, rather than on the elastic phase shift [34,35].

There has been renewed interest in the study of the large-$N_C$ limit [36,37] of the properties of resonances [38-40]. Weinberg [41] pointed out that resonant tetraquark states could exist due to the contribution of the leading order (LO) 'connected' diagrams to the Green functions. Their widths are $O(N_C^{-1})$, as narrow as ordinary mesons. They could be even narrower, with widths of $O(N_C^{-2})$, when the flavors of the quarks are combined in different ways [42]. There are many other interesting discussions, such as [43,44] and references therein. In this paper we focus on establishing a 'practical' way to study the $N_C$ dependence of the scattering amplitudes, built into dispersion relations. Resonances appearing in the intermediate states are also studied.

In this letter we first use dispersive methods to obtain the ππ scattering amplitudes up to 2 GeV. We construct the amplitudes in a model-independent way, which is both analytic and respects coupled channel unitarity.
We also recalculate the analytical expressions of the IJ = 00, 02, 11 waves in SU(3) Chiral Perturbation Theory (χPT) up to $O(p^4)$. By expanding the amplitudes in the low-energy region, we match the dispersive and the χPT amplitudes and thereby introduce the $N_C$ dependence into the dispersive amplitudes. This $N_C$ dependence is automatically transferred to the high-energy region, where the FSI are implemented by the dispersion relation. We give the trajectories of the poles and residues by varying $N_C$. The behavior of the ρ(770), f₂(1270), σ(600) and f₀(980) shows that this is a reliable way to study the number of colors in hadron-hadron scattering. The $N_C$ trajectory of the light scalar mesons supports a mixed structure of hadronic molecule and q̄q components (for a recent review on hadronic molecules, see Ref. [45]).

We first present our IJ = 00, 02, 11 partial waves of ππ → ππ, calculated in a model-independent way. We start from
$$T^I_J(s) = P^I_J(s)\, \Omega^I_J(s), \tag{1}$$
where $\Omega^I_J(s)$ is the Omnès function [46],
$$\Omega^I_J(s) = \exp\left[ \frac{s}{\pi} \int_{s_{th}}^{\infty} ds' \, \frac{\varphi^I_J(s')}{s'(s' - s)} \right], \tag{2}$$
with $\varphi^I_J(s)$ the phase of the partial wave amplitude $T^I_J(s)$, which has been given in previous amplitude analyses [34,35]. This phase is known from experiment up to roughly 2 GeV. The function $P^I_J(s)$ includes the effect of the left-hand cut (l.h.c.) and corrections that come from the distant right-hand cut (r.h.c.) above 2 GeV. Other information is provided by chiral dynamics, which fixes the Adler zeros in the S-waves and the approach to threshold of the S-, P- and D-waves in terms of scattering lengths and effective ranges. We therefore parameterize $P^I_J(s)$ as
$$P^I_J(s) = (s - z^I_J)^{n_J} \sum_{k=1}^{n} \alpha^I_{J,k}\, (s - 4M_\pi^2)^{k-1}, \tag{3}$$
where $z^I_J$ is the Adler zero for the S-waves and $4M_\pi^2$ for P- and D-waves. The parameter $n_J$ is 1 for S- and P-waves and 2 for D-waves. The fitted parameters $\alpha_k$ are given in Table I; the resulting amplitudes are shown in Fig. 1. We fit the amplitudes in the region $s \in [0, 4\,\mathrm{GeV}^2]$, where the 'data' are as follows: χPT amplitudes in $[0, 4M_\pi^2]$, amplitudes of the K-matrix and Roy-like equations in $[4M_\pi^2, 2\,\mathrm{GeV}^2]$, and experimental data up to $4\,\mathrm{GeV}^2$. Though we do not include the left-hand cuts directly in our fit, their contribution is correctly implemented in the physical region, as we fit to the results of the Roy-like equations, which respect crossing symmetry. The fits are of high quality, even in our 'prediction' region $s \in [-4M_\pi^2, 0]$.

TABLE I. The fit parameters, as given in Eq. (3). The errors are given by MINUIT; notice that α₁ and α₂ are fixed by the scattering lengths and slope parameters [34,47,48]. The units of the α_k are chosen to ensure that the amplitude $T^I_J(s)$ is dimensionless.

        IJ = 0S       IJ = 0D      IJ = 1P
α₁     2.4051        0.2972       0.4283
α₂    -1.9451       -0.9354      -0.2976
α₃     1.5068(39)    2.3171(2)    0.6278(17)
α₄    -0.8590(20)   -2.8838(2)   -0.7225(11)
α₅     0.2942(6)     1.8719(1)    0.3820(4)
α₆    -0.0531(2)    -0.6520(1)   -0.0912(1)
α₇     0.0039(1)     0.1156(1)    0.0081(1)
α₈        -         -0.0082(1)       -

From these amplitudes we can extract the poles and residues on the second sheet. The residue $g_{f\pi\pi}$ and pole $s_R$ on the second Riemann sheet are defined by
$$T^{II}(s) = \frac{g_{f\pi\pi}^2}{s_R - s}. \tag{4}$$
The pole locations and their residues are listed in Table II. These are very similar to those of previous analyses [12,14,35,55,56]. For the f₀(1370) and f₂(1270), to find the pole closest to the physical sheet one needs to include ππ, K̄K and 4π as coupled channels. However, notice that a Breit-Wigner resonance will always have shadow poles on other sheets. Although ππ is the dominant decay channel of the f₂(1270), our f₂(1270) pole on the second sheet is not far from the physical one.

TABLE II. The pole locations and residues on the second Riemann sheet.

Having analytically calculated the partial wave amplitudes of IJ = 00, 11, 02 within one-loop SU(3) χPT, we can match our dispersive results to these and so fix their $N_C$ dependence.
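For orientation (not the authors' code), a minimal numerical sketch of the Omnès integral (2) is given below, assuming a toy input phase; the real analysis uses the measured phase of $T(s)$ up to 2 GeV. SciPy's 'cauchy' weight computes the required principal-value integral, and the cutoff s_cut is our own truncation of the upper limit.

```python
# Hypothetical sketch: evaluate the Omnes function (2) for a model phase.
import numpy as np
from scipy.integrate import quad

M_PI = 0.13957
S_TH = 4 * M_PI**2                 # two-pion threshold in GeV^2

def phi(sp):
    # toy phase in radians, a stand-in for the measured phase of T(s)
    return np.pi * (1.0 - np.exp(-(sp - S_TH) / 0.5))

def omnes(s, s_cut=400.0):
    """Omega(s) = exp[(s/pi) * PV int_{s_th}^{s_cut} phi(s')/(s'(s'-s)) ds']."""
    # weight='cauchy' computes the principal value of int f(s')/(s'-wvar) ds',
    # so the remaining factor f(s') = phi(s')/s' goes into the integrand.
    pv, _ = quad(lambda sp: phi(sp) / sp, S_TH, s_cut,
                 weight='cauchy', wvar=s)
    return np.exp(s / np.pi * pv)

print(omnes(0.5))                  # Omega at s = 0.5 GeV^2 for the toy phase
```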
We note several points about this matching. First, though the matching is done in the low-energy region, the $N_C$ dependence is transferred to the high-energy region: since the FSI of hadrons in the higher energy region correspond to 'hadron loop' corrections, they can be translated into higher order corrections (quark loops) in large-$N_C$ QCD, suppressed by an extra factor of $N_C^{-1}$. Second, all the χPT amplitudes are calculated up to $O(p^4)$; $O(p^6)$ and higher orders are ignored. This contains the $N_C$ dependence at orders $N_C^{-1}$ and $N_C^{-2}$. Consequently we only perform the matching up to $N_C^{-2}$. Third, in the χPT amplitudes the r.h.c. is given entirely by the one-loop integrals (the B functions) in the $O(p^4)$ amplitudes. This is why we do not consider a matching with only LO χPT amplitudes. The imaginary part of the χPT amplitudes is $O(N_C^{-2})$, while the real part is $O(N_C^{-1})$. From this one sees that the phase of the amplitudes entering the Omnès functions is $O(N_C^{-1})$. Indeed, any higher order $N_C$ dependence of the phases, such as $N_C^{-2}$, can be ignored, as it contributes only at $O(N_C^{-3})$ to the amplitudes $T^I_J(s)$ in Eq. (1). Fourth, we choose the matching points to be the Adler zero for the S-waves and $2M_\pi^2$ for the P- and D-waves. This avoids $s = 0$, where the l.h.c. starts, and the threshold, where the r.h.c. starts.

To match the dispersive and χPT amplitudes we expand each one. For the dispersive amplitude of Eq. (1) we first expand the function $P^I_J(s)$ as
$$P^I_J(s) = (s - z^I_J)^{n_J} \sum_{k=1}^{n} \beta^I_{J,k}\, (s - s^m_{IJ})^{k-1}, \tag{5}$$
where $z^I_J$ is given in Eq. (3), while $s^m_{IJ}$ is the matching point. One readily finds
$$\beta^I_{J,k} = \sum_{l=k}^{n} C^{k-1}_{l-1}\, \alpha^I_{J,l}\, (s^m_{IJ} - 4M_\pi^2)^{l-k}, \tag{6}$$
where $C^{k-1}_{l-1}$ are the binomial coefficients. With this transformation one can translate the parameters $\alpha^I_{J,l}$ of Table I into the $\beta^I_{J,k}$. For the Omnès function we define the moments
$$\omega_1(s^m_{IJ}) = \frac{1}{\pi}\int \frac{\varphi^I_J(s')\, ds'}{s'(s' - s^m_{IJ})}, \qquad \omega_k(s^m_{IJ}) = \frac{1}{\pi}\int \frac{\varphi^I_J(s')\, ds'}{(s' - s^m_{IJ})^k} \quad (k > 1). \tag{8}$$
We note that $\varphi^I_J(s)$ is proportional to $N_C^{-1}$, and find
$$\Omega^I_J(s) = 1 + \sum_{k} \omega_k(s^m_{IJ})\, (s - s^m_{IJ})^{k-1} + \cdots. \tag{9}$$
For the χPT amplitudes, which are only calculated up to $O(p^4)$, we expand up to $(s - s_m)^2$. Higher order χPT contributions, either from the higher order expansions of the B functions at $O(p^4)$ or from higher order loop corrections, are at least $N_C^{-2}$ or even weaker. As a result we simply have
$$\beta^I_{J,k} \sim \frac{1}{N_C^2}, \qquad k \geq 3. \tag{10}$$
Finally we obtain the $N_C$ dependence of the coefficients in Eq. (5). For the S-wave we have
$$P^I_S(s, N_C) = (s - s_0) \sum_{k=1}^{n} \frac{d^{(k)}_{IS}(N_C)}{d^{(k)}_{IS}(3)}\, \beta^I_{S,k}\, (s - s_0)^{k-1} + d^{(0)}_{IS}(N_C), \tag{11}$$
and for the P- and D-waves
$$P^I_J(s, N_C) = (s - 4M_\pi^2)^{J} \sum_{k=1}^{n} \frac{d^{(k)}_{IJ}(N_C)}{d^{(k)}_{IJ}(3)}\, \beta^I_{J,k}\, (s - s_0)^{k-1}. \tag{12}$$
The LECs and their $N_C$ dependence in χPT are given in [49,50]. We also use the relation $L_2 \sim 2L_1$, which is derived from the matching of χPT with RχT [57]. With these LECs and the input $M_\pi = 0.13957$ GeV, $M_K = 0.496$ GeV, $M_\eta = 0.54785$ GeV, $F_\pi = 0.0924$ GeV, and its $N_C$ dependence up to $N_C^{-1}$ [?], we finally get
$$\begin{aligned}
d^{(0)}_{0S}(N_C) &= -0.0077\,\tfrac{3}{N_C} + 0.0077\,\tfrac{9}{N_C^2}, &\quad
d^{(1)}_{0S}(N_C) &= 2.6205\,\tfrac{3}{N_C} + 6.7444\,\tfrac{9}{N_C^2}, \\
d^{(2)}_{0S}(N_C) &= -5.8358\,\tfrac{3}{N_C} + 19.2587\,\tfrac{9}{N_C^2}, &\quad
d^{(1)}_{1P}(N_C) &= 0.4062\,\tfrac{3}{N_C} - 0.6776\,\tfrac{9}{N_C^2}, \\
d^{(2)}_{1P}(N_C) &= 0.6126\,\tfrac{3}{N_C} - 1.8309\,\tfrac{9}{N_C^2}, &\quad
d^{(1)}_{0D}(N_C) &= -0.0864\,\tfrac{3}{N_C} + 0.4595\,\tfrac{9}{N_C^2}.
\end{aligned} \tag{13}$$
We are aware that these coefficients can be modified by higher order corrections, but this goes beyond the accuracy of the present calculation.

We test the stability of the amplitudes by varying $N_C$. The shape of the amplitudes is rather stable; only the magnitude varies dramatically, as shown in Fig. 1.
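A small sketch (ours, not the authors' code) of how the $N_C$ rescaling of Eqs. (11) and (13) acts on the I = 0 S-wave coefficients; the β values in the example call are placeholders, not the paper's fitted numbers.

```python
# Hypothetical sketch of the N_C rescaling in Eqs. (11) and (13) for the
# I=0 S-wave. The d-coefficients are the ones quoted in Eq. (13).
def d_coeff(a, b, nc):
    """d(N_C) = a*(3/N_C) + b*(9/N_C^2), the generic form of Eq. (13)."""
    return a * 3.0 / nc + b * 9.0 / nc ** 2

D_0S = {0: (-0.0077, 0.0077), 1: (2.6205, 6.7444), 2: (-5.8358, 19.2587)}

def scaled_betas_0S(betas, nc):
    """Coefficients d^(k)(N_C)/d^(k)(3) * beta_k entering Eq. (11); the
    additive constant d^(0)(N_C) is returned separately."""
    scaled = [d_coeff(*D_0S[k], nc) / d_coeff(*D_0S[k], 3) * b
              for k, b in enumerate(betas, start=1)]
    return scaled, d_coeff(*D_0S[0], nc)

print(scaled_betas_0S([1.0, -0.5], nc=5))   # placeholder beta_1, beta_2
```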
This is because in our model an $N_C^{-1}$ factor appears in the phase entering the Omnès function, and an exponential function of course converges fast. Consequently, poles move rapidly, either towards the real energy axis or far away from it, much faster than in UχPT. We only match to $O(p^4)$ and lack the more accurate $N_C$ dependence with which the peak of $|T|$ would behave as $O(1)$ for the P- and D-waves. This is because for Breit-Wigner particles there is a zero of the real part of the amplitude at $s = M_R^2$, where the $N_C$ dependence cancels, leading to $T \simeq i/\rho$. However, our pole trajectories suggest that this has no obvious influence on the amplitudes located elsewhere: the zero is absent and $|T|$ is still $O(N_C^{-1})$.

The trajectories of the pole locations and their residues $g_{f\pi\pi}$ on the second Riemann sheet are obtained from Eq. (4) and plotted in Fig. 2. To determine the uncertainties of our trajectories, we choose different matching points in $(0, 8M_\pi^2)$ and present the results in Fig. 2. They are similar to each other. This is consistent with the assumption that the matching does not depend on the details of the matching points. Next, we discuss the various resonances within the accuracy of our approach.

ρ(770): The pole trajectory of the ρ(770) moves towards the real axis, similarly to what was found using UχPT [30,32,33]. The modulus of the residue decreases as $N_C$ increases. Such behavior confirms the widely accepted q̄q structure.

f₂(1270): For the f₂(1270) our trajectory is quite similar to that of the ρ(770). Again this confirms a q̄q structure. $O(p^6)$ χPT amplitudes would be required to get a more precise $N_C$ dependence.

σ(600): For the σ(600), the mass is $O(1)$ and the width is $O(N_C)$. To acquire such a large width it could have a molecular component [42]. Notice that in [33,35] the shadow pole on the third sheet suggests a q̄q component, so the σ is likely to be a mixed state including molecule, q̄q, and other components. The modulus of the residue increases, reaching a peak at roughly $N_C = 3.5$, and then decreases. This implies that the residue should contain not only $O(1)$ but also $O(N_C^{-1/2})$ or even $O(N_C^{-1})$ pieces. Such curved behavior of the trajectory is consistent with a mixed structure of molecule and q̄q. The relative strengths of these components cannot be inferred from the analysis presented here.

f₀(980): For the f₀(980), the pole moves rapidly to the real axis, slightly below the K̄K threshold. This is similar to the ρ(770) and f₂(1270), implying an s̄s component. In contrast, in Refs. [33,59] the pole moves to the real axis above the K̄K threshold and goes onto the fourth Riemann sheet. We may need higher order $1/N_C$ corrections, especially those caused by kaon loops, to obtain a more accurate pole trajectory. The residue behaves like that of the σ(600): it increases at first and then decreases, as a 'curve', implying that it must be a combination of $N_C^{-1/2}$, $N_C^{-1}$, etc. Note that it is most likely a K̄K molecule in other analyses [2,33,35]. Our findings support the idea that the f₀(980) is a mixture of K̄K molecular and s̄s components. At present, we cannot quantify the relative strengths of these components.
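To illustrate the expected q̄q behavior in the simplest setting, the toy sketch below (our addition, not the paper's amplitude) tracks a Breit-Wigner pole whose width scales like $N_C^{-1}$, which is the pattern the ρ(770) and f₂(1270) trajectories follow.

```python
# Toy illustration: a Breit-Wigner pole with width ~ 1/N_C, mimicking the
# qbar-q behavior of the rho(770); mass and width values are placeholders.
import numpy as np

M, GAMMA3 = 0.77, 0.15            # toy mass and width (GeV) at N_C = 3
for nc in (3, 6, 12, 24):
    gamma = GAMMA3 * 3.0 / nc     # Gamma ~ 1/N_C for an ordinary meson
    pole = (M - 0.5j * gamma) ** 2            # second-sheet pole position s_R
    print(nc, np.round(np.sqrt(pole), 4))     # sqrt(s_R) approaches the real axis
```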
In this letter we present a new method to study the large-$N_C$ behaviour of resonances. The ππ scattering amplitude with FSI is constructed in a model-independent way. We match it with the amplitude of χPT in the low-energy region, which gives the $N_C$ dependence of the coefficients and of the phase in the dispersive approach. This is a reliable way to study the $N_C$ dependence in the presence of final-state interactions. We obtain the trajectories of poles and residues as $N_C$ changes. Those of both the ρ(770) and f₂(1270) support, as expected, the standard q̄q structure. In contrast, the $N_C$ trajectories of the light scalar mesons, σ and f₀(980), are consistent with each being a combination of q̄q and multi-hadron (molecular) components. We stress that some of these conclusions might be modified when higher order corrections in the χPT amplitudes are accounted for.

We are very grateful to M.R. Pennington for many valuable suggestions and discussions to improve the paper. Special thanks to J. Ruiz de Elvira for his thoughtful and critical reading of the manuscript. Helpful discussions with Z.H. Guo are also acknowledged. This work is supported by the DFG (SFB/TR 110, "Symmetries and the Emergence of Structure in QCD") and by the Chinese Academy of Sciences (CAS) President's International Fellowship Initiative (PIFI) (Grant No. 2017VMA0025).

FIG. 1. The fits to the ππ scattering amplitudes are shown in the left column. The violet and magenta bands are from the K-matrix [35]. The olive and light grey bands in the low-energy region are from χPT [33,49-52]. The cyan and green bands are from CFDIV [47]. The CERN-Munich data are from [53], and the OPE and OPE-DP data are from [54]. The absolute values of the amplitudes for varying $N_C$ are shown in the right column; the black solid, magenta dashed, and blue dotted lines are for $N_C$ = 3, 4, 5, respectively.

FIG. 2. The trajectories of pole locations and their residues for varying $N_C$. The black filled circles and lines represent poles and residues, respectively. The matching points are chosen as the Adler zero for the S-waves and $2M_\pi^2$ for the P- and D-waves. The step $\Delta N_C$ is 0.01 (just for illustration; of course, $N_C$ should be an integer). The magenta dashed, red dotted, orange dash-dotted, olive dashed, blue dotted, and dark cyan lines represent the results for different matching points: $M_\pi^2$, $2M_\pi^2$, $3M_\pi^2$, $5M_\pi^2$, $6M_\pi^2$, $7M_\pi^2$ for the S-wave, and $s_{a^0_S}$, $M_\pi^2$, $3M_\pi^2$, $5M_\pi^2$, $6M_\pi^2$, $7M_\pi^2$ for the P- and D-waves, respectively. Note that for the matching points above threshold we have added $i\,0.001$ to avoid the singularity.

References:
M.R. Pennington, Mod. Phys. Lett. A22, 1439 (2007), arXiv:0705.3314 [hep-ph].
M.R. Pennington, AIP Conf. Proc. 1257, 27 (2010), arXiv:1003.2549 [hep-ph].
R.L. Jaffe, Phys. Rev. D15, 267 (1977); Phys. Rept. 409, 1 (2005).
J.R. Pelaez, Phys. Rept. 658, 1 (2016), arXiv:1510.00653 [hep-ph].
K.L. Au, D. Morgan and M.R. Pennington, Phys. Rev. D35, 1633 (1987); D. Morgan and M.R. Pennington, Phys. Rev. D48, 1185 (1993).
X.W. Kang, B. Kubis, C. Hanhart and U.-G. Meißner, Phys. Rev. D89, 053015 (2014), arXiv:1312.1193 [hep-ph].
Y.H. Chen, J.T. Daub, F.K. Guo, B. Kubis, U.-G. Meißner and B.S. Zou, Phys. Rev. D93, 034030 (2015), arXiv:1512.03583 [hep-ph].
Z.G. Xiao and H.Q. Zheng, Nucl. Phys. A695, 273 (2001), arXiv:hep-ph/0011260.
G. Colangelo, J. Gasser and H. Leutwyler, Nucl. Phys. B603, 125 (2001), arXiv:hep-ph/0103088.
Z.Y. Zhou, G.Y. Qin, P. Zhang, Z.G. Xiao, H.Q. Zheng and N. Wu, JHEP 0502, 043 (2005), arXiv:hep-ph/0406271.
S. Descotes-Genon and B. Moussallam, Eur. Phys. J. C48, 553 (2006), arXiv:hep-ph/0607133.
I. Caprini, G. Colangelo and H. Leutwyler, Phys. Rev. Lett. 96, 132001 (2006), arXiv:hep-ph/0512364.
P. Büttiker, S. Descotes-Genon and B. Moussallam, Eur. Phys. J. C33, 409 (2004), arXiv:hep-ph/0310283.
R. García-Martín, R. Kamiński, J.R. Peláez and J. Ruiz de Elvira, Phys. Rev. Lett. 107, 072001 (2011), arXiv:1107.1635 [hep-ph].
E. van Beveren, T.A. Rijken, K. Metzger, C. Dullemond, G. Rupp and J.E. Ribeiro, Z. Phys. C30, 615 (1986), arXiv:0710.4067 [hep-ph].
D. Black, A.H. Fariborz, F. Sannino and J. Schechter, Phys. Rev. D59, 074026 (1999), arXiv:hep-ph/9808415; D. Black, A.H. Fariborz and J. Schechter, Phys. Rev. D61, 074001 (2000), arXiv:hep-ph/9907516.
V. Baru, J. Haidenbauer, C. Hanhart, Yu. Kalashnikova and A. Kudryavtsev, Phys. Lett. B586, 53 (2004), arXiv:hep-ph/0308129.
L. Maiani, F. Piccinini, A.D. Polosa and V. Riquer, Phys. Rev. Lett. 93, 212002 (2004), arXiv:hep-ph/0407017; Eur. Phys. J. C50, 609 (2007), arXiv:hep-ph/0604018.
G. 't Hooft, G. Isidori, L. Maiani, A.D. Polosa and V. Riquer, Phys. Lett. B662, 424 (2008), arXiv:0801.2288 [hep-ph].
Z.Y. Zhou and Z.G. Xiao, Phys. Rev. D83, 014010 (2011), arXiv:1007.2072 [hep-ph].
G. Mennessier, S. Narison and X.G. Wang, Phys. Lett. B688, 59 (2010), arXiv:1002.1402 [hep-ph];
Phys. Lett. B696, 40 (2011), arXiv:1009.2773 [hep-ph].
Z.H. Guo and J.A. Oller, Phys. Rev. D84, 034005 (2011), arXiv:1104.2849 [hep-ph].
Z.H. Guo, J.A. Oller and J. Ruiz de Elvira, Phys. Lett. B712, 407 (2012), arXiv:1203.4381 [hep-ph]; Phys. Rev. D86, 054006 (2012), arXiv:1206.4163 [hep-ph].
J. Nebreda, J.T. Londergan, J.R. Peláez and A.P. Szczepaniak, arXiv:1403.2790 [hep-ph].
T. Cohen, F.J. Llanes-Estrada, J.R. Pelaez and J. Ruiz de Elvira, Phys. Rev. D90, 036003 (2014), arXiv:1405.4831 [hep-ph].
Z.-H. Guo, U.-G. Meißner and D.-L. Yao, Phys. Rev. D92, 094008 (2015), arXiv:1507.03123 [hep-ph].
Z.H. Guo and J.A. Oller, Phys. Rev. D93, 096001 (2016), arXiv:1508.06400 [hep-ph].
R.A. Briceno, J.J. Dudek, R.G. Edwards and D.J. Wilson, Phys. Rev. Lett. 118, 022002 (2017), arXiv:1607.05900 [hep-ph].
J.R. Pelaez, Phys. Rev. Lett. 92, 102001 (2004), arXiv:hep-ph/0309292.
J.R. Pelaez and G. Rios, Phys. Rev. Lett. 97, 242002 (2006), arXiv:hep-ph/0610397.
Z.X. Sun et al., Mod. Phys. Lett. A22, 711 (2007), arXiv:hep-ph/0503195.
J. Ruiz de Elvira, J.R. Pelaez, M.R. Pennington and D.J. Wilson, Phys. Rev. D84, 096006 (2011), arXiv:1009.6204 [hep-ph].
L.Y. Dai, X.G. Wang and H.Q. Zheng, Commun. Theor. Phys. 57, 841 (2012), arXiv:1108.1451 [hep-ph]; Commun. Theor. Phys. 58, 410 (2012), arXiv:1206.5481 [hep-ph].
L.Y. Dai, V. Mathieu, E. Passemar, M.R. Pennington and A. Szczepaniak, in preparation.
L.Y. Dai and M.R. Pennington, Phys. Lett. B736, 11 (2014), arXiv:1403.7514 [hep-ph]; Phys. Rev. D90, 036004 (2014), arXiv:1404.7524 [hep-ph].
G. 't Hooft, Nucl. Phys. B72, 461 (1974).
G. 't Hooft, Nucl. Phys. B75, 461 (1974).
E. Witten, Nucl. Phys. B160, 57 (1979).
S.R. Coleman, Aspects of Symmetry (Cambridge Univ. Press, Cambridge, England, 1985), pp. 377-378.
T.D. Cohen, Phys. Lett. B427, 348 (1998), arXiv:hep-ph/9801316.
S. Weinberg, Phys. Rev. Lett. 110, 261601 (2013), arXiv:1303.0342 [hep-ph].
M. Knecht and S. Peris, Phys. Rev. D88, 036016 (2013), arXiv:1307.1273 [hep-ph].
R.F. Lebed, Phys. Rev. D88, 057901 (2013), arXiv:1308.2657 [hep-ph].
T.D. Cohen and R.F. Lebed, Phys. Rev. D89, 054018 (2014), arXiv:1401.1815 [hep-ph]; Phys. Rev. D90, 016001 (2014), arXiv:1403.8090 [hep-ph].
F.K. Guo, C. Hanhart, U.-G. Meißner, Q. Wang, Q. Zhao and B.S. Zou, arXiv:1705.00141 [hep-ph].
R. Omnès, Nuovo Cim. 8, 316 (1958).
R. García-Martín, R. Kamiński, J.R. Peláez, J. Ruiz de Elvira and F.J. Ynduráin, Phys. Rev. D83, 074004 (2011), arXiv:1102.2183 [hep-ph].
M.M. Nagels et al., Nucl. Phys. B147, 189 (1979); J. Gasser and H. Leutwyler, Ann. Phys. (NY) 158, 142 (1984).
J. Gasser and H. Leutwyler, Nucl. Phys. B250, 465 (1985).
A.G. Nicola and J.R. Pelaez, Phys. Rev. D65, 054009 (2002), arXiv:hep-ph/0109056.
J. Bijnens, G. Colangelo and J. Gasser, Nucl. Phys. B427, 427 (1994).
B. Hyams et al., Nucl. Phys. B64, 134 (1973); G. Grayer et al., Nucl. Phys. B75, 189 (1974); B. Hyams et al., Nucl. Phys. B100, 205 (1975).
N.B. Durusoy, M. Baubillier, R. George, M. Goldberg and A.M. Touchard, Phys. Lett. B45, 517 (1973).
M. Döring, U.-G. Meißner, E. Oset and A. Rusetsky, Eur. Phys. J. A47, 139 (2011), arXiv:1107.3988 [hep-lat]; Eur. Phys. J. A48, 114 (2012), arXiv:1205.4838 [hep-lat].
C. Patrignani et al. [PDG], Chin. Phys. C40, 100001 (2016).
G. Ecker, J. Gasser, A. Pich and E. de Rafael, Nucl. Phys. B321, 311 (1989).
T. Ledwig, J. Nieves, A. Pich, E. Ruiz Arriola and J. Ruiz de Elvira, Phys. Rev. D90, 114020 (2014), arXiv:1407.3750 [hep-lat].
M. Uehara, arXiv:hep-ph/0404221.
[]
[ "Light propagation and fluorescence quantum yields in liquid scintillators", "Light propagation and fluorescence quantum yields in liquid scintillators" ]
[ "C Buck \nMax-Planck-Institut für Kernphysik\nSaupfercheckweg 169117HeidelbergGermany\n", "B Gramlich \nMax-Planck-Institut für Kernphysik\nSaupfercheckweg 169117HeidelbergGermany\n", "S Wagner \nMax-Planck-Institut für Kernphysik\nSaupfercheckweg 169117HeidelbergGermany\n" ]
[ "Max-Planck-Institut für Kernphysik\nSaupfercheckweg 169117HeidelbergGermany", "Max-Planck-Institut für Kernphysik\nSaupfercheckweg 169117HeidelbergGermany", "Max-Planck-Institut für Kernphysik\nSaupfercheckweg 169117HeidelbergGermany" ]
[]
For the simulation of the scintillation and Cherenkov light propagation in large liquid scintillator detectors a detailed knowledge about the absorption and emission spectra of the scintillator molecules is mandatory. Furthermore reemission probabilities and quantum yields of the scintillator components influence the light propagation inside the liquid. Absorption and emission properties are presented for liquid scintillators using 2,5-Diphenyloxazole (PPO) and 4-bis-(2-Methylstyryl)benzene (bis-MSB) as primary and secondary wavelength shifter. New measurements of the quantum yields for various aromatic molecules are shown.
10.1088/1748-0221/10/09/p09007
[ "https://arxiv.org/pdf/1509.02327v1.pdf" ]
118,653,021
1509.02327
1e2813c29c2d34a1131502d38a673285ac3a532f
Light propagation and fluorescence quantum yields in liquid scintillators
C. Buck, B. Gramlich and S. Wagner
Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany
Keywords: Scintillators; Liquid detectors; Large detector

Abstract: For the simulation of the scintillation and Cherenkov light propagation in large liquid scintillator detectors, a detailed knowledge about the absorption and emission spectra of the scintillator molecules is mandatory. Furthermore, reemission probabilities and quantum yields of the scintillator components influence the light propagation inside the liquid. Absorption and emission properties are presented for liquid scintillators using 2,5-Diphenyloxazole (PPO) and 4-bis-(2-Methylstyryl)benzene (bis-MSB) as primary and secondary wavelength shifter. New measurements of the quantum yields for various aromatic molecules are shown.

1. Introduction
In many particle physics experiments using organic liquid scintillators, precise knowledge of the parameters affecting the propagation of scintillation light inside the detector is required. This knowledge is needed to estimate the detector response and to simulate detector signals under various conditions. In particular, for large liquid scintillator (LS) detectors on the ton scale, basic properties of the LS molecules, such as the absorption and emission spectra or the fluorescence quantum yields, need to be known. Such detectors are for example used in neutrino physics, since the cross sections for neutrino detection reactions are tiny and therefore large target masses are needed. The understanding of light propagation, in combination with calibration data, paves the way for the determination of the energy deposited inside the detector by ionizing particles.

In chapter 2 we describe the light absorption and emission properties of some substances typically used in LS neutrino experiments. In databases these spectra are normally given assuming low concentrations and an inert medium around the fluorescent molecules. Both conditions are not fulfilled for standard liquid scintillators, which can be affected by solvent and concentration effects. Therefore the spectra as measured in the corresponding solvent and at a given concentration might be more relevant. In addition, the spectrum can change as the light propagates through the medium. In chapter 3 we present results of our new measurements of the fluorescence quantum yields for several molecules used in various LS neutrino detectors. It is hard to control the systematic uncertainties in such measurements, which might be the reason why the literature values of the absolute numbers are spread over a wide range, with significant deviations from each other. The agreement between quantum yield ratios of molecules measured in our setup and the corresponding ratios in other publications is found to be more robust.
2. Scintillator absorption and emission
Ionizing particles traversing liquid scintillators mainly excite the solvent molecules of the liquid. Deexcitation can occur essentially via the emission of fluorescence light or via non-radiative energy transfer to another molecule. Since the aromatic solvent molecules are not transparent for their own fluorescence light, in particular in large scale LS detectors, primary and sometimes secondary wavelength shifters are added to the liquid. Most modern experiments use PPO (2,5-Diphenyloxazole) as primary and bis-MSB (4-bis-(2-Methylstyryl)benzene) as secondary wavelength shifter.

For the modelling of light propagation inside the LS it is important to know on which molecule a photon is absorbed. UV light at wavelengths below 280 nm is mainly absorbed by the solvent molecules. Benzene derivatives typically have absorption bands around 260 nm and reemit light around 300 nm [1]. Ideally, the region of solvent emission matches the absorption peak of the primary wavelength shifter, which dominates the absorption in the mixture from around 280-350 nm. The scintillation light seen by the photomultipliers in a large scale LS detector is above 350 nm. In this region the secondary wavelength shifter typically dominates the absorption. Therefore it is very important to have a high quantum yield for these molecules, since this parameter determines the reemission probability after absorption of scintillation light in the liquid. The absorption of secondary wavelength shifters like bis-MSB becomes negligible in the region above 430 nm. There, the impurity levels in the chemicals are most relevant for the attenuation length; light absorbed by the impurities is typically not reemitted. The regions of main absorption of the individual LS components are illustrated in Figure 1.

The emission spectra of PPO and bis-MSB diluted in cyclohexane at concentrations of 0.5 mg/l are shown in Figure 2. In addition, the emission spectrum of a LS mixture, measured in a 2 mm cell in the so-called front face geometry, is plotted. In this geometry, the emission spectrum of the sample is recorded on the same side where the excitation takes place. The sample contains 3 g/l PPO and 20 mg/l bis-MSB dissolved in pure ortho-phenylxylylethane (o-PXE), a high flashpoint scintillator solvent used e.g. in the Borexino CTF [2] and in the Double Chooz experiment [3]. This sample was excited at 260 nm, where the light mainly excites the o-PXE.
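The "which molecule absorbs where" bookkeeping described above can be sketched as follows; the attenuation coefficients below are illustrative Gaussian toy bands (our assumption), not measured values.

```python
# Toy sketch: given model attenuation coefficients (1/m) for solvent, PPO and
# bis-MSB, report the dominant absorber in each wavelength band (cf. Figure 1).
import numpy as np

wl = np.arange(250, 451, 10)                  # wavelength grid in nm

def band(center, width, peak):                # crude Gaussian absorption band
    return peak * np.exp(-0.5 * ((wl - center) / width) ** 2)

mu = {'solvent': band(260, 15, 50.0),
      'PPO':     band(305, 25, 30.0),
      'bis-MSB': band(345, 30, 5.0)}

for i, w in enumerate(wl):
    print(w, max(mu, key=lambda m: mu[m][i]))  # dominant absorber at each wl
```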
In detector simulations it is normally better to use the emission spectrum after non-radiative transfer instead of the ones of the single molecules [4]. Otherwise the simulation might assume radiative energy transfer, which differs from non-radiative transfer in several aspects as e.g. the efficiency. Non-radiative transfer is much harder to simulate due to contribution of multiple processes and complicated microphysics [5]. As the light propagates through the liquid and gets absorbed and reemitted by the secondary wavelength shifter, the emission spectrum changes and shifts to longer wavelengths, since the absorbed photons can only be reemitted at equal or lower energies. Figure 3 shows again the LS spectrum of Figure 2. In addition, the spectrum of the same PXE based scintillator mixture is plotted for a case when the light had to pass through the liquid for few millimetres. In this spectrum the PPO component has essentially disappeared and the light is mainly emitted in the bis-MSB region above 400 nm. At longer pathlengths of several centimetres or even metres the spectrum will be even more shifted to longer wavelengths. To predict the response of light sensors such as photomultiplier tubes (PMTs) the wavelength dependent quantum efficiency of the device as well as the light spectrum at the PMT position needs to be known. As we see from Figure 3, the distance between the point of light creation and PMT position is relevant for the LS emission spectrum. Scintillation photons from 350 − 430 nm are mainly absorbed by the secondary wavelength shifter as described above. At shorter wavelength the amount of scintillation light is negligible. Nevertheless, the reemission probability below 350 nm is of interest as well, in case of Cherenkov light production. Good modelling of the Cherenkov contribution can be of crucial importance to understand the non-linearity in the energy scale of a detector [4]. Photons emitted after light absorption below 350 nm mainly originate from excited PPO molecules. Here the PPO molecules are excited either directly (280 − 350 nm) or indirectly via non-radiative energy transfer from the solvent (< 280 nm). Nevertheless, the reemission probability does not only depend on the fluorescence quantum yield of PPO, in particular for wavelengths lower than 300 nm. It is also influenced by the interaction mechanisms between molecules. The reemission probability in the LS has a rising trend with increasing wavelength and converges to the fluorescence quantum yield of PPO before bis-MSB absorption and emission take over [4]. Therefore the quantum yield not only of the secondary, but also of the primary wavelength shifter is an important parameter for the modelling of Cherenkov light. The secondary wavelength shifter is added typically at low concentrations to the scintillator and the emission spectrum is shifted relative to the absorption bands of the other molecules. Therefore, the reemission probability above 350 nm is less affected by self-interactions or solvent effects and can be approximated by the bis-MSB fluorescence quantum yield. Fluorescence quantum yields The literature values of fluorescence quantum yields, in particular for PPO and bis-MSB, vary over a wide range. In many databases the values of Berlman [1] are quoted. However, these values might overestimate the true numbers. For the quantum yield of one of the samples "Berlman has rather arbitrarily assigned a value of 1.00" [6]. 
3. Fluorescence quantum yields
The literature values of fluorescence quantum yields, in particular for PPO and bis-MSB, vary over a wide range. In many databases the values of Berlman [1] are quoted. However, these values might overestimate the true numbers: for the quantum yield of one of the samples, "Berlman has rather arbitrarily assigned a value of 1.00" [6]. This sample was used as a standard, and all other numbers in the Berlman handbook [1] refer to it. The literature values we found for PPO range from 0.71 to 1.00, those for bis-MSB from 0.75 to 0.96 [1,7,8,9]. The reasons why it is so hard to reproduce those results are manifold: the determination of the quantum yield strongly depends on many parameters, such as sample concentration, solvent choice, oxygen content, excitation/emission wavelength dependencies, sample geometry, self-absorption and instrumental effects. Since the values are typically determined in relative measurements, a rather large uncertainty on the knowledge of the quantum yield standard also enters the measurement. Therefore it is useful to have several independent measurements using different setups.

For our measurements we diluted the samples in cyclohexane. This rather inert solvent has the advantage of high transparency and optical purity. The PPO and bis-MSB samples were diluted to a concentration of 0.5 mg/l. The absorption spectrum was measured using a Varian Cary400 UV/Vis photospectrometer. To get the emission spectrum, a Varian CaryEclipse fluorimeter was used. The measurements were done in a 1 cm quartz cell. Oxygen was removed before the measurements by nitrogen bubbling; after bubbling, the cells were closed with a sealed cap. To avoid any influence of variations in the light intensity at different excitation wavelengths, we extracted the quantum yield using the same excitation wavelength in the sample and the standard. As standard we used quinine sulfate (1.5 · 10⁻⁵ M) in 0.1 M sulfuric acid. This quinine sulfate reference fulfills several criteria of an adequate standard, such as a good separation of broad absorption and emission bands, which lie in wavelength regions similar to those of the samples. Furthermore, it is rather insensitive to oxygen or concentration quenching and stable in solution. Its quantum yield Φ_r of 0.55 [10] is rather accurately known and constant over the investigated excitation wavelengths (290-380 nm) [11]. However, there is a temperature dependence of the yield [10]. The measurements were done at a room temperature of about 21-25 °C; in this range, corrections to the reported quantum yield of the standard should be less than 1 %. Since the solvent in the sample and the solvent in the standard are different, a refractive index correction had to be applied. The quantum yields of the samples Φ_x were calculated using
$$\Phi_x = \Phi_r\, \frac{B_r}{B_x}\, \frac{I_x}{I_r}\, \frac{n_x^2}{n_r^2}. \tag{3.1}$$
In this equation, B_x and B_r correspond to the fraction of light absorbed in the sample (x) and the standard (r). The values of B were kept below 20 % in sample and reference. The wavelength-integrated intensities of the emission spectra, I_x and I_r, were corrected for the instrumental wavelength-dependent detection efficiency. For the refractive index of cyclohexane we used n_x = 1.43, and for that of the standard n_r = 1.34.

The PPO quantum yield was determined at an excitation wavelength of 290 nm and from 300 to 330 nm in 5 nm steps; an average value of 0.842 was found. For bis-MSB the result is 0.863, also an average over the wavelength range from 300 to 380 nm (5 nm steps from 300-350 nm, 10 nm steps from 350-380 nm). Both values are stable within the wavelength range covered by the measurements; the variations from the average value were within 3 % for all wavelengths. As an additional data point, the PPO yield was checked at an excitation wavelength of 350 nm.
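Equation (3.1) translates directly into code; the numerical inputs in the example call below are placeholders, not the measured intensities.

```python
# Sketch of the relative quantum-yield determination, Eq. (3.1).
def quantum_yield(phi_ref, B_ref, B_x, I_x, I_ref, n_x, n_ref):
    """Phi_x = Phi_r * (B_r/B_x) * (I_x/I_r) * (n_x^2/n_r^2)."""
    return phi_ref * (B_ref / B_x) * (I_x / I_ref) * (n_x / n_ref) ** 2

# quinine sulfate standard (Phi_r = 0.55) vs. a sample in cyclohexane;
# B and I values are illustrative placeholders.
print(quantum_yield(phi_ref=0.55, B_ref=0.15, B_x=0.12,
                    I_x=1.9e6, I_ref=1.1e6, n_x=1.43, n_ref=1.34))
```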
To obtain a reasonable absorption at this wavelength, the PPO concentration in cyclohexane was increased by more than two orders of magnitude, to 4.5 · 10⁻⁴ M. Even at these high concentrations, no indication of any decrease in the quantum yield could be found in the tail of the absorption spectrum. Since the emission of the bis-MSB light is mainly above 400 nm, it would be interesting to test the yield in that region as well, to obtain the reemission probability for self-absorbed photons. However, the bis-MSB absorption drops rapidly in that area, and much higher concentrations would be needed to determine those values. This might induce systematic effects influencing the precision of the results. In addition, the quantum yield of the standard is not reliable any more above 400 nm, and one would need to compare to data taken at lower excitation wavelengths, which requires additional correction factors, increasing the uncertainty further. For those reasons we restricted the determination of the quantum yields to wavelengths up to 380 nm.

To check systematic effects, the measurements were repeated varying some of the relevant parameters. To investigate effects of concentration quenching or self-absorption, the yield was determined at lower concentrations (a factor of 2). In addition, the cell geometry was modified, using also a thinner cell of 4 mm length and 1 cm width. The results obtained all agree within a 1σ error of 5 %, which was estimated from the systematic contributions listed above. In particular, this 5 % relative error covers the uncertainties related to the precision of the reference yield, the knowledge of the refractive indices, and quenching effects due to inefficient oxygen removal. After oxygen removal in our samples, the quantum yield of bis-MSB increased by 7 %; a weaker effect was found for PPO.

The ratio of the bis-MSB to PPO quantum yields is 1.02 in our results. This is in reasonable agreement with the ratio of Berlman, which is 0.94 [1], assuming the Berlman ratio also has an uncertainty in the 5 % range. However, our absolute values are significantly lower, which might be explained by the fact that Berlman overestimated the yield of his reference, as already suggested by Demas and Crosby [6]. On the other hand, our results are higher than the ones measured by members of the Borexino Collaboration [9,8], but also here we find reasonable relative agreement for the Φ_bis-MSB/Φ_PPO ratio. If we compare the results presented here with the values of Xiao et al. [7], we find good agreement for the absolute value of PPO; the yield for bis-MSB, however, is about 10 % lower in our case. Part of the reason could be a strong solvent effect, since the solvent for bis-MSB in reference [7] is linear alkyl benzene (LAB) instead of cyclohexane.

To study such solvent effects we also repeated the measurements replacing cyclohexane by n-dodecane, LAB and o-PXE. The values in n-dodecane (refractive index 1.42 at 405 nm and 18 °C), the main component of the KamLAND [12] and Double Chooz [13] target liquids, were consistent within the precision of the measurement with the ones determined in cyclohexane. The aromatic solvent LAB (n_LAB = 1.49), which is used e.g. in the targets of the Daya Bay [14] and RENO [15] reactor neutrino experiments, has a rather strong self-emission when excited below 340 nm, making the measurement complicated. Since the PPO absorption is already very weak above 340 nm, we only determined the yield of bis-MSB in LAB, from 340 to 380 nm.
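The ratio Φ_bis-MSB/Φ_PPO quoted above and its uncertainty follow from simple error propagation (5 % relative error on each yield, added in quadrature); the snippet below is our illustration, not part of the paper.

```python
# Quantum-yield ratio with uncertainties propagated in quadrature.
phi_ppo, phi_bis = 0.842, 0.863
ratio = phi_bis / phi_ppo
err = ratio * (0.05 ** 2 + 0.05 ** 2) ** 0.5
print(f"{ratio:.2f} +/- {err:.2f}")   # 1.02 +/- 0.07, compatible with 0.94
```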
At a bis-MSB concentration of about 1 · 10⁻⁶ M, the yields were 5 to 10 % higher in LAB compared to cyclohexane, which is consistent with the bis-MSB result reported by Xiao et al. [7]. For o-PXE (n_PXE = 1.604 at 405 nm and 18 °C), which is used in Double Chooz, there are similar difficulties at lower wavelengths as for LAB, so also here the measurement was only done for bis-MSB, at 370 and 380 nm. Slightly lower yields were recorded in o-PXE compared to cyclohexane, but the effect was only 5 % or less and therefore not significant. In reference [9] quite large solvent effects were reported in pseudocumene (1,2,4-trimethylbenzene, PC), which is the basis of the Borexino [16] scintillator and also contained in the KamLAND detector. Compared to the cyclohexane solvent, the value increased by 13 % for the case of PPO and decreased by 13 % for bis-MSB.

In Table 1 we summarize all our results obtained in cyclohexane for several solvent or wavelength shifting molecules and other fluorescence materials of interest. Anthracene and 9,10-diphenylanthracene (DPA) are sometimes used as standards in this type of measurements. Both of them have the disadvantage that they are very sensitive to oxygen quenching and that their absorption bands are very narrow and spiky, making them sensitive to instrumental parameters such as the excitation bandwidth. Besides, the separation of absorption and emission bands is rather poor, increasing self-absorption processes inside the sample cell. Accordingly, we observed in the DPA sample a rather high spread in the quantum yield numbers for the different excitation wavelengths and found a relative difference in the integrated light emission with and without nitrogen bubbling of almost 20 %. In many studies anthracene is diluted in ethanol, so we determined the yield also in this solvent (n = 1.36). Values were measured between 300 and 370 nm in 5 nm steps and the yields were found to be stable within errors in this region. For anthracene in ethanol (1.1 · 10⁻⁵ M) Φ = 0.297 ± 0.022 was found, which is in agreement with the literature value of 0.27 [6]. Dissolved in cyclohexane (1.1 · 10⁻⁵ M) we get a higher value of Φ = 0.323 ± 0.021. This number can be compared to the one in the handbook of Berlman, which is 0.36. Our lower result is consistent with the suspicion that all of Berlman's values should be reduced. Among all the molecules studied by Berlman as well as in our studies, the highest fluorescence quantum yield is found for DPA. This yield was set to 1.00 in the Berlman book and used as reference for all other molecules. At excitation around the main absorption peak (350 nm to 380 nm), we find a quantum yield of 0.91, measured in a 4 mm fluorescence cell. This means that, if compared to the results presented here, all of Berlman's values should be reduced by 9 %.

Table 1. Fluorescence quantum yield of aromatic molecules used in liquid scintillators. All samples were diluted in cyclohexane and measured at room temperature inside a 1 cm fluorescence cell (for DPA, BPO, PBD and butyl-PBD the yields determined in the 4 mm cell were used). Besides the range of excitation wavelengths used to determine the quantum yields, the wavelengths of the absorption and emission peaks are quoted.

    Molecule     concentration   exc. range [nm]   Quantum yield    abs. max.   em. max.
    bis-MSB      1.6 · 10⁻⁶ M    300 − 380         0.863 ± 0.043    345 nm      418 nm
    PPO          2.3 · 10⁻⁶ M    290 − 330         0.842 ± 0.042    303 nm      358 nm
    Anthracene   1.1 · 10⁻⁵ M    300 − 370         0.323 ± 0.021    357 nm      400 nm
    DPA          1.5 · 10⁻⁶ M    350 − 380         0.91 ± 0.05      373 nm      407 nm
    BPO          1.7 · 10⁻⁶ M    320 − 350         0.91 ± 0.05      320 nm      384 nm
    PBD          1.7 · 10⁻⁶ M    300 − 320         0.84 ± 0.05      302 nm      358 nm
    butyl-PBD    1.4 · 10⁻⁶ M    300 − 320         0.89 ± 0.05      302 nm      361 nm
    POPOP        1.4 · 10⁻⁶ M    350 − 380         0.90 ± 0.05      360 nm      411 nm
    o-PXE        1.2 · 10⁻⁴ M    260 − 280         0.33 ± 0.03      269 nm      290 nm
    PC           8.3 · 10⁻⁵ M    250 − 270         0.41 ± 0.04      267 nm      290 nm
    LAB          4.1 · 10⁻⁴ M    250 − 270         0.20 ± 0.02      260 nm      284 nm
    DIN          4.7 · 10⁻⁵ M    250 − 270         0.32 ± 0.03      279 nm      338 nm

Alternative options for PPO or bis-MSB in organic LS were measured for comparison. The primary wavelength shifters BPO (2-(4-biphenyl)-5-phenyloxazole), PBD (2-(4-biphenyl)-5-phenyl-1,3,4-oxadiazole) and butyl-PBD (2-(4-biphenyl)-5-(4-tert-butyl-phenyl)-1,3,4-oxadiazole) were already investigated in the context of Indium loaded LS [17]. For PBD and butyl-PBD the reference sample was the diluted PPO solution, which absorbs and emits in very similar wavelength regions. For the primary wavelength shifter candidates the highest values are found for BPO and butyl-PBD. Liquid scintillators containing these molecules typically also provide high light yields [17]. Although there is reasonable relative agreement between the quantum yield ratios for most molecules investigated in this work and the available literature values of Berlman [1], and most discrepancies in the absolute values could be explained by a common correction factor of about 10 %, there is a notable difference between the PBD and the PPO quantum yield: we find identical numbers, whereas Berlman finds a PBD quantum yield which is 17 % below the one of PPO. A possible replacement of the secondary wavelength shifter bis-MSB is POPOP (5-Phenyl-2-[4-(5-phenyl-1,3-oxazol-2-yl)phenyl]-1,3-oxazole), which seems to have a slightly higher yield.

For the solvent molecules in Table 1 the estimated relative uncertainty is higher than the one of the wavelength shifters, since absorption and emission bands are shifted to lower wavelengths compared to the standard. In addition, those solvent molecules are very sensitive to oxygen quenching. Whereas several neutrino experiments in the past used PC-based scintillators providing high light yields, more recent experiments prefer high flash-point solvents, such as o-PXE or LAB, which are advantageous in terms of safety aspects. The yield of o-PXE (Dixie Chemicals) was first determined relative to the quinine sulfate reference. The other solvents were then measured relative to PXE. For PC (Aldrich, 98 %) the highest yield of all tested solvents was found. The LAB (Petresa 550-Q) had significantly lower yields than the other molecules. Another solvent with properties which are promising for large scale neutrino detectors, diisopropyl naphthalene (DIN, Ruetasolv DI-S, mixture of isomers), has a similar yield as o-PXE.

Conclusion

For a good modelling of the light propagation in large scale liquid scintillator detectors, a detailed knowledge of the light absorption properties, the photon emission behaviour and the fluorescence quantum yields of the wavelength shifters is essential. For the commonly used molecules PPO and bis-MSB the literature values for the quantum yield vary by 30 %. Therefore new measurements were required, with an emphasis on the control over systematic effects, as presented in this article. The quantum yields obtained in our studies are Φ_PPO = 0.842 ± 0.042 and Φ_bis-MSB = 0.863 ± 0.043. Similar values, ranging from 84 − 91 %, were found in alternative primary and secondary wavelength shifters when dissolved in cyclohexane. Variations of the scintillator solvent can change the yield by about 10 % in some cases.

Figure 1. In an organic liquid scintillator system with a primary and secondary wavelength shifter the absorption bands of the individual components should be reasonably separated. The bars in the plot show for each component the wavelength region for which its own absorbance is dominating in the scintillator mixture.

Figure 2. Emission spectra of PPO and bis-MSB samples diluted in cyclohexane are compared to an o-PXE based scintillator (3 g/l PPO, 20 mg/l bis-MSB) spectrum measured in a geometry with negligible self-absorption. The sample was excited at 260 nm, where the light mainly excites the o-PXE.

Figure 3. Scintillator emission in front face geometry (small self-absorption) and in a 1 cm triangular cell (some self-absorption).

References

[1] I.B. Berlman, Handbook of fluorescence spectra of aromatic molecules, Academic Press, New York and London (1971).
[2] H.O. Back et al., Study of phenylxylylethane (PXE) as scintillator for low energy neutrino experiments, Nucl. Instrum. Meth. A 585 (2008) 48-60.
[3] Double Chooz Collaboration, Indication for the disappearance of reactor ν̄e in the Double Chooz experiment, Phys. Rev. Lett. 108 (2012) 131801.
[4] S. Wagner, Energy non-linearity studies and pulse shape analysis of liquid scintillator signals in the Double Chooz experiment, PhD Thesis, Universität Heidelberg (2014).
[5] C. Aberle, C. Buck, F.X. Hartmann, S. Schönert, Light yield and energy transfer in a new Gd-loaded liquid scintillator, Chemical Physics Letters 516 (2011) 257-262.
[6] J.N. Demas and G.A. Crosby, The Measurement of Photoluminescence Quantum Yields. A Review, The Journal of Physical Chemistry 75, 8 (1971) 991-1024.
[7] H. Xiao, X. Li, D. Zheng, J. Cao, L. Wen, N. Wang, Study of absorption and re-emission processes in a ternary liquid scintillation system, Chinese Physics C 34, 11 (2010) 1724-1728.
[8] F. Masetti, F. Elisei, U. Mazzucato, Optical study of a large-scale liquid-scintillator detector, Journal of Luminescence 68 (1996) 15-25.
[9] M.C. Johnson, Scintillator Purification and Study of Light Propagation in a Large Liquid Scintillator Detector, PhD thesis, Princeton University (1998).
[10] W.H. Melhuish, Quantum efficiencies of fluorescence of organic substances: effect of solvent and concentration of the fluorescent solute, J. Phys. Chem. 65 (1961) 229.
[11] J.E. Gill, The fluorescence excitation spectrum of quinine bisulfate, Photochem. Photobiol. 9 (1969) 313.
[12] KamLAND Collaboration, First Results from KamLAND: Evidence for Reactor Antineutrino Disappearance, Phys. Rev. Lett. 90 (2003) 021802.
[13] C. Aberle et al., Large scale Gd-beta-diketonate based organic liquid scintillator production for antineutrino detection, JINST 7 (2012) P06008.
[14] Daya Bay Collaboration, Observation of Electron-Antineutrino Disappearance at Daya Bay, Phys. Rev. Lett. 108 (2012) 171803.
[15] RENO Collaboration, Observation of Reactor Electron Antineutrinos Disappearance in the RENO Experiment, Phys. Rev. Lett. 108 (2012) 191802.
[16] Borexino Collaboration, Direct Measurement of the 7Be Solar Neutrino Flux with 192 Days of Borexino Data, Phys. Rev. Lett. 101 (2008) 091302.
[17] C. Buck et al., Luminescent properties of a new In-based organic liquid scintillation system, Journal of Luminescence 106 (2004) 57-67.
First Order Alternation

Radu Iosif ([email protected]) and Xiao Xu ([email protected])
CNRS, Verimag, Université de Grenoble Alpes

arXiv:1811.02398 (19 Nov 2018), https://arxiv.org/pdf/1811.02398v2.pdf

Abstract. We introduce first order alternating automata, a generalization of boolean alternating automata, in which transition rules are described by multisorted first order formulae, with states and internal variables given by uninterpreted predicate terms. The model is closed under union, intersection and complement, and its emptiness problem is undecidable, even for the simplest data theory of equality. To cope with this limitation, we develop an abstraction refinement semi-algorithm based on lazy annotation of the symbolic execution paths with interpolants, obtained by applying (i) quantifier elimination with witness term generation and (ii) Lyndon interpolation in the quantifier-free data theory with uninterpreted predicate symbols. This provides a method for checking inclusion of timed and finite-memory register automata, and emptiness of quantified predicate automata, previously used in the verification of parameterized concurrent programs, composed of replicated threads, with a shared-memory communication model.
Introduction

Many results in formal language theory rely on the assumption that languages are defined over finite alphabets. In practice, this assumption is problematic when attempting to use automata as models of real-time systems or even simple programs, whose input and observable output require taking into account data values, ranging over very large domains, better viewed as infinite mathematical abstractions.

Alternating automata are a generalization of nondeterministic automata with universal transitions, which create several copies of the automaton that synchronize on the same input word. Alternating automata are appealing for verification because they allow encoding of problems such as temporal logic model checking in linear time, as opposed to the exponential time required by nondeterministic automata [26]. A finite-alphabet alternating automaton is typically described by a set of transition rules q −a→ φ, where q is a state, a is an input symbol and φ is a positive boolean combination of states, viewed as propositional variables. Here we introduce a generalized alternating automata model in which states are predicate symbols q(y_1, ..., y_k), the input has associated data variables x_1, ..., x_n, ranging over an infinite domain, and transitions are of the form

    q(y_1, ..., y_k) −a(x_1, ..., x_n)→ φ,

where φ is any formula in the first-order theory of the data domain, in which each state predicate occurs under an even number of negations. In this model, the arguments of a predicate atom q(y_1, ..., y_k) track the values of the internal variables associated with the state. Together with the input values x_1, ..., x_n, these values are used to compute the successor states and are invisible in the input sequence.

Previous attempts to generalize classical Rabin-Scott automata to infinite alphabets, such as timed automata [1] and finite-memory (register) automata [14], face the complement closure problem: there exist automata for which the complement language cannot be recognized by an automaton in the same class. This excludes the possibility of encoding a language inclusion problem L(A) ⊆ L(B) as the emptiness of an automaton recognizing the language L(A) ∩ L^c(B), where L^c(B) denotes the complement of L(B).
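Before the formal definitions, a small data-structure sketch may help fix intuitions. The Python fragment below is not part of the paper's development; it merely shows one possible concrete encoding of transition rules q(y_1, ..., y_k) −a(x_1, ..., x_n)→ φ, with formulae kept as plain strings, using the rules of the automaton of Example 1 later in the text. All names and the string encoding are illustrative choices.

    from dataclasses import dataclass

    # A transition rule  q(y_1,...,y_k) --a(x_1,...,x_n)--> phi,
    # where phi is a positive first-order formula over the data theory.
    @dataclass(frozen=True)
    class Rule:
        state: str      # predicate symbol q
        params: tuple   # internal variables y_1,...,y_k
        event: str      # input event a
        inputs: tuple   # input data variables x_1,...,x_n
        formula: str    # positive formula over params, inputs and predicates

    @dataclass(frozen=True)
    class FOAA:
        events: frozenset
        init: str       # initial sentence iota
        final: frozenset
        rules: tuple

    # The automaton of Example 1 in the text (data domain: integers).
    A = FOAA(events=frozenset({"a1", "a2"}),
             init="exists z. (z >= 0 and q(z))",
             final=frozenset({"qf"}),
             rules=(Rule("q", ("y",), "a1", ("x",),
                         "x >= 0 and forall z. (z >= y -> q(x + z))"),
                    Rule("q", ("y",), "a2", ("x",),
                         "y < 0 and qf(x + y)")))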
The solution we adopt here is a tight coupling of internal variables to control states, using uninterpreted predicate symbols. As we show, this allows for linear-time complementation, just as in the case of boolean alternating automata. Complementation is, moreover, possible when the transition formulae contain first-order quantifiers, generating infinitely-branching execution trees. The price to be paid for this expressivity is that emptiness of first-order alternating automata is undecidable, even for the simplest data theory of equality [4].

The main contribution of this paper is an effective emptiness checking semi-algorithm for first-order alternating automata, in the spirit of the IMPACT procedure, originally developed for checking safety of nondeterministic integer programs [18]. However, checking emptiness of first-order alternating automata by lazy annotation with interpolants faces two problems:

1. Quantified transition rules make it hard, or even impossible, to decide whether a given symbolic trace is spurious. This is mainly because adding uninterpreted predicate symbols to decidable first-order theories, such as Presburger arithmetic, results in undecidability [8]. To deal with this problem, we assume that the first-order data theory, without uninterpreted predicate symbols, has a quantifier elimination procedure that instantiates quantifiers with effectively computable witness terms.

2. The interpolants that prove the spuriousness of a symbolic path are not local, as they may refer to input values encountered in the past. However, the future executions are oblivious to when these values have been seen in the past and depend only on the data constraints between the values. We use this fact to define a labeling of the nodes visited by the lazy annotation procedure with conjunctions of existentially quantified interpolants, combining predicate atoms with data constraints.

As applications of first-order alternating automata, we identified several undecidable problems for which no semi-algorithmic methods existed so far: inclusion between recognizable timed languages [1], inclusion between languages recognized by finite-memory automata [14], and emptiness of predicate automata, a subclass of first-order alternating automata used to check safety and liveness properties of parameterized concurrent programs [4,5]. For reasons of space, all proofs of technical results in this paper are given in [12].

Related Work

The first-order alternating automata model presented in this paper stems from our previous work on boolean alternating automata extended with variables ranging over infinite data [11]. There we considered states to be propositional variables, as in the classical textbook alternating automata model, and all variables of the automaton to be observable in the input. The model in this paper overcomes this latter restriction by allowing for internal variables, whose values are not visible in the language. This solves an older language inclusion problem L(A_1) ∩ ... ∩ L(A_n) ⊆ L(B), between finite-state automata with data variables, whose languages are alternating sequences of input events and variable valuations [10]. There, we assumed that all variables of the observer automaton B must be declared in the automata A_1, ..., A_n that model the concurrent components of the system under check. Using first-order alternating automata allows us to bypass this limitation of our previous work.
The work probably closest to the one reported here concerns the model of predicate automata (PA) [4,5,15], applied to the verification of parameterized concurrent programs with shared memory. In this model, the alphabet consists of pairs of program statements and thread identifiers, and is thus infinite, because the number of threads is unbounded. Because thread identifiers can only be compared for (dis-)equality, the data theory in PA is the theory of equality. Even with this simplification, the emptiness problem is undecidable when either the predicates have arity greater than one [4] or the transition rules are quantified [15]. Checking emptiness of quantifier-free PA is possible semi-algorithmically, by explicitly enumerating reachable configurations and checking coverage by looking for permutations of argument values. However, no semi-algorithm is given for quantified PA. Dealing with quantified transition rules is one of the contributions of the work reported in this paper.

Preliminaries

For two integers 0 ≤ i ≤ j, we denote by [i, j] the set {i, i+1, ..., j} and by [i] the set [0, i]. We consider two sorts D and B, where D is an infinite domain and B = {⊤, ⊥} is the set of boolean values true (⊤) and false (⊥), respectively. The D sort is equipped with finitely many function symbols f : D^#(f) → D, where #(f) ≥ 0 denotes the number of arguments (arity) of f. When #(f) = 0, we say that f is a constant. A predicate is a function symbol p : D^#(p) → B, denoting a relation of arity #(p), and we write Pred for the set of predicates. In the following, we consider that the interpretation of all function symbols f : D^#(f) → D that are not predicates is fixed by the interpretation of the D sort; e.g., if D is the set of integers Z, the function symbols are zero, the successor function and the arithmetic operations of addition and multiplication. For simplicity, we further blur the notational distinction between function symbols and their interpretations.

Let Var = {x, y, z, ...} be an infinite countable set of variables, ranging over D. Terms are either constants of sort D, variables, or function applications f(t_1, ..., t_#(f)), where t_1, ..., t_#(f) are terms. The set of first-order formulae is defined by the syntax below:

    φ := t ≈ s | p(t_1, ..., t_#(p)) | ¬φ_1 | φ_1 ∧ φ_2 | ∃x . φ_1

where t, s, t_1, ..., t_#(p) denote terms. We write φ_1 ∨ φ_2, φ_1 → φ_2 and ∀x . φ_1 for ¬(¬φ_1 ∧ ¬φ_2), ¬φ_1 ∨ φ_2 and ¬∃x . ¬φ_1, respectively. We denote by FV(φ) the set of free variables in φ. The size |φ| of a formula φ is the number of symbols needed to write it down. A sentence is a formula φ in which each variable occurs under the scope of a quantifier, i.e. FV(φ) = ∅. A formula is positive if each predicate symbol occurs under an even number of negations, and we denote by Form⁺(Q, X) the set of positive formulae with predicates from the set Q ⊆ Pred and free variables from the set X ⊆ Var. A formula is in prenex form if it is of the form ϕ = Q_1 x_1 ... Q_n x_n . φ, where φ has no quantifiers; in this case we call φ the matrix of ϕ. Every first-order formula can be written in prenex form, by renaming each quantified variable to a unique name and moving the quantifiers upfront.

An interpretation I maps each predicate p into a set p^I ⊆ D^#(p), if #(p) > 0, or into an element of D if #(p) = 0. A valuation ν maps each variable x into an element of D.
Given a term t, we denote by t^ν the value obtained by replacing each variable x by the value ν(x) and evaluating each function application. For a formula φ, we define the forcing relation I, ν |= φ recursively on the structure of φ, as usual:

    I, ν |= t ≈ s                  ⇔  t^ν = s^ν
    I, ν |= p(t_1, ..., t_#(p))    ⇔  (t_1^ν, ..., t_#(p)^ν) ∈ p^I
    I, ν |= ¬φ_1                   ⇔  I, ν ⊭ φ_1
    I, ν |= φ_1 ∧ φ_2              ⇔  I, ν |= φ_i, for all i = 1, 2
    I, ν |= ∃x . φ_1               ⇔  I, ν[x ← d] |= φ_1, for some d ∈ D

where ν[x ← d] denotes the valuation that maps x to d and agrees with ν on all other variables.

First Order Alternating Automata

Let Σ be a finite alphabet of input events. Given a finite set of variables X ⊆ Var, we denote by X → D the set of valuations of the variables X, and let Σ[X] = Σ × (X → D) be the possibly infinite set of data symbols (a, ν), where a is an input symbol and ν is a valuation. A data word (simply called word in the following) is a finite sequence w = (a_1, ν_1)(a_2, ν_2) ... (a_n, ν_n) of data symbols. Given a word w, we denote by w_Σ := a_1 ... a_n its sequence of input events and by w_D the valuation associating to each time-stamped variable x^(i) the value ν_i(x), for all x ∈ Var and i ∈ [1, n]. We denote by ε the empty sequence, by Σ* the set of finite sequences of input events and by Σ[X]* the set of data words over the variables X.

Formally, a first-order alternating automaton is a tuple A = ⟨Σ, X, Q, ι, F, ∆⟩, where Σ is a finite set of input events, X is a finite set of input variables, Q is a finite set of predicates denoting control states, ι ∈ Form⁺(Q, ∅) is a sentence defining initial configurations, F ⊆ Q is the set of predicates denoting final states, and ∆ is a set of transition rules of the form q(y_1, ..., y_#(q)) −a(X)→ ψ, where q ∈ Q is a predicate, a ∈ Σ is an input event and ψ ∈ Form⁺(Q, X ∪ {y_1, ..., y_#(q)}) is a positive formula, with X ∩ {y_1, ..., y_#(q)} = ∅. The quantifiers occurring in the right-hand side formula of a transition rule are referred to as transition quantifiers. The size of A is defined as |A| = |ι| + Σ_{(q(y) −a(X)→ ψ) ∈ ∆} |ψ|.

The intuition of a transition rule q(y_1, ..., y_#(q)) −a(X)→ ψ is the following: a is the input event and X are the input data values that trigger the transition, whereas q and y_1, ..., y_#(q) are the current control state and the data values in that state, respectively. Without loss of generality, we consider, for each predicate q ∈ Q and each input event a ∈ Σ, at most one such rule, as two or more rules can be joined using disjunction.

The execution semantics of automata is given in close analogy with the case of boolean alternating automata, with transition rules of the form q −a→ φ, where q is a boolean constant and φ a positive boolean combination of such constants. For instance, q_0 −a→ (q_1 ∧ q_2) ∨ q_3 means that the automaton can choose to transition into both q_1 and q_2, or into q_3 alone. This intuition leads to saying that the steps of the automaton are defined by the minimal boolean models of the transition formulae. In this case, both {q_1 ← ⊤, q_2 ← ⊤, q_3 ← ⊥} and {q_1 ← ⊥, q_2 ← ⊥, q_3 ← ⊤} are minimal models, whereas {q_1 ← ⊤, q_2 ← ⊤, q_3 ← ⊤} is a model, but not a minimal one. The original definition of alternating finite-state automata [3] works around this problem by considering boolean valuations (models) instead of formulae. However, describing first-order alternating automata using interpretations instead of formulae would be rather hard to follow. Given a predicate q ∈ Q and a tuple of data values d_1, ..., d_#(q), the tuple q(d_1, ..., d_#(q)) is called a configuration.
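The notion of minimal model can be made concrete with a short, self-contained sketch. The tuple encoding of positive boolean formulae is an illustrative choice; the function below enumerates the minimal sets of states satisfying the rule q_0 −a→ (q_1 ∧ q_2) ∨ q_3 discussed above.

    from itertools import combinations

    # A positive boolean formula is encoded as nested tuples:
    #   ("atom", q) | ("and", f, g) | ("or", f, g)
    def holds(f, true_atoms):
        tag = f[0]
        if tag == "atom":
            return f[1] in true_atoms
        if tag == "and":
            return holds(f[1], true_atoms) and holds(f[2], true_atoms)
        return holds(f[1], true_atoms) or holds(f[2], true_atoms)  # "or"

    def minimal_models(f, atoms):
        """Enumerate minimal sets of atoms satisfying positive formula f."""
        models = []
        for k in range(len(atoms) + 1):          # smallest sets first
            for subset in combinations(atoms, k):
                s = set(subset)
                if holds(f, s) and not any(m <= s for m in models):
                    models.append(s)             # no smaller model exists
        return models

    phi = ("or", ("and", ("atom", "q1"), ("atom", "q2")), ("atom", "q3"))
    print(minimal_models(phi, ["q1", "q2", "q3"]))
    # -> [{'q3'}, {'q1', 'q2'}]; {'q1','q2','q3'} is a model but not minimal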
To formalize the execution semantics of automata, we relate sets of configurations to models of first-order sentences, as follows. Each first-order interpretation I corresponds to a set of configurations c(I) := {q(d_1, ..., d_#(q)) | q ∈ Q, (d_1, ..., d_#(q)) ∈ q^I}, called a cube. In this paper, we address the following questions:

1. boolean closure: given automata A_i = ⟨Σ, X, Q_i, ι_i, F_i, ∆_i⟩, for i = 1, 2, construct automata that recognize the languages L(A_1) ∩ L(A_2), L(A_1) ∪ L(A_2) and Σ[X]* \ L(A_i);
2. emptiness: given an automaton A, decide whether L(A) = ∅.

Symbolic Execution

In the upcoming developments it is sometimes more convenient to work with logical formulae defining executions of automata than with low-level execution forests. For this reason, we first introduce path formulae Θ(α), which are formulae defining the executions of an automaton over the words that share a given sequence α of input events. Second, we restrict a path formula Θ(α) to an acceptance formula Υ(α), which defines only the accepting executions over words that share a given input sequence. Otherwise stated, Υ(α) is satisfiable if and only if the automaton accepts a word w such that w_Σ = α.

Let A = ⟨Σ, X, Q, ι, F, ∆⟩ be an automaton for the rest of this section. For any i ∈ N, we denote by Q^(i) = {q^(i) | q ∈ Q} and X^(i) = {x^(i) | x ∈ X} the sets of time-stamped predicates and variables, respectively. As a shorthand, we write Q^(≤n) (resp. X^(≤n)) for the set {q^(i) | q ∈ Q, i ∈ [n]} (resp. {x^(i) | x ∈ X, i ∈ [n]}). For a formula ψ and i ∈ N, we define ψ^(i) := ψ[X^(i)/X, Q^(i)/Q], the formula in which all input variables and state predicates (and only those symbols) are replaced by their time-stamped counterparts. As a shorthand, we shall write q(y) for q(y_1, ..., y_#(q)), when no confusion arises. Given a sequence of input events α = a_1 ... a_n ∈ Σ*, the path formula of α is:

    Θ(α) := ι^(0) ∧ ⋀_{i=1}^{n} ⋀_{(q(y) −a_i(X)→ ψ) ∈ ∆} ∀y_1 ... ∀y_#(q) . q^(i−1)(y) → ψ^(i)    (1)

The automaton A to which Θ(α) refers will always be clear from the context. To formalize the relation between the low-level configuration-based execution semantics and the symbolic path formulae, consider a word w = (a_1, ν_1) ... (a_n, ν_n) ∈ Σ[X]*. Any execution forest T of A over w is associated an interpretation I_T of the set of time-stamped predicates Q^(≤n), defined as:

    I_T(q^(i)) := {(d_1, ..., d_#(q)) | q(d_1, ..., d_#(q)) labels a node on level i in T}, for all q ∈ Q and i ∈ [n].

Lemma 1. Given an automaton A = ⟨Σ, X, Q, ι, F, ∆⟩, for any word w = (a_1, ν_1) ... (a_n, ν_n), we have [[Θ(w_Σ)]]^μ_{w_D} = {I_T | T is an execution of A over w}.

Proof: "⊆" Let I be a minimal interpretation such that I, w_D |= Θ(w_Σ). We show that there exists an execution T of A over w such that I = I_T, by induction on n ≥ 0. For n = 0, we have w = ε and Θ(w_Σ) = ι^(0). Because ι is a sentence, the valuation w_D plays no role in I, w_D |= ι^(0) and, moreover, since I is minimal, we have I ∈ [[ι^(0)]]^μ. We define the interpretation J(q) = I(q^(0)), for all q ∈ Q. Then c(J) is an execution of A over ε and I = I_{c(J)} is immediate. For the inductive case n > 0, we assume that w = u · (a_n, ν_n) for a word u. Let J be the interpretation defined as I for all q^(i), with q ∈ Q and i ∈ [n−1], and ∅ everywhere else. Then J, u_D |= Θ(u_Σ) and J is moreover minimal. By the induction hypothesis, there exists an execution G of A over u such that J = I_G. Consider a leaf of a tree T ∈ G, labeled with a configuration q(d_1, ..., d_#(q)), and let ∀y_1 ... ∀y_#(q) . q^(n−1)(y) → ψ^(n) be the subformula of Θ(w_Σ) corresponding to the application(s) of the transition rule q(y) −a_n→ ψ at the (n−1)-th step.
Let ν = w_D[y_1 ← d_1, ..., y_#(q) ← d_#(q)]. Because I, w_D |= ∀y_1 ... ∀y_#(q) . q^(n−1)(y) → ψ^(n), we have I ∈ [[ψ^(n)]]_ν, and let K be one of the minimal interpretations such that K ⊆ I and K ∈ [[ψ^(n)]]_ν. It is not hard to see that K exists and is unique, otherwise we could take the pointwise intersection of two or more such interpretations. We define the interpretation K'(q) = K(q^(n)), for all q ∈ Q. We have that K' ∈ [[ψ]]^μ_ν: if K' were not minimal, then K would not have been minimal to start with, contradiction. Then we extend the execution G by appending to each node labeled with a configuration q(d_1, ..., d_#(q)) the cube c(K'). By repeating this step for all leaves of a tree in G, we obtain an execution of A over w.

"⊇" Let T be an execution of A over w. We show that I_T is a minimal interpretation such that I_T, w_D |= Θ(w_Σ), by induction on n ≥ 0. For n = 0, T is a cube from c([[ι]]^μ), by definition. Then I_T |= ι^(0) and, moreover, it is a minimal such interpretation. For the inductive case n > 0, let w = u · (a_n, ν_n) for a word u. Let G be the restriction of T to u. Consequently, I_G is the restriction of I_T to Q^(≤n−1). By the inductive hypothesis, I_G is a minimal interpretation such that I_G, u_D |= Θ(u_Σ). Since I_T(q^(n)) = {(d_1, ..., d_#(q)) | q(d_1, ..., d_#(q)) labels a node on the n-th level in T}, we have I_T, w_D |= ϕ for each subformula ϕ = ∀y_1 ... ∀y_#(q) . q^(n−1)(y) → ψ^(n) of Θ(w_Σ), by the execution semantics of A. This is the case because the children of each node labeled with q(d_1, ..., d_#(q)) on the (n−1)-th level of T form a cube from c([[ψ]]^μ_ν), where ν is a valuation that assigns each y_i the value d_i and behaves like w_D otherwise. Now suppose, for a contradiction, that I_T is not minimal and let J ⊊ I_T be an interpretation such that J, w_D |= Θ(w_Σ). First, we show that the restriction J' of J to ⋃_{i=0}^{n−1} Q^(i) must coincide with I_G: assuming this is not the case, i.e. J' ⊊ I_G, contradicts the minimality of I_G. Then the only possibility is that J(q^(n)) ⊊ I_T(q^(n)), for some q ∈ Q. Let p_1(y_1, ..., y_#(p_1)) −a_n→ ψ_1, ..., p_k(y_1, ..., y_#(p_k)) −a_n→ ψ_k be the set of transition rules in which the predicate symbol q occurs on the right-hand side. Then it must be the case that, for some node on the (n−1)-th level of G, labeled with a configuration p_i(d_1, ..., d_#(p_i)), the set of children does not form a minimal cube from c([[ψ_i^(n)]]^μ), which contradicts the execution semantics of A. ⊓⊔

Next, we give a logical characterization of acceptance, relative to a given sequence of input events α ∈ Σ*. To this end, we constrain the path formula Θ(α) by requiring that only final states of A occur on the last level of the execution. The result is the acceptance formula for α:

    Υ(α) := Θ(α) ∧ ⋀_{q ∈ Q\F} ∀y_1 ... ∀y_#(q) . q^(n)(y) → ⊥    (2)

The top-level universal quantifiers from a subformula ∀y_1 ... ∀y_#(q) . q^(i)(y) → ψ of Υ(α) will be referred to as path quantifiers in the following. Notice that path quantifiers are distinct from the transition quantifiers that occur within a formula ψ of a transition rule q(y_1, ..., y_#(q)) −a(X)→ ψ of A.
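As an illustration of Eq. (1), the following sketch assembles the time-stamped path formula Θ(α) as a plain string, for the rules of Example 1 below. The string-based encoding and the placeholder naming scheme (q_, qf_, x_ for the un-stamped symbols) are ad-hoc choices for this sketch, not part of the formal development.

    # Rules as (state, params, event, formula) tuples; the placeholders
    # q_, qf_ and x_ stand for the predicates and the input variable.
    RULES = [("q", ("y",), "a1", "x_ >= 0 and forall z. (z >= y -> q_(x_ + z))"),
             ("q", ("y",), "a2", "y < 0 and qf_(x_ + y)")]
    INIT = "exists z. (z >= 0 and q_0(z))"   # iota^(0)

    def stamp(formula, i):
        """Time-stamp the placeholders with step number i."""
        return (formula.replace("q_", f"q_{i}")
                       .replace("qf_", f"qf_{i}")
                       .replace("x_", f"x_{i}"))

    def theta(alpha):
        """Build Theta(alpha) of Eq. (1) as a conjunction of implications."""
        parts = [INIT]
        for i, event in enumerate(alpha, start=1):
            for (q, params, ev, psi) in RULES:
                if ev != event:
                    continue
                ys = ", ".join(params)
                head = f"{q}_{i - 1}({ys})"      # q^(i-1)(y)
                body = stamp(psi, i)             # psi^(i)
                parts.append(f"forall {ys}. ({head} -> {body})")
        return " and ".join(parts)

    print(theta(["a1", "a2"]))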
The acceptance formula Υ(α) is false in every interpretation of the predicates that assigns a non-empty set to a non-final predicate occurring on the last level of the execution forest. The relation between the words accepted by A and the acceptance formula above is formally captured by the following lemma:

Lemma 2. Given an automaton A = ⟨Σ, X, Q, ι, F, ∆⟩, for every word w ∈ Σ[X]*, the following are equivalent:
1. there exists an interpretation I such that I, w_D |= Υ(w_Σ);
2. w ∈ L(A).

Proof: "(1) ⇒ (2)" Let I be an interpretation such that I, w_D |= Υ(w_Σ). By Lemma 1, A has an execution T over w such that I = I_T. To prove that T is accepting, we show that (i) all paths in T have length n and (ii) the frontier of T is labeled with final configurations only. First, assume (i) that there exists a path in T of length 0 ≤ m < n. Then there exists a node on the m-th level, labeled with some configuration q(d_1, ..., d_#(q)), that has no children. By the definition of the execution semantics of A, we have c([[ψ]]^μ_η) = ∅, where q(y) −a_{m+1}(X)→ ψ is the transition rule of A that applies for q and a_{m+1}, and η = w_D[y_1 ← d_1, ..., y_#(q) ← d_#(q)]. Hence [[ψ]]_η = ∅ and, because I, w_D |= Υ(α), we obtain that I, η |= q(y) → ψ^(m+1), thus (d_1, ..., d_#(q)) ∉ I(q). However, this contradicts the fact that I = I_T and that q(d_1, ..., d_#(q)) labels a node of T. Second, assume (ii) that there exists a frontier node of T labeled with a configuration q(d_1, ..., d_#(q)) such that q ∈ Q \ F. Since I, w_D |= ∀y_1 ... ∀y_#(q) . q(y) → ⊥, by a similar reasoning as in the above case, we obtain that (d_1, ..., d_#(q)) ∉ I(q), contradiction.

"(2) ⇒ (1)" Let T be an accepting execution of A over w. We prove that I_T, w_D |= Υ(w_Σ). By Lemma 1, we obtain I_T, w_D |= Θ(w_Σ). Since every path in T is of length n and all nodes on the n-th level of T are labeled by final configurations, we obtain I_T, w_D |= ⋀_{q∈Q\F} ∀y_1 ... ∀y_#(q) . q^(n)(y) → ⊥, trivially. ⊓⊔

As an immediate consequence, one can decide whether A accepts some word w with a given input sequence w_Σ = α by checking whether Υ(α) is satisfiable. However, unlike for non-alternating infinite-state models of computation, such as counter automata (nondeterministic programs with integer variables), the satisfiability query for an acceptance (path) formula falls outside of the known decidable theories supported by standard SMT solvers. There are basically two reasons for this, namely (i) the presence of predicate symbols, and (ii) the non-trivial alternation of quantifiers. To understand this point, consider, for example, the decidable theory of Presburger arithmetic [23]. Adding even a single monadic predicate symbol to it yields undecidability in the presence of non-trivial quantifier alternation [8]. However, the quantifier-free fragment of Presburger arithmetic extended with predicate symbols can be shown to be decidable, using a Nelson-Oppen style congruence closure argument [20].

To tackle this problem, we start from the observation that acceptance formulae have a particular form, which allows the elimination of path quantifiers and of predicates by a couple of satisfiability-preserving transformations. The result of applying these transformations is a formula with no predicate symbols, whose only quantifiers are those introduced by the transition rules of the automaton, referred to as transition quantifiers.
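Once both transformations have been applied (Definitions 3 and 4 below), the resulting predicate-free formula lies in a theory handled by off-the-shelf SMT solvers. A minimal sketch, assuming Z3's Python bindings are available, checking the formula derived in Example 2 below over linear integer arithmetic:

    from z3 import Int, Exists, ForAll, Implies, And, Solver

    x1 = Int("x1")                  # the free input value x^(1)
    z1, z2 = Int("z1"), Int("z2")   # transition-quantified variables

    # Predicate-free acceptance formula of Example 2:
    #   exists z1 forall z2 . z1 >= 0 /\ x1 >= 0 /\ (z2 >= z1 -> x1 + z2 < 0)
    phi = Exists([z1], ForAll([z2], And(z1 >= 0, x1 >= 0,
                                        Implies(z2 >= z1, x1 + z2 < 0))))
    s = Solver()
    s.add(phi)
    print(s.check())   # expected: unsat, i.e. no accepted word has events a1 a2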
We shall further assume (§4) that the first-order theory of the data sort D has quantifier elimination, which allows to effectively decide the satisfiability of such formulae. For the time being, let us formally define the elimination of path quantifiers and of predicates, respectively. Consider a given sequence of input events α = a_1 ... a_n and denote by α_i the prefix a_1 ... a_i of α, for i ∈ [n], where α_0 = ε.

Definition 3. Let Θ̂(α_0), ..., Θ̂(α_n) be the sequence of formulae defined by Θ̂(α_0) := ι^(0) and, for all i ∈ [1, n]:

    Θ̂(α_i) := Θ̂(α_{i−1}) ∧ ⋀ { q^(i−1)(t_1, ..., t_#(q)) → ψ^(i)[t_1/y_1, ..., t_#(q)/y_#(q)] :
                q^(i−1)(t_1, ..., t_#(q)) occurs in Θ̂(α_{i−1}) and (q(y_1, ..., y_#(q)) −a_i(X)→ ψ) ∈ ∆ }

We write Υ̂(α) for the prenex normal form of the formula:

    Θ̂(α_n) ∧ ⋀ { q^(n)(t_1, ..., t_#(q)) → ⊥ : q^(n)(t_1, ..., t_#(q)) occurs in Θ̂(α_n), q ∈ Q \ F }

Observe that Υ̂(α) contains no path quantifiers, as required. On the other hand, the scope of the transition quantifiers in Υ̂(α) exceeds the right-hand side formulae of the transition rules, as shown by the following example.

Example 1. Consider the automaton A = ⟨{a_1, a_2}, {x}, {q, q_f}, ι, {q_f}, ∆⟩, where:

    ι = ∃z . z ≥ 0 ∧ q(z)
    ∆ = { q(y) −a_1(x)→ x ≥ 0 ∧ ∀z . z ≥ y → q(x + z),
          q(y) −a_2(x)→ y < 0 ∧ q_f(x + y) }

For the input event sequence α = a_1 a_2, the acceptance formula is:

    Υ(α) = ∃z . z ≥ 0 ∧ q^(0)(z) ∧
           ∀y . q^(0)(y) → [x^(1) ≥ 0 ∧ ∀z . z ≥ y → q^(1)(x^(1) + z)] ∧
           ∀y . q^(1)(y) → [y < 0 ∧ q_f^(2)(x^(2) + y)]

The result of eliminating the path quantifiers, in prenex normal form, is shown below:

    Υ̂(α) = ∃z_1 ∀z_2 . z_1 ≥ 0 ∧ q^(0)(z_1) ∧
            [q^(0)(z_1) → x^(1) ≥ 0 ∧ (z_2 ≥ z_1 → q^(1)(x^(1) + z_2))] ∧
            [q^(1)(x^(1) + z_2) → x^(1) + z_2 < 0 ∧ q_f^(2)(x^(2) + x^(1) + z_2)]

The next lemma establishes a formal relation between the satisfiability of an acceptance formula Υ(α) and that of the formula Υ̂(α), obtained by eliminating the path quantifiers from Υ(α).

Lemma 3. For any input event sequence α = a_1 ... a_n and each valuation ν : X^(≤n) → D, the following hold:
1. for all interpretations I, if I, ν |= Υ(α) then I, ν |= Υ̂(α);
2. if there exists an interpretation I such that I, ν |= Υ̂(α), then there exists an interpretation J ⊆ I such that J, ν |= Υ(α).

Proof: (1) Trivial, since every subformula q(t_1, ..., t_#(q)) → ψ[t_1/y_1, ..., t_#(q)/y_#(q)] of Υ̂(α) is entailed by a subformula ∀y_1 ... ∀y_#(q) . q(y_1, ..., y_#(q)) → ψ of Υ(α). (2) By repeated applications of the following fact:

Fact 1. Given formulae φ and ψ(y_1, ..., y_#(q)), such that no predicate atom with predicate symbol q occurs in ψ, for each valuation ν, if there exists an interpretation I such that I, ν |= φ ∧ ⋀_{q(t_1,...,t_#(q)) occurs in φ} q(t_1, ..., t_#(q)) → ψ[t_1/y_1, ..., t_#(q)/y_#(q)], then there exists an interpretation J such that J(q) ⊆ I(q), J(q') = I(q') for all q' ∈ Q \ {q}, and J, ν |= φ ∧ ∀y_1 ... ∀y_#(q) . q(y_1, ..., y_#(q)) → ψ.

Proof: Assume w.l.o.g. that φ is quantifier-free; the proof easily generalizes to the case where φ has quantifiers. Let J(q) = {(t_1^ν, ..., t_#(q)^ν) ∈ I(q) | q(t_1, ..., t_#(q)) occurs in φ} and J(q') = I(q') for all q' ∈ Q \ {q}. Since I, ν |= φ, we obtain that also J, ν |= φ, because the tuples of values in I(q) \ J(q) are not interpretations of terms that occur within subformulae q(t_1, ..., t_#(q)) of φ.
Moreover, ⋀_{q(t_1,...,t_#(q)) occurs in φ} q(t_1, ..., t_#(q)) → ψ[t_1/y_1, ..., t_#(q)/y_#(q)] and ∀y_1 ... ∀y_#(q) . q(y_1, ..., y_#(q)) → ψ are equivalent under J, thus J, ν |= ∀y_1 ... ∀y_#(q) . q(y_1, ..., y_#(q)) → ψ, as required. ⊓⊔

This concludes the proof. ⊓⊔

We proceed with the elimination of predicate atoms from Υ̂(α), defined below.

Definition 4. Let Θ̄(α_0), ..., Θ̄(α_n) be the sequence of formulae defined by Θ̄(α_0) := ι^(0) and, for all i ∈ [1, n], Θ̄(α_i) is obtained by replacing each occurrence of a predicate atom q^(i−1)(t_1, ..., t_#(q)) in Θ̄(α_{i−1}) with the formula ψ^(i)[t_1/y_1, ..., t_#(q)/y_#(q)], where q(y) −a_i(X)→ ψ ∈ ∆. We write Ῡ(α) for the formula obtained by replacing, in Θ̄(α_n), each occurrence of a predicate q^(n), such that q ∈ Q \ F (resp. q ∈ F), by ⊥ (resp. ⊤).

Example 2 (contd. from Example 1). The result of the elimination of predicate atoms from the acceptance formula in Example 1 is shown below:

    Ῡ(α) = ∃z_1 ∀z_2 . z_1 ≥ 0 ∧ [x^(1) ≥ 0 ∧ (z_2 ≥ z_1 → x^(1) + z_2 < 0)]

Since this formula is unsatisfiable, by Lemma 5 below, no word w with input event sequence w_Σ = a_1 a_2 is accepted by the automaton A from Example 1.

At this point, we prove the formal relation between the satisfiability of the formulae Υ̂(α) and Ῡ(α). Since there are no occurrences of predicates in Ῡ(α), for each valuation ν : X^(≤n) → D, there exists an interpretation I such that I, ν |= Ῡ(α) if and only if J, ν |= Ῡ(α) for every interpretation J. In this case we omit I and simply write ν |= Ῡ(α).

Lemma 4. For any input event sequence α = a_1 ... a_n and each valuation ν : X^(≤n) → D, there exists an interpretation I such that I, ν |= Υ̂(α) if and only if ν |= Ῡ(α).

Proof: By induction on n ≥ 0. The base case n = 0 is trivial, since Υ̂(α) = Ῡ(α) = ι^(0). For the induction step, we rely on the following fact:

Fact 2. Given formulae φ and ψ(y_1, ..., y_#(q)), such that φ is positive, q(t_1, ..., t_#(q)) is the only occurrence of the predicate symbol q in φ, and no predicate atom with predicate symbol q occurs in ψ, for each interpretation I and each valuation ν, we have:

    I, ν |= φ ∧ q(t_1, ..., t_#(q)) → ψ[t_1/y_1, ..., t_#(q)/y_#(q)]
    ⇔ ν |= φ[ψ[t_1/y_1, ..., t_#(q)/y_#(q)] / q(t_1, ..., t_#(q))].

Proof: We assume w.l.o.g. that φ is quantifier-free; the proof easily generalizes to the case where φ has quantifiers. "⇒" We distinguish two cases:
- if (t_1^ν, ..., t_#(q)^ν) ∈ I(q), then I, ν |= ψ[t_1/y_1, ..., t_#(q)/y_#(q)]. Since φ is positive, replacing q(t_1, ..., t_#(q)) with ψ[t_1/y_1, ..., t_#(q)/y_#(q)] does not change the truth value of φ under ν, thus ν |= φ[ψ[t_1/y_1, ..., t_#(q)/y_#(q)] / q(t_1, ..., t_#(q))].
- else (t_1^ν, ..., t_#(q)^ν) ∉ I(q), thus ν |= φ[⊥ / q(t_1, ..., t_#(q))]. Since φ is positive and ⊥ entails ψ[t_1/y_1, ..., t_#(q)/y_#(q)], we obtain ν |= φ[ψ[t_1/y_1, ..., t_#(q)/y_#(q)] / q(t_1, ..., t_#(q))] by monotonicity.

"⇐" Let I be any interpretation such that I(q) = {(t_1^ν, ..., t_#(q)^ν) | ν |= ψ[t_1/y_1, ..., t_#(q)/y_#(q)]}. We distinguish two cases:
- if I(q) ≠ ∅, then I, ν |= q(t_1, ..., t_#(q)) and ν |= ψ[t_1/y_1, ..., t_#(q)/y_#(q)]. Thus replacing ψ[t_1/y_1, ..., t_#(q)/y_#(q)] by q(t_1, ..., t_#(q)) does not change the truth value of φ under I and ν, and we obtain I, ν |= φ. Moreover, I, ν |= ψ[t_1/y_1, ..., t_#(q)/y_#(q)] implies I, ν |= q(t_1, ..., t_#(q)) → ψ[t_1/y_1, ..., t_#(q)/y_#(q)].
- else I(q) = ∅, hence ν ⊭ ψ[t_1/y_1, ..., t_#(q)/y_#(q)], thus ν |= φ[⊥ / q(t_1, ..., t_#(q))]. Because φ is positive, we obtain I, ν |= φ by monotonicity. But I, ν |= q(t_1, ..., t_#(q)) → ψ[t_1/y_1, ..., t_#(q)/y_#(q)] holds trivially, because I, ν ⊭ q(t_1, ..., t_#(q)). ⊓⊔
This concludes the proof. ⊓⊔

Finally, we characterize the acceptance of a word with a given input event sequence by means of a formula in which no predicate atom occurs. As previously discussed, several decidable theories, such as Presburger arithmetic, become undecidable when predicate atoms are added to them. Therefore, the result below makes a step forward towards deciding whether the automaton accepts a word with a given input sequence, by reducing this problem to the satisfiability of a quantified formula without predicates.

Lemma 5. Given an automaton A = ⟨Σ, X, Q, ι, F, ∆⟩, for every word w ∈ Σ[X]*, we have w ∈ L(A) if and only if w_D |= Ῡ(w_Σ). Proof: By Lemmas 2, 3 and 4. ⊓⊔

Closure Properties

Given a positive formula φ, we define the dual formula φ∼ recursively as follows:

    (φ_1 ∨ φ_2)∼ = φ_1∼ ∧ φ_2∼        (t ≈ s)∼ = ¬(t ≈ s)
    (φ_1 ∧ φ_2)∼ = φ_1∼ ∨ φ_2∼        (¬(t ≈ s))∼ = t ≈ s
    (∃x . φ_1)∼ = ∀x . φ_1∼           (q(x_1, ..., x_#(q)))∼ = q(x_1, ..., x_#(q))
    (∀x . φ_1)∼ = ∃x . φ_1∼

Observe that, because predicate atoms do not occur negated in φ, there is no need to define dualization for formulae of the form ¬q(x_1, ..., x_#(q)). The following theorem shows the closure of automata under all boolean operations:

Theorem 1. Given automata A_i = ⟨Σ, X, Q_i, ι_i, F_i, ∆_i⟩, for i = 1, 2, such that Q_1 ∩ Q_2 = ∅, the following hold:
1. L(A_∩) = L(A_1) ∩ L(A_2), where A_∩ = ⟨Σ, X, Q_1 ∪ Q_2, ι_1 ∧ ι_2, F_1 ∪ F_2, ∆_1 ∪ ∆_2⟩;
2. L(Ā_i) = Σ[X]* \ L(A_i), where Ā_i = ⟨Σ, X, Q_i, ι_i∼, Q_i \ F_i, ∆_i∼⟩ and, for i = 1, 2:

    ∆_i∼ = {q(y) −a(X)→ ψ∼ | q(y) −a(X)→ ψ ∈ ∆_i}.

Moreover, |A_∩| = O(|A_1| + |A_2|) and |Ā_i| = O(|A_i|), for all i = 1, 2.

Proof: (1) "⊆" Let w ∈ L(A_∩) be a word and T be an execution of A_∩ over w. Since Q_1 ∩ Q_2 = ∅, it is possible to partition T into T_1 and T_2 such that the roots of T_i form a cube from c([[ι_i]]^μ), for all i = 1, 2. Because ∆_1 ∩ ∆_2 = ∅, by induction on |w| ≥ 0, one shows that T_i is an execution of A_i over w, for all i = 1, 2. Finally, because T is accepting, we obtain that T_1 and T_2 are accepting as well, hence w ∈ L(A_1) ∩ L(A_2). "⊇" Let w ∈ L(A_1) ∩ L(A_2) and let T_i be an accepting execution of A_i over w, for all i = 1, 2. We show that T_1 ∪ T_2 is an execution of A_∩ over w, by induction on |w| ≥ 0. For the base case |w| = 0, we have T_i ∈ c([[ι_i]]^μ) for all i = 1, 2 and, since Q_1 ∩ Q_2 = ∅, we have T_1 ∪ T_2 ∈ c([[ι_1 ∧ ι_2]]^μ). The induction step follows as a consequence of the fact that ∆_1 ∪ ∆_2 is the set of transition rules of A_∩. Finally, since both T_1 and T_2 are accepting, T_1 ∪ T_2 is accepting as well. Moreover, we have:

    |A_∩| = |ι_1 ∧ ι_2| + Σ_{(q(y) −a(X)→ ψ) ∈ ∆_1 ∪ ∆_2} |ψ|
          = 1 + |ι_1| + |ι_2| + Σ_{(q(y) −a(X)→ ψ) ∈ ∆_1} |ψ| + Σ_{(q(y) −a(X)→ ψ) ∈ ∆_2} |ψ|.

(2) Let w ∈ Σ[X]* be a word. We denote by Υ_{A_1}(w_Σ) and Ῡ_{A_1}(w_Σ) [resp. Υ_{Ā_1}(w_Σ) and Ῡ_{Ā_1}(w_Σ)] the formulae Υ(w_Σ) and Ῡ(w_Σ) for A_1 and Ā_1, respectively. It is enough to show that Ῡ_{Ā_1}(w_Σ) = ¬Ῡ_{A_1}(w_Σ) and apply Lemma 5 to prove that w ∈ L(Ā_1) ⇔ w ∉ L(A_1). Since the choice of w was arbitrary, this proves L(Ā_1) = Σ[X]* \ L(A_1). The argument is by induction on the number of predicate atoms replaced during the generation of Ῡ_{A_1}(w_Σ) and relies on the following fact:

Fact 3. Let φ be a positive formula and let q(t_1, ..., t_#(q)) be the only occurrence of a predicate symbol within φ. Then, for every formula ψ with no predicate occurrences:

    ¬φ[ψ[t_1/y_1, ..., t_#(q)/y_#(q)] / q(t_1, ..., t_#(q))] ≡ φ∼[¬ψ[t_1/y_1, ..., t_#(q)/y_#(q)] / q(t_1, ..., t_#(q))].

Proof: By induction on the structure of φ. ⊓⊔
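The dual φ∼ is directly implementable by structural recursion. The sketch below uses an ad-hoc tuple encoding of positive formulae; it is only meant to make the recursive definition above concrete.

    # Positive formulas as nested tuples:
    #   ("eq", t, s) | ("neq", t, s) | ("pred", q, args)
    #   ("and", f, g) | ("or", f, g) | ("exists", x, f) | ("forall", x, f)
    def dual(f):
        """Dualize a positive formula, following the definition of phi~."""
        tag = f[0]
        if tag == "eq":      return ("neq", f[1], f[2])
        if tag == "neq":     return ("eq", f[1], f[2])
        if tag == "pred":    return f                       # predicates are self-dual
        if tag == "and":     return ("or", dual(f[1]), dual(f[2]))
        if tag == "or":      return ("and", dual(f[1]), dual(f[2]))
        if tag == "exists":  return ("forall", f[1], dual(f[2]))
        if tag == "forall":  return ("exists", f[1], dual(f[2]))
        raise ValueError(f"not a positive formula: {f}")

    # (x ~ y) \/ exists z. q(z)   dualizes to   not(x ~ y) /\ forall z. q(z)
    phi = ("or", ("eq", "x", "y"), ("exists", "z", ("pred", "q", ("z",))))
    print(dual(phi))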
The Emptiness Problem

The problem of checking emptiness of a given automaton is undecidable, even for automata with predicates of arity two, whose transition rules use only equalities and disequalities and have no transition quantifiers [4]. Since even such simple classes of alternating automata have no general decision procedure for emptiness, we use an abstraction-refinement semi-algorithm based on lazy annotation [18,19]. In a nutshell, a lazy annotation procedure systematically explores the set of execution paths (in our case, sequences of input events) in search of an accepting execution. Each path has a corresponding path formula that defines all words accepted along that path. If the path formula is satisfiable, the automaton accepts a word. Otherwise, the path is said to be spurious. When a spurious path is encountered, the search backtracks and the path is annotated with a set of learned facts that marks this path as infeasible. The semi-algorithm moreover uses a coverage relation between paths, ensuring that the continuations of already covered paths are never explored. Sometimes this coverage relation provides a sound termination argument, when the automaton is empty.

We check emptiness of first-order alternating automata using a version of the IMPACT lazy annotation semi-algorithm [18]. An analogous procedure is given in [11], for a simpler model of alternating automata that uses only predicates of arity zero (booleans) and no transition quantifiers. For simplicity, we do not present the details of this algorithm and shall content ourselves with several high-level definitions.

Given a finite input event alphabet Σ, for two sequences α, β ∈ Σ*, we say that α is a prefix of β, written α ⪯ β, if β = αγ for some sequence γ ∈ Σ*. A set S of sequences is: prefix-closed if for each α ∈ S, if β ⪯ α then β ∈ S; and complete if for each α ∈ S, there exists a ∈ Σ such that αa ∈ S if and only if αb ∈ S for all b ∈ Σ. Observe that a prefix-closed set is the backbone of a tree whose edges are labeled with input events. If the set is, moreover, complete, then every node of the tree either has zero successors, in which case it is called a leaf, or it has a successor edge labeled with a for each input event a ∈ Σ.

Definition 5. An unfolding of an automaton A = ⟨Σ, X, Q, ι, F, ∆⟩ is a finite partial mapping U : Σ* ⇀_fin Form⁺(Q, ∅), such that:
1. dom(U) is a finite prefix-closed complete set,
2. U(ε) = ι, and
3. for each sequence αa ∈ dom(U), such that α ∈ Σ* and a ∈ Σ:

    U(α)^(0) ∧ ⋀_{(q(y) −a(X)→ ψ) ∈ ∆} ∀y_1 ... ∀y_#(q) . q^(0)(y) → ψ^(1) |= U(αa)^(1)

Moreover, U is safe if, for each α ∈ dom(U), the formula U(α) ∧ ⋀_{q∈Q\F} ∀y_1 ... ∀y_#(q) . q(y) → ⊥ is unsatisfiable.

Lazy annotation semi-algorithms [18,19] build unfoldings of automata, trying to discover counterexamples for emptiness. If the automaton A in question is non-empty, a systematic enumeration of the input event sequences from Σ* will suffice to discover a word w ∈ L(A), provided that the first-order theory of the data domain D is decidable (Lemma 2). However, if L(A) = ∅, the enumeration of input event sequences may, in principle, run forever. The typical way of fighting this divergence problem is to define a coverage relation between the nodes of the unfolding tree.
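The overall shape of the lazy annotation procedure can be summarized by the following schematic skeleton. It is deliberately loose: satisfiability of Υ(α), interpolation-based refinement, and entailment between annotations are left as user-supplied oracles, the coverage test (Definition 6 below) is simplified to a label-entailment check on the node itself, and the depth bound only serves to keep the sketch terminating; it is not part of the actual semi-algorithm.

    def check_emptiness(events, accepts, refine, entails, max_depth=8):
        """Schematic IMPACT-style search. Oracles (assumptions of this sketch):
           accepts(alpha)        -- is the acceptance formula Upsilon(alpha) sat?
           refine(alpha, labels) -- strengthen annotations along a spurious alpha
           entails(phi, psi)     -- does annotation phi entail annotation psi?"""
        labels = {(): "iota"}            # unfolding U: sequence -> annotation
        worklist = [()]
        uncovered = {()}
        while worklist:
            alpha = worklist.pop()
            # simplified coverage: skip alpha if it entails an uncovered label
            if any(beta != alpha and entails(labels[alpha], labels[beta])
                   for beta in uncovered):
                uncovered.discard(alpha)
                continue
            if accepts(alpha):
                return ("non-empty", alpha)   # counterexample input sequence
            refine(alpha, labels)             # alpha is spurious: annotate it
            if len(alpha) < max_depth:
                for a in events:
                    child = alpha + (a,)
                    labels.setdefault(child, "true")
                    uncovered.add(child)
                    worklist.append(child)
        return ("possibly empty", None)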
Definition 6. Given an unfolding U of an automaton A = ⟨Σ, X, Q, ι, F, ∆⟩, a node α ∈ dom(U) is covered by another node β ∈ dom(U), denoted α ⊑ β, if and only if there exists a node α' ⪯ α such that U(α') |= U(β). Moreover, U is closed if and only if every leaf from dom(U) is covered by an uncovered node.

A lazy annotation semi-algorithm stops and reports emptiness provided that it succeeds in building a closed and safe unfolding of the automaton. Notice that, by Definition 6, for any three nodes α, β, γ ∈ dom(U) of an unfolding U, if α ⪯ β and α ⊑ γ, then β ⊑ γ as well. As we show next (Theorem 2), there is no need to expand covered nodes because, intuitively, there exists a word w ∈ L(A) such that α ⪯ w_Σ and α ⊑ γ only if there exists another word u ∈ L(A) such that γ ⪯ u_Σ. Hence, exploring only those input event sequences that are continuations of γ (and ignoring those of α) suffices in order to find a counterexample for emptiness, if one exists.

An unfolding node α ∈ dom(U) is said to be spurious if and only if Υ(α) is unsatisfiable. In this case, we change (refine) the labels of (some of the) prefixes of α (and of α itself), such that U(α) becomes ⊥, thus indicating that there is no real execution of the automaton along that input event sequence. As a result of the change of labels, if a node γ ⪯ α used to cover another node from dom(U), it might not cover it with the new label. Therefore, the coverage relation has to be recomputed after each refinement of the labeling. The semi-algorithm stops when (and if) a safe complete unfolding has been found. For a detailed presentation of the emptiness procedure, we refer to [11].

Theorem 2. If an automaton A has a nonempty safe closed unfolding, then L(A) = ∅.

Proof: Let U be a safe and closed unfolding of A, such that dom(U) ≠ ∅. Suppose, by contradiction, that there exists a word w ∈ L(A) and let α := w_Σ. Since w ∈ L(A), by Lemma 2 there exists an interpretation I such that I, w_D |= Υ(α). Assume first that α ∈ dom(U). In this case, one can show, by induction on the length n ≥ 0 of w, that Θ(α) |= U(α)^(n), thus I, w_D |= U(α)^(n). Since I, w_D |= Υ(α), we have I, w_D |= ⋀_{q∈Q\F} ∀y_1 ... ∀y_#(q) . q^(n)(y) → ⊥, hence U(α)^(n) ∧ ⋀_{q∈Q\F} ∀y_1 ... ∀y_#(q) . q^(n)(y) → ⊥ is satisfiable. By renaming q^(n) with q in this formula, we obtain that U(α) ∧ ⋀_{q∈Q\F} ∀y_1 ... ∀y_#(q) . q(y) → ⊥ is satisfiable, thus U is not safe, contradiction.

We proceed thus under the assumption that α ∉ dom(U). Since dom(U) is a nonempty prefix-closed set, there exists a strict prefix α' of α that is a leaf of dom(U). Since U is closed, the leaf α' must be covered, so let α_1 ⪯ α' ⪯ α be a node such that U(α_1) |= U(β_1), for some uncovered node β_1 ∈ dom(U). Let γ_1 be the unique sequence such that α_1γ_1 = α. By Definition 6, since α_1 ⊑ β_1 and w_Σ = α_1γ_1 ∈ L(A), there exist a word w_1 and a cube c_1 ∈ c([[U(α_1)]]) ⊆ c([[U(β_1)]]) such that w_1Σ = γ_1 and A accepts w_1 starting with c_1. If β_1γ_1 ∈ dom(U), we obtain a contradiction by a similar argument as above. Hence β_1γ_1 ∉ dom(U) and there exists a leaf of dom(U) which is also a prefix of β_1γ_1. Since U is closed, this leaf is covered by an uncovered node β_2 ∈ dom(U), and let α_2 ∈ dom(U) be the minimal (in the prefix partial order) node such that β_1 ⪯ α_2 ⪯ β_1γ_1 and α_2 ⊑ β_2. Let γ_2 be the unique sequence such that α_2γ_2 = β_1γ_1. Since β_1 is uncovered, we have β_1 ≺ α_2 and thus |γ_1| > |γ_2|.
By repeating the above reasoning for α_2, β_2 and γ_2, we obtain an infinite sequence |γ_1| > |γ_2| > ..., which is again a contradiction. ⊓⊔

As mentioned above, we check emptiness of first-order alternating automata using the same method previously used to check emptiness of a simpler model of alternating automata, which uses boolean constants for control states and whose transition rules have no quantifiers [11]. The higher complexity of the automata model considered here manifests itself within the interpolant generation procedure, used to refine the labeling of the unfolding. We discuss the generation of interpolants in the next section.

Interpolant Generation

Typically, when checking the unreachability of a set of program configurations [18], the interpolants used to annotate the unfolded control structure are assertions about the values of the program variables in a given control state, at a certain step of an execution. However, in an alternating model of computation, it is useful to distinguish between (i) locality of interpolants w.r.t. a given control state (control locality) and (ii) locality w.r.t. a given time stamp (time locality). In logical terms, control-local interpolants are defined by formulae involving a single predicate symbol, whereas time-local interpolants involve only predicates q^(i) and variables x^(i), for a single i ≥ 0.

Remark. When considering an alternating model of computation, control-local interpolants are not always enough to prove emptiness, because of the synchronization of several branches of the computation on the same sequence of input values. Consider, for instance, an automaton with the following transition rules and final state q_f:

    q_0(y) −a(x)→ q_1(y + x) ∧ q_2(y − x)
    q_1(y) −a(x)→ y + x > 0 ∧ q_f        q_1(y) −a(x)→ q_1(y + x)
    q_2(y) −a(x)→ y − x > 0 ∧ q_f        q_2(y) −a(x)→ q_2(y − x)

Started in an initial configuration q_0(0) with an input word (a, ν_1) ... (a, ν_{n−1})(a, ν_n), such that ν_i(x) = k_i, the automaton executes as follows:

    q_0(0) −(a,ν_1)→ {q_1(k_1), q_2(−k_1)} ... −(a,ν_{n−1})→ {q_1(Σ_{i=1}^{n−1} k_i), q_2(−Σ_{i=1}^{n−1} k_i)} −(a,ν_n)→ ∅

An overapproximation of the set of cubes generated after one or more steps is defined by the formula ∃x_1 ∃x_2 . q_1(x_1) ∧ q_2(x_2) ∧ x_1 + x_2 ≈ 0. Observe that a control-local formula using one occurrence of a predicate would give a too rough overapproximation of this set, unable to prove the emptiness of the automaton.

First, let us give the formal definition of the class of interpolants we shall work with. Given a formula φ, the vocabulary of φ, denoted V(φ), is the set of predicate symbols q ∈ Q^(i) and variables x ∈ X^(i) occurring in φ, for some i ≥ 0. For a term t, the vocabulary V(t) is the set of variables that occur in t. Observe that quantified variables and the interpreted function symbols of the data theory do not belong to the vocabulary of a formula. By P⁺(φ) [P⁻(φ)] we denote the set of predicate symbols that occur in φ under an even [odd] number of negations.

Definition 7 ([17]). Given formulae φ and ψ such that φ ∧ ψ is unsatisfiable, a Lyndon interpolant is a formula I such that φ |= I, the formula I ∧ ψ is unsatisfiable, V(I) ⊆ V(φ) ∩ V(ψ), P⁺(I) ⊆ P⁺(φ) ∩ P⁺(ψ) and P⁻(I) ⊆ P⁻(φ) ∩ P⁻(ψ).

In the rest of this section, let us fix an automaton A = ⟨Σ, X, Q, ι, F, ∆⟩.
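The synchronization phenomenon in the Remark can be observed by simulating the universal branches explicitly. The sketch below follows only the accumulating rules (the nondeterministic choices leading to q_f are omitted) and checks the cross-branch invariant x_1 + x_2 = 0.

    # Minimal simulation of the run sketched in the Remark: starting from
    # q0(0), the first input value k spawns synchronized branches q1 and q2,
    # which then accumulate +k and -k respectively at each step.
    def run(ks):
        config = {("q0", 0)}
        for k in ks:
            nxt = set()
            for (state, v) in config:
                if state == "q0":
                    nxt |= {("q1", v + k), ("q2", v - k)}   # universal branch
                elif state == "q1":
                    nxt.add(("q1", v + k))
                elif state == "q2":
                    nxt.add(("q2", v - k))
            config = nxt
        return config

    c = run([3, -1, 4])
    print(c)                                    # {('q1', 6), ('q2', -6)}
    assert sum(v for (_, v) in c) == 0          # the cross-branch invariant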
Due to the above observation, none of the interpolants considered here will be control-local, and we shall use the term local to denote time-local interpolants with no free variables.
Definition 8. Given a non-empty sequence of input events α = a₁…a_n ∈ Σ*, a generalized Lyndon interpolant (GLI) is a sequence (I₀, …, I_n) of formulae such that, for all k ∈ [n−1]:
1. P⁻(I_k) = ∅;
2. ι^(0) ⊨ I₀ and I_k ∧ ⋀_{q(ȳ) −a_{k+1}(X)→ ψ ∈ Δ} ∀y₁…∀y_{#(q)}. q^(k)(ȳ) → ψ^(k+1) ⊨ I_{k+1};
3. I_n ∧ ⋀_{q∈Q\F} ∀y₁…∀y_{#(q)}. q(ȳ) → ⊥ is unsatisfiable.
Moreover, the GLI is local if and only if V(I_k) ⊆ Q^(k), for all k ∈ [n].
The following proposition states the existence of local GLIs for the theories in which Lyndon's Interpolation Theorem holds.
Proposition 1. If there exists a Lyndon interpolant for any two formulae φ and ψ such that φ ∧ ψ is unsatisfiable, then any sequence of input events α = a₁…a_n ∈ Σ*, such that Υ(α) is unsatisfiable, has a local GLI (I₀, …, I_n).
Proof: By definition, Υ(α) is the formula ι^(0) ∧ ⋀_{i=1}^n φ_i ∧ ψ, where we define
φ_i := ⋀_{q(ȳ) −a_i(X)→ ψ ∈ Δ} ∀y₁…∀y_{#(q)}. q^(i−1)(ȳ) → ψ^(i), for all i ∈ [1,n],
ψ := ⋀_{q∈Q\F} ∀y₁…∀y_{#(q)}. q^(n)(ȳ) → ⊥.
Observe that V(ι^(0)) ⊆ Q^(0), V(φ_i) ⊆ Q^(i−1) ∪ Q^(i) ∪ X^(i) for all i ∈ [1,n], and V(ψ) ⊆ Q^(n). We apply Lyndon's Interpolation Theorem to the formulae ι^(0) and ⋀_{i=1}^n φ_i ∧ ψ and obtain a formula I₀ such that ι^(0) ⊨ I₀, the formula I₀ ∧ ⋀_{i=1}^n φ_i ∧ ψ is unsatisfiable, V(I₀) ⊆ V(ι^(0)) ∩ (⋃_{i=1}^n V(φ_i) ∪ V(ψ)) ⊆ Q^(0) and P⁻(I₀) ⊆ P⁻(ι^(0)) ∩ (⋃_{i=1}^n P⁻(φ_i) ∪ P⁻(ψ)) = ∅. Repeating the reasoning for the formulae I₀ ∧ φ₁ and ⋀_{i=2}^n φ_i ∧ ψ, we obtain I₁ such that I₀ ∧ φ₁ ⊨ I₁, the formula I₁ ∧ ⋀_{i=2}^n φ_i ∧ ψ is unsatisfiable, V(I₁) ⊆ (V(I₀) ∪ V(φ₁)) ∩ (⋃_{i=2}^n V(φ_i) ∪ V(ψ)) ⊆ Q^(1) and P⁻(I₁) ⊆ (P⁻(I₀) ∪ P⁻(φ₁)) ∩ (⋃_{i=2}^n P⁻(φ_i) ∪ P⁻(ψ)) = ∅. Continuing in this way, we obtain formulae I₀, I₁, …, I_n as required. ⊓⊔
The main problem with the local GLI construction described in the proof of Proposition 1 is that the existence of Lyndon interpolants (Definition 7) is guaranteed in principle, but the proof is non-constructive. Building an interpolant for an unsatisfiable conjunction of formulae φ ∧ ψ is typically the job of the decision procedure that proves the unsatisfiability and, in general, there is no such procedure when φ and ψ contain predicates and have non-trivial quantifier alternation. In this case, some provers use instantiation heuristics for the universal quantifiers that are sufficient for proving unsatisfiability; however, these heuristics are not always suitable for interpolant generation. Consequently, from now on, we assume the existence of an effective Lyndon interpolation procedure only for decidable theories, such as the quantifier-free linear (integer) arithmetic with uninterpreted functions (UFLIA, UFLRA, etc.) [25]. This is where the predicate-free path formulae (Definition 4) come into play. For a given event sequence α, the automaton A accepts a word w such that w_Σ = α if and only if Ῡ(α) is satisfiable. Assuming further that the equality atoms in the transition rules of A are written in the language of a decidable first order theory, such as Presburger arithmetic, Lemma 5 (for every word w ∈ Σ[X]*, we have w_D ⊨ Ῡ(w_Σ) if and only if w ∈ L(A)) gives us an effective way of checking emptiness of A, relative to a given event sequence.
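The loop below is a sketch of the sequential interpolant computation from the proof of Proposition 1. The function `lyndon_interpolant(phi, psi)` is a hypothetical oracle returning a Lyndon interpolant of an unsatisfiable conjunction φ ∧ ψ; neither it nor `conj` refers to any concrete solver API.

```python
# Sequential GLI construction: repeatedly split the unsatisfiable conjunction
# iota0 /\ phi_1 /\ ... /\ phi_n /\ psi and interpolate at each split point.
from functools import reduce

def generalized_interpolant(iota0, phis, psi, lyndon_interpolant, conj):
    """iota0: formula iota^(0); phis: [phi_1, ..., phi_n]; psi: final conjunct.
    conj(a, b) builds a conjunction. Returns the sequence (I_0, ..., I_n)."""
    interpolants = []
    left = iota0
    for i in range(len(phis) + 1):
        right = reduce(conj, phis[i:], psi)   # phi_{i+1} /\ ... /\ phi_n /\ psi
        I = lyndon_interpolant(left, right)   # left /\ right is unsat by assumption
        interpolants.append(I)
        if i < len(phis):
            left = conj(I, phis[i])           # next split: (I_i /\ phi_{i+1}) vs rest
    return interpolants
```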
However, checking emptiness via Lemma 5 does not cope well with lazy annotation, because there is no way to extract, from the unsatisfiability proof of Ῡ(α), the interpolants needed to annotate α. This is because (i) the formula Ῡ(α), obtained by repeated substitutions (Definition 4), loses track of the steps of the execution, and (ii) the quantifiers that occur nested in Υ(α) make it difficult to write Υ(α) as an unsatisfiable conjunction of formulae from which interpolants are extracted (Definition 7). The solution we adopt for the first issue (i) consists in partially recovering the time-stamped structure of the acceptance formula Υ(α) using the formula Υ̂(α), in which only transition quantifiers occur (recall, by Lemma 4, that for any input event sequence α = a₁…a_n and each valuation ν: X^(≤n) → D, there exists an interpretation I such that I, ν ⊨ Υ̂(α) if and only if ν ⊨ Ῡ(α)). The second issue (ii) is solved under the additional assumption that the theory of the data domain D has witness-producing quantifier elimination. More precisely, we assume that, for each formula ∃x. φ(x), there exists an effectively computable term τ, in which x does not occur, such that ∃x. φ and φ[τ/x] are equisatisfiable. These terms, called witness terms in the following, are actual definitions of the Skolem function symbols from the following folklore theorem:
Theorem 3 ([2]). Given a first order sentence Q₁x₁…Q_nx_n. φ, where Q₁, …, Q_n ∈ {∃,∀} and φ is quantifier-free, let η_i := f_i(y₁,…,y_{k_i}) if Q_i = ∀ and η_i := x_i if Q_i = ∃, where f_i is a fresh function symbol and {y₁,…,y_{k_i}} = {x_j | j < i, Q_j = ∃}. Then the entailment Q₁x₁…Q_nx_n. φ ⊨ φ[η₁/x₁,…,η_n/x_n] holds.
Proof: See [2, Theorem 2.1.8] and [2, Lemma 2.1.9]. ⊓⊔
Examples of witness-producing quantifier elimination procedures can be found in the literature, e.g. for linear integer (real) arithmetic (LIA, LRA), Presburger arithmetic and the boolean algebra of sets with Presburger cardinality constraints (BAPA) [16].
Under the assumption that witness terms can be effectively built, let us describe the generation of a non-local GLI for a given input event sequence α = a₁…a_n. First, we generate successively the acceptance formula Υ(α) and its equisatisfiable forms Υ̂(α) = Q₁x₁…Q_mx_m. Φ̂ and Ῡ(α) = Q₁x₁…Q_mx_m. Φ̄, both written in prenex form, with matrices Φ̂ and Φ̄, respectively. Because we assumed that the first order theory of D has quantifier elimination, the satisfiability problem for Ῡ(α) is decidable. If Ῡ(α) is satisfiable, we build a counterexample for emptiness w such that w_Σ = α and w_D is a satisfying assignment of Ῡ(α). Otherwise, Ῡ(α) is unsatisfiable and there exist witness terms τ_{i₁}, …, τ_{i_ℓ}, where {i₁,…,i_ℓ} = {j ∈ [1,m] | Q_j = ∀}, such that Φ̄[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}] is unsatisfiable (Theorem 3). It then turns out that the formula Φ̂[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}], obtained analogously from the matrix of Υ̂(α), is unsatisfiable as well (Lemma 6). Because this latter formula is structured as a conjunction ι^(0) ∧ φ₁ ∧ … ∧ φ_n ∧ ψ, where V(φ_k) ∩ Q^(≤n) ⊆ Q^(k−1) ∪ Q^(k) and V(ψ) ∩ Q^(≤n) ⊆ Q^(n), it is now possible to use an existing interpolation procedure for the quantifier-free theory of D, extended with uninterpreted function symbols, to compute a non-local GLI (I₀, …, I_n) such that V(I_k) ∩ Q^(≤n) ⊆ Q^(k), for all k ∈ [n].
Example 3 (contd. from Examples 1 and 2). The formula Ῡ(α) (Example 2) is unsatisfiable; let τ₂ = z₁ be the witness term for the universally quantified variable z₂. Replacing z₂ with τ₂ in the matrix of Υ̂(α) (Example 1) yields the unsatisfiable conjunction:
z₁ ≥ 0 ∧ q^(0)(z₁) ∧ [ q^(0)(z₁) → x^(1) ≥ 0 ∧ (z₁ ≥ z₁ → q^(1)(x^(1)+z₁)) ] ∧ [ q^(1)(x^(1)+z₁) → x^(1)+z₁ < 0 ∧ q_f^(2)(x^(2)+x^(1)+z₁) ]
A non-local GLI for the above is ( q^(0)(z₁) ∧ z₁ ≥ 0,  x^(1) ≥ 0 ∧ q^(1)(x^(1)+z₁) ∧ z₁ ≥ 0,  ⊥ ).
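The unsatisfiability of the instantiated conjunction of Example 3 can be checked mechanically in the decidable quantifier-free fragment discussed above; the snippet below does so with the Z3 SMT solver's Python API (pip install z3-solver), modeling the predicates q^(0), q^(1), q_f^(2) as uninterpreted Bool-valued functions. This is only an illustration of the fragment, not the FOADA implementation.

```python
from z3 import Real, Function, RealSort, BoolSort, And, Implies, Solver, unsat

z1, x1, x2 = Real('z1'), Real('x1'), Real('x2')
q0 = Function('q0', RealSort(), BoolSort())
q1 = Function('q1', RealSort(), BoolSort())
qf = Function('qf', RealSort(), BoolSort())

s = Solver()
s.add(z1 >= 0, q0(z1))
s.add(Implies(q0(z1), And(x1 >= 0, Implies(z1 >= z1, q1(x1 + z1)))))
s.add(Implies(q1(x1 + z1), And(x1 + z1 < 0, qf(x2 + x1 + z1))))
assert s.check() == unsat   # x1 >= 0 and z1 >= 0 contradict x1 + z1 < 0
print("Example 3, instantiated matrix: unsat")
```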
A function ξ: N → N is [strictly] monotonic iff for all n < m we have ξ(n) ≤ ξ(m) [ξ(n) < ξ(m)], and finite-range iff for each n ∈ N the set {m | ξ(m) = n} is finite. If ξ is finite-range, we denote by ξ⁻¹_max(n) ∈ N the maximal value m such that ξ(m) = n. The lemma below gives the proof of correctness for the construction of non-local GLIs.
Lemma 6. Given a non-empty input event sequence α = a₁…a_n ∈ Σ*, such that Υ(α) is unsatisfiable, let Q₁x₁…Q_mx_m. Φ̂ be a prenex form of Υ̂(α) and let ξ: [1,m] → [n] be a monotonic function mapping each transition quantifier to the minimal index of the sequence Θ(α₀), …, Θ(α_n) where it occurs. Then one can effectively build:
1. witness terms τ_{i₁}, …, τ_{i_ℓ}, where {i₁,…,i_ℓ} = {j ∈ [1,m] | Q_j = ∀} and V(τ_{i_j}) ⊆ X^(≤ξ(i_j)) ∪ {x_k | k < i_j, Q_k = ∃} for all j ∈ [1,ℓ], such that Φ̂[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}] is unsatisfiable, and
2. a GLI (I₀, …, I_n) for α, such that V(I_k) ⊆ Q^(k) ∪ X^(≤k) ∪ {x_j | j < ξ⁻¹_max(k), Q_j = ∃}, for all k ∈ [n].
Proof: (1) If Υ(α) is unsatisfiable then, by Lemmas 3 and 4, Υ̂(α) and Ῡ(α) are successively unsatisfiable. Let Q₁x₁…Q_mx_m. Φ̂ and Q₁x₁…Q_mx_m. Φ̄ be prenex forms of Υ̂(α) and Ῡ(α), respectively. Since we assumed that the first order theory of the data domain has witness-producing quantifier elimination, using Theorem 3 one can effectively build witness terms τ_{i₁}, …, τ_{i_ℓ}, where {i₁,…,i_ℓ} = {i ∈ [1,m] | Q_i = ∀}, such that V(τ_{i_j}) ⊆ X^(≤ξ(i_j)) ∪ {x_k | k < i_j, Q_k = ∃} for all j ∈ [1,ℓ], and Φ̄[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}] is unsatisfiable. Let Φ̂₀, …, Φ̂_n be the sequence of quantifier-free formulae defined as follows: Φ̂₀ is the matrix of some prenex form of ι^(0) and, for i = 1, …, n, Φ̂_i is the matrix of some prenex form of
Φ̂_{i−1} ∧ ⋀_{q^(i−1)(t₁,…,t_{#(q)}) occurs in Φ̂_{i−1}} ⋀_{q(y₁,…,y_{#(q)}) −a_i(X)→ ψ ∈ Δ} ( q^(i−1)(t₁,…,t_{#(q)}) → ψ^(i)[t₁/y₁,…,t_{#(q)}/y_{#(q)}] ) =: Φ̂_{i−1} ∧ φ̂_i.
It is easy to see that Φ̂ is the matrix of some prenex form of
Φ̂_n ∧ ⋀_{q^(n)(t₁,…,t_{#(q)}) occurs in Φ̂_n, q∈Q\F} ( q^(n)(t₁,…,t_{#(q)}) → ⊥ ) =: Φ̂_n ∧ ψ̂.
Applying the equivalence from Fact 2 in the proof of Lemma 4, we obtain a sequence of quantifier-free formulae Φ̄₀, …, Φ̄_n such that Φ̄_i ≡ Φ̂_i for all i ∈ [n], where Φ̄ is obtained from Φ̄_n by replacing each occurrence of a predicate atom q(t₁,…,t_{#(q)}) by ⊥ if q ∈ Q\F and by ⊤ if q ∈ F. Consequently, Φ̂[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}] ≡ Φ̄[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}] ≡ ⊥.
(2) With the notation introduced at point (1), we have Φ̂ = Φ̂₀ ∧ ⋀_{i=1}^n φ̂_i ∧ ψ̂. Consider the witness terms τ_{i₁}, …, τ_{i_ℓ}, whose existence is proved at point (1). Because V(τ_{i_j}) ⊆ X^(≤ξ(i_j)) ∪ {x_k | k < i_j, Q_k = ∃} for all j ∈ [1,ℓ], and moreover ξ⁻¹_max is strictly monotonic, we obtain:
V(Φ̂₀[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}]) ⊆ Q^(0) ∪ X^(0) ∪ {x_j | j < ξ⁻¹_max(0), Q_j = ∃},
V(φ̂_i[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}]) ⊆ Q^(i−1) ∪ Q^(i) ∪ X^(≤i) ∪ {x_j | j < ξ⁻¹_max(i), Q_j = ∃}, for all i ∈ [1,n],
V(ψ̂[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}]) ⊆ Q^(n) ∪ X^(≤n) ∪ {x_j | j ∈ [1,m], Q_j = ∃}.
By repeatedly applying Lyndon's Interpolation Theorem, we obtain a sequence of formulae (I₀, …, I_n) such that:
Φ̂₀[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}] ⊨ I₀ and V(I₀) ⊆ Q^(0) ∪ X^(0) ∪ {x_j | j < ξ⁻¹_max(0), Q_j = ∃},
I_{k−1} ∧ φ̂_k[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}] ⊨ I_k and V(I_k) ⊆ Q^(k) ∪ X^(≤k) ∪ {x_j | j < ξ⁻¹_max(k), Q_j = ∃}, for all k ∈ [1,n],
I_n ∧ ψ̂[τ_{i₁}/x_{i₁}, …, τ_{i_ℓ}/x_{i_ℓ}] is unsatisfiable.
To show that (I₀, …, I_n) is a GLI for a₁…a_n, it is sufficient to notice that ⋀_{q(ȳ) −a_k(X)→ ψ ∈ Δ} ∀y₁…∀y_{#(q)}. q^(k−1)(ȳ) → ψ^(k) ⊨ φ̂_k, for all k ∈ [1,n]. Consequently, we obtain:
ι^(0) ⊨ Φ̂₀ ⊨ I₀, by Theorem 3,
I_{k−1} ∧ ⋀_{q(ȳ) −a_k(X)→ ψ ∈ Δ} ∀y₁…∀y_{#(q)}. q^(k−1)(ȳ) → ψ^(k) ⊨ I_{k−1} ∧ φ̂_k ⊨ I_k, and
I_n ∧ ⋀_{q∈Q\F} ∀y₁…∀y_{#(q)}. q(ȳ) → ⊥ ⊨ I_n ∧ ψ̂ ⊨ ⊥,
as required by Definition 8. ⊓⊔
In conclusion, under two assumptions about the first order theory of the data domain, namely (i) witness-producing quantifier elimination and (ii) Lyndon interpolation for the quantifier-free fragment with uninterpreted functions, we have developed a rather generic method that produces generalized Lyndon interpolants for infeasible input event sequences. Moreover, each formula I_k of the interpolant refers only to the current predicate symbols Q^(k), the current and past input variables X^(≤k) and the existentially quantified transition variables introduced at the previous steps, {x_j | j < ξ⁻¹_max(k), Q_j = ∃}. The remaining question is how to use such non-local interpolants to label the unfolding of an automaton (Definition 5) and to compute the coverage between nodes of the unfolding (Definition 6).
Unfolding with Non-local Interpolants
As required by Definition 5, the unfolding U of an automaton A = ⟨Σ, X, Q, ι, F, Δ⟩ is labeled by formulae U(α) ∈ Form⁺(Q, ∅), with no free symbols other than predicate symbols, such that the labeling is compatible with the transition relation of the automaton, according to point (3) of Definition 5. The following lemma describes the refinement of the labeling of an input sequence α of length n by a non-local GLI (I₀, …, I_n), such that V(I_k) ⊆ Q^(k) ∪ X^(≤k) ∪ x̄_k, where x̄_k are the existentially quantified variables from the prenex normal form of Υ̂(α_k).
Lemma 7. Let U be an unfolding of an automaton A = ⟨Σ, X, Q, ι, F, Δ⟩ such that α = a₁…a_n ∈ dom(U), and let (I₀, …, I_n) be a GLI for α. Then the mapping U′: dom(U) → Form⁺(Q, ∅) defined as
U′(α_k) = U(α_k) ∧ J_k, for all k ∈ [n], where J_k is the formula obtained from I_k by replacing each time-stamped predicate symbol q^(k) by q and existentially quantifying each free variable of I_k, and
U′(β) = U(β), if β ∈ dom(U) and β is not a prefix of α,
is an unfolding of A.
Proof: The new set of formulae U′(α₀), …, U′(α_n) complies with Definition 5 because: U′(α₀) ≡ ι, since, by point 2 of Definition 8, we have ι^(0) ⊨ I₀, thus ι ⊨ J₀ and U′(α₀) = U(α₀) ∧ J₀ ≡ ι ∧ J₀ ≡ ι. Next, write I_k^(j) for the formula in which each predicate symbol q^(k) of I_k is replaced by q^(j). By point 2 of Definition 8 we have, for all k ∈ [n−1]:
I_k^(0) ∧ ⋀_{q(ȳ) −a_{k+1}(X)→ ψ ∈ Δ} ∀y₁…∀y_{#(q)}. q^(0)(ȳ) → ψ^(1) ⊨ I_{k+1}^(1).
Because J_k is obtained by removing the time stamps from the predicate symbols and existentially quantifying all the free variables of I_k, we also obtain, applying Fact 4 below:
J_k^(0) ∧ ⋀_{q(ȳ) −a_{k+1}(X)→ ψ ∈ Δ} ∀y₁…∀y_{#(q)}. q^(0)(ȳ) → ψ^(1) ⊨ J_{k+1}^(1).
Since U satisfies the labeling condition (3) of Definition 5 and U′(α_k) = U(α_k) ∧ J_k, we obtain, as required:
U′(α_k)^(0) ∧ ⋀_{q(ȳ) −a_{k+1}(X)→ ψ ∈ Δ} ∀y₁…∀y_{#(q)}. q^(0)(ȳ) → ψ^(1) ⊨ U′(α_{k+1})^(1).
Fact 4. Given formulae φ(x̄, ȳ) and ψ(x̄) such that φ(x̄, ȳ) ⊨ ψ(x̄), we also have ∃x̄. φ(x̄, ȳ) ⊨ ∃x̄. ψ(x̄).
Proof: For each choice of a valuation of the existentially quantified variables on the left-hand side, we choose the same valuation for the variables on the right-hand side. ⊓⊔ ⊓⊔
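For the first of the two assumptions above, plain (non-witness-producing) quantifier elimination can be demonstrated with Z3's 'qe' tactic, as in the sketch below. Note that the witness-producing variant assumed by Lemma 6, which additionally returns the terms τ_i, is not exposed by this API and is treated throughout as a black box.

```python
# Quantifier elimination in linear integer arithmetic with Z3 (illustration).
from z3 import Ints, Exists, And, Tactic

x, y = Ints('x y')
f = Exists([x], And(x >= 0, y > x))   # exists x. x >= 0 /\ y > x
print(Tactic('qe')(f))                # an equivalent quantifier-free goal, e.g. [[y >= 1]]
```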
Observe that, by Lemma 6 (2), the set of free variables of a GLI formula I_k consists of (i) variables X^(≤k) keeping track of data values seen in the input at some earlier moment in time, and (ii) variables that track past choices made within the transition rules. Basically, it is not important when exactly in the past a certain input has been read or when a choice has been made, as only the value of the variable determines the future behavior. Intuitively, the existential quantification of these variables does the job of ignoring when in the past these values have been seen.
The last ingredient of the lazy annotation semi-algorithm based on unfoldings is the implementation of the coverage check, when the unfolding of an automaton is labeled with conjunctions of existentially quantified formulae with predicate symbols, obtained from interpolation. By Definition 6, checking whether a given node α ∈ dom(U) is covered amounts to finding a prefix α′ ⪯ α and a node β ∈ dom(U) such that U(α′) ⊨ U(β), or, equivalently, such that the formula U(α′) ∧ ¬U(β) is unsatisfiable. However, the latter formula, in prenex form, has a quantifier prefix in the language ∃*∀* and, as previously mentioned, the satisfiability problem for such formulae becomes undecidable when the data theory subsumes Presburger arithmetic [8]. Nevertheless, if we require just a yes/no answer (i.e., not an interpolant), recently developed quantifier instantiation heuristics [24] perform rather well in answering a large number of queries in this class. Observe, moreover, that coverage does not need to rely on a complete decision procedure: if the prover fails to answer the above satisfiability query, the semi-algorithm assumes that the node is not covered and continues exploring its successors. Failure to compute complete coverage may lead to divergence (non-termination) and, ultimately, to failure to prove emptiness, but it does not affect the soundness of the semi-algorithm (real counterexamples will still be found).
Applications
The main application of first order alternating automata is checking inclusion between various classes of automata, extended with variables ranging over infinite domains, that recognize languages over infinite alphabets. The most widely known such classes are timed automata [1] and finite-memory (register) automata [14]. In both cases, complementation is not possible inside the class and inclusion is undecidable. Our contribution is providing a systematic semi-algorithm for these decision problems. In addition, the method described in §4 can extend our previous generic register automata inclusion checking framework [10], by allowing monitor (right-hand side) automata to have local variables that are not visible in the language.
Another application is checking safety (mutual exclusion, absence of deadlocks, etc.) and liveness (termination, lack of starvation, etc.) properties of parameterized concurrent programs, consisting of an unbounded number of replicated threads that communicate via a fixed set of global variables (locks, counters, etc.). The verification of parametric programs has been reduced to checking the emptiness of a (possibly infinite) sequence of first order alternating automata, called predicate automata [4,5], encoding the inclusion of the set of traces of a parametric concurrent program into increasingly general proof spaces, obtained by generalization of counterexamples. The program and the proof spaces are first order alternating automata over the infinite alphabet of pairs consisting of program statements and thread identifiers.
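The snippet below sketches a coverage query U(α′) ⊨ U(β) as an SMT satisfiability check with Z3. The two labels are made-up toy formulas, not from the paper; following the text, the query lies in the ∃*∀* fragment, so the solver may answer unknown, which is conservatively treated as "not covered".

```python
from z3 import Int, Function, IntSort, BoolSort, And, Not, Exists, Solver, unsat

q1 = Function('q1', IntSort(), BoolSort())
x, y = Int('x'), Int('y')

label_alpha = Exists([x], And(q1(x), x >= 1))   # U(alpha'): toy label
label_beta = Exists([y], And(q1(y), y >= 0))    # U(beta): a weaker toy label

s = Solver()
s.add(label_alpha, Not(label_beta))             # alpha' covered by beta iff unsat
print("covered" if s.check() == unsat else "assume not covered")
```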
For the proofs in this section, we recall the execution semantics of first order alternating automata and two lemmas from the previous sections. Given a formula φ and a valuation ν, ν[x ← d] is the valuation which assigns d to x and behaves like ν elsewhere. We define [[φ]]_ν := {I | I, ν ⊨ φ} and drop the ν subscript for sentences. A sentence φ is satisfiable (unsatisfiable) if [[φ]] ≠ ∅ ([[φ]] = ∅); an element of [[φ]] is called a model of φ. A formula φ is valid if I, ν ⊨ φ for every interpretation I and every valuation ν. For two formulae φ and ψ, we write φ ⊨ ψ for [[φ]] ⊆ [[ψ]], in which case we say that φ entails ψ. Interpretations are partially ordered by the pointwise subset order, defined as I₁ ⊆ I₂ if and only if p^{I₁} ⊆ p^{I₂} for each predicate p ∈ Pred. Given a set S of interpretations, a minimal element I ∈ S is an interpretation such that for no other interpretation I′ ∈ S \ {I} do we have I′ ⊆ I. For a formula φ and a valuation ν, we denote by [[φ]]^µ_ν and [[φ]]^µ the sets of minimal interpretations from [[φ]]_ν and [[φ]], respectively.
Definition 1. Given a word w = (a₁,ν₁)…(a_n,ν_n) ∈ Σ[X]* and a cube c, an execution of A = ⟨Σ, X, Q, ι, F, Δ⟩ over w, starting with c, is a (possibly infinite) forest T = {T₁, T₂, …}, where each T_i is a tree labeled with configurations, such that: 1. c = {T(ε) | T ∈ T} is the set of configurations labeling the roots of T₁, T₂, …, and 2. if q(d₁,…,d_{#(q)}) labels a node on level j ∈ [n−1] of T_i, then the labels of its children form a cube from c([[ψ]]^µ_η), where η = ν_{j+1}[y₁ ← d₁, …, y_{#(q)} ← d_{#(q)}] and q(y₁,…,y_{#(q)}) −a_{j+1}(X)→ ψ ∈ Δ is a transition rule of A. (Note that a configuration is not a logical term, since data values cannot be written in logic.)
Definition 2. An execution T over w, starting with c, is accepting if and only if all paths in T have the same length n, and the frontier of each tree T ∈ T is labeled with final configurations q(d₁,…,d_{#(q)}), where q ∈ F. If A has an accepting execution over w starting with a cube c ∈ c([[ι]]^µ), then A accepts w, and L(A) denotes the set of words accepted by A.
The basic decision problems considered are: 1. closure: given automata A₁ and A₂, do there exist automata A_∩, A_∪ and Ā₁ such that L(A_∩) = L(A₁) ∩ L(A₂), L(A_∪) = L(A₁) ∪ L(A₂) and L(Ā₁) = Σ[X]* \ L(A₁)? 2. emptiness: given an automaton A, is L(A) = ∅?
We also recall:
Lemma 4. For any input event sequence α = a₁…a_n and each valuation ν: X^(≤n) → D, there exists an interpretation I such that I, ν ⊨ Υ̂(α) if and only if ν ⊨ Ῡ(α).
Lemma 5. Given an automaton A = ⟨Σ, X, Q, ι, F, Δ⟩, for every word w ∈ Σ[X]*, we have w_D ⊨ Ῡ(w_Σ) if and only if w ∈ L(A).
Proof: By Lemma 2, w ∈ L(A) if and only if I, w_D ⊨ Υ(w_Σ) for some interpretation I. By Lemma 3, there exists an interpretation I such that I, w_D ⊨ Υ(w_Σ) if and only if there exists an interpretation J such that J, w_D ⊨ Υ̂(w_Σ). By Lemma 4, there exists an interpretation J such that J, w_D ⊨ Υ̂(w_Σ) if and only if w_D ⊨ Ῡ(w_Σ). ⊓⊔
Timed Automata
The standard definition of a finite timed word is a sequence of pairs (a₁,τ₁)…(a_n,τ_n) ∈ (Σ × R)*, where R is the set of real numbers, such that 0 ≤ τ_i < τ_{i+1} for all i ∈ [1,n−1]. Intuitively, τ_i is the moment in time at which the input event a_i occurs. Given a set C of clocks, the set Φ(C) of clock constraints is defined inductively as the set of formulae x ≤ c, x ≥ c, ¬δ, δ₁ ∧ δ₂, where x ∈ C, c ∈ Q is a rational constant and δ, δ₁, δ₂ ∈ Φ(C). A timed automaton is a tuple T = ⟨Σ, S, S₀, F, C, E⟩, where Σ is a finite set of input events, S is a finite set of states, S₀, F ⊆ S are the sets of initial and final states, respectively, C is a finite set of clocks and E ⊆ S × Σ × S × 2^C × Φ(C) is the set of transitions (s, a, s′, λ, δ) from state s to state s′ with symbol a, where λ is the set of clocks to be reset and δ is a clock constraint. A run of T over a timed word w = (a₁,τ₁)…(a_n,τ_n) is a sequence (s₀,γ₀)…(s_n,γ_n), where s_i ∈ S and γ_i: C → R are clock valuations, for all i ∈ [n], such that: s₀ ∈ S₀ and γ₀(x) = 0 for all x ∈ C; and for all i ∈ [n], there exists a transition (s_i, a_i, s_{i+1}, λ_i, δ_i) ∈ E such that γ_i + τ_{i+1} − τ_i ⊨ δ_i and, for all x ∈ C, γ_{i+1}(x) = 0 if x ∈ λ_i and γ_{i+1}(x) = γ_i(x) + τ_{i+1} − τ_i otherwise. Here τ₀ := 0 and γ_i + τ_{i+1} − τ_i is the valuation mapping each x ∈ C to γ_i(x) + τ_{i+1} − τ_i. The run is accepting iff s_n ∈ F, in which case T accepts w. As usual, we denote by L(T) the set of finite timed words accepted by T. It is well known that, in general, there is no timed automaton accepting the complement language (Σ × R)* \ L(T) and, moreover, the language inclusion problem is undecidable [1].
Given a timed automaton T = ⟨Σ, S, S₀, F, C, E⟩, we define a first order alternating automaton A_T = ⟨Σ, {t}, Q_T, ι_T, F_T, Δ_T⟩, with a single input variable t ranging over R, such that each timed word w = (a₁,τ₁)…(a_n,τ_n) corresponds to a unique data word d(w) = (a₁,ν₁)…(a_n,ν_n) with ν_i(t) = τ_i for all i ∈ [1,n], and L(A_T) = {d(w) | w ∈ L(T)}. The only difficulty here is capturing the fact that all the clocks of T evolve at the same pace, which is easily done using a technique from [7] that replaces each clock x_i of T by a variable y_i tracking the difference between the values of t and x_i. Formally, if C = {x₁,…,x_k} and S = {s₁,…,s_m}, we define Q_T := {q₁,…,q_m}, where #(q_i) = k+1 for all i ∈ [1,m], ι_T := ⋁_{s_i∈S₀} q_i(0,…,0), F_T := {q_i | s_i ∈ F} and, for each transition (s_i, a, s_j, λ, δ) ∈ E, Δ_T contains the rule:
q_i(y₁,…,y_k,z) −a(t)→ t > z ∧ δ(t−y₁,…,t−y_k) ∧ q_j(y′₁,…,y′_k,t),
where y′_i stands for t if x_i ∈ λ and for y_i otherwise. Moreover, nothing else is in Δ_T. We establish the following connection between a timed automaton and its corresponding first order alternating automaton.
Proposition 2. Given a timed automaton T = ⟨Σ, S, S₀, F, C, E⟩, the first order alternating automaton A_T = ⟨Σ, {t}, Q_T, ι_T, F_T, Δ_T⟩ recognizes the language L(A_T) = {d(w) | w ∈ L(T)}.
Proof: "⊆" Let w = (a₁,ν₁)…(a_n,ν_n) ∈ L(A_T) be a data word. We show the existence of a timed word (a₁,τ₁)…(a_n,τ_n) ∈ L(T), such that ν_i(t) = τ_i for all i ∈ [1,n], by induction on n ≥ 0. In fact, we prove the following stronger statements: 1. each execution of A_T over w starting with a cube c ∈ c([[ι_T]]^µ) is a linear tree, in which each node has at most one child, and 2. for each such execution, T has a run (s_{i₀},γ₀)…(s_{i_n},γ_n) over the timed word (a₁,τ₁)…(a_n,τ_n), whose clock valuations γ_i are encoded by the arguments of the configurations along the execution. The first point is by inspection of ι_T = ⋁_{s_i∈S₀} q_i(0,…,0) and of the rules from Δ_T: indeed, each minimal model of ι_T corresponds to a cube q_i(0,…,0) and each rule has exactly one predicate atom on its right-hand side, thus each node of the execution has at most one successor. The second point is by induction on n ≥ 0.
"⊇" Let w = (a₁,τ₁)…(a_n,τ_n) ∈ L(T) be a timed word. By induction on n ≥ 0, we show that for each run (s_{i₀},γ₀)…(s_{i_n},γ_n) of T over w, A_T has a linear execution over d(w). ⊓⊔
An easy consequence is that the timed language inclusion problem "given timed automata T₁ and T₂, does L(T₁) ⊆ L(T₂)?" is reduced in polynomial time to the emptiness problem L(A_{T₁}) ∩ L(Ā_{T₂}) = ∅, for which §4 provides a semi-algorithm. Observe, moreover, that no transition quantifiers are needed to encode timed automata as first order alternating automata.
Register Automata
Finite-memory automata, most commonly referred to as register automata [14], are among the first attempts at lifting the finite-alphabet restriction of classical Rabin-Scott automata. In a nutshell, a register automaton is a finite-state automaton equipped with a finite set of registers x₁, …, x_r, able to copy input values and compare them with subsequent input. Consequently, basic results from classical automata theory, such as the pumping lemma or the closure under complement, do not hold in this model and, moreover, inclusion of languages recognized by register automata is undecidable [21].
Let Σ be an infinite alphabet, # a symbol not in Σ and r > 0 an integer constant, denoting the number of registers. An assignment is a word v = v₁…v_r such that if v_i = v_j and i ≠ j then v_i = #, for all i,j ∈ [1,r]. We write [v] for the set {v_i | i ∈ [1,r]} of values in the assignment v. A finite-memory (register) automaton is a tuple R = ⟨S, s₀, u, ρ, µ, F⟩, where S is a finite set of states, s₀ ∈ S is the initial state, u = u₁…u_r is the initial assignment, ρ: S → [1,r] is the reassignment partial function, µ ⊆ S × [1,r] × S is the transition relation and F ⊆ S is the set of final states. A run of R over an input word a₁…a_n ∈ Σ* is a sequence (s₀,v₀)…(s_n,v_n) such that v₀ = u and, for all i ∈ [1,n], exactly one of the following holds: (i) there exists k ∈ [1,r] such that a_i = (v_{i−1})_k, in which case v_i = v_{i−1} and (s_{i−1}, k, s_i) ∈ µ; (ii) otherwise a_i ∉ [v_{i−1}], ρ(s_{i−1}) is defined, (v_i)_{ρ(s_{i−1})} = a_i, (v_i)_k = (v_{i−1})_k for each k ∈ [1,r] \ {ρ(s_{i−1})}, and (s_{i−1}, ρ(s_{i−1}), s_i) ∈ µ. Intuitively, if the input symbol is already stored in some register, the automaton moves to the next state provided that, moreover, the transition relation allows it; otherwise it copies the input to the register indicated by the reassignment, erasing its previous value, and moves according to the transition relation.
The translation of register automata to first order alternating automata is quite natural, because registers can be encoded as arguments of predicate atoms. Formally, given a register automaton R = ⟨S, s₀, u, ρ, µ, F⟩, with S = {s₀,…,s_m}, we define the alternating automaton A_R = ⟨{α}, {x}, Q_R, ι_R, F_R, Δ_R⟩, where α ∉ Σ, Q_R := {q₀,…,q_m} with #(q_i) = r for all i ∈ [m], ι_R := q₀(u), F_R := {q_i | s_i ∈ F} and, for each transition (s_i, k, s_j) ∈ µ, Δ_R contains the rule:
q_i(y₁,…,y_r) −α(x)→ (y_k ≈ x ∧ q_j(y₁,…,y_r)) ∨ (⋀_{ℓ=1}^r x ≉ y_ℓ ∧ q_j(y₁,…,y_{k−1},x,y_{k+1},…,y_r)).
Moreover, nothing else is in Δ_R. The connection between register automata and first order alternating automata is stated below.
Proposition 3. Given a register automaton R = ⟨S, s₀, u, ρ, µ, F⟩ over an infinite alphabet Σ, the first order alternating automaton A_R = ⟨{α}, {x}, Q_R, ι_R, F_R, Δ_R⟩ recognizes the language L(A_R) = {(α,a₁)…(α,a_n) | a₁…a_n ∈ L(R)}.
Proof: "⊆" Let w = (α,a₁)…(α,a_n) ∈ L(A_R). First, it is easy to show that each execution of A_R that starts in some cube c ∈ c([[ι_R]]^µ) is a linear tree with labels q_{i₀}(v₀), …, q_{i_n}(v_n), where v₀ = u. Second, by induction on n ≥ 0, we prove that A_R has such a run over w only if R has a run (s_{i₀},v₀), …, (s_{i_n},v_n) over a₁…a_n. "⊇" Let w = a₁…a_n ∈ L(R) and let (s_{i₀},v₀), …, (s_{i_n},v_n) be a run of R over w, with v₀ = u. By induction on n ≥ 0, we can build an execution of A_R over (α,a₁)…(α,a_n) that is a linear tree with labels q_{i₀}(v₀), …, q_{i_n}(v_n). ⊓⊔
Consequently, the language inclusion problem "given register automata R₁ and R₂, does L(R₁) ⊆ L(R₂)?" is reduced in polynomial time to the emptiness problem L(A_{R₁}) ∩ L(Ā_{R₂}) = ∅, for which §4 provides a semi-algorithm. Notice further that the encoding of register automata as first order alternating automata uses no transition quantifiers.
Predicate Automata
The model of predicate automata [4,5] has emerged recently as a tool for checking safety and liveness properties of parameterized concurrent programs, in which there is an unbounded number of replicated threads that communicate via global variables. Predicate automata recognize finite sequences of actions that are pairs (σ,i), where σ is taken from a finite set Σ of program statements and i ∈ N ranges over an unbounded set of thread identifiers. To avoid clutter, we shall view a pair (σ,i) as a data symbol (σ,ν), where ν(x) = i for a designated input variable x. Since thread identifiers can only be compared for equality, the data theory of predicate automata is the first order theory of equality. Moreover, transition quantifiers are only needed for checking termination and, generally, liveness properties [5]. However, the execution semantics of predicate automata differs from that of first order alternating automata in the following detail: initial configurations and successors of predicate automata are defined using the entire sets of models of the initial sentence and transition rules, not just the minimal ones, as in our case. Formally, a run of a predicate automaton P = ⟨Σ, {x}, Q, ι, F, Δ⟩ over a word (a₁,ν₁)…(a_n,ν_n) is a sequence of interpretations I₀, …, I_n such that I₀ ∈ [[ι]] and, for each i ∈ [1,n], each q ∈ Q and each tuple ⟨d₁,…,d_{#(q)}⟩ ∈ I_{i−1}(q), we have I_i ∈ [[ψ]]_ν for each rule q(y₁,…,y_{#(q)}) −a_i(x)→ ψ ∈ Δ, where ν = ν_i[y₁ ← d₁, …, y_{#(q)} ← d_{#(q)}]. The run is accepting if and only if I_n(q) = ∅ for all q ∈ Q \ F. In fact, as shown next, this simpler execution semantics is equivalent, from the language point of view, to the semantics given by Definitions 1 and 2. We believe that the semantics of first order alternating automata based on minimal models is important for its relation to the textbook semantics of boolean alternating automata [3].
Proposition 4. Given a predicate automaton P = ⟨Σ, {x}, Q, ι, F, Δ⟩, let A_P be the first order alternating automaton that has the same description as P. Then L(P) = L(A_P).
Proof: "⊆" Let w = (a₁,ν₁)…(a_n,ν_n) ∈ L(P) be a word and let I₀, …, I_n be an accepting run of P over w. Let I_j^(i) be the interpretation that associates with each predicate q^(i) the set I_j(q), for i,j ∈ [n]. One then builds, by induction on n ≥ 0, an execution T of A_P such that I_T ⊆ ⋃_{i=0}^n I_i^(i), where I_T is the unique interpretation associated with T. Since I₀, …, I_n is accepting, we have I_n^(n)(q^(n)) = ∅ for all q ∈ Q \ F, hence I_T(q^(n)) = ∅ for all q ∈ Q \ F and, consequently, w ∈ L(A_P). "⊇" Let w = (a₁,ν₁)…(a_n,ν_n) ∈ L(A_P) be a word and let T be an accepting execution of A_P over w. We define the sequence of interpretations I₀, …, I_n by I_i(q) = I_T(q^(i)), for each i ∈ [n] and each q ∈ Q. By induction on n ≥ 0, one shows that I₀, …, I_n is a run of P. Moreover, since T is accepting, we have I_n(q) = I_T(q^(n)) = ∅ for each q ∈ Q \ F, thus w ∈ L(P). ⊓⊔
As before, this result enables using the semi-algorithm from §4 for checking emptiness of predicate automata. We point out that, although quantifier-free predicate automata with predicates of arity one are decidable for emptiness [4], currently there is no method for checking emptiness of predicate automata with predicates of arity greater than one, other than the explicit enumeration of cubes. Moreover, no method for dealing with emptiness in the presence of transition quantifiers is known to exist.
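The run semantics of register automata recalled above (cases (i) and (ii)) is made executable in the Python sketch below. The one-register automaton at the bottom is a toy example invented for illustration, not taken from the paper: it accepts exactly the words whose letters all equal the first one.

```python
BLANK = object()   # the symbol '#', not part of the infinite alphabet

def accepts(s0, u, rho, mu, F, word):
    """True iff the register automaton (s0, u, rho, mu, F) accepts `word`."""
    def step(s, v, a):
        succs = []
        for (p, k, q) in mu:
            if p != s:
                continue
            if v[k - 1] == a:                       # case (i): value already stored
                succs.append((q, v))
            elif a not in v and rho.get(s) == k:    # case (ii): fresh value, reassign
                w = list(v); w[k - 1] = a
                succs.append((q, tuple(w)))
        return succs
    frontier = {(s0, tuple(u))}
    for a in word:
        frontier = {c for (s, v) in frontier for c in step(s, v, a)}
    return any(s in F for (s, _) in frontier)

# states {0, 1}; register 1 initially blank; store the first letter, then loop
mu = {(0, 1, 1), (1, 1, 1)}
rho = {0: 1}
print(accepts(0, [BLANK], rho, mu, {1}, "aaa"))   # True
print(accepts(0, [BLANK], rho, mu, {1}, "aab"))   # False
```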
Experimental Results
We have implemented a version of the IMPACT semi-algorithm [18] in a prototype tool called FOADA, which is available online [6]. The tool is written in Java and uses the Z3 SMT solver [27], via the JavaSMT interface [13], for the spuriousness and coverage queries and also for interpolant generation. The experiments were carried out on a MacOS x64, 1.3 GHz Intel Core i5, 8 GB 1867 MHz LPDDR3 machine. The experimental results, reported in Table 1, come from several sources, namely predicate automata models (*.pa) [4,5] available online [22], timed automata inclusion problems (abp.ada, train.ada, rr-crossing.foada), array logic entailments (array rotation.ada, array simple.ada, array shift.ada) and hardware circuit verification (hw1.ada, hw2.ada), initially considered in [10]. The train-simpleN.foada and fischer-mutexN.foada examples are parametric verification problems in which one checks inclusions of the form ⋂_{i=1}^N L(A_i) ⊆ L(B), where A_i is the i-th copy of the same template automaton. The advantage of using FOADA over the INCLUDER tool [9] from [10] is the possibility of having infinite-alphabet automata with hidden (local) variables, whose values are not visible in the input. In particular, this is essential for checking inclusion of timed automata that use internal clocks to control the computation.
References
[1] R. Alur and D. L. Dill. A theory of timed automata. Theor. Comput. Sci., 126(2):183-235, 1994.
[2] E. Börger, E. Grädel, and Y. Gurevich. The Classical Decision Problem. Perspectives in Mathematical Logic. Springer, 1997.
[3] A. K. Chandra, D. C. Kozen, and L. J. Stockmeyer. Alternation. J. ACM, 28(1):114-133, 1981.
[4] A. Farzan, Z. Kincaid, and A. Podelski. Proof spaces for unbounded parallelism. SIGPLAN Not., 50(1):407-420, Jan. 2015.
[5] A. Farzan, Z. Kincaid, and A. Podelski. Proving liveness of parameterized programs. In Proceedings of the 31st Annual ACM/IEEE Symposium on Logic in Computer Science, LICS '16, pages 185-196. ACM, 2016.
[6] First Order Alternating Data Automata (FOADA). https://github.com/cathiec/FOADA.
[7] L. Fribourg. A closed-form evaluation for extended timed automata. Research Report LSV-98-2, Laboratoire Spécification et Vérification, ENS Cachan, France, Mar. 1998.
[8] J. Y. Halpern. Presburger arithmetic with unary predicates is Π¹₁ complete. The Journal of Symbolic Logic, 56(2):637-642, 1991.
[9] R. Iosif, A. Rogalewicz, and T. Vojnar. Abstraction refinement and antichains for trace inclusion of infinite state systems. In Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2016), pages 71-89, 2016.
[10] R. Iosif and X. Xu. Abstraction refinement for emptiness checking of alternating data automata. In Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2018), pages 93-111, 2018.
[11] R. Iosif and X. Xu. First Order Alternation. Technical Report ArXiv 1811.02398, https://arxiv.org/abs/1811.02398, 2018.
[13] JavaSMT. https://github.com/sosy-lab/java-smt.
[14] M. Kaminski and N. Francez. Finite-memory automata. Theoretical Computer Science, 134(2):329-363, 1994.
[15] Z. Kincaid. Parallel Proofs for Parallel Programs. PhD thesis, University of Toronto, 2016.
[16] V. Kuncak, M. Mayer, R. Piskac, and P. Suter. Software synthesis procedures. Commun. ACM, 55(2):103-111, 2012.
[17] R. C. Lyndon. An interpolation theorem in the predicate calculus. Pacific J. Math., 9(1):129-142, 1959.
[18] K. L. McMillan. Lazy abstraction with interpolants. In Proc. of CAV'06, volume 4144 of LNCS. Springer, 2006.
[19] K. L. McMillan. Lazy annotation revisited. In CAV 2014, Proceedings, pages 243-259. Springer International Publishing, 2014.
[20] G. Nelson and D. C. Oppen. Fast decision procedures based on congruence closure. J. ACM, 27(2):356-364, Apr. 1980.
[21] F. Neven, T. Schwentick, and V. Vianu. Finite state machines for strings over infinite alphabets. ACM Trans. Comput. Log., 5(3):403-435, 2004.
[22] M. Presburger. Über die Vollständigkeit eines gewissen Systems der Arithmetik. Comptes rendus du I Congrès des Pays Slaves, Warsaw, 1929.
[24] A. Reynolds, T. King, and V. Kuncak. Solving quantified linear arithmetic by counterexample-guided instantiation. Formal Methods in System Design, 51(3):500-532, 2017.
[25] A. Rybalchenko and V. Sofronie-Stokkermans. Constraint solving for interpolation. J. Symb. Comput., 45(11):1212-1233, 2010.
[26] M. Y. Vardi. Alternating automata and program verification, pages 471-485. Springer Berlin Heidelberg, 1995.
[27] Z3 SMT Solver. https://rise4fun.com/z3.
[ "https://github.com/cathiec/FOADA.", "https://github.com/sosy-lab/java-smt." ]
[ "AN IMPROVED BERRY-ESSÉEN BOUND OF LEAST SQUARES ESTIMATION FOR FRACTIONAL ORNSTEIN-UHLENBECK PROCESSES", "AN IMPROVED BERRY-ESSÉEN BOUND OF LEAST SQUARES ESTIMATION FOR FRACTIONAL ORNSTEIN-UHLENBECK PROCESSES" ]
[ "Yong Chen ", "Xiangmeng Gu " ]
The aim of this paper is twofold. First, it offers a novel formula to calculate the inner product of bounded variation functions in the Hilbert space H associated with fractional Brownian motion with Hurst parameter H ∈ (0, 1/2). This formula is based on a decomposition of the Lebesgue-Stieltjes measure of the bounded variation function and on the integration by parts formula for Lebesgue-Stieltjes measures. Second, as an application of the formula, we obtain, as T → ∞, the asymptotic line for the square of the norm of the bivariate function f_T(t,s) = e^{−θ|t−s|} 1_{{0≤s,t≤T}} in the symmetric tensor space H⊙2 (viewed as a function of T), and improve the Berry-Esséen type upper bound for the least squares estimator of the drift coefficient of fractional Ornstein-Uhlenbeck processes with Hurst parameter H ∈ (1/4, 1/2). The asymptotic analysis of the present paper is much more subtle than that of Lemma 17 in Hu, Nualart, Zhou (2019), and the improved Berry-Esséen type upper bound is the best improvement of the result of Theorem 1.1 in Chen, Li (2021). As a by-product, a second application of the above asymptotic analysis is given: we also show the Berry-Esséen type upper bound for the moment estimator of the drift coefficient of fractional Ornstein-Uhlenbeck processes, by a method clearly different from that of Proposition 4.1 in Sottinen, Viitasaari (2018).
[ "https://export.arxiv.org/pdf/2210.00420v1.pdf" ]
252,683,875
2210.00420
b7341ad3741cb3162bf6b570bd499497c4a01c50
AN IMPROVED BERRY-ESSÉEN BOUND OF LEAST SQUARES ESTIMATION FOR FRACTIONAL ORNSTEIN-UHLENBECK PROCESSES
2 Oct 2022. Yong Chen, Xiangmeng Gu.
Keywords: fractional Brownian motion; fractional Ornstein-Uhlenbeck process; Berry-Esséen bound. MSC 2010: 60G15, 60G22, 62M09.
1. Introduction. Unless otherwise specified, the Hurst parameter in this paper is always assumed to satisfy H ∈ (0, 1/2). This article has two main purposes. The first is to improve the Berry-Esséen bound of the least squares estimator of the drift coefficient of the fractional Ornstein-Uhlenbeck process based on continuous sample observations. The second is to give an easily computable formula for the inner product of the Hilbert space H associated with fractional Brownian motion, restricted to functions of bounded variation. Of these two purposes, the former can be regarded as a very effective application of the latter. In addition, as a by-product, we also give a second application of the latter: a proof of the Berry-Esséen bound for the moment estimator of the drift coefficient of the fractional Ornstein-Uhlenbeck process, by a method different from that of Proposition 4.1 of [1] and Theorem 5.4 of [2].
The conclusions of this paper are novel. It is particularly worth emphasizing that, as far as we know, there is no alternative method to obtain the improved Berry-Esséen type upper bound for the least squares estimator of the drift coefficient. Moreover, for the bivariate function
f_T(t,s) = e^{−θ|t−s|} 1_{{0≤s,t≤T}}   (1.1)
in the symmetric tensor space H⊙2, the asymptotic property of the square of its norm obtained by this method (see Proposition 1.9) is much more precise than that of Lemma 17 in [3].
Specifically, we consider the fractional Ornstein-Uhlenbeck process observed continuously in time,
dX_t = −θX_t dt + σ dB^H_t,  X_0 = 0,  0 ≤ t ≤ T,   (1.2)
and Berry-Esséen type upper bounds for two estimators of the drift coefficient, where θ > 0 is the drift coefficient, σ > 0 is the volatility coefficient, and B^H_t is a one-dimensional fractional Brownian motion with Hurst parameter H, whose covariance function is given by
R_H(t,s) = ½ (t^{2H} + s^{2H} − |t−s|^{2H}).   (1.3)
Without loss of generality, we take σ = 1 in what follows. In [3], by (formally) minimizing
∫_0^T |Ẋ_t + θX_t|² dt   (1.4)
and by computing the almost sure limit of the second sample moment of the OU process,
lim_{T→∞} (1/T) ∫_0^T X_t² dt,   (1.5)
in the ergodic case (i.e., θ > 0), one constructs the least squares estimator and the moment estimator of the drift coefficient, respectively:
θ̂_T = −∫_0^T X_t dX_t / ∫_0^T X_t² dt = θ − ∫_0^T X_t dB^H_t / ∫_0^T X_t² dt,   (1.6)
θ̃_T = ( 1/(HΓ(2H)T) ∫_0^T X_t² dt )^{−1/(2H)}.   (1.7)
As in [3], this paper does not discuss the meaning of the first stochastic integral with respect to the fractional OU process X_t on the right-hand side of (1.6); it is regarded only as a formal integral, understood by substituting the differential form of equation (1.2) into it. The second stochastic integral with respect to B^H_t on the right-hand side of (1.6), obtained after the substitution, is understood as a divergence (Skorokhod) integral with respect to fractional Brownian motion, and its meaning as a statistic in the standard statistical sense is not studied. Of course, the statistical meaning of the second statistic, the moment estimator, is completely clear. Further, by verifying the fourth moment theorem, [3] establishes the strong consistency and asymptotic normality of the least squares estimator and the moment estimator. Two asymptotic properties, of the norm of the bivariate function f_T(t,s) and of the norm of its contraction, are the key steps. For the former, they use a formula for the inner product of the space H and the tensor space H⊗2, see (2.5). This formula is the expression of the inner product of bounded variation functions in the Hilbert space associated with a general second moment process, given by the integration by parts formula in combination with [4]: the inner product equals the integral of the covariance function of the second moment process with respect to the measures derived from the two bounded variation functions. For the norm of the contraction of the bivariate function (1/√T) f_T(t,s), they use the Fourier transform to prove that it tends to zero, see (2.6).
Based on the above results of [3], reference [5] gives the rate of convergence between the distribution of the least squares estimator and its asymptotic distribution, that is, a Berry-Esséen type upper bound: when H ∈ (0, 1/2) and T is sufficiently large, the Kolmogorov distance between √T(θ̂_T − θ) and a normal random variable is bounded by a constant times T^{−β}, where
β = 1/2 for H ∈ (0, 1/4], and β = 1 − 2H for H ∈ (1/4, 1/2).   (1.8)
Here, the method of proving the Berry-Esséen bound for the least squares estimator is based on Corollary 1 of [6] and on two asymptotic analyses of the bivariate function f_T(t,s).
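The snippet below is a numerical illustration (not from the paper): it simulates the fractional OU process (1.2) with σ = 1 on a grid, by exact Gaussian sampling of the fBm increments (Cholesky factorization of the fractional Gaussian noise covariance) followed by an Euler scheme, and then evaluates forward-discretized versions of the estimators (1.6) and (1.7). The grid size, horizon and the forward discretization of the stochastic integral are illustrative choices.

```python
import numpy as np
from math import gamma

def fgn_increments(n, dt, H, rng):
    """fBm increments over n steps of size dt, sampled with exact covariance."""
    k = np.arange(n)
    r = 0.5 * dt**(2 * H) * ((k + 1)**(2 * H) + np.abs(k - 1)**(2 * H)
                             - 2 * k**(2 * H))
    C = r[np.abs(k[:, None] - k[None, :])]        # Toeplitz covariance matrix
    return np.linalg.cholesky(C) @ rng.standard_normal(n)

rng = np.random.default_rng(0)
theta, H, T, n = 1.0, 0.35, 200.0, 2000
dt = T / n
dB = fgn_increments(n, dt, H, rng)
X = np.zeros(n + 1)
for i in range(n):                                # Euler: dX = -theta X dt + dB^H
    X[i + 1] = X[i] - theta * X[i] * dt + dB[i]

num = -np.sum(X[:-1] * np.diff(X))                # -int X dX (forward sums)
den = np.sum(X[:-1]**2) * dt                      # int X^2 dt
lse = num / den                                   # discretized (1.6)
mom = (den / (H * gamma(2 * H) * T))**(-1 / (2 * H))   # discretized (1.7)
print(f"theta = {theta}, LSE = {lse:.4f}, moment estimator = {mom:.4f}")
```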
The method used there to prove the Berry-Esséen bound for the moment estimator is to reduce the fourth moment, via the product formula for multiple Wiener integrals, to the same two asymptotic analyses of the bivariate function f_T(t,s), together with an estimate of the inner product of f_T(t,s) and h_T(t,s) (see (1.17)); see [7] and [8]. Differently, in Proposition 4.1 of [1] and Theorem 5.4 of [2], the proof of the Berry-Esséen bound for the moment estimator reduces the fourth moment, via the Wick formula, to an asymptotic analysis of the stationary solution of the fractional OU process, and the latter is known; see [9].
Looking back at (1.8), when H = 1/2 − ε with ε sufficiently small, β tends to zero. This is very far from the case H = 1/2, where the known Berry-Esséen bound for √T(θ̂_T − θ) is 1/√T, so a reasonable conjecture is: "when H ∈ (1/4, 1/2), the Berry-Esséen upper bound is still 1/√T." We prove this conjecture in the present paper. As can be seen from the proof of Theorem 1.1 in [7], the key problem is a more precise asymptotic analysis of the norm of the bivariate function f_T(t,s). Obtaining this asymptotic analysis is therefore the key step of this paper, and we state it as the following theorem.
Theorem 1.1. Let θ > 0 and H ∈ (0, 1/2). For the bivariate function f_T(t,s) in the space H⊗2, see (1.1), there exists a positive constant C_{H,θ} independent of T such that, for T sufficiently large, the inequality
| ‖f_T‖²_{H⊗2} − 2(HΓ(2H))² σ_H² T | ≤ C_{H,θ}   (1.9)
holds, where
σ_H² = (4H−1) + 2Γ(2−4H)Γ(4H) / (Γ(2H)Γ(1−2H)).   (1.10)
Remark 1.2. (1) The upper bound given by formula (1.9) is a constant C_{H,θ} independent of T, i.e., the order of the upper bound in T is 0. By contrast, the upper bound corresponding to the result of Lemma 3.11 in [5] is T^{2H}, i.e., of order 2H in T. Furthermore, the upper bound of this paper is the best possible, in the sense of the asymptote below.
(2) In fact, the conclusion obtained in this paper is stronger than (1.9): the square of the norm of the bivariate function f_T(t,s), viewed as a function of T, has an asymptote as T → ∞:
lim_{T→∞} ( ‖f_T‖²_{H⊗2} − 2(HΓ(2H))² σ_H² T ) = C_H,   (1.11)
where C_H ∈ R is a constant depending only on H and not on T. See the proof of Theorem 1.1 in Section 3 for details. We emphasize that the intercept C_H of the asymptote is irrelevant for our purposes, while the existence and the slope of the asymptote play the key role.
(3) Using the standard o, O notation of asymptotic analysis, Lemma 17 in [3], Lemma 3.11 in [5] and formula (1.9) of this paper compare as follows: as T → ∞,
(1/T)‖f_T‖²_{H⊗2} − 2(HΓ(2H))² σ_H² = o(1),   (1.12)
(1/T)‖f_T‖²_{H⊗2} − 2(HΓ(2H))² σ_H² = O(T^{2H−1}),   (1.13)
(1/T)‖f_T‖²_{H⊗2} − 2(HΓ(2H))² σ_H² = O(T^{−1}).   (1.14)
The method of Lemma 17 in [3] yields formula (1.12) concisely, and the method of Lemma 3.11 in [5] is based on Lemma 17 in [3]; however, that method cannot be pushed further, i.e., it cannot yield (1.14). In other words, the method based on the new formula for the inner product of fractional Brownian motion given in this paper is, as far as we know, still irreplaceable.
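The constant σ_H² of (1.10) is easy to evaluate numerically, as in the short Python check below; all the Gamma arguments 2−4H, 4H, 2H and 1−2H are positive for H ∈ (0, 1/2). For instance, at H = 1/4 the formula reduces to 2/π.

```python
from math import gamma

def sigma_H_sq(H):
    # formula (1.10)
    return (4*H - 1) + 2 * gamma(2 - 4*H) * gamma(4*H) / (gamma(2*H) * gamma(1 - 2*H))

for H in (0.10, 0.25, 0.35, 0.45):
    print(f"H = {H:.2f}: sigma_H^2 = {sigma_H_sq(H):.6f}")
```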
Starting from the asymptotic analysis given by the above theorem, the following theorem shows that when H ∈ (1/4, 1/2), the improved Berry-Esséen bound of the least squares estimator is the 1/√T conjectured above; as a by-product, the Berry-Esséen bound of the moment estimator is also 1/√T.
Theorem 1.3. Let Z be a standard normal random variable and H ∈ (0, 1/2). Then there exists a positive constant C_{θ,H} independent of T such that, for T large enough, the Berry-Esséen inequalities
sup_{z∈R} | P( √(T/(θσ_H²)) (θ̂_T − θ) ≤ z ) − P(Z ≤ z) | ≤ C_{θ,H}/√T;   (1.15)
sup_{z∈R} | P( √(4H²T/(θσ_H²)) (θ̃_T − θ) ≤ z ) − P(Z ≤ z) | ≤ C_{θ,H}/√T   (1.16)
hold, where σ_H² is as in (1.10).
Remark 1.4. We point out that in the Berry-Esséen inequality estimates for the two statistics (see (3.24) and (3.25)), part of the upper bound 1/√T comes from the key inequality (3.17) in [3], namely the fact that the norm in H⊗2 of the contraction f_T ⊗₁ f_T of the function f_T(s,t) with itself is bounded by a constant times √T. That estimate is obtained via another formula for the inner product in H, namely the Fourier transform formula (2.6). In a word, we finally obtain the Berry-Esséen type upper bounds for the two statistics in this paper using four very different formulas for the inner product of H: (1.23), (2.3), (2.5) and (2.6). In other words, except for formula (2.8), which computes the inner product via the operator K*_H, all four formulas for the inner product of H recalled in Section 2 are used.
In this paper, the method of proving the Berry-Esséen inequality (1.16) for the moment estimator is based on the following proposition, which gives an estimate of the inner product of the bivariate functions f_T, h_T in the tensor space H⊗2. Here the bivariate function h_T is
h_T(t,s) = e^{−θ(T−t)−θ(T−s)} 1_{{0≤s,t≤T}}.   (1.17)
Proposition 1.5. Let the bivariate functions f_T, h_T be given by (1.1) and (1.17), respectively. Then there exists a constant C_H independent of T such that the following inequality holds:
| ⟨f_T, h_T⟩_{H⊗2} | ≤ C_H.   (1.18)
Remark 1.6. Proposition 1.5 and Theorem 1.1 have in common that both estimate the inner product of two bivariate functions in H⊗2. The difference is that the proof of Theorem 1.1 divides the integration region into nine blocks and, by symmetry and other devices, finally reduces them to three kinds of integrals, whereas the proof of Proposition 1.5 takes advantage of the particular structure of the function h_T(s,t), whose variables separate; it therefore reduces to the problem of estimating the inner product of two univariate functions in H, and conveniently uses the inner product formula of Corollary 2.4. Comparing the two methods, the former is very involved and the latter very simple; however, since the variables of f_T(s,t) do not separate, the latter method does not apply to the former problem. As far as we know, we do not know whether there are other, simpler methods to prove the conclusion of Theorem 1.1.
In the second half of this section, we give new formulas for the inner product of the space H and of the symmetric tensor space H⊙2 when H ∈ (0, 1/2); see Propositions 1.9 and 1.12.
The new formula is similar to, but also clearly different from, the following well-known fact: when the Hurst parameter satisfies H ∈ (0, 1/2), the formula for the inner product in the Hilbert space H associated with fractional Brownian motion of two functions f and g with disjoint supports is the same as the inner product formula for H ∈ (1/2, 1); see [9,12], or Corollary 2.4 below.
The new formula for the inner product given in Proposition 1.9 can be explained as follows. The integration region [0,T]² is divided into the following three parts. For the double integrals on the regions
κ₁ := {(u,v) ∈ [0,T]²: 0 ≤ v ≤ u−1 ≤ T−1}   (1.19)
and
κ₂ := {(u,v) ∈ [0,T]²: 0 ≤ u ≤ v−1 ≤ T−1},   (1.20)
the integration by parts formula for the associated measures is applied twice, while for the double integral on the region
κ₃ := {(u,v) ∈ [0,T]²: 0∨(u−1) ≤ v ≤ (u+1)∧T},   (1.21)
the integration by parts formula is applied only once. The decomposition of the integration domain is shown in Figure 1.
Notation 1.7. For g ∈ V_{[0,T]}, ν_g denotes the restriction to ([0,T], B([0,T])) of the Lebesgue-Stieltjes measure associated with an extension of g to R. In particular, the following more special form is used in this paper: let 0 ≤ a < b ≤ T and g = f · 1_{[a,b]}, where f is a differentiable function; then
ν_g(dx) = f′(x) · 1_{[a,b]}(x) dx + f(x) (δ_a(x) − δ_b(x)) dx,   (1.22)
where δ_a(·) is the Dirac generalized function whose mass is concentrated at the point a. For ease of use, we use the notation ∂g/∂x to represent the "density function", in the sense of (1.22), of the measure ν_g.
Remark 1.8. The details of the above measures can be found in [4], which is the source of the new inner product formula of this paper and one of its starting points. The purpose of introducing this measure is to use the integration by parts formula with respect to it. In other words, its convenience lies in absorbing the values at the endpoints a, b into the measure ν_g through the two Dirac point masses, which makes it convenient to apply the integration by parts formula for ν_g; see Lemma 2.2 for details.
Proposition 1.9. If f, g ∈ V_{[0,T]}, then
⟨f,g⟩_H = α_H [ ∫_1^T g(t) dt ∫_0^{t−1} f(s)(t−s)^{2H−2} ds + ∫_1^T f(s) ds ∫_0^{s−1} g(t)(s−t)^{2H−2} dt ] − H ∫_0^T g(t) dt ∫_0^T ( t^{2H−1} − sgn(t−s)|t−s|^{2H−1} ) ν_{f̃_t}(ds),   (1.23)
where f̃_t(s) = f(s) · 1_{[(t−1)∨0,(t+1)∧T]}(s) is a family of functions with s as the independent variable and t as a parameter, and the meaning of ν_{f̃_t}(ds) is given in Notation 1.7 and (1.22). In addition, for any two positive numbers ε₁, ε₂ ∈ (0,T), setting f̃_t(s) = f(s) · 1_{[(t−ε₁)∨0,(t+ε₂)∧T]}(s), we also have
⟨f,g⟩_H = α_H [ ∫_{ε₁}^T g(t) dt ∫_0^{t−ε₁} f(s)(t−s)^{2H−2} ds + ∫_{ε₂}^T f(s) ds ∫_0^{s−ε₂} g(t)(s−t)^{2H−2} dt ] − H ∫_0^T g(t) dt ∫_0^T ( t^{2H−1} − sgn(t−s)|t−s|^{2H−1} ) ν_{f̃_t}(ds).   (1.24)
Remark 1.10. The significant difference between the inner product formula obtained by the above division of the integration region (Proposition 1.9) and the inner product formula for disjoint supports is that the latter requires the support of the bivariate function (s,t) ↦ f(s)g(t) to be contained, to some extent, in {(u,v): 0 ≤ v ≤ u ≤ T} or in {(u,v): 0 ≤ u ≤ v ≤ T}. (Note that H⊗2 and H⊙2 denote the second tensor product and the second symmetric tensor product of H.) Proposition 1.12 gives the formula for the inner product of symmetric bivariate functions in H⊙2; it is a direct corollary of Proposition 1.9, so the details of the derivation are omitted below.
For convenience of expression, the following notation is introduced.
Notation 1.11. (1) ∂_a/∂s f(s) is the density function of the measure ν_{f̃_a} corresponding to the function f̃_a(s) of Proposition 1.9; see also (1.22).
(2) Operator V⊗2_{[0,T]} → L(C₀,R) ⊗ V_{[0,T]}: (∂_a/∂s) φ(s,t) = ∂/∂s ( φ(s,t) · 1_{[(a−1)∨0,(a+1)∧T]}(s) ).
(3) Operator V⊗2_{[0,T]} → L(C₀,R)⊗2: (∂_a∂_b/∂s∂t) φ(s,t) = ∂²/∂s∂t ( φ(s,t) · 1_{[(a−1)∨0,(a+1)∧T]}(s) · 1_{[(b−1)∨0,(b+1)∧T]}(t) ), i.e., (∂_a∂_b/∂s∂t) φ(s,t) is the density function of the Lebesgue-Stieltjes measure associated with the bivariate function φ(s,t) · 1_{[(a−1)∨0,(a+1)∧T]}(s) · 1_{[(b−1)∨0,(b+1)∧T]}(t) on the space ([(a−1)∨0,(a+1)∧T] × [(b−1)∨0,(b+1)∧T], B([(a−1)∨0,(a+1)∧T] × [(b−1)∨0,(b+1)∧T])).
Proposition 1.12. Let s⃗ = (s₁,s₂), t⃗ = (t₁,t₂) and (s⃗,t⃗) ∈ κ_i × κ_j, i,j = 1,2,3, with κ_i as in (1.19)-(1.21). If φ, ψ ∈ V⊙2_{[0,T]}, then
⟨ψ,φ⟩_{H⊗2} = α_H² Σ_{i,j=1}^2 ∫_{κ_i×κ_j} ψ(s₁,t₁) φ(s₂,t₂) |s₁−s₂|^{2H−2} |t₁−t₂|^{2H−2} ds⃗ dt⃗
− 2α_H Σ_{i=1}^2 ∫_{κ₃×κ_i} ψ(s₁,t₁) (∂_{s₁}/∂s₂) φ(s₂,t₂) (∂R_H/∂s₁)(s₁,s₂) |t₁−t₂|^{2H−2} ds⃗ dt⃗
+ ∫_{κ₃×κ₃} ψ(s₁,t₁) (∂R_H/∂s₁)(s₁,s₂) (∂R_H/∂t₁)(t₁,t₂) (∂_{s₁}∂_{t₁}/∂s₂∂t₂) φ(s₂,t₂) ds⃗ dt⃗,   (1.25)
where R_H(t₁,t₂) is the covariance of fractional Brownian motion (see (1.3)) and the operators ∂_a/∂s, ∂_a∂_b/∂s∂t are as in Notation 1.11.
Remark 1.13. (1) If ψ, φ are asymmetric, then
Σ_{i=1}^2 ∫_{κ₃×κ_i} ψ(s₁,t₁) (∂_{s₁}/∂s₂) φ(s₂,t₂) (∂R_H/∂s₁)(s₁,s₂) |t₁−t₂|^{2H−2} ds⃗ dt⃗ = Σ_{i=1}^2 ∫_{κ_i×κ₃} ψ(s₁,t₁) (∂_{t₁}/∂t₂) φ(s₂,t₂) (∂R_H/∂t₁)(t₁,t₂) |s₁−s₂|^{2H−2} ds⃗ dt⃗.
(2) The essence of the inner product formula (1.25) is also a measure decomposition: for any given (s₁,t₁) ∈ [0,T]², the measure ν_φ on [0,T]² associated with the bivariate function φ(s₂,t₂) ∈ V⊙2_{[0,T]} is decomposed into the sum of the measures derived from the restrictions of φ(s₂,t₂) to (M̄_ij, B(M̄_ij)), i,j = 1,2,3 (see Figure 2 for details), where
M₁₁ = {(s₂,t₂) ∈ [0,T]²: s₂ ≤ s₁−1, t₂ ≤ t₁−1},
M₃₃ = {(s₂,t₂) ∈ [0,T]²: s₁−1 < s₂ ≤ s₁+1, t₁−1 < t₂ ≤ t₁+1},
the others are similar, and M̄_ij denotes the closure of M_ij.
[Figure 2: schematic diagram of the measure decomposition of [0,T]² into the blocks M_ij around (s₁,t₁).]
The rest of this paper is arranged as follows. In Section 2 we briefly review the various known formulas for the inner product in H and prove Proposition 1.9. In Section 3 we prove Proposition 1.5, Theorem 1.1 and Theorem 1.3. As an appendix, in Section 4 we give the asymptotes of the various multiple integrals used in the proof of Theorem 1.1. Finally, we point out that the constants C_H, C_{H,θ} are independent of T and may differ from line to line.
2. Preliminaries and proof of the new inner product formula in H.
2.1. Preliminaries. Denote by E the set of real-valued step functions on [0,T], equipped with the inner product
⟨1_{[a,b)}, 1_{[c,d)}⟩_H = E[ (B^H_b − B^H_a)(B^H_d − B^H_c) ].   (2.1)
Let H be the Hilbert space obtained as the completion of E. Preserving the linear structure and the norm, the mapping 1_{[0,t]} ↦ B^H_t extends to H; the resulting isometric isomorphism is denoted ϕ ↦ B^H(ϕ), and {B^H(ϕ), ϕ ∈ H} is called the isonormal Gaussian process associated with the Hilbert space H. The expression of the inner product of the Hilbert space H is discussed in two cases.
For the convenience of expression, the following notation is introduced.

Notation 1.11. Let $L(C_0,\mathbb R)$ be the set of all bounded linear functionals on the set $C_0$ of compactly supported continuous functions. For $a, b\in[0,T]$, define three linear operators as follows.
(1) An operator $V_{[0,T]}\to L(C_0,\mathbb R)$:
\[ \frac{\partial^a}{\partial s}f(s) = \frac{\partial}{\partial s}\Big( f(s)\cdot\mathbf 1_{[(a-1)\vee 0,(a+1)\wedge T]}(s)\Big), \]
i.e. $\frac{\partial^a}{\partial s}f(s)$ is the density function of the measure $\nu_{\tilde f_a}$ associated with the function $\tilde f_a(s)$ of Proposition 1.9; see also (1.22).
(2) An operator $V^{\otimes 2}_{[0,T]}\to L(C_0,\mathbb R)\otimes V_{[0,T]}$:
\[ \frac{\partial^a}{\partial s}\varphi(s,t) = \frac{\partial}{\partial s}\Big( \varphi(s,t)\cdot\mathbf 1_{[(a-1)\vee 0,(a+1)\wedge T]}(s)\Big). \]
(3) An operator $V^{\otimes 2}_{[0,T]}\to L(C_0,\mathbb R)^{\otimes 2}$:
\[ \frac{\partial^a\partial^b}{\partial s\,\partial t}\varphi(s,t) = \frac{\partial^2}{\partial s\,\partial t}\Big( \varphi(s,t)\cdot\mathbf 1_{[(a-1)\vee 0,(a+1)\wedge T]}(s)\cdot\mathbf 1_{[(b-1)\vee 0,(b+1)\wedge T]}(t)\Big), \]
i.e. $\frac{\partial^a\partial^b}{\partial s\partial t}\varphi(s,t)$ is the density function of the Lebesgue-Stieltjes measure associated with the bivariate function $\varphi(s,t)\cdot\mathbf 1_{[(a-1)\vee 0,(a+1)\wedge T]}(s)\cdot\mathbf 1_{[(b-1)\vee 0,(b+1)\wedge T]}(t)$ on the space $\big([(a-1)\vee 0,(a+1)\wedge T]\times[(b-1)\vee 0,(b+1)\wedge T],\ \mathcal B(\cdot)\big)$.

Proposition 1.12. Let $\vec s = (s_1,s_2)$, $\vec t = (t_1,t_2)$ and $(\vec s,\vec t)\in\kappa_i\times\kappa_j$, $i,j = 1,2,3$, with $\kappa_i$ as in (1.19)-(1.21). If $\varphi,\psi\in V^{\odot 2}_{[0,T]}$, then
\[ \langle\psi,\varphi\rangle_{\mathcal H^{\otimes 2}} = \alpha_H^2\sum_{i,j=1}^2\int_{\kappa_i\times\kappa_j}\psi(s_1,t_1)\varphi(s_2,t_2)\,|s_1-s_2|^{2H-2}|t_1-t_2|^{2H-2}\,d\vec s\,d\vec t - 2\alpha_H\sum_{i=1}^2\int_{\kappa_3\times\kappa_i}\psi(s_1,t_1)\,\frac{\partial^{s_1}}{\partial s_2}\varphi(s_2,t_2)\,\frac{\partial R_H}{\partial s_1}(s_1,s_2)\,|t_1-t_2|^{2H-2}\,d\vec s\,d\vec t + \int_{\kappa_3\times\kappa_3}\psi(s_1,t_1)\,\frac{\partial R_H}{\partial s_1}(s_1,s_2)\,\frac{\partial R_H}{\partial t_1}(t_1,t_2)\,\frac{\partial^{s_1}\partial^{t_1}}{\partial s_2\,\partial t_2}\varphi(s_2,t_2)\,d\vec s\,d\vec t, \tag{1.25} \]
where $R_H(t_1,t_2)$ is the covariance of fractional Brownian motion (see (1.3)), and the operators $\frac{\partial^a}{\partial s}$, $\frac{\partial^a\partial^b}{\partial s\partial t}$ are as in Notation 1.11.

Remark 1.13. (1) If $\psi$, $\varphi$ are symmetric, then
\[ \sum_{i=1}^2\int_{\kappa_3\times\kappa_i}\psi(s_1,t_1)\frac{\partial^{s_1}}{\partial s_2}\varphi(s_2,t_2)\frac{\partial R_H}{\partial s_1}(s_1,s_2)|t_1-t_2|^{2H-2}\,d\vec s\,d\vec t = \sum_{i=1}^2\int_{\kappa_i\times\kappa_3}\psi(s_1,t_1)\frac{\partial^{t_1}}{\partial t_2}\varphi(s_2,t_2)\frac{\partial R_H}{\partial t_1}(t_1,t_2)|s_1-s_2|^{2H-2}\,d\vec s\,d\vec t. \]
(2) The essence of the inner product formula (1.25) is again measure decomposition: for any given $(s_1,t_1)\in[0,T]^2$, the measure $\nu_\varphi$ on $[0,T]^2$ associated with the bivariate function $\varphi(s_2,t_2)\in V^{\odot 2}_{[0,T]}$ is decomposed into the sum of the measures induced by the restrictions of $\varphi(s_2,t_2)$ to $(\overline{M_{ij}},\mathcal B(\overline{M_{ij}}))$, $i,j = 1,2,3$ (see Figure 2), where
\[ M_{11} = \big\{(s_2,t_2)\in[0,T]^2 : s_2\le s_1-1,\ t_2\le t_1-1\big\}, \qquad M_{33} = \big\{(s_2,t_2)\in[0,T]^2 : s_1-1 < s_2\le s_1+1,\ t_1-1 < t_2\le t_1+1\big\}, \]
the other $M_{ij}$ being defined similarly; here $\overline{M_{ij}}$ is the closure of $M_{ij}$.

[Figure 2. Schematic diagram of the measure decomposition: for fixed $(s_1,t_1)$, the square $[0,T]^2$ in the variables $(s_2,t_2)$ is split by the lines $s_2 = s_1\mp 1$ and $t_2 = t_1\mp 1$ into the nine blocks $M_{11},\dots,M_{33}$.]

The rest of this paper is arranged as follows. In Section 2 we briefly review the various known formulas for the inner product in $\mathcal H$ and prove Proposition 1.9. In Section 3 we prove Proposition 1.5, Theorem 1.1 and Theorem 1.3. As an appendix, Section 4 gives the asymptotes of the various multiple integrals used in the proof of Theorem 1.1. Finally, we point out that the constants $C_H$, $C_{H,\theta}$ are independent of $T$ and may differ from line to line.

2. Preliminaries and proof of the new inner product formula in $\mathcal H$

2.1. Preliminaries. Let $\mathcal E$ denote the set of all real-valued step functions on $[0,T]$, equipped with the inner product
\[ \langle \mathbf 1_{[a,b)}, \mathbf 1_{[c,d)}\rangle_{\mathcal H} = \mathbb E\big[ (B^H_b - B^H_a)(B^H_d - B^H_c)\big]. \tag{2.1} \]
Let $\mathcal H$ be the Hilbert space obtained by completing $\mathcal E$. Preserving the linear structure and the norm, the map $\mathbf 1_{[0,t]}\mapsto B^H_t$ extends to $\mathcal H$; the resulting isometric isomorphism is written $\varphi\mapsto B^H(\varphi)$, and $\{B^H(\varphi),\ \varphi\in\mathcal H\}$ is called the isonormal Gaussian process associated with the Hilbert space $\mathcal H$.
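For step functions the inner product (2.1) can be evaluated in closed form; the following elementary computation (ours, using only the bilinearity of the covariance and the standard formula $R_H(t,s) = \frac12\big(t^{2H}+s^{2H}-|t-s|^{2H}\big)$ for the covariance of fractional Brownian motion) is convenient to keep in mind:
\[ \langle \mathbf 1_{[a,b)}, \mathbf 1_{[c,d)}\rangle_{\mathcal H} = R_H(b,d)-R_H(b,c)-R_H(a,d)+R_H(a,c) = \tfrac12\big( |b-c|^{2H}+|a-d|^{2H}-|b-d|^{2H}-|a-c|^{2H}\big). \]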
The formula for the inner product of $\mathcal H$ is discussed in two cases.

1. When $H > \frac12$, the covariance of $B^H_t$ can be written as
\[ R_H(t,s) = \alpha_H\int_0^s du\int_0^t |u-v|^{2H-2}\,dv, \tag{2.2} \]
where $\alpha_H = H(2H-1)$, and for any $f, g\in\mathcal H$ we have
\[ \langle f,g\rangle_{\mathcal H} = \alpha_H\int_0^T g(s)\,ds\int_0^T f(t)\,|s-t|^{2H-2}\,dt. \tag{2.3} \]
It should be noted that in this case the elements of $\mathcal H$ are not necessarily ordinary functions.

2. When $H < \frac12$, for any given $s\in[0,T]$ the improper integral of $|s-t|^{2H-2}$ over $[0,T]$ diverges, so the covariance of $B^H_t$ cannot be expressed directly in the form (2.2). In this case the elements of the space $\mathcal H$ are ordinary functions, but the inner product formula (2.3) is in general no longer valid. What is interesting is that the covariance of the increments of $B^H_t$ still satisfies the analogous formula: if $0\le a<b\le c<d\le T$, then
\[ \mathbb E\big[(B^H_b - B^H_a)(B^H_d - B^H_c)\big] = \alpha_H\int_a^b du\int_c^d |u-v|^{2H-2}\,dv. \tag{2.4} \]
This leads to the conclusion that if $f, g\in\mathcal H$ have disjoint supports, then the inner product formula (2.3) still holds; see [9, 12]. By the way, we point out that Corollary 2.4 of this paper also yields this known conclusion.

In [4], a formula is given for the inner product of the Hilbert space $\mathcal H$ restricted to the bounded variation functions $V_{[0,T]}$: if $f, g\in V_{[0,T]}$, then
\[ \langle f,g\rangle_{\mathcal H} = \int_{[0,T]^2} R_H(s,t)\,\nu_f(ds)\,\nu_g(dt) = -\int_{[0,T]^2} g(t)\,\frac{\partial R_H}{\partial t}(s,t)\,dt\,\nu_f(ds); \tag{2.5} \]
see [3], [4], [5]. In addition, with the help of the Fourier transform, the following formula for the inner product of $\mathcal H$ is sometimes very useful:
\[ \langle f,g\rangle_{\mathcal H} = \frac{\Gamma(2H+1)\sin(\pi H)}{2\pi}\int_{\mathbb R}\mathcal F f(\xi)\,\overline{\mathcal F g(\xi)}\,|\xi|^{1-2H}\,d\xi, \tag{2.6} \]
where $f, g$ may be taken from a suitable subspace of $\mathcal H$; see [13] for details. Finally, with the help of the kernel
\[ K_H(t,s) = c_H\Big[ \Big(\frac{t}{s}\Big)^{H-\frac12}(t-s)^{H-\frac12} - \Big(H-\frac12\Big)s^{\frac12-H}\int_s^t u^{H-\frac32}(u-s)^{H-\frac12}\,du\Big] \tag{2.7} \]
and the operator
\[ (K_H^*\varphi)(t) = K_H(T,t)\varphi(t) + \int_t^T \frac{\partial K_H}{\partial s}(s,t)\,[\varphi(s)-\varphi(t)]\,ds, \]
one transforms the inner product of $\mathcal H$ into an inner product of elements of $L^2([0,T])$:
\[ \langle\varphi,\psi\rangle_{\mathcal H} = \langle K_H^*\varphi,\,K_H^*\psi\rangle_{L^2([0,T])}. \tag{2.8} \]
This formula establishes the theoretical relationship between $\mathcal H$ and $L^2([0,T])$, but it is usually not used to compute inner products directly; for details see [14].

2.2. A new formula for the inner product. Let $f$, $g$ be monotone non-decreasing functions on $\mathbb R$; the Lebesgue-Stieltjes measure associated with the bounded variation function $f-g$ on $\mathbb R$ is defined as $\bar\nu_{(f-g)} = \bar\nu_f - \bar\nu_g$. Here $\bar\nu_f$ is the Lebesgue-Stieltjes positive measure on $(\mathbb R,\mathcal B(\mathbb R))$ associated with the monotone non-decreasing function $f$, and we emphasize that right-continuity of $f$ is not required here: in fact, the value of $f$ at a discontinuity point has no influence on its Lebesgue-Stieltjes measure $\bar\nu_f$ (a point used implicitly many times in this paper). For details see Theorem 1.7.9 and Exercise 1.7.12 in [15]. From the uniqueness theorem for measures it is easy to deduce the following well-known lemma.

Lemma 2.1. If $F$, $G$ are bounded variation functions on $\mathbb R$ and $\Psi = F+G$, then
\[ \bar\nu_\Psi = \bar\nu_F + \bar\nu_G. \tag{2.9} \]
In particular, when $F, G\in V_{[0,T]}$ and $\Psi = F+G$, then
\[ \nu_\Psi = \nu_F + \nu_G, \tag{2.10} \]
where $\nu_\Psi$ is the restriction to $([0,T],\mathcal B([0,T]))$ of the measure $\bar\nu_{\Psi^0}$ associated with the extension $\Psi^0$ of $\Psi$ to $\mathbb R$; see Notation 1.7.

The following Lemma 2.2, the integration-by-parts formula for these measures, is one of the main ingredients in the proof of Proposition 1.9; it is taken from Lemma 3.1 of [16].
The key point is to regard the values of the function at the two endpoints, which appear in the usual integration-by-parts formula, as a measure consisting of two Dirac point masses (Dirac $\delta$ generalized functions), and to absorb these point masses into the Lebesgue-Stieltjes measure associated with the bounded variation function. This device first extends the bounded variation function to $\mathbb R$ and then restricts the Lebesgue-Stieltjes measure generated by the extension back to the original support of the function. See [4, 16] for details, as well as Notation 1.7 and formula (1.22) of this paper.

Lemma 2.2. Let $[a,b]$ be a compact interval of positive length, and let $\varphi\colon[a,b]\to\mathbb R$ be continuous on $[a,b]$ and differentiable on $(a,b)$. If $\varphi'$ is absolutely integrable, then for any $f\in V_{[a,b]}$ we have
\[ -\int_{[a,b]} f(t)\,\varphi'(t)\,dt = \int_{[a,b]}\varphi(t)\,\nu_f(dt), \tag{2.11} \]
where $\nu_f$ is the restriction to $([a,b],\mathcal B([a,b]))$ of the Lebesgue-Stieltjes measure on $(\mathbb R,\mathcal B(\mathbb R))$ associated with
\[ f^0(x) = \begin{cases} f(x), & x\in[a,b],\\ 0, & \text{otherwise}.\end{cases} \]

Remark 2.3. Lemma 2.2 is a rewriting of the integration-by-parts formula for continuous monotone increasing functions (see e.g. Exercise 1.7.17 of [15]). Its proof follows from Proposition 1.6.41 of [4] and [15]; see Lemma 3.1 of [16] for details.

From formulas (2.4), (2.5) and the above lemma we obtain the following corollary.

Corollary 2.4. Let $0\le a<b\le c<d\le T$, and let the bounded variation functions $f(s)$ and $g(t)$ be supported on $[a,b]$ and $[c,d]$ respectively. If $H\in(0,\frac12)$, then
\[ \langle f,g\rangle_{\mathcal H} = \alpha_H\int_a^b f(s)\,ds\int_c^d g(t)\,(t-s)^{2H-2}\,dt. \tag{2.12} \]

Remark 2.5. Since the set $V_{[0,T]}$ of bounded variation functions is dense in the Hilbert space $\mathcal H$, by continuity of the inner product the formula (2.12) remains valid for any two functions in $\mathcal H$ with disjoint supports.
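Formula (2.12) lends itself to a direct numerical sanity check against the defining inner product (2.1). The short script below is our illustration and not part of the original argument; it assumes the standard fBm covariance $R_H(t,s)=\frac12(t^{2H}+s^{2H}-|t-s|^{2H})$ and uses SciPy quadrature to compare the two sides for indicator functions with disjoint supports and $H<\frac12$.

from scipy.integrate import dblquad

H = 0.3                             # Hurst parameter in (0, 1/2)
a, b, c, d = 0.2, 0.7, 1.1, 1.9     # disjoint supports [a, b] and [c, d]
alpha_H = H * (2 * H - 1)           # alpha_H = H(2H - 1), negative for H < 1/2

def R(t, s):
    """Standard covariance of fractional Brownian motion."""
    return 0.5 * (t**(2 * H) + s**(2 * H) - abs(t - s)**(2 * H))

# Left-hand side: <1_[a,b], 1_[c,d]>_H computed from the increment
# covariance (2.1)/(2.4), i.e. E[(B_b - B_a)(B_d - B_c)].
lhs = R(b, d) - R(b, c) - R(a, d) + R(a, c)

# Right-hand side of (2.12): alpha_H * int_a^b ds int_c^d (t - s)^{2H-2} dt.
# The integrand is smooth because the supports are disjoint (t - s >= c - b > 0).
rhs, err = dblquad(lambda t, s: alpha_H * (t - s)**(2 * H - 2), a, b, c, d)

print(f"increment covariance: {lhs:.10f}")
print(f"formula (2.12):       {rhs:.10f}")

For $H<\frac12$ both values are negative, reflecting the negative correlation of disjoint increments of fractional Brownian motion; agreement up to quadrature error is what Corollary 2.4 asserts.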
Taking $\varphi\equiv 1$ in the integration-by-parts formula for measures, Lemma 2.2, we obtain:

Corollary 2.6. Let $\varphi\in V^{\otimes 2}_{[0,T]}$, with $\frac{\partial^a}{\partial s}$, $\frac{\partial^a\partial^b}{\partial s\partial t}$ as in Notation 1.11. If $f$, $g$ are bounded Borel measurable functions on $[(a-1)\vee 0,(a+1)\wedge T]$ and on $[(b-1)\vee 0,(b+1)\wedge T]$ respectively, then
\[ \int_{(a-1)\vee 0}^{(a+1)\wedge T}\frac{\partial^a}{\partial s}\varphi(s,t)\,ds = 0, \qquad \int_{(b-1)\vee 0}^{(b+1)\wedge T} g(t)\,dt\int_{(a-1)\vee 0}^{(a+1)\wedge T}\frac{\partial^a\partial^b}{\partial s\,\partial t}\varphi(s,t)\,ds = 0, \qquad \int_{(a-1)\vee 0}^{(a+1)\wedge T} f(s)\,ds\int_{(b-1)\vee 0}^{(b+1)\wedge T}\frac{\partial^a\partial^b}{\partial s\,\partial t}\varphi(s,t)\,dt = 0. \]

In the rest of this section we give the proof of Proposition 1.9.

Proof of Proposition 1.9. The method is measure decomposition. First fix $t\in[0,T]$ and divide $s\in[0,T]$ into the three intervals
\[ O_1 := [0,(t-1)\vee 0), \qquad O_2 := [(t-1)\vee 0,(t+1)\wedge T], \qquad O_3 := ((t+1)\wedge T,\,T]. \]
The function $f(s)$ is then decomposed into its restrictions to these three intervals, so that the measure $\nu_f$ decomposes into the sum of the associated Lebesgue-Stieltjes measures. The specific steps are as follows. First recall formula (2.5):
\[ \langle f,g\rangle_{\mathcal H} = -\int_0^T g(t)\,dt\int_0^T\frac{\partial R_H}{\partial t}(s,t)\,\nu_f(ds). \tag{2.13} \]
For any given $t\in[0,T]$, decompose $f(s)\in V_{[0,T]}$ as above:
\[ f(s) = f(s)\big[\mathbf 1_{[0,(t-1)\vee 0]}(s) + \mathbf 1_{[(t-1)\vee 0,(t+1)\wedge T]}(s) + \mathbf 1_{((t+1)\wedge T,T]}(s)\big] := f^1_t(s) + \tilde f_t(s) + f^2_t(s). \]
By Lemma 2.1 the measure $\nu_f$ has the decomposition
\[ \nu_f = \nu_{f^1_t} + \nu_{\tilde f_t} + \nu_{f^2_t}, \tag{2.14} \]
where the four measures are Lebesgue-Stieltjes measures on $([0,T],\mathcal B([0,T]))$. Substituting (2.14) into (2.13) gives
\[ \langle f,g\rangle_{\mathcal H} = -\int_0^T g(t)\,dt\int_0^T\frac{\partial R_H}{\partial t}(s,t)\,\nu_{f^1_t}(ds) - \int_0^T g(t)\,dt\int_0^T\frac{\partial R_H}{\partial t}(s,t)\,\nu_{\tilde f_t}(ds) - \int_0^T g(t)\,dt\int_0^T\frac{\partial R_H}{\partial t}(s,t)\,\nu_{f^2_t}(ds) := I_1 + I_2 + I_3. \tag{2.15} \]
Note that the support of $f^1_t(s)$ is $[0,(t-1)\vee 0]$, so
\[ \int_0^T\frac{\partial R_H}{\partial t}(s,t)\,\nu_{f^1_t}(ds) = \begin{cases} 0, & t\in[0,1],\\[2pt] \int_0^T\mathbf 1_{[0,t-1]}(s)\,\frac{\partial R_H}{\partial t}(s,t)\,\nu_{f^1_t}(ds), & t\in(1,T]. \end{cases} \]
When $t\in(1,T]$, the integral on the right equals $\int_0^{t-1}\frac{\partial R_H}{\partial t}(s,t)\,\nu_{f^1_t}(ds)$, where the measure $\nu_{f^1_t}$ may in fact be understood as defined only on $([0,t-1],\mathcal B([0,t-1]))$; since $\frac{\partial^2 R_H}{\partial s\,\partial t}(s,t)$, as a function of the argument $s$, is absolutely integrable on $[0,t-1]$, Lemma 2.2 yields
\[ \int_0^{t-1}\frac{\partial R_H}{\partial t}(s,t)\,\nu_{f^1_t}(ds) = -\int_0^{t-1} f(s)\,\frac{\partial^2 R_H}{\partial s\,\partial t}(s,t)\,ds. \]
Thus
\[ I_1 = \alpha_H\int_1^T g(t)\,dt\int_0^{t-1} f(s)\,(t-s)^{2H-2}\,ds. \tag{2.16} \]
Similarly, since the support of $f^2_t(s)$ is $[(t+1)\wedge T,\,T]$, we have
\[ \int_0^T\frac{\partial R_H}{\partial t}(s,t)\,\nu_{f^2_t}(ds) = \begin{cases} \int_{t+1}^T\frac{\partial R_H}{\partial t}(s,t)\,\nu_{f^2_t}(ds), & t\in[0,T-1),\\[2pt] 0, & t\in[T-1,T], \end{cases} \qquad \int_{t+1}^T\frac{\partial R_H}{\partial t}(s,t)\,\nu_{f^2_t}(ds) = -\int_{t+1}^T f(s)\,\frac{\partial^2 R_H}{\partial s\,\partial t}(s,t)\,ds, \]
and hence
\[ I_3 = \alpha_H\int_1^T f(s)\,ds\int_0^{s-1} g(t)\,(s-t)^{2H-2}\,dt. \tag{2.17} \]
Since $\frac{\partial R_H}{\partial t}(s,t) = H\big(t^{2H-1}-\operatorname{sgn}(t-s)|t-s|^{2H-1}\big)$, the term $I_2$ is exactly the last term of (1.23). Combining $I_1$, $I_2$, $I_3$ with (2.15) yields (1.23); the proof of (1.24) is analogous. $\Box$

3. Proof of the main theorems

Without loss of generality, in this section we take the parameter $\theta = 1$ in the definitions (1.1) and (1.17) of the bivariate functions $f_T(t,s)$ and $h_T(t,s)$.

Proof of Proposition 1.5. First fix $t\in[0,T]$ and regard $f_T(t,\cdot)$ as a function of the single variable $s\in[0,T]$. Notice that the bivariate function $h_T$ is the tensor product of the univariate function $\varphi_T(t) = e^{t-T}\mathbf 1_{[0,T]}(t)$ with itself, i.e. $h_T(t,s) = \varphi_T(t)\varphi_T(s)$. Hence, by the Fubini theorem,
\[ \langle f_T, h_T\rangle_{\mathcal H^{\otimes 2}} = \big\langle\,\langle f_T(t,\cdot),\varphi_T\rangle_{\mathcal H},\ \varphi_T\big\rangle_{\mathcal H}. \tag{3.1} \]
Secondly, we compute the inner product $\langle f_T(t,\cdot),\varphi_T\rangle_{\mathcal H}$ for fixed $t\in[0,T]$. By linearity of the inner product,
\[ \langle f_T(t,\cdot),\varphi_T\rangle_{\mathcal H} = \langle f_1,h_1\rangle_{\mathcal H} + \langle f_1,h_2\rangle_{\mathcal H} + \langle f_2,h_1\rangle_{\mathcal H} + \langle f_2,h_2\rangle_{\mathcal H}, \tag{3.2} \]
where
\[ f_1(\cdot) = f_T(t,\cdot)\mathbf 1_{[0,t)}(\cdot), \quad f_2(\cdot) = f_T(t,\cdot)\mathbf 1_{[t,T]}(\cdot), \quad h_1(\cdot) = \varphi_T(\cdot)\mathbf 1_{[0,t)}(\cdot), \quad h_2(\cdot) = \varphi_T(\cdot)\mathbf 1_{[t,T]}(\cdot). \]
According to the supports of these four functions and the inner product formulas (2.5) or (2.3), we have respectively
\[ \langle f_2,h_1\rangle_{\mathcal H} = \alpha_H\,e^{t-T}\int_t^T du\int_0^t dv\,e^{-u+v}(u-v)^{2H-2}, \qquad \langle f_1,h_2\rangle_{\mathcal H} = \alpha_H\,e^{-t-T}\int_0^t du\int_t^T dv\,e^{u+v}(v-u)^{2H-2}, \]
\[ \langle f_1,h_1\rangle_{\mathcal H} = -H e^{-t-T}\int_{[0,t]^2} e^{u+v}\big(1-\delta_t(u)\big)\big(v^{2H-1}-|v-u|^{2H-1}\operatorname{sgn}(v-u)\big)\,du\,dv = H e^{-T}\int_0^t (e^{v-t}+e^{t-v})\,v^{2H-1}\,dv, \]
\[ \langle f_2,h_2\rangle_{\mathcal H} = -H e^{t-T}\int_{[t,T]^2} e^{-u+v}\big({-1}+\delta_t(u)-\delta_T(u)\big)\big(v^{2H-1}-|v-u|^{2H-1}\operatorname{sgn}(v-u)\big)\,du\,dv = H e^{t-T}\Big[\int_0^{T-t}dy\int_0^y (e^{-x}-e^{x})x^{2H-1}dx + \int_0^{T-t}(e^{-x}+e^{x})x^{2H-1}dx\Big]. \]
Next we bound the inner products of these terms with $\varphi_T$: for $i,j = 1,2$,
\[ \big|\big\langle\langle f_i,h_j\rangle_{\mathcal H},\,\varphi_T\big\rangle_{\mathcal H}\big| \le \int_0^T dt\,\big|\langle f_i,h_j\rangle_{\mathcal H}\big|\,\Big|\int_0^T e^{s-T}\big(1-\delta_T(s)\big)\frac{\partial R(t,s)}{\partial t}\,ds\Big|. \tag{3.3} \]
We claim that there is a constant $C_H$ independent of $T$ such that for any given $t\in[0,T]$,
\[ \Big|\int_0^T e^{s-T}\big(1-\delta_T(s)\big)\frac{\partial R(t,s)}{\partial t}\,ds\Big| \le C_H\Big( e^{-T}t^{2H-1} + e^{t-T} + (T-t)^{2H-1}\mathbf 1_{(T-1,T]}(t) + (T-t)^{2H-2}\mathbf 1_{[0,T-1]}(t)\Big). \tag{3.4} \]
In fact,
\[ \int_0^T e^{s-T}\big(1-\delta_T(s)\big)\frac{\partial R(t,s)}{\partial t}\,ds = H\Big[ t^{2H-1}\Big(\int_0^T e^{s-T}ds - 1\Big) - \int_0^t e^{s-T}(t-s)^{2H-1}\,ds + \int_t^T e^{s-T}(s-t)^{2H-1}\,ds - (T-t)^{2H-1}\Big], \]
whose absolute value is bounded by
\[ C_H\Big( t^{2H-1}e^{-T} + e^{t-T}\int_0^t e^{-u}u^{2H-1}du + e^{t-T}\int_0^1 e^{u}u^{2H-1}du + (T-t)^{2H-1}\mathbf 1_{(T-1,T]}(t) + \mathbf 1_{[0,T-1]}(t)\Big| e^{t-T}\int_1^{T-t}e^{u}u^{2H-1}du - (T-t)^{2H-1}\Big|\Big) \le C_H\big( e^{-T}t^{2H-1} + e^{t-T} + (T-t)^{2H-1}\mathbf 1_{(T-1,T]}(t) + (T-t)^{2H-2}\mathbf 1_{[0,T-1]}(t)\big), \]
where the last inequality uses Lemma 2.2 of [9]. Finally, it is easy to see that there is a constant $C_H$ independent of $T$ such that
\[ \int_{T-1}^T \big|\langle f_i,h_j\rangle_{\mathcal H}\big|\,(T-t)^{2H-1}\,dt \le C_H, \tag{3.5} \]
\[ \int_0^T \big|\langle f_i,h_j\rangle_{\mathcal H}\big|\Big( e^{-T}t^{2H-1} + e^{t-T} + (T-t)^{2H-2}\mathbf 1_{[0,T-1]}(t)\Big)\,dt \le C_H. \tag{3.6} \]
Combining (3.3), (3.5) and (3.6), there is a constant $C_H$ independent of $T$ such that $\big|\langle\langle f_i,h_j\rangle_{\mathcal H},\varphi_T\rangle_{\mathcal H}\big| \le C_H$. By the identities (3.1) and (3.2) we obtain inequality (1.18). $\Box$

Proof of Theorem 1.1. Recall from Remark 1.2 that we shall prove a conclusion stronger than formula (1.9) required by the theorem: the squared norm of the bivariate function $f_T(t,s)$, viewed as a function of $T$, has the asymptote (1.11) as $T\to\infty$. Equation (1.11) is proved in the following steps.
Step 1. By Proposition 1.12 we obtain the decomposition of $\|f_T\|^2_{\mathcal H^{\otimes 2}}$:
\[ \|f_T\|^2_{\mathcal H^{\otimes 2}} = \alpha_H^2\sum_{i,j=1}^2\int_{\kappa_i\times\kappa_j} e^{-|s_1-t_1|}e^{-|s_2-t_2|}|s_1-s_2|^{2H-2}|t_1-t_2|^{2H-2}\,d\vec s\,d\vec t - 2\alpha_H\sum_{i=1}^2\int_{\kappa_3\times\kappa_i} e^{-|s_1-t_1|}\,\frac{\partial^{s_1}}{\partial s_2}e^{-|s_2-t_2|}\,\frac{\partial R_H}{\partial s_1}(s_1,s_2)\,|t_1-t_2|^{2H-2}\,d\vec s\,d\vec t + \int_{\kappa_3\times\kappa_3} e^{-|s_1-t_1|}\,\frac{\partial R_H}{\partial s_1}(s_1,s_2)\,\frac{\partial R_H}{\partial t_1}(t_1,t_2)\,\frac{\partial^{s_1}\partial^{t_1}}{\partial s_2\,\partial t_2}e^{-|s_2-t_2|}\,d\vec s\,d\vec t := \alpha_H^2\sum_{i,j=1}^2 M_{ij}(T) - 2\alpha_H\sum_{i=1}^2 M_{3i}(T) + M_{33}(T). \]
Making the change of variables $x = T-s_1$, $y = T-t_1$, $u = T-s_2$, $v = T-t_2$, we obtain $M_{11}(T) = M_{22}(T)$ and $M_{12}(T) = M_{21}(T)$, so that
\[ \|f_T\|^2_{\mathcal H^{\otimes 2}} = M_{33}(T) + 2\Big[ \alpha_H^2\big(M_{11}(T)+M_{12}(T)\big) - \alpha_H\big(M_{31}(T)+M_{32}(T)\big)\Big]. \tag{3.7} \]

Step 2. We find the asymptote of $M_{11}(T)+M_{12}(T)$ as $T\to\infty$. First,
\[ M_{11}(T) = \int_1^T ds_1\int_1^T e^{-|s_1-t_1|}dt_1\int_0^{s_1-1}(s_1-s_2)^{2H-2}ds_2\int_0^{t_1-1}(t_1-t_2)^{2H-2}e^{-|t_2-s_2|}\,dt_2 = 2\int_1^T e^{-s_1}ds_1\int_1^{s_1} e^{t_1}dt_1\int_0^{s_1-1}(s_1-s_2)^{2H-2}ds_2\int_0^{t_1-1}(t_1-t_2)^{2H-2}e^{-|t_2-s_2|}\,dt_2, \tag{3.8} \]
where the second equality uses the symmetry of the integral in the two variables $s_1$, $t_1$, and
\[ M_{12}(T) = \int_1^T ds_1\int_0^{T-1}dt_1\int_0^{s_1-1}(s_1-s_2)^{2H-2}ds_2\int_{t_1+1}^T (t_2-t_1)^{2H-2}\,e^{-|t_1-s_1|-|t_2-s_2|}\,dt_2. \tag{3.9} \]
By Lemma 4.2 and Lemma 4.3, the asymptote of $M_{11}(T)+M_{12}(T)$ as $T\to\infty$ is
\[ T\times\Big[ (4H-1)\Big(\int_1^\infty e^{-u}u^{2H-2}du\Big)^2 + 2\int_1^\infty (e^{1-u}+e^{-1-u})u^{2H-2}du + 2(4H-1)\int_1^\infty e^{-u}u^{2H-2}du\int_1^u e^{v}v^{2H-2}dv \Big] + C_H. \tag{3.10} \]

Step 3. We find the asymptote of $M_{31}(T)+M_{32}(T)$ as $T\to\infty$. First,
\[ M_{31}(T) = \int_{\kappa_3\times\kappa_1} e^{-|s_1-t_1|}\,\frac{\partial^{s_1}}{\partial s_2}e^{-|s_2-t_2|}\,\frac{\partial R_H}{\partial s_1}(s_1,s_2)\,(t_1-t_2)^{2H-2}\,d\vec s\,d\vec t = H\int_0^T ds_1\int_1^T e^{-|s_1-t_1|}dt_1\int_0^{t_1-1}(t_1-t_2)^{2H-2}dt_2\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}\frac{\partial^{s_1}}{\partial s_2}e^{-|s_2-t_2|}\,\big( s_1^{2H-1} - \operatorname{sgn}(s_1-s_2)|s_1-s_2|^{2H-1}\big)\,ds_2. \tag{3.11} \]
By Corollary 2.6,
\[ \int_0^T s_1^{2H-1}ds_1\int_1^T e^{-|s_1-t_1|}dt_1\int_0^{t_1-1}(t_1-t_2)^{2H-2}dt_2\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}\frac{\partial^{s_1}}{\partial s_2}e^{-|s_2-t_2|}\,ds_2 = 0. \]
Substituting this into (3.11), we obtain
\[ M_{31}(T) = -H\int_0^T ds_1\int_1^T e^{-|s_1-t_1|}dt_1\int_0^{t_1-1}(t_1-t_2)^{2H-2}dt_2\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}\frac{\partial^{s_1}}{\partial s_2}e^{-|s_2-t_2|}\,\operatorname{sgn}(s_1-s_2)|s_1-s_2|^{2H-1}\,ds_2 := H\big[ N(T) - \tilde N(T)\big], \tag{3.12} \]
where
\[ N(T) = \int_0^T ds_1\int_1^T e^{-|t_1-s_1|}dt_1\int_0^{t_1-1}(t_1-t_2)^{2H-2}dt_2\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}\operatorname{sgn}(s_1-s_2)|s_1-s_2|^{2H-1}\operatorname{sgn}(s_2-t_2)\,e^{-|t_2-s_2|}\,ds_2, \tag{3.13} \]
\[ \tilde N(T) = \int_0^T ds_1\int_1^T e^{-|t_1-s_1|}dt_1\int_0^{t_1-1}(t_1-t_2)^{2H-2}dt_2\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T} e^{-|t_2-s_2|}\operatorname{sgn}(s_1-s_2)|s_1-s_2|^{2H-1}\big(\delta_{(s_1-1)\vee 0}(s_2)-\delta_{(s_1+1)\wedge T}(s_2)\big)\,ds_2. \tag{3.14} \]
Similarly,
\[ M_{32}(T) = -H\int_0^T ds_1\int_1^T dt_2\int_0^{t_2-1} e^{-|s_1-t_1|}(t_2-t_1)^{2H-2}dt_1\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}\frac{\partial^{s_1}}{\partial s_2}e^{-|s_2-t_2|}\,\operatorname{sgn}(s_1-s_2)|s_1-s_2|^{2H-1}\,ds_2 := H\big[ U(T) - \tilde U(T)\big], \tag{3.15} \]
where
\[ U(T) = \int_0^T ds_1\int_1^T dt_2\int_0^{t_2-1} e^{-|t_1-s_1|}(t_2-t_1)^{2H-2}dt_1\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}\operatorname{sgn}(s_2-t_2)\,e^{-|t_2-s_2|}\operatorname{sgn}(s_1-s_2)|s_1-s_2|^{2H-1}\,ds_2, \tag{3.16} \]
\[ \tilde U(T) = \int_0^T ds_1\int_1^T dt_2\int_0^{t_2-1} e^{-|t_1-s_1|}(t_2-t_1)^{2H-2}dt_1\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T} e^{-|t_2-s_2|}\operatorname{sgn}(s_1-s_2)|s_1-s_2|^{2H-1}\big(\delta_{(s_1-1)\vee 0}(s_2)-\delta_{(s_1+1)\wedge T}(s_2)\big)\,ds_2. \tag{3.17} \]
By Lemmas 4.4, 4.5 and 4.6, the asymptote of $M_{31}(T)+M_{32}(T)$ as $T\to\infty$ is
\[ 2HT\times\Big[ \int_1^\infty e^{-u}u^{2H-1}du - 2H(e^{-1}+e) + (4H-1)\int_0^1 (e^x+e^{-x})x^{2H-1}dx + e^{-1}\int_0^1 (e^x-e^{-x})x^{2H-1}dx - (1+e^{-2})\Big] + C_H. \tag{3.18} \]
Step 4. We find the asymptote of $M_{33}(T)$. Similarly to the treatment of the terms $M_{31}(T)$ and $M_{32}(T)$ in Step 3, we expand $\frac{\partial R_H}{\partial s_1}(s_1,s_2)$ and $\frac{\partial R_H}{\partial t_1}(t_1,t_2)$ in turn and apply Corollary 2.6 twice in succession to obtain
\[ M_{33}(T) = \int_{\kappa_3\times\kappa_3} e^{-|s_1-t_1|}\frac{\partial R_H}{\partial s_1}(s_1,s_2)\frac{\partial R_H}{\partial t_1}(t_1,t_2)\frac{\partial^{s_1}\partial^{t_1}}{\partial s_2\,\partial t_2}e^{-|s_2-t_2|}\,d\vec s\,d\vec t \]
\[ = H\int_0^T ds_1\int_0^T e^{-|s_1-t_1|}dt_1\int_{(t_1-1)\vee 0}^{(t_1+1)\wedge T}\frac{\partial R_H}{\partial t_1}(t_1,t_2)\,dt_2\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}\big( s_1^{2H-1}-|s_1-s_2|^{2H-1}\operatorname{sgn}(s_1-s_2)\big)\frac{\partial^{s_1}\partial^{t_1}}{\partial s_2\,\partial t_2}e^{-|s_2-t_2|}\,ds_2 \]
\[ = -H^2\int_0^T ds_1\int_0^T e^{-|s_1-t_1|}dt_1\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}|s_1-s_2|^{2H-1}\operatorname{sgn}(s_1-s_2)\,ds_2\int_{(t_1-1)\vee 0}^{(t_1+1)\wedge T}\big( t_1^{2H-1}-|t_1-t_2|^{2H-1}\operatorname{sgn}(t_1-t_2)\big)\frac{\partial^{s_1}\partial^{t_1}}{\partial s_2\,\partial t_2}e^{-|s_2-t_2|}\,dt_2 \]
\[ = H^2\int_0^T ds_1\int_0^T e^{-|s_1-t_1|}dt_1\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}|s_1-s_2|^{2H-1}\operatorname{sgn}(s_1-s_2)\,ds_2\int_{(t_1-1)\vee 0}^{(t_1+1)\wedge T}|t_1-t_2|^{2H-1}\operatorname{sgn}(t_1-t_2)\,\frac{\partial^{s_1}\partial^{t_1}}{\partial s_2\,\partial t_2}e^{-|s_2-t_2|}\,dt_2. \tag{3.19} \]
Note that the bivariate joint "density function" above can be written as
\[ \frac{\partial^{s_1}\partial^{t_1}}{\partial s_2\,\partial t_2}e^{-|s_2-t_2|} = e^{-|s_2-t_2|}\Big( -1 - \operatorname{sgn}(t_2-s_2)\big[\delta_{(s_1-1)\vee 0}(s_2)-\delta_{(s_1+1)\wedge T}(s_2)\big] - \operatorname{sgn}(s_2-t_2)\big[\delta_{(t_1-1)\vee 0}(t_2)-\delta_{(t_1+1)\wedge T}(t_2)\big] + \big(\delta_{(s_1-1)\vee 0}-\delta_{(s_1+1)\wedge T}\big)(s_2)\big(\delta_{(t_1-1)\vee 0}-\delta_{(t_1+1)\wedge T}\big)(t_2)\Big). \]
Substituting this joint density into (3.19), we obtain
\[ M_{33}(T) = H^2\big[ -L(T) + 2P(T) + Q(T)\big], \tag{3.20} \]
where
\[ L(T) = \int_{[0,T]^2} e^{-|t_1-s_1|}\,ds_1 dt_1\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}ds_2\int_{(t_1-1)\vee 0}^{(t_1+1)\wedge T}\operatorname{sgn}(s_1-s_2)|s_1-s_2|^{2H-1}\operatorname{sgn}(t_1-t_2)|t_1-t_2|^{2H-1}\,e^{-|t_2-s_2|}\,dt_2, \tag{3.21} \]
\[ P(T) = \int_{[0,T]^2} e^{-|t_1-s_1|}\,ds_1 dt_1\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}ds_2\int_{(t_1-1)\vee 0}^{(t_1+1)\wedge T} e^{-|t_2-s_2|}\operatorname{sgn}(s_1-s_2)|s_1-s_2|^{2H-1}\operatorname{sgn}(t_1-t_2)|t_1-t_2|^{2H-1}\operatorname{sgn}(s_2-t_2)\big[\delta_{(s_1-1)\vee 0}(s_2)-\delta_{(s_1+1)\wedge T}(s_2)\big]\,dt_2, \tag{3.22} \]
\[ Q(T) = \int_{[0,T]^2} e^{-|t_1-s_1|}\,ds_1 dt_1\int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}ds_2\int_{(t_1-1)\vee 0}^{(t_1+1)\wedge T} e^{-|t_2-s_2|}\operatorname{sgn}(s_1-s_2)|s_1-s_2|^{2H-1}\operatorname{sgn}(t_1-t_2)|t_1-t_2|^{2H-1}\big(\delta_{(s_1-1)\vee 0}-\delta_{(s_1+1)\wedge T}\big)(s_2)\big(\delta_{(t_1-1)\vee 0}-\delta_{(t_1+1)\wedge T}\big)(t_2)\,dt_2. \]
For the term $Q(T)$, first integrate out the Dirac functions and then convert it into four double integrals; as $T\to\infty$ one computes directly that its asymptote is $(6e^{-2}+2)T + C_H$. Lemma 4.7 and Lemma 4.8 respectively give the asymptotes of the terms $L(T)$ and $P(T)$ as $T\to\infty$. Combining the three asymptotes, by formula (3.20) the asymptote of $M_{33}(T)$ is
\[ 2H^2 T\times\Big[ -2(4H+1)\int_0^1 e^{-u}u^{2H-1}du\int_0^u e^{v}v^{2H-1}dv + (4H+1)\Big(\int_0^1 e^{-u}u^{2H-1}du\Big)^2 + 4H\int_0^1\big( e^{-1-u}-e^{-1+u}\big)u^{2H-1}du + e^{-2}+3\Big] + C_H. \tag{3.23} \]
The above three steps give the asymptotes, as $T\to\infty$, of $M_{11}(T)+M_{12}(T)$, $M_{31}(T)+M_{32}(T)$ and $M_{33}(T)$ as functions of $T$, namely (3.10), (3.18) and (3.23). The decomposition (3.7) then shows that the squared norm of the bivariate function $f_T(t,s)$, as a function of $T$, has an asymptote as $T\to\infty$. Finally, (1.11) follows from (1.12) and the uniqueness of the limit of a function. $\Box$

Remark 3.1. A by-product of the proof of Theorem 1.1 is the following seemingly tedious analytic identity:
\[ \frac{2(H\Gamma(2H))^2}{4H-1} + \frac{2\Gamma(2-4H)\Gamma(4H)}{\Gamma(2H)\Gamma(1-2H)} = A_3 + 2\alpha_H(\alpha_H A_1 - A_2), \]
where $A_1$, $A_2$, $A_3$ are the slopes of the asymptotes (3.10), (3.18) and (3.23) respectively. It should be emphasized that the true content of Theorem 1.1 is the existence, as $T\to\infty$, of the asymptote of the squared norm of the bivariate function $f_T(t,s)$ viewed as a function of $T$.
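To make the bookkeeping behind Remark 3.1 explicit (this is implicit in Steps 1 to 4 above): writing $M_{11}(T)+M_{12}(T) = A_1 T + O(1)$, $M_{31}(T)+M_{32}(T) = A_2 T + O(1)$ and $M_{33}(T) = A_3 T + O(1)$ for the asymptotes (3.10), (3.18) and (3.23), the decomposition (3.7) gives
\[ \|f_T\|^2_{\mathcal H^{\otimes 2}} = M_{33}(T) + 2\big[\alpha_H^2\big(M_{11}(T)+M_{12}(T)\big) - \alpha_H\big(M_{31}(T)+M_{32}(T)\big)\big] = \big( A_3 + 2\alpha_H(\alpha_H A_1 - A_2)\big)\,T + O(1), \]
and comparing this slope with the one prescribed by (1.11) and (1.12), namely the left-hand side of the identity in Remark 3.1, yields that identity.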
As for the specific value of the slope of the asymptote, it is not so important; therefore, to save space, we do not verify this analytic identity in this paper.

Proof of Theorem 1.3. First we prove the Berry-Esséen type inequality (1.15) for the least squares estimator. From the proof of Theorem 1.1 in [11] we know that
\[ \sup_{z\in\mathbb R}\Big| P\Big(\sqrt{\tfrac{T}{\theta\sigma_H^2}}(\hat\theta_T-\theta)\le z\Big) - P(Z\le z)\Big| \le C_{\theta,H}\,\max\Big\{ \frac{1}{\sqrt T},\ \Big|\frac{1}{T}\|f_T\|^2_{\mathcal H^{\otimes 2}} - \frac{2(H\Gamma(2H))^2}{\sigma_H^2}\Big|\Big\}. \tag{3.24} \]
Hence, by Theorem 1.1, inequality (1.15) holds. Next we prove the Berry-Esséen inequality (1.16) for the moment estimator. According to the proof of Theorem 1.1 in [8],
\[ \sup_{z\in\mathbb R}\Big| P\Big(\sqrt{\tfrac{4H^2 T}{\theta\sigma_H^2}}(\tilde\theta_T-\theta)\le z\Big) - P(Z\le z)\Big| \le C_{\theta,H}\,\max\Big\{ \frac{1}{\sqrt T},\ \Big|\frac{1}{T}\|f_T\|^2_{\mathcal H^{\otimes 2}} - \frac{2(H\Gamma(2H))^2}{\sigma_H^2}\Big|,\ \frac{|\langle f_T,h_T\rangle_{\mathcal H^{\otimes 2}}|}{T}\Big\}. \tag{3.25} \]
Hence, by Theorem 1.1 and Proposition 1.5, inequality (1.16) holds. $\Box$

4. Appendix: asymptotes of the multiple integrals used in the proof of Theorem 1.1

Lemma 4.1. Let $\alpha<0$. There is a positive number $C_\alpha$, depending only on $\alpha$, such that for any $x>1$,
\[ \int_1^x e^{u}u^{\alpha}\,du < C_\alpha\,e^{x}. \tag{4.1} \]

Lemma 4.2. Let (3.8) be the expression of the quadruple integral $M_{11}(T)$. When $T\to\infty$, the asymptote of $M_{11}(T)$ is given by
\[ 2T\times\Big[ (4H-1)\int_1^\infty e^{-v}v^{2H-2}dv\int_1^v e^{u}u^{2H-2}du + \int_1^\infty e^{1-u}u^{2H-2}du\Big] + C_H. \tag{4.2} \]

Proof. Fix the integration variables $(s_1,t_1)$ of the quadruple integral $\frac12 M_{11}(T)$. We first decompose the region $[0,s_1-1]\times[0,t_1-1]$ of the variables $(s_2,t_2)$ into $\{0\le t_2\le t_1-1,\ t_2\le s_2\le s_1-1\}\cup\{0\le s_2\le t_2\le t_1-1\}$. The restrictions of $\frac12 M_{11}(T)$ to the corresponding subregions are denoted $J_1(T)$, $J_2(T)$ respectively, where
\[ J_1(T) = \int_1^T ds_1\int_1^{s_1}dt_1\int_0^{t_1-1}dt_2\int_{t_2}^{s_1-1}ds_2\,(s_1-s_2)^{2H-2}(t_1-t_2)^{2H-2}e^{t_1-s_1-|t_2-s_2|}, \]
\[ J_2(T) = \int_1^T ds_1\int_1^{s_1}dt_1\int_0^{t_1-1}dt_2\int_0^{t_2}ds_2\,(s_1-s_2)^{2H-2}(t_1-t_2)^{2H-2}e^{t_1-s_1-|t_2-s_2|}. \]
We first find the asymptote of the quadruple integral $J_1(T)$. Making the change of variables $u = s_1-s_2$, $v = t_1-t_2$, $x = s_1-t_1+v$, and using symmetry, we have
\[ J_1(T) = \int_1^T ds_1\int_1^{s_1} e^{-2x}dx\int_1^x e^{v}v^{2H-2}dv\int_1^x e^{u}u^{2H-2}du = 2\int_1^T ds_1\int_1^{s_1} e^{-2x}dx\int_1^x e^{v}v^{2H-2}dv\int_1^v e^{u}u^{2H-2}du. \]
Lemma 4.1 and the integration-by-parts formula show that, as $T\to\infty$, the asymptote of $J_1(T)$ is
\[ T\times\int_1^\infty e^{-v}v^{2H-2}dv\int_1^v e^{u}u^{2H-2}du + C_H. \tag{4.3} \]
Next we find the asymptote of $J_2(T)$. Making the change of variables $u = s_1-s_2$, $v = t_1-t_2$, $x = s_1-t_1+v$, we have
\[ J_2(T) = \int_1^T ds_1\int_1^{s_1} e^{-u}u^{2H-2}du\int_1^u dx\int_1^x e^{v}v^{2H-2}dv. \]
Lemma 4.1 and the integration-by-parts formula imply that, as $T\to\infty$, the asymptote of $J_2(T)$ is
\[ T\times\int_1^\infty e^{-u}u^{2H-2}du\int_1^u dx\int_1^x e^{v}v^{2H-2}dv + C_H = T\times\int_1^\infty e^{-u}u^{2H-2}du\int_1^u e^{v}v^{2H-2}(u-v)\,dv + C_H. \tag{4.4} \]
The asymptote of $M_{11}(T)$ is obtained by combining (4.3) and (4.4) and integrating by parts, which yields (4.2).

Lemma 4.3. Let (3.9) be the expression of the quadruple integral $M_{12}(T)$. When $T\to\infty$, the asymptote of $M_{12}(T)$ is
\[ T\Big[ (4H-1)\Big(\int_1^\infty e^{-u}u^{2H-2}du\Big)^2 + 2\int_1^\infty e^{-1-u}u^{2H-2}du\Big] + C_H. \tag{4.5} \]

Proof. We first decompose the region $[1,T]\times[0,T-1]$ of the integration variables $(s_1,t_1)$ of the quadruple integral $M_{12}(T)$ into $\{0\le s_1-1\le t_1\le T-1\}\cup\{0\le t_1\le s_1-1\le T-1\}$. The restrictions of $M_{12}(T)$ to the corresponding subregions are denoted $\bar J_1(T)$ and $\bar J_2(T)$ respectively, where
\[ \bar J_1(T) = \int_1^T ds_1\int_{s_1-1}^{T-1}dt_1\int_0^{s_1-1}ds_2\int_{t_1+1}^T (s_1-s_2)^{2H-2}(t_2-t_1)^{2H-2}e^{-|t_1-s_1|-|t_2-s_2|}\,dt_2, \]
\[ \bar J_2(T) = \int_1^T ds_1\int_0^{s_1-1}dt_1\int_0^{s_1-1}ds_2\int_{t_1+1}^T (s_1-s_2)^{2H-2}(t_2-t_1)^{2H-2}e^{-|t_1-s_1|-|t_2-s_2|}\,dt_2. \]
We first find the asymptote of $\bar J_1(T)$ as $T\to\infty$. Making the change of variables $u = t_2-t_1$, $v = u+s_1-1-s_2$ and $x = t_2-s_2$, we have
\[ \bar J_1(T) = \int_1^T dt_2\int_1^{t_2} e^{-x}dx\int_1^x e^{-|x-v-1|}dv\int_1^v (v-u+1)^{2H-2}u^{2H-2}\,du. \]
By the integration-by-parts formula and the Fubini theorem, as $T\to\infty$ its asymptote is
\[ T\times\int_1^\infty e^{-x}dx\int_1^x e^{-|x-v-1|}dv\int_1^v (v-u+1)^{2H-2}u^{2H-2}du + C_H = T\times\Big[\int_1^\infty u^{2H-2}du\int_u^\infty e^{v}(v-u+1)^{2H-2}dv\int_{1+v}^\infty e^{1-2x}dx + \int_1^\infty u^{2H-2}du\int_u^\infty e^{-v-1}(v-u+1)^{2H-2}dv\int_v^{1+v}dx\Big] + C_H = \frac32\,T\times\Big(\int_1^\infty e^{-u}u^{2H-2}du\Big)^2 + C_H. \tag{4.6} \]
Next we find the asymptote of the integral $\bar J_2(T)$ as $T\to\infty$. Fix the integration variables $(s_1,t_2)$ of $\bar J_2(T)$.
We decompose the region $[0,s_1-1]^2$ of the variables $(t_1,s_2)$ into $\{0\le s_2\le t_1\le s_1-1\}\cup\{0\le t_1\le s_2\le s_1-1\}$. The restrictions of $\bar J_2(T)$ to the corresponding subregions are denoted $\bar J_{21}(T)$ and $\bar J_{22}(T)$ respectively, where
\[ \bar J_{21}(T) = \int_1^T ds_1\int_0^{s_1-1}dt_1\int_0^{t_1}ds_2\int_{t_1+1}^T (s_1-s_2)^{2H-2}(t_2-t_1)^{2H-2}e^{-|t_1-s_1|-|t_2-s_2|}\,dt_2, \]
\[ \bar J_{22}(T) = \int_1^T ds_1\int_0^{s_1-1}ds_2\int_0^{s_2}dt_1\int_{t_1+1}^T (s_1-s_2)^{2H-2}(t_2-t_1)^{2H-2}e^{-|t_1-s_1|-|t_2-s_2|}\,dt_2. \]
Fix the integration variables $(s_1,t_1,s_2)$ of $\bar J_{21}(T)$ and decompose the region $[t_1+1,T]$ of the variable $t_2$ into $[t_1+1,s_1]\cup[s_1,T]$; the corresponding restrictions are $\bar J_{211}(T)$ and $\bar J_{212}(T)$:
\[ \bar J_{211}(T) = \int_1^T ds_1\int_0^{s_1-1}dt_1\int_0^{t_1}ds_2\int_{t_1+1}^{s_1}(s_1-s_2)^{2H-2}(t_2-t_1)^{2H-2}e^{-|t_1-s_1|-|t_2-s_2|}\,dt_2, \]
\[ \bar J_{212}(T) = \int_1^T ds_1\int_0^{s_1-1}dt_1\int_0^{t_1}ds_2\int_{s_1}^T (s_1-s_2)^{2H-2}(t_2-t_1)^{2H-2}e^{-|t_1-s_1|-|t_2-s_2|}\,dt_2. \]
For the integral $\bar J_{211}(T)$, making the change of variables $u = t_2-t_1$, $v = s_1-t_1$, $x = s_1-s_2$, we have
\[ \bar J_{211}(T) = \int_1^T ds_1\int_1^{s_1} e^{-x}x^{2H-2}dx\int_1^x dv\int_1^v e^{-u}u^{2H-2}du. \]
The integration-by-parts formula implies that, as $T\to\infty$, the asymptote of $\bar J_{211}(T)$ is
\[ T\int_1^\infty e^{-x}x^{2H-2}dx\int_1^x e^{-u}u^{2H-2}(x-u)\,du + C_H. \tag{4.7} \]
For the integral $\bar J_{212}(T)$, making the same type of change of variables, we have
\[ \bar J_{212}(T) = \int_1^T dt_2\int_1^{t_2} e^{-y}dy\int_1^y u^{2H-2}du\int_1^u e^{-v}(y-u+v)^{2H-2}dv. \]
The integration-by-parts formula implies that, as $T\to\infty$, the asymptote of $\bar J_{212}(T)$ is
\[ T\times\int_1^\infty e^{-y}dy\int_1^y u^{2H-2}du\int_1^u e^{-v}(y-u+v)^{2H-2}dv + C_H = T\times\Big[ 2\int_1^\infty e^{-u}u^{2H-2}du\int_1^u e^{-v}v^{2H-1}dv - \Big(\int_1^\infty e^{-u}u^{2H-2}du\Big)^2\Big] + C_H. \tag{4.8} \]
Fix now the integration variables $(s_1,s_2,t_1)$ of $\bar J_{22}(T)$ and decompose the region $[t_1+1,T]$ of the variable $t_2$ into $[t_1+1,s_2+1]\cup[s_2+1,s_1]\cup[s_1,T]$; the corresponding restrictions are $\bar J_{221}(T)$, $\bar J_{222}(T)$, $\bar J_{223}(T)$:
\[ \bar J_{221}(T) = \int_1^T ds_1\int_0^{s_1-1}ds_2\int_0^{s_2}dt_1\int_{t_1+1}^{s_2+1}(s_1-s_2)^{2H-2}(t_2-t_1)^{2H-2}e^{-|t_1-s_1|-|t_2-s_2|}\,dt_2, \]
\[ \bar J_{222}(T) = \int_1^T ds_1\int_0^{s_1-1}ds_2\int_0^{s_2}dt_1\int_{s_2+1}^{s_1}(s_1-s_2)^{2H-2}(t_2-t_1)^{2H-2}e^{-|t_1-s_1|-|t_2-s_2|}\,dt_2, \]
\[ \bar J_{223}(T) = \int_1^T ds_1\int_0^{s_1-1}ds_2\int_0^{s_2}dt_1\int_{s_1}^T (s_1-s_2)^{2H-2}(t_2-t_1)^{2H-2}e^{-|t_1-s_1|-|t_2-s_2|}\,dt_2. \]
For the integrals $\bar J_{221}(T)$ and $\bar J_{222}(T)$, making the appropriate changes of variables, we have
\[ \bar J_{221}(T) = \int_1^T ds_1\int_1^{s_1} e^{-v}dv\int_1^v (v-z+1)^{2H-2}dz\int_1^z x^{2H-2}e^{-|z-x-1|}\,dx, \qquad \bar J_{222}(T) = \int_1^T ds_1\int_1^{s_1} e^{-v}dv\int_1^v e^{-x}x^{2H-2}dx\int_1^x e^{z-1}(v-z+1)^{2H-2}\,dz. \]
The integration-by-parts formula and the Fubini theorem imply that, as $T\to\infty$, the asymptotes of $\bar J_{221}(T)$ and $\bar J_{222}(T)$ are
\[ T\times\int_1^\infty e^{-v}dv\int_1^v (v-z+1)^{2H-2}dz\int_1^z x^{2H-2}e^{-|z-x-1|}dx + C_H = \frac32\,T\times\Big(\int_1^\infty e^{-u}u^{2H-2}du\Big)^2 + C_H, \tag{4.9} \]
\[ T\times\int_1^\infty e^{-v}dv\int_1^v e^{-x}x^{2H-2}dx\int_1^x e^{z-1}(v-z+1)^{2H-2}dz + C_H = 2T\times\int_1^\infty e^{-x}x^{2H-2}\int_1^x e^{-y}\big( y^{2H-1}-y^{2H-2}\big)\,dy\,dx + C_H. \tag{4.10} \]
For the integral $\bar J_{223}(T)$, making the change of variables, we have
\[ \bar J_{223}(T) = \int_1^T dt_2\int_1^{t_2} e^{-u}u^{2H-2}du\int_1^u dy\int_1^y e^{-x}x^{2H-2}dx. \]
By the integration-by-parts formula, as $T\to\infty$ the asymptote of $\bar J_{223}(T)$ is
\[ T\times\int_1^\infty e^{-u}u^{2H-2}du\int_1^u e^{-x}x^{2H-2}(u-x)\,dx + C_H. \tag{4.11} \]
Combining (4.7)-(4.11), we obtain the asymptote of $\bar J_2(T)$:
\[ T\Big[ \Big(4H-\frac52\Big)\Big(\int_1^\infty e^{-u}u^{2H-2}du\Big)^2 + 2\int_1^\infty e^{-1-u}u^{2H-2}du\Big] + C_H. \tag{4.12} \]
Finally, combining (4.6) and (4.12), we obtain the asymptote (4.5) of $M_{12}(T)$.

Lemma 4.4. Let (3.13) be the expression of the quadruple integral $N(T)$. When $T\to\infty$, the asymptote of $N(T)$ is
\[ T\times\Big\{ \int_1^\infty e^{-u}u^{2H-2}du\,\Big[ e^{-1}-e+(4H-1)\int_0^1(e^{x}-e^{-x})x^{2H-1}dx\Big] + \int_0^1 (e^{x-1}-e^{-x-1})x^{2H-1}dx\Big\} + C_H. \tag{4.13} \]

Proof. We divide the region $\{0\le s_1\le T,\ (s_1-1)\vee 0\le s_2\le (s_1+1)\wedge T\}$ of the integration variables $(s_1,s_2)$ of $N(T)$ into
\[ \{0\le s_1-1\le s_2\le s_1\le T\}\cup\{0\le s_2-1\le s_1\le s_2\le T\}\cup\{0\le s_2\le s_1\le 1\}\cup\{0\le s_1\le s_2\le 1\}. \]
The restrictions of the integral $N(T)$ to the corresponding regions are denoted $N_1(T)$, $N_2(T)$, $N_3(T)$, $N_4(T)$, where
\[ N_1(T) = \int_{[1,T]^2} e^{-|t_1-s_1|}\,dt_1 ds_1\int_0^{t_1-1}(t_1-t_2)^{2H-2}dt_2\int_{s_1-1}^{s_1}\operatorname{sgn}(s_2-t_2)\,e^{-|t_2-s_2|}\,(s_1-s_2)^{2H-1}\,ds_2, \]
\[ N_2(T) = -\int_{[1,T]^2}dt_1 ds_2\int_0^{t_1-1}(t_1-t_2)^{2H-2}dt_2\int_{s_2-1}^{s_2}\operatorname{sgn}(s_2-t_2)\,e^{-|t_1-s_1|-|t_2-s_2|}\,(s_2-s_1)^{2H-1}\,ds_1, \]
\[ N_3(T) = \int_1^T dt_1\int_0^{t_1-1}(t_1-t_2)^{2H-2}dt_2\int_0^1 ds_1\int_0^{s_1}\operatorname{sgn}(s_2-t_2)\,e^{-|t_1-s_1|-|t_2-s_2|}\,(s_1-s_2)^{2H-1}\,ds_2, \]
\[ N_4(T) = \int_1^T dt_1\int_0^{t_1-1}(t_1-t_2)^{2H-2}dt_2\int_0^1 ds_2\int_0^{s_2}\operatorname{sgn}(s_2-t_2)\,e^{-|t_1-s_1|-|t_2-s_2|}\,(s_2-s_1)^{2H-1}\,ds_1. \]
First, by the absolute integrability of the double integral
\[ \int_1^\infty e^{-t_1}dt_1\int_0^{t_1-1}(t_1-t_2)^{2H-2}dt_2\int_{[0,1]^2}|s_2-s_1|^{2H-1}e^{s_1}\,ds_1 ds_2, \]
the limits of $N_3(T)$ and $N_4(T)$ exist as $T\to\infty$. Therefore the integrals $N(T)$ and $N_1(T)+N_2(T)$ have asymptotes with the same slope but different intercepts. Next we find the asymptotes of $N_1(T)$ and $N_2(T)$. To find the asymptote of $N_1(T)$, we decompose the region $[1,T]^2$ of the variables $(s_1,t_1)$ into $1\le t_1\le s_1\le T$ and $1\le s_1\le t_1\le T$:
\[ N_1(T) = \Big( \int_1^T ds_1\int_1^{s_1}dt_1 + \int_1^T dt_1\int_1^{t_1}ds_1\Big)\int_0^{t_1-1}dt_2\int_{s_1-1}^{s_1}(s_1-s_2)^{2H-1}(t_1-t_2)^{2H-2}\operatorname{sgn}(s_2-t_2)\,e^{-|t_1-s_1|-|t_2-s_2|}\,ds_2 := N_{11}(T)+N_{12}(T). \]
For $N_{11}(T)$, making the change of variables $u = t_1-t_2$, $v = s_1-s_2$, we obtain
\[ N_{11}(T) = \int_0^1 e^{v}v^{2H-1}dv\int_1^T e^{-2s_1}ds_1\int_1^{s_1} e^{2t_1}dt_1\int_1^{t_1} e^{-u}u^{2H-2}du. \]
Therefore, by the integration-by-parts formula, as $T\to\infty$ the asymptote of $N_{11}(T)$ is
\[ \frac{T}{2}\int_0^1 e^{v}v^{2H-1}dv\int_1^\infty e^{-u}u^{2H-2}du + C_H. \tag{4.14} \]
For $N_{12}(T)$, fix the variable $t_1$ and decompose the region $[1,t_1]\times[0,t_1-1]$ of the variables $(s_1,t_2)$ into $0\le t_2\le s_1-1\le t_1-1$ and $1\le s_1\le t_2+1\le t_1$. Then $N_{12}(T)$ splits into the sum of two integrals:
\[ N_{12}(T) = \int_1^T dt_1\Big( \int_1^{t_1}ds_1\int_0^{s_1-1}dt_2 + \int_0^{t_1-1}dt_2\int_1^{t_2+1}ds_1\Big)\int_{s_1-1}^{s_1}(s_1-s_2)^{2H-1}(t_1-t_2)^{2H-2}\operatorname{sgn}(s_2-t_2)\,e^{s_1-t_1-|t_2-s_2|}\,ds_2 := O_1(T)+O_2(T). \]
For $O_1(T)$, making the change of variables $u = t_1-t_2$, $v = s_1-t_2$, $x = s_1-s_2$, we have
\[ O_1(T) = \int_0^1 e^{x}x^{2H-1}dx\int_1^T dt_1\int_1^{t_1} e^{-u}u^{2H-2}(u-1)\,du. \]
Therefore, by the integration-by-parts formula, as $T\to\infty$ the asymptote of $O_1(T)$ is
\[ T\times\int_0^1 e^{x}x^{2H-1}dx\int_1^\infty e^{-u}\big( u^{2H-1}-u^{2H-2}\big)\,du + C_H. \tag{4.15} \]
For $O_2(T)$, the change of variables $u = t_1-t_2$, $v = t_1-s_1+1$, $x = s_1-s_2$ gives
\[ O_2(T) = \int_1^T dt_1\int_1^{t_1} e^{1-v}dv\int_1^v u^{2H-2}du\int_0^1 x^{2H-1}\operatorname{sgn}(u-v-x+1)\,e^{-|u-v-x+1|}\,dx. \]
By the integration-by-parts formula, as $T\to\infty$ the asymptote of $O_2(T)$ is
\[ T\int_1^\infty e^{1-v}dv\int_1^v u^{2H-2}du\int_0^1 x^{2H-1}\operatorname{sgn}(u-v-x+1)e^{-|u-v-x+1|}dx + C_H = T\int_1^\infty e^{-u}u^{2H-2}du\int_0^1 e^{x}\Big(\frac12 x^{2H-1}-x^{2H}\Big)dx + C_H, \tag{4.16} \]
where the last equality is obtained by decomposing the region $\{1\le u\le v<\infty\}$ into $\{1\le u\le v\le 1+u<\infty\}\cup\{1\le u\le v-1<\infty\}$ and using the Fubini theorem. Combining (4.15) and (4.16), the asymptote of $N_{12}(T)$ is
\[ T\Big[ \int_0^1 e^{x-1}x^{2H-1}dx - \int_1^\infty e^{1-u}u^{2H-2}du + \Big(4H-\frac32\Big)\int_1^\infty e^{-u}u^{2H-2}du\int_0^1 e^{x}x^{2H-1}dx\Big] + C_H. \tag{4.17} \]
Combining this with the asymptote (4.14), the asymptote of $N_1(T)$ is
\[ T\Big[ \int_0^1 e^{x-1}x^{2H-1}dx - \int_1^\infty e^{1-u}u^{2H-2}du + (4H-1)\int_1^\infty e^{-u}u^{2H-2}du\int_0^1 e^{x}x^{2H-1}dx\Big] + C_H. \tag{4.18} \]
To find the asymptote of $-N_2(T)$, we decompose the region $[1,T]^2$ of the variables $(t_1,s_2)$ of $-N_2(T)$ into $\{1\le t_1\le s_2\le T\}\cup\{1\le s_2\le t_1\le T\}$. Then
\[ -N_2(T) = \Big( \int_1^T ds_2\int_1^{s_2}dt_1 + \int_1^T dt_1\int_1^{t_1}ds_2\Big)\int_0^{t_1-1}dt_2\int_{s_2-1}^{s_2}(s_2-s_1)^{2H-1}(t_1-t_2)^{2H-2}\operatorname{sgn}(s_2-t_2)\,e^{-|t_1-s_1|-|t_2-s_2|}\,ds_1 := N_{21}(T)+N_{22}(T). \]
Making the change of variables, we have
\[ N_{21}(T) = \int_1^T ds_2\int_1^{s_2} e^{-v}dv\int_1^v u^{2H-2}du\int_0^1 x^{2H-1}e^{-|x-v+u|}\,dx. \]
The integration-by-parts formula and the change of variable $z = v-u$ imply that, as $T\to\infty$, the asymptote of $N_{21}(T)$ is
\[ T\times\int_1^\infty u^{2H-2}du\int_u^\infty e^{-v}dv\int_0^1 x^{2H-1}e^{-|x-v+u|}dx + C_H = T\times\Big[ \Big(2H+\frac12\Big)\int_0^1 e^{-x}x^{2H-1}dx - e^{-1}\Big]\times\int_1^\infty e^{-u}u^{2H-2}du + C_H. \tag{4.19} \]
For $N_{22}(T)$, fix the variable $t_1$ and decompose the region $[0,t_1-1]\times[1,t_1]$ of the variables $(t_2,s_2)$ into $1\le t_2+1\le s_2\le t_1$ and $1\le s_2\le t_2+1\le t_1$. Then $N_{22}(T)$ splits into the sum of two integrals:
\[ N_{22}(T) = \int_1^T dt_1\Big( \int_0^{t_1-1}dt_2\int_{t_2+1}^{t_1}ds_2 + \int_1^{t_1}ds_2\int_{s_2-1}^{t_1-1}dt_2\Big)\int_{s_2-1}^{s_2}(s_2-s_1)^{2H-1}(t_1-t_2)^{2H-2}\operatorname{sgn}(s_2-t_2)\,e^{-|t_1-s_1|-|t_2-s_2|}\,ds_1 := O'_1(T)+O'_2(T). \]
Making the change of variables $u = t_1-t_2$, $v = t_1+1-s_2$, $x = s_2-s_1$, we have
\[ O'_1(T) = \int_1^T dt_1\int_1^{t_1} e^{-u}u^{2H-2}du\int_1^u dv\int_0^1 e^{-x}x^{2H-1}dx, \qquad O'_2(T) = \int_1^T dt_1\int_1^{t_1} e^{-v}dv\int_1^v e^{-|u-v+1|}u^{2H-2}\operatorname{sgn}(u-v+1)\,du\int_0^1 e^{1-x}x^{2H-1}dx. \]
The integration-by-parts formula implies that, as $T\to\infty$, the asymptotes of $O'_1(T)$ and $O'_2(T)$ are
\[ T\times\int_0^1 e^{-x}x^{2H-1}dx\int_1^\infty e^{-u}u^{2H-2}(u-1)\,du + C_H \tag{4.20} \]
and
\[ T\times\int_0^1 e^{1-x}x^{2H-1}dx\int_1^\infty e^{-v}dv\int_1^v e^{-|u-v+1|}u^{2H-2}\operatorname{sgn}(u-v+1)\,du + C_H = \frac{T}{2}\times\int_0^1 e^{-x}x^{2H-1}dx\times\int_1^\infty e^{-u}u^{2H-2}du + C_H. \tag{4.21} \]
Combining (4.20) and (4.21), we obtain the asymptote of $N_{22}(T)$:
\[ T\times\int_0^1 e^{-x}x^{2H-1}dx\int_1^\infty e^{-u}u^{2H-2}\Big( u-\frac12\Big)\,du + C_H. \tag{4.22} \]
Combining (4.19) and (4.22), the asymptote of $-N_2(T)$ is
\[ T\Big[ \int_0^1 e^{-x-1}x^{2H-1}dx - \int_1^\infty e^{-1-u}u^{2H-2}du + (4H-1)\int_1^\infty e^{-u}u^{2H-2}du\int_0^1 e^{-x}x^{2H-1}dx\Big] + C_H. \]
The asymptote (4.13) of $N(T)$ is obtained by subtracting this expression from the asymptote (4.18) of $N_1(T)$.

Lemma 4.5. Let (3.14) be the expression of the quadruple integral $\tilde N(T)$. Then, as $T\to\infty$, the asymptote of $\tilde N(T)$ is
\[ T\times\Big\{ (1+e^{-2}) + \big[ (2H+1)e^{-1} + (2H-1)e\big]\int_1^\infty e^{-u}u^{2H-2}du\Big\} + C_H. \tag{4.23} \]

Proof. Integrating out the Dirac functions, we write the quadruple integral $\tilde N(T)$ as the sum of the following two triple integrals:
\[ \tilde N(T) = \tilde N_1(T) + \tilde N_2(T), \tag{4.24} \]
where
\[ \tilde N_1(T) = \int_0^T ds_1\int_1^T e^{-|t_1-s_1|}dt_1\int_0^{t_1-1}(t_1-t_2)^{2H-2}\,\big( s_1-(s_1-1)\vee 0\big)^{2H-1}e^{-|t_2-(s_1-1)\vee 0|}\,dt_2, \]
\[ \tilde N_2(T) = \int_0^T ds_1\int_1^T e^{-|t_1-s_1|}dt_1\int_0^{t_1-1}(t_1-t_2)^{2H-2}\,\big( (s_1+1)\wedge T - s_1\big)^{2H-1}e^{-|t_2-(s_1+1)\wedge T|}\,dt_2. \]
First we find the asymptote of the triple integral $\tilde N_1(T)$. Divide the region $s_1\in[0,T]$ into $[0,1)\cup[1,T]$. Making the change of variable $u = t_1-t_2$, the triple integral associated with the subinterval $s_1\in[0,1)$ is
\[ \int_0^1 s_1^{2H-1}ds_1\int_1^T e^{-(t_1-s_1)}dt_1\int_0^{t_1-1} e^{-t_2}(t_1-t_2)^{2H-2}dt_2 = \int_0^1 e^{s_1}s_1^{2H-1}ds_1\int_1^T e^{-2t_1}dt_1\int_1^{t_1} e^{u}u^{2H-2}du, \]
whose limit exists as $T\to\infty$. Hence $\tilde N_1(T)$ and the triple integral
\[ \tilde N_{11}(T) = \int_1^T ds_1\int_1^T e^{-|t_1-s_1|}dt_1\int_0^{t_1-1} e^{-|t_2-s_1+1|}(t_1-t_2)^{2H-2}\,dt_2, \]
which is the restriction of $\tilde N_1(T)$ to the subregion $s_1\in[1,T]$, have asymptotes with the same slope (and different intercepts).
Making the change of variables $w = s_1\vee t_1$, $v = |s_1-t_1|$, $u = t_1-t_2$, the triple integral $\tilde N_{11}(T)$ is rewritten as
\[ \tilde N_{11}(T) = \int_1^T dw\int_0^{w-1} e^{-v}dv\Big[ \int_1^{w-v} u^{2H-2}e^{-(v+u-1)}du + \int_1^{w} u^{2H-2}e^{-|v-u+1|}du\Big]. \tag{4.25} \]
Making the change of variables $u' = u-1$, $x = v+u'$, the first part of the triple integral (4.25) is
\[ \int_1^T dw\int_0^{w-1} e^{-v}dv\int_1^{w-v} u^{2H-2}e^{-(v+u-1)}du = \int_1^T dw\int_0^{w-1} e^{-2x}dx\int_0^{x}(1+u')^{2H-2}e^{u'}du'. \]
From the integration-by-parts formula and the Fubini theorem it is easy to see that the asymptote of this expression is
\[ T\int_0^\infty e^{-2x}dx\int_0^{x}(1+u')^{2H-2}e^{u'}du' + C_H = \frac{T}{2}\int_1^\infty e^{1-u}u^{2H-2}du + C_H. \tag{4.26} \]
Making the change of variable $u' = u-1$ and using the Fubini theorem, the second part of the triple integral (4.25) is
\[ \int_1^T dw\int_0^{w-1} e^{-v}dv\int_1^{w} u^{2H-2}e^{-|v-u+1|}du = \int_1^T dw\int_0^{w-1} e^{-v}dv\Big[ \int_1^{1+v} u^{2H-2}e^{u-v-1}du + \int_{1+v}^{w} u^{2H-2}e^{1+v-u}du\Big] = \int_1^T dw\int_0^{w-1} e^{-2v}dv\int_0^{v} e^{u'}(u'+1)^{2H-2}du' + \int_1^T dw\int_0^{w-1} e^{-u'}(u'+1)^{2H-2}u'\,du'. \]
By the integration-by-parts formula, the asymptote of this expression is
\[ T\Big[ \int_0^\infty e^{-2x}dx\int_0^{x}(1+u')^{2H-2}e^{u'}du' + \int_0^\infty e^{-u'}(1+u')^{2H-2}u'\,du'\Big] + C_H = T\Big[ \frac12\int_1^\infty e^{1-u}u^{2H-2}du + \int_1^\infty e^{1-u}u^{2H-2}(u-1)\,du\Big] + C_H. \tag{4.27} \]
Combining (4.26) and (4.27), the asymptote of the triple integral $\tilde N_1(T)$ is
\[ T\times\int_1^\infty e^{1-u}u^{2H-1}du + C_H. \tag{4.28} \]
Next we find the asymptote of the triple integral $\tilde N_2(T)$. Similarly, divide the region $s_1\in[0,T]$ into $[0,T-1]\cup(T-1,T]$. Making the change of variable $u = t_1-t_2$, the triple integral associated with the subinterval $s_1\in(T-1,T]$ has a finite limit as $T\to\infty$. Therefore $\tilde N_2(T)$ and the triple integral
\[ \tilde N_{21}(T) = \int_0^{T-1} ds_1\int_1^T e^{-|t_1-s_1|}dt_1\int_0^{t_1-1} e^{-|t_2-s_1-1|}(t_1-t_2)^{2H-2}\,dt_2 \]
have asymptotes with the same slope (and different intercepts). For the triple integral $\tilde N_{21}(T)$, first make the change of variable $u = t_1-t_2$ and then divide the region $[0,T-1]\times[1,T]$ of the variables $(s_1,t_1)$ into
\[ \{1\le t_1\le s_1\le T-1\}\cup\{0\le t_1-1\le s_1\le t_1\wedge(T-1)\le T\}\cup\{0\le s_1\le t_1-1\le T-1\}, \]
so that
\[ \tilde N_{21}(T) = \Big( \int_1^{T-1}ds_1\int_1^{s_1}dt_1 + \int_1^T dt_1\int_{t_1-1}^{t_1\wedge(T-1)}ds_1 + \int_1^T dt_1\int_0^{t_1-1}ds_1\Big)\int_1^{t_1} e^{-|t_1-s_1|-|t_1-s_1-u-1|}\,u^{2H-2}\,du. \tag{4.29} \]
By the integration-by-parts formula, the asymptotes of the first, second and third parts of the triple integral (4.29) are, respectively,
\[ \frac{T}{2}\times\int_1^\infty e^{-1-u}u^{2H-2}du + C_H, \qquad T\times\int_1^\infty e^{-1-u}u^{2H-2}du + C_H, \qquad T\times\int_1^\infty e^{-1-u}\Big( u^{2H-1}+\frac12 u^{2H-2}\Big)du + C_H. \]
Combining the three asymptotes above, the asymptote of $\tilde N_2(T)$ is
\[ T\times\int_1^\infty e^{-1-u}\big( u^{2H-1}+2u^{2H-2}\big)\,du + C_H. \tag{4.30} \]
Finally, combining the asymptote (4.28) of $\tilde N_1(T)$ with the asymptote (4.30) of $\tilde N_2(T)$, we obtain the asymptote (4.23) of $\tilde N(T)$.

Lemma 4.6. Let the two quadruple integrals $U(T)$, $\tilde U(T)$ be given by (3.16) and (3.17). When $T\to\infty$, the asymptote of $U(T)$ is
\[ T\times\Big\{ \int_1^\infty e^{-u}u^{2H-2}du\,\Big[ e^{-1}-e+(4H-1)\int_0^1(e^{x}-e^{-x})x^{2H-1}dx\Big] + \int_0^1 (e^{x-1}-e^{-x-1})x^{2H-1}dx\Big\} + C_H, \tag{4.31} \]
and the asymptote of $\tilde U(T)$ is the expression (4.32). The proof of Lemma 4.6 is essentially the same as the proofs of Lemma 4.4 and Lemma 4.5 above; in view of the length of the paper, the details are omitted.

Lemma 4.7. The quadruple integral $L(T)$ is given by (3.21). Then the asymptote of $L(T)$ as $T\to\infty$ is
\[ 4T\times\Big[ (4H+1)\int_0^1 e^{-u}u^{2H-1}du\int_0^u e^{v}v^{2H-1}dv - \Big(2H+\frac12\Big)\Big(\int_0^1 e^{-u}u^{2H-1}du\Big)^2 + \int_0^1 \big( e^{-u-1}-e^{u-1}\big)u^{2H-1}du\Big] + C_H. \tag{4.33} \]

Proof. The starting point is to remove the two absolute values $|s_1-s_2|$ and $|t_1-t_2|$ from the quadruple integral $L(T)$. That is, we first divide the integration region of the variables $s_2$, $t_2$ as follows:
\[ \int_{(s_1-1)\vee 0}^{(s_1+1)\wedge T}ds_2\int_{(t_1-1)\vee 0}^{(t_1+1)\wedge T}dt_2 = \Big( \int_{(s_1-1)\vee 0}^{s_1} + \int_{s_1}^{(s_1+1)\wedge T}\Big)ds_2\Big( \int_{(t_1-1)\vee 0}^{t_1} + \int_{t_1}^{(t_1+1)\wedge T}\Big)dt_2. \tag{4.34} \]
The values of $L(T)$ on the four corresponding subregions are denoted $L_1(T)$, $L_2(T)$, $L_3(T)$, $L_4(T)$. By symmetry it is easy to see that
\[ L_1(T) = L_4(T), \qquad L_2(T) = L_3(T). \tag{4.35} \]
Next remove the two $\vee$ symbols in $L_1(T)$ and further decompose the integral into the sum of the following four integrals:
\[ L_1(T) = \Big( \int_1^T ds_1\int_{s_1-1}^{s_1}ds_2 + \int_0^1 ds_1\int_0^{s_1}ds_2\Big)\Big( \int_1^T dt_1\int_{t_1-1}^{t_1}dt_2 + \int_0^1 dt_1\int_0^{t_1}dt_2\Big)\,(s_1-s_2)^{2H-1}(t_1-t_2)^{2H-1}e^{-|t_1-s_1|-|t_2-s_2|} := L_{11}(T)+L_{12}(T)+L_{13}(T)+L_{14}(T). \]
First, note that the integral $L_{14}(T)$ is independent of $T$, and $L_{12}(T) = L_{13}(T)$ by symmetry. Making the change of variables $u = s_1-s_2$, $v = t_1-t_2$, one checks that the limit $\lim_{T\to\infty}L_{12}(T)$ exists. Again by symmetry, and making the change of variables $u = s_1-s_2$, $v = t_1-t_2$, $x = s_1-t_1$, one obtains an expression for $L_{11}(T)$ whose innermost integrand is $e^{-|x-u+v|}u^{2H-1}v^{2H-1}$; the integration-by-parts formula then shows that the asymptotes of $L_{11}(T)$ and of $L_1(T)$ as $T\to\infty$ coincide up to the intercept term $C_H$, and their common slope is recorded as (4.36). Here the integration region of $(u,v)$ is divided into $\{0\le u\le v\le 1\}\cup\{0\le v\le u\le 1\}$; for the second subregion the region of $x$ is divided again into $[0,u-v]\cup(u-v,\infty)$, after which the Fubini theorem applies. Similarly, $L_2(T)$ is decomposed as
\[ L_2(T) = -\big( L_{21}(T)+L_{22}(T)+L_{23}(T)+L_{24}(T)\big), \]
with integrand $(s_1-s_2)^{2H-1}(t_2-t_1)^{2H-1}e^{-|t_1-s_1|-|t_2-s_2|}$, where the integral $L_{24}(T)$ is independent of $T$ and the existence of $\lim_{T\to\infty}\big( L_{22}(T)+L_{23}(T)\big)$ follows from the change of variables $v = t_1-t_2$, $u = s_1-s_2$. Then, by the change of variables $w = \max\{s_1,t_2\}$, $x = |s_1-t_2|$, $u = s_1-s_2$, $v = t_2-t_1$, the quadruple integral $L_{21}(T)$ becomes an iterated integral whose innermost integrand is $e^{-|x-v|-u}u^{2H-1}v^{2H-1}$. The integration-by-parts formula shows that the asymptotes of $-L_{21}(T)$ and of $L_2(T)$ as $T\to\infty$ coincide up to the intercept term $C_H$, and their common slope is recorded as (4.37); here the integration domain of $(x,v)$ is decomposed into $\{0\le x\le v\le 1\}\cup\{0\le v\le 1,\ x>v\}$, and the Fubini theorem applies. Finally, from (4.34) and (4.35), combining the asymptotes (4.36) and (4.37) of $L_1(T)$ and $L_2(T)$ as $T\to\infty$, we obtain the asymptote (4.33) of $L(T)$.

Lemma 4.8. The quadruple integral $P(T)$ is given by (3.22). Then the asymptote of $P(T)$ as $T\to\infty$ is
\[ 2T\times\Big[ 1 - e^{-2} - \int_0^1 \big( e^{u-1}-e^{-u-1}\big)u^{2H-1}du\Big] + C_H. \tag{4.38} \]

Proof. First, integrating out the Dirac functions, we write $P(T)$ as the sum of the following two triple integrals, $P(T) = P_1(T)+P_2(T)$, where
\[ P_1(T) = \int_{[0,T]^2} e^{-|t_1-s_1|}\,ds_1 dt_1\int_{(t_1-1)\vee 0}^{(t_1+1)\wedge T}\operatorname{sgn}(t_1-t_2)|t_1-t_2|^{2H-1}\,e^{-|t_2-(s_1-1)\vee 0|}\,\operatorname{sgn}\big( (s_1-1)\vee 0 - t_2\big)\,\big( s_1-(s_1-1)\vee 0\big)^{2H-1}\,dt_2, \]
\[ P_2(T) = -\int_{[0,T]^2} e^{-|t_1-s_1|}\,ds_1 dt_1\int_{(t_1-1)\vee 0}^{(t_1+1)\wedge T}\operatorname{sgn}(t_1-t_2)|t_1-t_2|^{2H-1}\,e^{-|t_2-(s_1+1)\wedge T|}\,\operatorname{sgn}\big( (s_1+1)\wedge T - t_2\big)\,\big( (s_1+1)\wedge T - s_1\big)^{2H-1}\,dt_2. \]
It is required to find the asymptote of the triple integral $P_1(T)$. First, we divide the region $\{(s_1,t_1)\in[0,T]^2\}$ into the following parts:
\[ [0,3]\times[0,1],\quad [3,T]\times[0,1],\quad [0,1]\times[1,T],\quad [1,T-1]\times[T-1,T],\quad [T-1,T]\times[1,T-1],\quad [T-1,T]^2,\quad [1,T-1]^2. \]
It is clear that the triple integral $P_1(T)$ restricted to the subregion $[0,3]\times[0,1]$ is independent of $T$, and that as $T\to\infty$ the limit of $P_1(T)$ restricted to $[3,T]\times[0,1]$ exists. Making the change of variable $u = t_2-t_1$, the limit of $P_1(T)$ restricted to $[0,1]\times[1,T]$ exists as $T\to\infty$; making the change of variables $y = T-t_1$, $u = t_2-t_1$, the limit of $P_1(T)$ restricted to $[1,T-1]\times[T-1,T]$ exists as $T\to\infty$. From the change of variables $x = T-s_1$, $y = T-t_1$, $u = t_2-t_1$ one sees that the integral of $P_1(T)$ over the subregion $[T-1,T]^2$ is also independent of $T$, and that as $T\to\infty$ the limit of $P_1(T)$ restricted to $[T-1,T]\times[1,T-1]$ exists. Making the change of variables $w = \max\{s_1,t_1\}$, $v = |s_1-t_1|$, $u = t_2-t_1$, the triple integral $P_1(T)$ restricted to $[1,T-1]^2$ is
\[ \int_1^{T-1}dw\int_0^{w-1} e^{-v}dv\int_{-1}^{1}\big( e^{-|v+u+1|} + e^{-|u-v+1|}\operatorname{sgn}(u-v+1)\big)\,|u|^{2H-1}\operatorname{sgn}(u)\,du. \]
According to the integration-by-parts formula, as $T\to\infty$ the asymptote of this expression, and hence of the triple integral $P_1(T)$ (by the existence of the limits on the first six regions, the two differ only in the intercept $C_H$), is
\[ T\int_{-1}^1\Big( \int_0^\infty e^{-v}\big( e^{-|v+u+1|}+e^{-|u-v+1|}\operatorname{sgn}(u-v+1)\big)\,dv\Big)|u|^{2H-1}\operatorname{sgn}(u)\,du + C_H = T\int_{-1}^1 e^{-u-1}\,|u|^{2H-1}\operatorname{sgn}(u)\,(1+u)\,du + C_H. \]
Similarly, the asymptote of the triple integral $P_2(T)$ has the same slope as the asymptote of the integral
\[ \int_1^{T-1}dw\int_0^{w-1} e^{-v}dv\int_{-1}^{1}\big( e^{-|u-v-1|} + e^{-|u+v-1|}\operatorname{sgn}(u+v-1)\big)\,|u|^{2H-1}\operatorname{sgn}(u)\,du, \]
and thus the asymptote of the triple integral $P_2(T)$ is
\[ T\int_{-1}^1 e^{u-1}\,|u|^{2H-1}\operatorname{sgn}(u)\,(u-1)\,du + C_H. \]
Finally, we combine the two asymptotes of $P_1(T)$ and $P_2(T)$ to obtain the asymptote (4.38) of $P(T)$. $\Box$

This English version was translated by Hanxiao Geng from a Chinese version submitted to Acta Mathematica Scientia.
References

[1] Sottinen T, Viitasaari L. Parameter estimation for the Langevin equation with stationary-increment Gaussian noise. Stat Inference Stoch Process, 2018, 21(3): 569-601
[2] Douissi S, Es-Sebaiy K, Kerchev G, Nourdin I. Berry-Esseen bounds of second moment estimators for Gaussian processes observed at high frequency. Electron J Statist, 2022, 16(1): 636-670
[3] Hu Y, Nualart D, Zhou H. Parameter estimation for fractional Ornstein-Uhlenbeck processes of general Hurst parameter. Stat Inference Stoch Process, 2019, 22(1): 111-142
[4] Jolis M. On the Wiener integral with respect to the fractional Brownian motion on an interval. J Math Anal Appl, 2007, 330(2): 1115-1127
[5] Chen Y, Li Y. Berry-Esséen bound for the parameter estimation of fractional Ornstein-Uhlenbeck processes with the Hurst parameter H ∈ (0, 1/2). Commun Statist Theory Methods, 2021, 50(13): 2996-3013
[6] Kim Y T, Park H S. Optimal Berry-Esséen bound for statistical estimations and its application to SPDE. J Multivariate Anal, 2017, 155: 284-304
[7] Chen Y, Zhou H. Parameter estimation for an Ornstein-Uhlenbeck process driven by a general Gaussian noise. Acta Mathematica Scientia, 2021, 41B(2): 573-595
[8] Chen Y, Gu X M, Li Y. Parameter estimation for an Ornstein-Uhlenbeck process driven by a general Gaussian noise with Hurst parameter H ∈ (0, 1/2). arXiv preprint arXiv:2111.15292, 2021
[9] Cheridito P, Kawaguchi H, Maejima M. Fractional Ornstein-Uhlenbeck processes. Electron J Probab, 2003, 8(3): 1-14
[10] Hu Y, Nualart D. Parameter estimation for fractional Ornstein-Uhlenbeck processes. Statist Probab Lett, 2010, 80(11-12): 1030-1038
[11] Chen Y, Kuang N H, Li Y. Berry-Esséen bound for the parameter estimation of fractional Ornstein-Uhlenbeck processes. Stoch Dyn, 2020, 20(4): 2050023
[12] Mishura Y S. Stochastic Calculus for Fractional Brownian Motion and Related Processes. Lecture Notes in Mathematics 1929. Berlin: Springer-Verlag, 2008
[13] Pipiras V, Taqqu M S. Integration questions related to fractional Brownian motion. Probab Theory Related Fields, 2000, 118(2): 251-291
[14] Nualart D. The Malliavin Calculus and Related Topics. Berlin: Springer, 2006
[15] Tao T. An Introduction to Measure Theory. Providence: American Mathematical Society, 2011
[16] Chen Y, Ding Z, Li Y. Berry-Esséen bounds and almost sure CLT for the quadratic variation of a general Gaussian process. arXiv preprint arXiv:2106.01851, 2021
Center for Applied Mathematics, School of Mathematics and Statistics, Jiangxi Normal University, 330022, Nanchang, P. R. China
Email address: [email protected]

School of Mathematics and Statistics, Jiangxi Normal University, 330022, Nanchang, P. R. China